* [PATCH 00/16] Wind River Systems AVP PMD
@ 2017-02-25  1:22 Allain Legacy
  2017-02-25  1:23 ` [PATCH 01/16] config: adds attributes for the " Allain Legacy
                   ` (16 more replies)
  0 siblings, 17 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:22 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

This patch series submits an initial version of the AVP PMD from Wind River
Systems.  The series includes shared header files, driver implementation,
and changes to documentation files in support of this new driver.  The AVP
driver is a shared memory based device.  It is intended to be used as a PMD
within a virtual machine running on a Wind River virtualization platform.
See: http://www.windriver.com/products/titanium-cloud/

Allain Legacy (16):
  config: added attributes for the AVP PMD
  net/avp: added public header files
  maintainers: claim responsibility for AVP PMD
  net/avp: added PMD version map file
  net/avp: added log macros
  drivers/net: added driver makefiles
  net/avp: driver registration
  net/avp: device initialization
  net/avp: device configuration
  net/avp: queue setup and release
  net/avp: packet receive functions
  net/avp: packet transmit functions
  net/avp: device statistics operations
  net/avp: device promiscuous functions
  net/avp: device start and stop operations
  doc: added information related to the AVP PMD

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 config/common_linuxapp                  |    1 +
 doc/guides/nics/avp.rst                 |   99 ++
 doc/guides/nics/features/avp.ini        |   17 +
 doc/guides/nics/index.rst               |    1 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avp/Makefile                |   61 +
 drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
 drivers/net/avp/avp_logs.h              |   59 +
 drivers/net/avp/rte_avp_common.h        |  427 ++++++
 drivers/net/avp/rte_avp_fifo.h          |  157 ++
 drivers/net/avp/rte_pmd_avp_version.map |    4 +
 mk/rte.app.mk                           |    1 +
 14 files changed, 3215 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini
 create mode 100644 drivers/net/avp/Makefile
 create mode 100644 drivers/net/avp/avp_ethdev.c
 create mode 100644 drivers/net/avp/avp_logs.h
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 172+ messages in thread

* [PATCH 01/16] config: adds attributes for the AVP PMD
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 02/16] net/avp: public header files Allain Legacy
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Updates the common base configuration file to include a top level config
attribute for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/config/common_base b/config/common_base
index aeee13e..912bc68 100644
--- a/config/common_base
+++ b/config/common_base
@@ -348,6 +348,11 @@ CONFIG_RTE_LIBRTE_QEDE_FW=""
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 
 #
+# Compile WRS accelerated virtual port (AVP) guest PMD driver
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
+
+#
 # Compile the TAP PMD
 # It is enabled by default for Linux only.
 #
-- 
1.8.3.1


* [PATCH 02/16] net/avp: public header files
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
  2017-02-25  1:23 ` [PATCH 01/16] config: adds attributes for the " Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 03/16] maintainers: claim responsibility for AVP PMD Allain Legacy
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds public/exported header files for the AVP PMD.  The AVP device is a
shared memory based device.  The structures and constants that define the
method of operation of the device must be visible to both the PMD and the
host DPDK application.  They must not change without proper version
controls and updates to both the hypervisor DPDK application and the PMD.

The hypervisor DPDK application is a Wind River Systems proprietary
virtual switch.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/rte_avp_common.h | 427 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avp/rte_avp_fifo.h   | 157 ++++++++++++++
 2 files changed, 584 insertions(+)
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h

diff --git a/drivers/net/avp/rte_avp_common.h b/drivers/net/avp/rte_avp_common.h
new file mode 100644
index 0000000..a358663
--- /dev/null
+++ b/drivers/net/avp/rte_avp_common.h
@@ -0,0 +1,427 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2015 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   You should have received a copy of the GNU Lesser General Public License
+ *   along with this program; if not, write to the Free Software
+ *   Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2016 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_COMMON_H_
+#define _RTE_AVP_COMMON_H_
+
+#ifdef __KERNEL__
+#include <linux/if.h>
+#endif
+
+/**
+ * AVP name is part of network device name.
+ */
+#define RTE_AVP_NAMESIZE 32
+
+/**
+ * AVP alias is a user-defined value used for lookups from secondary
+ * processes.  Typically, this is a UUID.
+ */
+#define RTE_AVP_ALIASSIZE 128
+
+/**
+ * Memory alignment (cache aligned)
+ */
+#ifndef RTE_AVP_ALIGNMENT
+#define RTE_AVP_ALIGNMENT 64
+#endif
+
+/*
+ * Request id.
+ */
+enum rte_avp_req_id {
+	RTE_AVP_REQ_UNKNOWN = 0,
+	RTE_AVP_REQ_CHANGE_MTU,
+	RTE_AVP_REQ_CFG_NETWORK_IF,
+	RTE_AVP_REQ_CFG_DEVICE,
+	RTE_AVP_REQ_SHUTDOWN_DEVICE,
+	RTE_AVP_REQ_MAX,
+};
+
+/**@{ AVP device driver types */
+#define RTE_AVP_DRIVER_TYPE_UNKNOWN 0
+#define RTE_AVP_DRIVER_TYPE_DPDK 1
+#define RTE_AVP_DRIVER_TYPE_KERNEL 2
+#define RTE_AVP_DRIVER_TYPE_QEMU 3
+/**@} */
+
+/**@{ AVP device operational modes */
+#define RTE_AVP_MODE_HOST 0 /**< AVP interface created in host */
+#define RTE_AVP_MODE_GUEST 1 /**< AVP interface created for export to guest */
+#define RTE_AVP_MODE_TRACE 2 /**< AVP interface created for packet tracing */
+/**@} */
+
+/*
+ * Structure for AVP queue configuration query request/result
+ */
+struct rte_avp_device_config {
+	uint64_t device_id;	/**< Unique system identifier */
+	uint32_t driver_type; /**< Device Driver type */
+	uint32_t driver_version; /**< Device Driver version */
+	uint32_t features; /**< Negotiated features */
+	uint16_t num_tx_queues;	/**< Number of active transmit queues */
+	uint16_t num_rx_queues;	/**< Number of active receive queues */
+	uint8_t if_up; /**< 1: interface up, 0: interface down */
+} __attribute__ ((__packed__));
+
+/*
+ * Structure for AVP request.
+ */
+struct rte_avp_request {
+	uint32_t req_id; /**< Request id */
+	union {
+		uint32_t new_mtu; /**< New MTU */
+		uint8_t if_up;	/**< 1: interface up, 0: interface down */
+		struct rte_avp_device_config config; /**< Queue configuration */
+	};
+	int32_t result;	/**< Result for processing request */
+} __attribute__ ((__packed__));
+
+/*
+ * FIFO struct mapped in a shared memory. It describes a circular buffer FIFO
+ * Write and read should wrap around. FIFO is empty when write == read
+ * Writing should never overwrite the read position
+ */
+struct rte_avp_fifo {
+	volatile unsigned write; /**< Next position to be written */
+	volatile unsigned read; /**< Next position to be read */
+	unsigned len; /**< Circular buffer length */
+	unsigned elem_size; /**< Pointer size - for 32/64 bit OS */
+	void *volatile buffer[0]; /**< The buffer contains mbuf pointers */
+};
+
+
+/*
+ * AVP packet buffer header used to define the exchange of packet data.
+ */
+struct rte_avp_desc {
+	uint64_t pad0;
+	void *pkt_mbuf; /**< Reference to packet mbuf */
+	uint8_t pad1[14];
+	uint16_t ol_flags; /**< Offload features. */
+	void *next;	/**< Reference to next buffer in chain */
+	void *data;	/**< Start address of data in segment buffer. */
+	uint16_t data_len; /**< Amount of data in segment buffer. */
+	uint8_t nb_segs; /**< Number of segments */
+	uint8_t pad2;
+	uint16_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+	uint32_t pad3;
+	uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order). */
+	uint32_t pad4;
+} __attribute__ ((__aligned__(RTE_AVP_ALIGNMENT), __packed__));
+
+
+/**@{ AVP device features */
+#define RTE_AVP_FEATURE_VLAN_OFFLOAD (1<<0) /**< Emulated HW VLAN offload */
+/**@} */
+
+
+/**@{ Offload feature flags */
+#define RTE_AVP_TX_VLAN_PKT 0x0001 /**< TX packet is a 802.1q VLAN packet. */
+#define RTE_AVP_RX_VLAN_PKT 0x0800 /**< RX packet is a 802.1q VLAN packet. */
+/**@} */
+
+
+/**@{ AVP PCI identifiers */
+#define RTE_AVP_PCI_VENDOR_ID   0x1af4
+#define RTE_AVP_PCI_DEVICE_ID   0x1110
+/**@} */
+
+/**@{ AVP PCI subsystem identifiers */
+#define RTE_AVP_PCI_SUB_VENDOR_ID RTE_AVP_PCI_VENDOR_ID
+#define RTE_AVP_PCI_SUB_DEVICE_ID 0x1104
+/**@} */
+
+/**@{ AVP PCI BAR definitions */
+#define RTE_AVP_PCI_MMIO_BAR   0
+#define RTE_AVP_PCI_MSIX_BAR   1
+#define RTE_AVP_PCI_MEMORY_BAR 2
+#define RTE_AVP_PCI_MEMMAP_BAR 4
+#define RTE_AVP_PCI_DEVICE_BAR 5
+#define RTE_AVP_PCI_MAX_BAR    6
+/**@} */
+
+/**@{ AVP PCI BAR name definitions */
+#define RTE_AVP_MMIO_BAR_NAME   "avp-mmio"
+#define RTE_AVP_MSIX_BAR_NAME   "avp-msix"
+#define RTE_AVP_MEMORY_BAR_NAME "avp-memory"
+#define RTE_AVP_MEMMAP_BAR_NAME "avp-memmap"
+#define RTE_AVP_DEVICE_BAR_NAME "avp-device"
+/**@} */
+
+/**@{ AVP PCI MSI-X vectors */
+#define RTE_AVP_MIGRATION_MSIX_VECTOR 0	/**< Migration interrupts */
+#define RTE_AVP_MAX_MSIX_VECTORS 1
+/**@} */
+
+/**@{ AVP Migration status/ack register values */
+#define RTE_AVP_MIGRATION_NONE      0 /**< Migration never executed */
+#define RTE_AVP_MIGRATION_DETACHED  1 /**< Device detached during migration */
+#define RTE_AVP_MIGRATION_ATTACHED  2 /**< Device reattached during migration */
+#define RTE_AVP_MIGRATION_ERROR     3 /**< Device failed to attach/detach */
+/**@} */
+
+/**@{ AVP MMIO Register Offsets */
+#define RTE_AVP_REGISTER_BASE 0
+#define RTE_AVP_INTERRUPT_MASK_OFFSET (RTE_AVP_REGISTER_BASE + 0)
+#define RTE_AVP_INTERRUPT_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 4)
+#define RTE_AVP_MIGRATION_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 8)
+#define RTE_AVP_MIGRATION_ACK_OFFSET (RTE_AVP_REGISTER_BASE + 12)
+/**@} */
+
+/**@{ AVP Interrupt Status Mask */
+#define RTE_AVP_MIGRATION_INTERRUPT_MASK (1 << 1)
+#define RTE_AVP_APP_INTERRUPTS_MASK      (0xFFFFFFFF)
+#define RTE_AVP_NO_INTERRUPTS_MASK       (0)
+/**@} */
+
+/*
+ * Maximum number of memory regions to export
+ */
+#define RTE_AVP_MAX_MAPS  2048
+
+/*
+ * Description of a single memory region
+ */
+struct rte_avp_memmap {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * AVP memory mapping validation marker
+ */
+#define RTE_AVP_MEMMAP_MAGIC (0x20131969)
+
+/**@{  AVP memory map versions */
+#define RTE_AVP_MEMMAP_VERSION_1 1
+#define RTE_AVP_MEMMAP_VERSION RTE_AVP_MEMMAP_VERSION_1
+/**@} */
+
+/*
+ * Defines a list of memory regions exported from the host to the guest
+ */
+struct rte_avp_memmap_info {
+	uint32_t magic; /**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+	uint32_t nb_maps;
+	struct rte_avp_memmap maps[RTE_AVP_MAX_MAPS];
+};
+
+/*
+ * AVP device memory validation marker
+ */
+#define RTE_AVP_DEVICE_MAGIC (0x20131975)
+
+/**@{  AVP device map versions
+ * WARNING:  do not change the format or names of these variables.  They are
+ * automatically parsed by the build system to generate the SDK package
+ * name.
+ */
+#define RTE_AVP_RELEASE_VERSION_1 1
+#define RTE_AVP_RELEASE_VERSION RTE_AVP_RELEASE_VERSION_1
+#define RTE_AVP_MAJOR_VERSION_0 0
+#define RTE_AVP_MAJOR_VERSION_1 1
+#define RTE_AVP_MAJOR_VERSION_2 2
+#define RTE_AVP_MAJOR_VERSION RTE_AVP_MAJOR_VERSION_2
+#define RTE_AVP_MINOR_VERSION_0 0
+#define RTE_AVP_MINOR_VERSION_1 1
+#define RTE_AVP_MINOR_VERSION_13 13
+#define RTE_AVP_MINOR_VERSION RTE_AVP_MINOR_VERSION_13
+/**@} */
+
+
+/**
+ * Generates a 32-bit version number from the specified version number
+ * components
+ */
+#define RTE_AVP_MAKE_VERSION(_release, _major, _minor) \
+((((_release) & 0xffff) << 16) | (((_major) & 0xff) << 8) | ((_minor) & 0xff))
+
+
+/**
+ * Represents the current version of the AVP host driver
+ * WARNING:  in the current development branch the host and guest driver
+ * version should always be the same.  When patching guest features back to
+ * GA releases the host version number should not be updated unless there was
+ * an actual change made to the host driver.
+ */
+#define RTE_AVP_CURRENT_HOST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_0, \
+		     RTE_AVP_MINOR_VERSION_1)
+
+
+/**
+ * Represents the current version of the AVP guest drivers
+ */
+#define RTE_AVP_CURRENT_GUEST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_2, \
+		     RTE_AVP_MINOR_VERSION_13)
+
+/**@{
+ * Access AVP device version values
+ */
+#define RTE_AVP_GET_RELEASE_VERSION(_version) (((_version) >> 16) & 0xffff)
+#define RTE_AVP_GET_MAJOR_VERSION(_version) (((_version) >> 8) & 0xff)
+#define RTE_AVP_GET_MINOR_VERSION(_version) ((_version) & 0xff)
+/**@}*/
+
+
+/**
+ * Remove the minor version number so that only the release and major versions
+ * are used for comparisons.
+ */
+#define RTE_AVP_STRIP_MINOR_VERSION(_version) ((_version) >> 8)
+
+
+/**
+ * Defines the number of mbuf pools supported per device (1 per socket)
+ * @note This value should be equal to RTE_MAX_NUMA_NODES
+ */
+#define RTE_AVP_MAX_MEMPOOLS (8)
+
+/*
+ * Defines address translation parameters for each supported mbuf pool
+ */
+struct rte_avp_mempool_info {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * Struct used to create an AVP device. Passed to the kernel in an IOCTL call
+ * via inter-VM shared memory when used in a guest.
+ */
+struct rte_avp_device_info {
+	uint32_t magic;	/**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+
+	char ifname[RTE_AVP_NAMESIZE];	/**< Network device name for AVP */
+
+	phys_addr_t tx_phys;
+	phys_addr_t rx_phys;
+	phys_addr_t alloc_phys;
+	phys_addr_t free_phys;
+
+	uint32_t features; /**< Supported feature bitmap */
+	uint8_t min_rx_queues; /**< Minimum supported receive/free queues */
+	uint8_t num_rx_queues; /**< Recommended number of receive/free queues */
+	uint8_t max_rx_queues; /**< Maximum supported receive/free queues */
+	uint8_t min_tx_queues; /**< Minimum supported transmit/alloc queues */
+	uint8_t num_tx_queues; /**< Recommended number of transmit/alloc queues */
+	uint8_t max_tx_queues; /**< Maximum supported transmit/alloc queues */
+
+	uint32_t tx_size; /**< Size of each transmit queue */
+	uint32_t rx_size; /**< Size of each receive queue */
+	uint32_t alloc_size; /**< Size of each alloc queue */
+	uint32_t free_size;	/**< Size of each free queue */
+
+	/* Used by Ethtool */
+	phys_addr_t req_phys;
+	phys_addr_t resp_phys;
+	phys_addr_t sync_phys;
+	void *sync_va;
+
+	/* mbuf mempool (used when a single memory area is supported) */
+	void *mbuf_va;
+	phys_addr_t mbuf_phys;
+
+	/* mbuf mempools */
+	struct rte_avp_mempool_info pool[RTE_AVP_MAX_MEMPOOLS];
+
+#ifdef __KERNEL__
+	/* Ethernet info */
+	char ethaddr[ETH_ALEN];
+#else
+	char ethaddr[ETHER_ADDR_LEN];
+#endif
+
+	uint8_t mode; /**< device mode, i.e. guest, host, trace */
+
+	/* mbuf size */
+	unsigned mbuf_size;
+
+	/*
+	 * unique id to differentiate between two instantiations of the same AVP
+	 * device (i.e., the guest needs to know if the device has been deleted
+	 * and recreated).
+	 */
+	uint64_t device_id;
+
+	uint32_t max_rx_pkt_len; /**< Maximum receive unit size */
+};
+
+#define RTE_AVP_MAX_QUEUES (8) /**< Maximum number of queues per device */
+
+/** Maximum number of chained mbufs in a packet */
+#define RTE_AVP_MAX_MBUF_SEGMENTS (5)
+
+#define RTE_AVP_DEVICE "avp"
+
+#define RTE_AVP_IOCTL_TEST    _IOWR(0, 1, int)
+#define RTE_AVP_IOCTL_CREATE  _IOWR(0, 2, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_RELEASE _IOWR(0, 3, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_QUERY   _IOWR(0, 4, struct rte_avp_device_config)
+
+#endif /* _RTE_AVP_COMMON_H_ */
diff --git a/drivers/net/avp/rte_avp_fifo.h b/drivers/net/avp/rte_avp_fifo.h
new file mode 100644
index 0000000..acb548c
--- /dev/null
+++ b/drivers/net/avp/rte_avp_fifo.h
@@ -0,0 +1,157 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   You should have received a copy of the GNU Lesser General Public License
+ *   along with this program; if not, write to the Free Software
+ *   Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_FIFO_H_
+#define _RTE_AVP_FIFO_H_
+
+#ifdef __KERNEL__
+#define AVP_WMB() smp_wmb()
+#define AVP_RMB() smp_rmb()
+#else
+#define AVP_WMB() rte_wmb()
+#define AVP_RMB() rte_rmb()
+#endif
+
+#ifndef __KERNEL__
+/**
+ * Initializes the avp fifo structure
+ */
+static inline void
+avp_fifo_init(struct rte_avp_fifo *fifo, unsigned size)
+{
+	/* Ensure size is power of 2 */
+	if (size & (size - 1))
+		rte_panic("AVP fifo size must be power of 2\n");
+
+	fifo->write = 0;
+	fifo->read = 0;
+	fifo->len = size;
+	fifo->elem_size = sizeof(void *);
+}
+#endif
+
+/**
+ * Add up to num elements into the fifo. Return the number actually written
+ */
+static inline unsigned
+avp_fifo_put(struct rte_avp_fifo *fifo, void **data, unsigned num)
+{
+	unsigned i = 0;
+	unsigned fifo_write = fifo->write;
+	unsigned fifo_read = fifo->read;
+	unsigned new_write = fifo_write;
+
+	for (i = 0; i < num; i++) {
+		new_write = (new_write + 1) & (fifo->len - 1);
+
+		if (new_write == fifo_read)
+			break;
+		fifo->buffer[fifo_write] = data[i];
+		fifo_write = new_write;
+	}
+	AVP_WMB();
+	fifo->write = fifo_write;
+	return i;
+}
+
+/**
+ * Get up to num elements from the fifo. Return the number actually read
+ */
+static inline unsigned
+avp_fifo_get(struct rte_avp_fifo *fifo, void **data, unsigned num)
+{
+	unsigned i = 0;
+	unsigned new_read = fifo->read;
+	unsigned fifo_write = fifo->write;
+
+	if (new_read == fifo_write)
+		return 0; /* empty */
+
+	for (i = 0; i < num; i++) {
+		if (new_read == fifo_write)
+			break;
+
+		data[i] = fifo->buffer[new_read];
+		new_read = (new_read + 1) & (fifo->len - 1);
+	}
+	AVP_RMB();
+	fifo->read = new_read;
+	return i;
+}
+
+/**
+ * Get the number of elements in the fifo
+ */
+static inline unsigned
+avp_fifo_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->len + fifo->write - fifo->read) & (fifo->len - 1);
+}
+
+/**
+ * Get the number of free slots in the fifo
+ */
+static inline unsigned
+avp_fifo_free_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->read - fifo->write - 1) & (fifo->len - 1);
+}
+
+#endif /* _RTE_AVP_FIFO_H_ */
-- 
1.8.3.1


* [PATCH 03/16] maintainers: claim responsibility for AVP PMD
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
  2017-02-25  1:23 ` [PATCH 01/16] config: adds attributes for the " Allain Legacy
  2017-02-25  1:23 ` [PATCH 02/16] net/avp: public header files Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 04/16] net/avp: add PMD version map file Allain Legacy
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Updating the maintainers file to claim the AVP PMD driver on behalf of Wind
River Systems, Inc.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 MAINTAINERS | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 24e0eff..992ffa5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -423,6 +423,11 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+Wind River AVP PMD
+M: Allain Legacy <allain.legacy@windriver.com>
+M: Matt Peters <matt.peters@windriver.com>
+F: drivers/net/avp/
+
 
 Crypto Drivers
 --------------
-- 
1.8.3.1


* [PATCH 04/16] net/avp: add PMD version map file
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (2 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 03/16] maintainers: claim responsibility for AVP PMD Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 05/16] net/avp: debug log macros Allain Legacy
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds a default ABI version file for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/rte_pmd_avp_version.map | 4 ++++
 1 file changed, 4 insertions(+)
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
new file mode 100644
index 0000000..af8f3f4
--- /dev/null
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+    local: *;
+};
-- 
1.8.3.1


* [PATCH 05/16] net/avp: debug log macros
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (3 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 04/16] net/avp: add PMD version map file Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 06/16] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds a header file with log macros for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base         |  4 ++++
 drivers/net/avp/avp_logs.h | 59 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)
 create mode 100644 drivers/net/avp/avp_logs.h

diff --git a/config/common_base b/config/common_base
index 912bc68..fe8363d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -351,6 +351,10 @@ CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 # Compile WRS accelerated virtual port (AVP) guest PMD driver
 #
 CONFIG_RTE_LIBRTE_AVP_PMD=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_DRIVER=y
+CONFIG_RTE_LIBRTE_AVP_DEBUG_BUFFERS=n
 
 #
 # Compile the TAP PMD
diff --git a/drivers/net/avp/avp_logs.h b/drivers/net/avp/avp_logs.h
new file mode 100644
index 0000000..d8bee9c
--- /dev/null
+++ b/drivers/net/avp/avp_logs.h
@@ -0,0 +1,59 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (c) 2013-2015, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVP_LOGS_H_
+#define _AVP_LOGS_H_
+
+#include <rte_log.h>
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() rx: " fmt, __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() tx: " fmt, __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _AVP_LOGS_H_ */
-- 
1.8.3.1


* [PATCH 06/16] drivers/net: adds driver makefiles for AVP PMD
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (4 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 05/16] net/avp: debug log macros Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 07/16] net/avp: driver registration Allain Legacy
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds a default Makefile to the driver directory but does not include any
source files.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/Makefile     |  1 +
 drivers/net/avp/Makefile | 52 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)
 create mode 100644 drivers/net/avp/Makefile

diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 40fc333..592383e 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -32,6 +32,7 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
+DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
new file mode 100644
index 0000000..68a0fa5
--- /dev/null
+++ b/drivers/net/avp/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2013-2017, Wind River Systems, Inc. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Wind River Systems nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avp.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+
+EXPORT_MAP := rte_pmd_avp_version.map
+
+LIBABIVER := 1
+
+# install public header files to enable compilation of the hypervisor level
+# dpdk application
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH 07/16] net/avp: driver registration
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (5 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 06/16] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 08/16] net/avp: device initialization Allain Legacy
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds the initial framework for registering the driver against the supported
PCI device identifiers.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_linuxapp                       |   1 +
 config/defconfig_i686-native-linuxapp-gcc    |   5 +
 config/defconfig_i686-native-linuxapp-icc    |   5 +
 config/defconfig_x86_x32-native-linuxapp-gcc |   5 +
 drivers/net/avp/Makefile                     |   8 +
 drivers/net/avp/avp_ethdev.c                 | 232 +++++++++++++++++++++++++++
 mk/rte.app.mk                                |   1 +
 7 files changed, 257 insertions(+)
 create mode 100644 drivers/net/avp/avp_ethdev.c

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 00ebaac..8690a00 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -43,6 +43,7 @@ CONFIG_RTE_LIBRTE_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
 CONFIG_RTE_LIBRTE_PMD_TAP=y
+CONFIG_RTE_LIBRTE_AVP_PMD=y
 CONFIG_RTE_LIBRTE_NFP_PMD=y
 CONFIG_RTE_LIBRTE_POWER=y
 CONFIG_RTE_VIRTIO_USER=y
diff --git a/config/defconfig_i686-native-linuxapp-gcc b/config/defconfig_i686-native-linuxapp-gcc
index 745c401..9847bdb 100644
--- a/config/defconfig_i686-native-linuxapp-gcc
+++ b/config/defconfig_i686-native-linuxapp-gcc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_i686-native-linuxapp-icc b/config/defconfig_i686-native-linuxapp-icc
index 50a3008..269e88e 100644
--- a/config/defconfig_i686-native-linuxapp-icc
+++ b/config/defconfig_i686-native-linuxapp-icc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_x86_x32-native-linuxapp-gcc b/config/defconfig_x86_x32-native-linuxapp-gcc
index 3e55c5c..19573cb 100644
--- a/config/defconfig_x86_x32-native-linuxapp-gcc
+++ b/config/defconfig_x86_x32-native-linuxapp-gcc
@@ -50,3 +50,8 @@ CONFIG_RTE_LIBRTE_KNI=n
 # Solarflare PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_SFC_EFX_PMD=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 68a0fa5..9cf0449 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -49,4 +49,12 @@ LIBABIVER := 1
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
 
+#
+# all source files are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
new file mode 100644
index 0000000..a774aa1
--- /dev/null
+++ b/drivers/net/avp/avp_ethdev.c
@@ -0,0 +1,232 @@
+/*
+ *   BSD LICENSE
+ *
+ * Copyright (c) 2013-2016, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/io.h>
+
+#include <rte_ethdev.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_dev.h>
+#include <rte_memory.h>
+#include <rte_eal.h>
+
+#include "rte_avp_common.h"
+#include "rte_avp_fifo.h"
+
+#include "avp_logs.h"
+
+
+
+static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
+static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
+
+
+#define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
+
+
+#define RTE_AVP_MAX_MAC_ADDRS 1
+#define RTE_AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
+
+
+/*
+ * Defines the number of microseconds to wait before checking the response
+ * queue for completion.
+ */
+#define RTE_AVP_REQUEST_DELAY_USECS (5000)
+
+/*
+ * Defines the number of times to check the response queue for completion
+ * before declaring a timeout.
+ */
+#define RTE_AVP_MAX_REQUEST_RETRY (100)
+
+/* Defines the current PCI driver version number */
+#define RTE_AVP_DPDK_DRIVER_VERSION RTE_AVP_CURRENT_GUEST_VERSION
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static struct rte_pci_id pci_id_avp_map[] = {
+	{ .vendor_id = RTE_AVP_PCI_VENDOR_ID,
+	  .device_id = RTE_AVP_PCI_DEVICE_ID,
+	  .subsystem_vendor_id = RTE_AVP_PCI_SUB_VENDOR_ID,
+	  .subsystem_device_id = RTE_AVP_PCI_SUB_DEVICE_ID,
+	  .class_id = RTE_CLASS_ANY_ID,
+	},
+
+	{ .vendor_id = 0, /* sentinel */
+	},
+};
+
+
+/*
+ * Defines the AVP device attributes which are attached to an RTE ethernet
+ * device
+ */
+struct avp_dev {
+	uint32_t magic; /**< Memory validation marker */
+	uint64_t device_id; /**< Unique system identifier */
+	struct ether_addr ethaddr; /**< Host specified MAC address */
+	struct rte_eth_dev_data *dev_data;
+	/**< Back pointer to ethernet device data */
+	volatile uint32_t flags; /**< Device operational flags */
+	uint8_t port_id; /**< Ethernet port identifier */
+	struct rte_mempool *pool; /**< pkt mbuf mempool */
+	unsigned guest_mbuf_size; /**< local pool mbuf size */
+	unsigned host_mbuf_size; /**< host mbuf size */
+	unsigned max_rx_pkt_len; /**< maximum receive unit */
+	uint32_t host_features; /**< Supported feature bitmap */
+	uint32_t features; /**< Enabled feature bitmap */
+	unsigned num_tx_queues; /**< Negotiated number of transmit queues */
+	unsigned max_tx_queues; /**< Maximum number of transmit queues */
+	unsigned num_rx_queues; /**< Negotiated number of receive queues */
+	unsigned max_rx_queues; /**< Maximum number of receive queues */
+
+	struct rte_avp_fifo *tx_q[RTE_AVP_MAX_QUEUES]; /**< TX queue */
+	struct rte_avp_fifo *rx_q[RTE_AVP_MAX_QUEUES]; /**< RX queue */
+	struct rte_avp_fifo *alloc_q[RTE_AVP_MAX_QUEUES];
+	/**< Allocated mbufs queue */
+	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
+	/**< To be freed mbufs queue */
+
+	/* mutual exclusion over the 'flag' and 'resp_q/req_q' fields */
+	rte_spinlock_t lock;
+
+	/* For request & response */
+	struct rte_avp_fifo *req_q; /**< Request queue */
+	struct rte_avp_fifo *resp_q; /**< Response queue */
+	void *host_sync_addr; /**< (host) Req/Resp Mem address */
+	void *sync_addr; /**< Req/Resp Mem address */
+	void *host_mbuf_addr; /**< (host) MBUF pool start address */
+	void *mbuf_addr; /**< MBUF pool start address */
+} __rte_cache_aligned;
+
+/* RTE ethernet private data */
+struct avp_adapter {
+	struct avp_dev avp;
+} __rte_cache_aligned;
+
+/* Macro to cast the ethernet device private data to an AVP object */
+#define RTE_AVP_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avp_adapter *)adapter)->avp)
+
+/*
+ * This function is based on probe() function in avp_pci.c
+ * It returns 0 on success.
+ */
+static int
+eth_avp_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pci_device *pci_dev;
+
+	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/*
+		 * no setup required on secondary processes.  All data is saved
+		 * in dev_private by the primary process. All resources should
+		 * be mapped to the same virtual addresses so all pointers should
+		 * be valid.
+		 */
+		return 0;
+	}
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate %d bytes "
+			    "needed to store MAC addresses\n",
+			    ETHER_ADDR_LEN);
+		return -ENOMEM;
+	}
+
+	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
+
+	return 0;
+}
+
+static int
+eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (eth_dev->data == NULL)
+		return 0;
+
+	if (eth_dev->data->mac_addrs != NULL) {
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
+	}
+
+	return 0;
+}
+
+
+static struct eth_driver rte_avp_pmd = {
+	{
+		.id_table = pci_id_avp_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_eth_dev_pci_probe,
+		.remove = rte_eth_dev_pci_remove,
+	},
+	.eth_dev_init = eth_avp_dev_init,
+	.eth_dev_uninit = eth_avp_dev_uninit,
+	.dev_private_size = sizeof(struct avp_adapter),
+};
+
+
+
+RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 92f3635..c8654f3 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -104,6 +104,7 @@ ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH 08/16] net/avp: device initialization
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (6 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 07/16] net/avp: driver registration Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 09/16] net/avp: device configuration Allain Legacy
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for initializing newly probed AVP PCI devices.  Initial
queue translations are set up in preparation for device configuration.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 770 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 770 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index a774aa1..b48895d 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -60,6 +60,8 @@
 #include "avp_logs.h"
 
 
+static int avp_dev_create(struct rte_pci_device *pci_dev,
+			  struct rte_eth_dev *eth_dev);
 
 static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -103,6 +105,16 @@
 };
 
 
+/**@{ AVP device flags */
+#define RTE_AVP_F_PROMISC (1<<1)
+#define RTE_AVP_F_CONFIGURED (1<<2)
+#define RTE_AVP_F_LINKUP (1<<3)
+#define RTE_AVP_F_DETACHED (1<<4)
+/**@} */
+
+/* Ethernet device validation marker */
+#define RTE_AVP_ETHDEV_MAGIC 0x92972862
+
 /*
  * Defines the AVP device attributes which are attached to an RTE ethernet
  * device
@@ -150,11 +162,739 @@ struct avp_adapter {
 	struct avp_dev avp;
 } __rte_cache_aligned;
 
+
+/* 32-bit MMIO register write */
+#define RTE_AVP_WRITE32(_value, _addr) ((*(uint32_t *)_addr) = (_value))
+
+/* 32-bit MMIO register read */
+#define RTE_AVP_READ32(_addr) (*(uint32_t *)(_addr))
+
 /* Macro to cast the ethernet device private data to an AVP object */
 #define RTE_AVP_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct avp_adapter *)adapter)->avp)
 
 /*
+ * Defines the structure of a AVP device queue for the purpose of handling the
+ * receive and transmit burst callback functions
+ */
+struct avp_queue {
+	struct rte_eth_dev_data *dev_data;
+	/**< Backpointer to ethernet device data */
+	struct avp_dev *avp; /**< Backpointer to AVP device */
+	uint16_t queue_id;
+	/**< Queue identifier used for indexing current queue */
+	uint16_t queue_base;
+	/**< Base queue identifier for queue servicing */
+	uint16_t queue_limit;
+	/**< Maximum queue identifier for queue servicing */
+
+#ifdef RTE_LIBRTE_AVP_STATS
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+#endif
+};
+
+/* send a request and wait for a response
+ *
+ * @warning must be called while holding the avp->lock spinlock.
+ */
+static int
+avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
+{
+	unsigned retry = RTE_AVP_MAX_REQUEST_RETRY;
+	void *resp_addr = NULL;
+	unsigned count;
+	int ret;
+
+	PMD_DRV_LOG(DEBUG, "Sending request %u to host\n", request->req_id);
+
+	request->result = -ENOTSUP;
+
+	/* Discard any stale responses before starting a new request */
+	while (avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1))
+		PMD_DRV_LOG(DEBUG, "Discarding stale response\n");
+
+	rte_memcpy(avp->sync_addr, request, sizeof(*request));
+	count = avp_fifo_put(avp->req_q, &avp->host_sync_addr, 1);
+	if (count < 1) {
+		PMD_DRV_LOG(ERR, "Cannot send request %u to host\n",
+			    request->req_id);
+		ret = -EBUSY;
+		goto done;
+	}
+
+	while (retry--) {
+		/* wait for a response */
+		usleep(RTE_AVP_REQUEST_DELAY_USECS);
+
+		count = avp_fifo_count(avp->resp_q);
+		if (count >= 1) {
+			/* response received */
+			break;
+		}
+
+		if ((count < 1) && (retry == 0)) {
+			PMD_DRV_LOG(ERR,
+				    "Timeout while waiting "
+				    "for a response for %u\n",
+				    request->req_id);
+			ret = -ETIME;
+			goto done;
+		}
+	}
+
+	/* retrieve the response */
+	count = avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1);
+	if ((count != 1) || (resp_addr != avp->host_sync_addr)) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid response from host, "
+			    "count=%u resp=%p host_sync_addr=%p\n",
+			    count, resp_addr, avp->host_sync_addr);
+		ret = -ENODATA;
+		goto done;
+	}
+
+	/* copy to user buffer */
+	rte_memcpy(request, avp->sync_addr, sizeof(*request));
+	ret = 0;
+
+	PMD_DRV_LOG(DEBUG, "Result %d received for request %u\n",
+		    request->result, request->req_id);
+
+done:
+	return ret;
+}
+
+static int
+avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
+			struct rte_avp_device_config *config)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a configure request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_DEVICE;
+	memcpy(&request.config, config, sizeof(request.config));
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
+avp_dev_ctrl_shutdown(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a shutdown request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_SHUTDOWN_DEVICE;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+/* translate from host mbuf virtual address to guest virtual address */
+static inline void *
+avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
+{
+	return RTE_PTR_ADD(RTE_PTR_SUB(host_mbuf_address,
+				       (uintptr_t)avp->host_mbuf_addr),
+			   (uintptr_t)avp->mbuf_addr);
+}
+
+/* translate from host physical address to guest virtual address */
+static void *
+avp_dev_translate_address(struct rte_eth_dev *eth_dev,
+			  phys_addr_t host_phys_addr)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_mem_resource *resource;
+	struct rte_avp_memmap_info *info;
+	struct rte_avp_memmap *map;
+	off_t offset;
+	void *addr;
+	unsigned i;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_MEMORY_BAR].addr;
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_MEMMAP_BAR];
+	info = (struct rte_avp_memmap_info *)resource->addr;
+
+	offset = 0;
+	for (i = 0; i < info->nb_maps; i++) {
+		/* search all segments looking for a matching address */
+		map = &info->maps[i];
+
+		if ((host_phys_addr >= map->phys_addr) &&
+			(host_phys_addr < (map->phys_addr + map->length))) {
+			/* address is within this segment */
+			offset += (host_phys_addr - map->phys_addr);
+			addr = RTE_PTR_ADD(addr, offset);
+
+			PMD_DRV_LOG(DEBUG,
+				    "Translating host physical 0x%"PRIx64" "
+				    "to guest virtual 0x%p\n",
+				    host_phys_addr, addr);
+
+			return addr;
+		}
+		offset += map->length;
+	}
+
+	return NULL;
+}
+
+/* verify that the incoming device version is compatible with our version */
+static int
+avp_dev_version_check(uint32_t version)
+{
+	uint32_t driver =
+		RTE_AVP_STRIP_MINOR_VERSION(RTE_AVP_DPDK_DRIVER_VERSION);
+	uint32_t device = RTE_AVP_STRIP_MINOR_VERSION(version);
+
+	if (device <= driver) {
+		/* the incoming device version is less than or equal to our
+		 * own */
+		return 0;
+	}
+
+	return 1;
+}
+
+/* verify that memory regions have expected version and validation markers */
+static int
+avp_dev_check_regions(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_avp_memmap_info *memmap;
+	struct rte_avp_device_info *info;
+	struct rte_mem_resource *resource;
+	unsigned i;
+
+	/* Dump resource info for debug */
+	for (i = 0; i < PCI_MAX_RESOURCE; i++) {
+		resource = &pci_dev->mem_resource[i];
+		if ((resource->phys_addr == 0) || (resource->len == 0))
+			continue;
+
+		PMD_DRV_LOG(DEBUG,
+			    "resource[%u]: phys=0x%"PRIx64" "
+			    "len=%"PRIu64" addr=%p\n",
+			    i, resource->phys_addr,
+			    resource->len, resource->addr);
+
+		switch (i) {
+		case RTE_AVP_PCI_MEMMAP_BAR:
+			memmap = (struct rte_avp_memmap_info *)resource->addr;
+			if ((memmap->magic != RTE_AVP_MEMMAP_MAGIC) ||
+			    (memmap->version != RTE_AVP_MEMMAP_VERSION)) {
+				PMD_DRV_LOG(ERR,
+					    "Invalid memmap magic 0x%08x "
+					    "and version %u\n",
+					    memmap->magic, memmap->version);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_DEVICE_BAR:
+			info = (struct rte_avp_device_info *)resource->addr;
+			if ((info->magic != RTE_AVP_DEVICE_MAGIC) ||
+			    avp_dev_version_check(info->version)) {
+				PMD_DRV_LOG(ERR,
+					    "Invalid device info magic 0x%08x "
+					    "or version 0x%08x > 0x%08x\n",
+					    info->magic, info->version,
+					    RTE_AVP_DPDK_DRIVER_VERSION);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MEMORY_BAR:
+		case RTE_AVP_PCI_MMIO_BAR:
+			if (resource->addr == NULL) {
+				PMD_DRV_LOG(ERR,
+					    "Missing address space "
+					    "for BAR%u\n", i);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MSIX_BAR:
+		default:
+			/* no validation required */
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avp_dev_detach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Detaching port %u from AVP device 0x%"PRIx64"\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(NOTICE, "port %u already detached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* shutdown the device first so the host stops sending us packets. */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to send/recv shutdown to host, "
+			    "ret=%d\n", ret);
+		avp->flags &= ~RTE_AVP_F_DETACHED;
+		goto unlock;
+	}
+
+	avp->flags |= RTE_AVP_F_DETACHED;
+	rte_wmb();
+
+	/* wait for queues to acknowledge the presence of the detach flag */
+	rte_delay_ms(1);
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+_avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *rxq;
+	uint16_t queue_count;
+	uint16_t remainder;
+
+	rxq = (struct avp_queue *)eth_dev->data->rx_queues[rx_queue_id];
+
+	/*
+	 * Must map all AVP fifos as evenly as possible between the configured
+	 * device queues.  Each device queue will service a subset of the AVP
+	 * fifos. If the fifos do not divide evenly among the device queues,
+	 * the first queues will each service one extra fifo.
+	 */
+	queue_count = avp->num_rx_queues / eth_dev->data->nb_rx_queues;
+	remainder = avp->num_rx_queues % eth_dev->data->nb_rx_queues;
+	if (rx_queue_id < remainder) {
+		/* these queues must service one extra FIFO */
+		rxq->queue_base = rx_queue_id * (queue_count + 1);
+		rxq->queue_limit = rxq->queue_base + (queue_count + 1) - 1;
+	} else {
+		/* these queues service the regular number of FIFO */
+		rxq->queue_base = ((remainder * (queue_count + 1)) +
+				   ((rx_queue_id - remainder) * queue_count));
+		rxq->queue_limit = rxq->queue_base + queue_count - 1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "rxq %u at %p base %u limit %u\n",
+		    rx_queue_id, rxq, rxq->queue_base, rxq->queue_limit);
+
+	rxq->queue_id = rxq->queue_base;
+}
+
+static void
+_avp_set_queue_counts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	void *addr;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/*
+	 * the transmit direction is not negotiated beyond respecting the max
+	 * number of queues because the host can handle arbitrary guest tx
+	 * queues (host rx queues).
+	 */
+	avp->num_tx_queues = eth_dev->data->nb_tx_queues;
+
+	/*
+	 * the receive direction is more restrictive.  The host requires a
+	 * minimum number of guest rx queues (host tx queues) therefore
+	 * negotiate a value that is at least as large as the host minimum
+	 * requirement.  If the host and guest values are not identical then a
+	 * mapping will be established in the receive_queue_setup function.
+	 */
+	avp->num_rx_queues = RTE_MAX(host_info->min_rx_queues,
+				     eth_dev->data->nb_rx_queues);
+
+	PMD_DRV_LOG(DEBUG, "Requesting %u Tx and %u Rx queues from host\n",
+		    avp->num_tx_queues, avp->num_rx_queues);
+}
+
+static int
+avp_dev_attach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_config config;
+	unsigned i;
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Attaching port %u to AVP device 0x%"PRIx64"\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (!(avp->flags & RTE_AVP_F_DETACHED)) {
+		PMD_DRV_LOG(NOTICE, "port %u already attached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/*
+	 * make sure that the detached flag is set prior to reconfiguring the
+	 * queues.
+	 */
+	avp->flags |= RTE_AVP_F_DETACHED;
+	rte_wmb();
+
+	/*
+	 * re-run the device create utility which will parse the new host info
+	 * and setup the AVP device queue pointers.
+	 */
+	ret = avp_dev_create(AVP_DEV_TO_PCI(eth_dev), eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to re-create AVP device, ret=%d\n", ret);
+		goto unlock;
+	}
+
+	if (avp->flags & RTE_AVP_F_CONFIGURED) {
+		/*
+		 * Update the receive queue mapping to handle cases where the
+		 * source and destination hosts have different queue
+		 * requirements.  As long as the DETACHED flag is asserted the
+		 * queue table should not be referenced so it should be safe to
+		 * update it.
+		 */
+		_avp_set_queue_counts(eth_dev);
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+			_avp_set_rx_queue_mappings(eth_dev, i);
+
+		/*
+		 * Update the host with our config details so that it knows the
+		 * device is active.
+		 */
+		memset(&config, 0, sizeof(config));
+		config.device_id = avp->device_id;
+		config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+		config.driver_version = RTE_AVP_DPDK_DRIVER_VERSION;
+		config.features = avp->features;
+		config.num_tx_queues = avp->num_tx_queues;
+		config.num_rx_queues = avp->num_rx_queues;
+		config.if_up = !!(avp->flags & RTE_AVP_F_LINKUP);
+
+		ret = avp_dev_ctrl_set_config(eth_dev, &config);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Config request failed by host, "
+				    "ret=%d\n", ret);
+			goto unlock;
+		}
+	}
+
+	rte_wmb();
+	avp->flags &= ~RTE_AVP_F_DETACHED;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_interrupt_handler(struct rte_intr_handle *intr_handle,
+						  void *data)
+{
+	struct rte_eth_dev *eth_dev = data;
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t status, value;
+	int ret;
+
+	if (registers == NULL)
+		rte_panic("no mapped MMIO register space\n");
+
+	/* read the interrupt status register
+	 * note: this register clears on read so all raised interrupts must be
+	 *    handled or remembered for later processing
+	 */
+	status = RTE_AVP_READ32(
+		RTE_PTR_ADD(registers,
+			    RTE_AVP_INTERRUPT_STATUS_OFFSET));
+
+	if (status & RTE_AVP_MIGRATION_INTERRUPT_MASK) {
+		/* handle interrupt based on current status */
+		value = RTE_AVP_READ32(
+			RTE_PTR_ADD(registers,
+				    RTE_AVP_MIGRATION_STATUS_OFFSET));
+		switch (value) {
+		case RTE_AVP_MIGRATION_DETACHED:
+			ret = avp_dev_detach(eth_dev);
+			break;
+		case RTE_AVP_MIGRATION_ATTACHED:
+			ret = avp_dev_attach(eth_dev);
+			break;
+		default:
+			PMD_DRV_LOG(ERR,
+				    "unexpected migration status, status=%u\n",
+				    value);
+			ret = -EINVAL;
+		}
+
+		/* acknowledge the request by writing out our current status */
+		value = (ret == 0 ? value : RTE_AVP_MIGRATION_ERROR);
+		RTE_AVP_WRITE32(value,
+				RTE_PTR_ADD(registers,
+					    RTE_AVP_MIGRATION_ACK_OFFSET));
+
+		PMD_DRV_LOG(NOTICE, "AVP migration interrupt handled\n");
+	}
+
+	if (status & ~RTE_AVP_MIGRATION_INTERRUPT_MASK)
+		PMD_DRV_LOG(WARNING,
+			    "AVP unexpected interrupt, status=0x%08x\n",
+			    status);
+
+	/* re-enable UIO interrupt handling */
+	ret = rte_intr_enable(intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to re-enable UIO interrupts, "
+			    "ret=%d\n", ret);
+		/* continue */
+	}
+}
+
+static int
+avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return -EINVAL;
+
+	/* enable UIO interrupt handling */
+	ret = rte_intr_enable(&(pci_dev->intr_handle));
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to enable UIO interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* inform the device that all interrupts are enabled */
+	RTE_AVP_WRITE32(RTE_AVP_APP_INTERRUPTS_MASK,
+			RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	return 0;
+}
+
+static int
+avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	int ret;
+
+	/* register a callback handler with UIO for interrupt notifications */
+	ret = rte_intr_callback_register(&(pci_dev->intr_handle),
+					 avp_dev_interrupt_handler,
+					 (void *)eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to register UIO interrupt callback, "
+			    "ret=%d\n", ret);
+		return ret;
+	}
+
+	/* enable interrupt processing */
+	return avp_dev_enable_interrupts(eth_dev);
+}
+
+static int
+avp_dev_migration_pending(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t value;
+
+	if (registers == NULL)
+		return 0;
+
+	value = RTE_AVP_READ32(RTE_PTR_ADD(registers,
+					   RTE_AVP_MIGRATION_STATUS_OFFSET));
+	if (value == RTE_AVP_MIGRATION_DETACHED) {
+		/* migration is in progress; ack it if we have not already */
+		RTE_AVP_WRITE32(value,
+				RTE_PTR_ADD(registers,
+					    RTE_AVP_MIGRATION_ACK_OFFSET));
+		return 1;
+	}
+	return 0;
+}
+
+/*
+ * create an AVP device using the supplied device info by first translating it
+ * to guest address space(s).
+ */
+static int
+avp_dev_create(struct rte_pci_device *pci_dev,
+	       struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_mem_resource *resource;
+	unsigned i;
+
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR];
+	if (resource->addr == NULL) {
+		PMD_DRV_LOG(ERR, "BAR%u is not mapped\n",
+			    RTE_AVP_PCI_DEVICE_BAR);
+		return -EFAULT;
+	}
+	host_info = (struct rte_avp_device_info *)resource->addr;
+
+	if ((host_info->magic != RTE_AVP_DEVICE_MAGIC) ||
+		avp_dev_version_check(host_info->version)) {
+		PMD_DRV_LOG(ERR, "Invalid AVP PCI device, magic 0x%08x "
+			    "version 0x%08x > 0x%08x\n",
+			    host_info->magic, host_info->version,
+			    RTE_AVP_DPDK_DRIVER_VERSION);
+		return -EINVAL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host device is v%u.%u.%u\n",
+		    RTE_AVP_GET_RELEASE_VERSION(host_info->version),
+		    RTE_AVP_GET_MAJOR_VERSION(host_info->version),
+		    RTE_AVP_GET_MINOR_VERSION(host_info->version));
+
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u TX queue(s)\n",
+		    host_info->min_tx_queues, host_info->max_tx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u RX queue(s)\n",
+		    host_info->min_rx_queues, host_info->max_rx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports features 0x%08x\n",
+		    host_info->features);
+
+	if (avp->magic != RTE_AVP_ETHDEV_MAGIC) {
+		/*
+		 * First time initialization (i.e., not during a VM
+		 * migration)
+		 */
+		memset(avp, 0, sizeof(*avp));
+		avp->magic = RTE_AVP_ETHDEV_MAGIC;
+		avp->dev_data = eth_dev->data;
+		avp->port_id = eth_dev->data->port_id;
+		avp->host_mbuf_size = host_info->mbuf_size;
+		avp->host_features = host_info->features;
+		rte_spinlock_init(&avp->lock);
+		memcpy(&avp->ethaddr.addr_bytes[0],
+		       host_info->ethaddr, ETHER_ADDR_LEN);
+		/* adjust max values to not exceed our max */
+		avp->max_tx_queues =
+			RTE_MIN(host_info->max_tx_queues, RTE_AVP_MAX_QUEUES);
+		avp->max_rx_queues =
+			RTE_MIN(host_info->max_rx_queues, RTE_AVP_MAX_QUEUES);
+	} else {
+		/* Re-attaching during migration */
+
+		/* TODO... requires validation of host values */
+		if ((host_info->features & avp->features) != avp->features) {
+			PMD_DRV_LOG(ERR, "AVP host features mismatched; "
+				    "0x%08x, host=0x%08x\n",
+				    avp->features, host_info->features);
+			/* this should not be possible; continue for now */
+		}
+	}
+
+	/* the device id is allowed to change over migrations */
+	avp->device_id = host_info->device_id;
+
+	/* translate incoming host addresses to guest address space */
+	PMD_DRV_LOG(DEBUG, "AVP first host tx queue at 0x%"PRIx64"\n",
+		    host_info->tx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host alloc queue at 0x%"PRIx64"\n",
+		    host_info->alloc_phys);
+	for (i = 0; i < avp->max_tx_queues; i++) {
+		avp->tx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->tx_phys + (i * host_info->tx_size));
+
+		avp->alloc_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->alloc_phys + (i * host_info->alloc_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP first host rx queue at 0x%"PRIx64"\n",
+		    host_info->rx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host free queue at 0x%"PRIx64"\n",
+		    host_info->free_phys);
+	for (i = 0; i < avp->max_rx_queues; i++) {
+		avp->rx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->rx_phys + (i * host_info->rx_size));
+		avp->free_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->free_phys + (i * host_info->free_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host request queue at 0x%"PRIx64"\n",
+		    host_info->req_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host response queue at 0x%"PRIx64"\n",
+		    host_info->resp_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host sync address at 0x%"PRIx64"\n",
+		    host_info->sync_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host mbuf address at 0x%"PRIx64"\n",
+		    host_info->mbuf_phys);
+	avp->req_q = avp_dev_translate_address(eth_dev, host_info->req_phys);
+	avp->resp_q = avp_dev_translate_address(eth_dev, host_info->resp_phys);
+	avp->sync_addr =
+		avp_dev_translate_address(eth_dev, host_info->sync_phys);
+	avp->mbuf_addr =
+		avp_dev_translate_address(eth_dev, host_info->mbuf_phys);
+
+	/*
+	 * store the host mbuf virtual address so that we can calculate
+	 * relative offsets for each mbuf as they are processed
+	 */
+	avp->host_mbuf_addr = host_info->mbuf_va;
+	avp->host_sync_addr = host_info->sync_va;
+
+	/*
+	 * store the maximum packet length that is supported by the host.
+	 */
+	avp->max_rx_pkt_len = host_info->max_rx_pkt_len;
+	PMD_DRV_LOG(DEBUG, "AVP host max receive packet length is %u\n",
+				host_info->max_rx_pkt_len);
+
+	return 0;
+}
+
+/*
  * This function is based on probe() function in avp_pci.c
  * It returns 0 on success.
  */
@@ -164,6 +904,7 @@ struct avp_adapter {
 	struct avp_dev *avp =
 		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_pci_device *pci_dev;
+	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 
@@ -181,6 +922,34 @@ struct avp_adapter {
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check current migration status */
+	if (avp_dev_migration_pending(eth_dev)) {
+		PMD_DRV_LOG(ERR, "VM live migration operation in progress\n");
+		return -EBUSY;
+	}
+
+	/* Check BAR resources */
+	ret = avp_dev_check_regions(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to validate BAR resources, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* Enable interrupts */
+	ret = avp_dev_setup_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* Handle each subtype */
+	ret = avp_dev_create(pci_dev, eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to create device, ret=%d\n", ret);
+		return ret;
+	}
+
 	/* Allocate memory for storing MAC addresses */
 	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
@@ -191,6 +960,7 @@ struct avp_adapter {
 		return -ENOMEM;
 	}
 
+	/* Get a mac from device config */
 	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
 
 	return 0;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH 09/16] net/avp: device configuration
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (7 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 08/16] net/avp: device initialization Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 10/16] net/avp: queue setup and release Allain Legacy
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for "dev_configure" operations to allow an application to
configure the device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 136 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 136 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index b48895d..13ec9e2 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -66,6 +66,12 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
 
+static int avp_dev_configure(struct rte_eth_dev *dev);
+static void avp_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
+static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avp_dev_link_update(struct rte_eth_dev *dev,
+			       __rte_unused int wait_to_complete);
 
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
@@ -104,6 +110,15 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 	},
 };
 
+/*
+ * dev_ops for avp, bare necessities for basic operation
+ */
+static const struct eth_dev_ops avp_eth_dev_ops = {
+	.dev_configure       = avp_dev_configure,
+	.dev_infos_get       = avp_dev_info_get,
+	.vlan_offload_set    = avp_vlan_offload_set,
+	.link_update         = avp_dev_link_update,
+};
 
 /**@{ AVP device flags */
 #define RTE_AVP_F_PROMISC (1<<1)
@@ -907,6 +922,7 @@ struct avp_queue {
 	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	eth_dev->dev_ops = &avp_eth_dev_ops;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -997,6 +1013,126 @@ struct avp_queue {
 };
 
 
+static int
+avp_dev_configure(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_avp_device_config config;
+	int mask = 0;
+	void *addr;
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR,
+			    "Operation not supported during "
+			    "VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/* Setup required number of queues */
+	_avp_set_queue_counts(eth_dev);
+
+	mask = (ETH_VLAN_STRIP_MASK |
+		ETH_VLAN_FILTER_MASK |
+		ETH_VLAN_EXTEND_MASK);
+	avp_vlan_offload_set(eth_dev, mask);
+
+	/* update device config */
+	memset(&config, 0, sizeof(config));
+	config.device_id = host_info->device_id;
+	config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+	config.driver_version = RTE_AVP_DPDK_DRIVER_VERSION;
+	config.features = avp->features;
+	config.num_tx_queues = avp->num_tx_queues;
+	config.num_rx_queues = avp->num_rx_queues;
+
+	ret = avp_dev_ctrl_set_config(eth_dev, &config);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Config request failed by host, ret=%d\n", ret);
+		goto unlock;
+	}
+
+	avp->flags |= RTE_AVP_F_CONFIGURED;
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+
+static int
+avp_dev_link_update(struct rte_eth_dev *eth_dev,
+					__rte_unused int wait_to_complete)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_eth_link *link = &eth_dev->data->dev_link;
+
+	link->link_speed = ETH_SPEED_NUM_10G;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = !!(avp->flags & RTE_AVP_F_LINKUP);
+
+	return -1;
+}
+
+
+static void
+avp_dev_info_get(struct rte_eth_dev *eth_dev,
+		 struct rte_eth_dev_info *dev_info)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	dev_info->driver_name = "rte_avp_pmd";
+	dev_info->max_rx_queues = avp->max_rx_queues;
+	dev_info->max_tx_queues = avp->max_tx_queues;
+	dev_info->min_rx_bufsize = RTE_AVP_MIN_RX_BUFSIZE;
+	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
+	dev_info->max_mac_addrs = RTE_AVP_MAX_MAC_ADDRS;
+	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+	}
+}
+
+static void
+avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+			if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
+			else
+				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
+		} else {
+			PMD_DRV_LOG(ERR, "VLAN strip offload not supported\n");
+		}
+	}
+
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_filter)
+			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_extend)
+			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
+	}
+}
+
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1


* [PATCH 10/16] net/avp: queue setup and release
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (8 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 09/16] net/avp: device configuration Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 11/16] net/avp: packet receive functions Allain Legacy
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds queue management operations so that an application can set up and
release the transmit and receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 148 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 147 insertions(+), 1 deletion(-)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 13ec9e2..a4b6b42 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -72,7 +72,21 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
-
+static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t rx_queue_id,
+				  uint16_t nb_rx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_rxconf *rx_conf,
+				  struct rte_mempool *pool);
+
+static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t tx_queue_id,
+				  uint16_t nb_tx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_txconf *tx_conf);
+
+static void avp_dev_rx_queue_release(void *rxq);
+static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -118,6 +132,10 @@ static int avp_dev_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.link_update         = avp_dev_link_update,
+	.rx_queue_setup      = avp_dev_rx_queue_setup,
+	.rx_queue_release    = avp_dev_rx_queue_release,
+	.tx_queue_setup      = avp_dev_tx_queue_setup,
+	.tx_queue_release    = avp_dev_tx_queue_release,
 };
 
 /**@{ AVP device flags */
@@ -1014,6 +1032,134 @@ struct avp_queue {
 
 
 static int
+avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t rx_queue_id,
+		       uint16_t nb_rx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *pool)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct avp_queue *rxq;
+
+	if (rx_queue_id >= eth_dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue id is out of range: "
+			    "rx_queue_id=%u, nb_rx_queues=%u\n",
+			    rx_queue_id, eth_dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	/* Save mbuf pool pointer */
+	avp->pool = pool;
+
+	/* Save the local mbuf size */
+	mbp_priv = rte_mempool_get_priv(pool);
+	avp->guest_mbuf_size = (uint16_t) (mbp_priv->mbuf_data_room_size);
+	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
+
+	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
+		    avp->max_rx_pkt_len,
+		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+		    avp->host_mbuf_size,
+		    avp->guest_mbuf_size);
+
+	/* allocate a queue object */
+	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Rx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* save back pointers to AVP and Ethernet devices */
+	rxq->avp = avp;
+	rxq->dev_data = eth_dev->data;
+	eth_dev->data->rx_queues[rx_queue_id] = (void *)rxq;
+
+	/* setup the queue receive mapping for the current queue. */
+	_avp_set_rx_queue_mappings(eth_dev, rx_queue_id);
+
+	PMD_DRV_LOG(DEBUG, "Rx queue %u setup at %p\n", rx_queue_id, rxq);
+
+	(void)nb_rx_desc;
+	(void)rx_conf;
+	return 0;
+}
+
+static int
+avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t tx_queue_id,
+		       uint16_t nb_tx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *txq;
+
+	if (tx_queue_id >= eth_dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue id is out of range: "
+			    "tx_queue_id=%u, nb_tx_queues=%u\n",
+			    tx_queue_id, eth_dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	/* allocate a queue object */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Tx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* only the configured set of transmit queues are used */
+	txq->queue_id = tx_queue_id;
+	txq->queue_base = tx_queue_id;
+	txq->queue_limit = tx_queue_id;
+
+	/* save back pointers to AVP and Ethernet devices */
+	txq->avp = avp;
+	txq->dev_data = eth_dev->data;
+	eth_dev->data->tx_queues[tx_queue_id] = (void *)txq;
+
+	PMD_DRV_LOG(DEBUG, "Tx queue %u setup at %p\n", tx_queue_id, txq);
+
+	(void)nb_tx_desc;
+	(void)tx_conf;
+	return 0;
+}
+
+static void
+avp_dev_rx_queue_release(void *rx_queue)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct avp_dev *avp = rxq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		if (data->rx_queues[i] == rxq)
+			data->rx_queues[i] = NULL;
+	}
+}
+
+static void
+avp_dev_tx_queue_release(void *tx_queue)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct avp_dev *avp = txq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned i;
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		if (data->tx_queues[i] == txq)
+			data->tx_queues[i] = NULL;
+	}
+}
+
+static int
 avp_dev_configure(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
-- 
1.8.3.1


* [PATCH 11/16] net/avp: packet receive functions
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (9 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 10/16] net/avp: queue setup and release Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 12/16] net/avp: packet transmit functions Allain Legacy
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds the functions required for receiving packets from the host application via
AVP device queues.  Both the simple and scattered functions are supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/Makefile     |   1 +
 drivers/net/avp/avp_ethdev.c | 469 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 470 insertions(+)

diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 9cf0449..3013cd1 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -56,5 +56,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_mempool lib/librte_mbuf
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index a4b6b42..36bb9c0 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -85,11 +85,19 @@ static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
 				  unsigned int socket_id,
 				  const struct rte_eth_txconf *tx_conf);
 
+static uint16_t avp_recv_scattered_pkts(void *rx_queue,
+					struct rte_mbuf **rx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_recv_pkts(void *rx_queue,
+			      struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts);
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
+#define RTE_AVP_MAX_RX_BURST 64
 #define RTE_AVP_MAX_MAC_ADDRS 1
 #define RTE_AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -195,6 +203,18 @@ struct avp_adapter {
 	struct avp_dev avp;
 } __rte_cache_aligned;
 
+/**@{ AVP device statistics */
+#ifdef RTE_LIBRTE_AVP_STATS
+#define RTE_AVP_STATS_INC(queue, name) \
+	((queue)->name++)
+#define RTE_AVP_STATS_ADD(queue, name, value) \
+	((queue)->name += (value))
+#else
+#define RTE_AVP_STATS_INC(queue, name) do {} while (0)
+#define RTE_AVP_STATS_ADD(queue, name, value) do {} while (0)
+#endif
+/**@} */
+
 
 /* 32-bit MMIO register write */
 #define RTE_AVP_WRITE32(_value, _addr) ((*(uint32_t *)_addr) = (_value))
@@ -941,6 +961,7 @@ struct avp_queue {
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avp_recv_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -949,6 +970,12 @@ struct avp_queue {
 		 * be mapped to the same virtual address so all pointers should
 		 * be valid.
 		 */
+		if (eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "AVP device configured "
+				    "for chained mbufs\n");
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
 		return 0;
 	}
 
@@ -1032,6 +1059,36 @@ struct avp_queue {
 
 
 static int
+avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
+			 struct avp_dev *avp)
+{
+	unsigned max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+
+	if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the guest MTU is greater than either the host or guest
+		 * mbuf size then chained mbufs have to be enabled in the TX
+		 * direction.  It is assumed that the application will not need
+		 * to send packets larger than their max_rx_pkt_len (MRU).
+		 */
+		return 1;
+	}
+
+	if ((avp->max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (avp->max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the host MRU is greater than its own mbuf size or the
+		 * guest mbuf size then chained mbufs have to be enabled in the
+		 * RX direction.
+		 */
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 		       uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc,
@@ -1059,6 +1116,16 @@ struct avp_queue {
 	avp->guest_mbuf_size = (uint16_t) (mbp_priv->mbuf_data_room_size);
 	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
 
+	if (avp_dev_enable_scattered(eth_dev, avp)) {
+		if (!eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "AVP device configured "
+				    "for chained mbufs\n");
+			eth_dev->data->scattered_rx = 1;
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
+	}
+
 	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
 		    avp->max_rx_pkt_len,
 		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
@@ -1131,6 +1198,408 @@ struct avp_queue {
 	return 0;
 }
 
+static inline int
+_avp_cmp_ether_addr(struct ether_addr *a, struct ether_addr *b)
+{
+	uint16_t *_a = (uint16_t *)&a->addr_bytes[0];
+	uint16_t *_b = (uint16_t *)&b->addr_bytes[0];
+	return (_a[0] ^ _b[0]) | (_a[1] ^ _b[1]) | (_a[2] ^ _b[2]);
+}
+
+static inline int
+_avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
+{
+	struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+		/* allow all packets destined to our address */
+		return 0;
+	}
+
+	if (likely(is_broadcast_ether_addr(&eth->d_addr))) {
+		/* allow all broadcast packets */
+		return 0;
+	}
+
+	if (likely(is_multicast_ether_addr(&eth->d_addr))) {
+		/* allow all multicast packets */
+		return 0;
+	}
+
+	if (avp->flags & RTE_AVP_F_PROMISC) {
+		/* allow all packets when in promiscuous mode */
+		return 0;
+	}
+
+	return -1;
+}
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+static inline void
+__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+{
+	struct rte_avp_desc *first_buf;
+	struct rte_avp_desc *pkt_buf;
+	unsigned pkt_len;
+	unsigned nb_segs;
+	void *pkt_data;
+	unsigned i;
+
+	first_buf = avp_dev_translate_buffer(avp, buf);
+
+	i = 0;
+	pkt_len = 0;
+	nb_segs = first_buf->nb_segs;
+	do {
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		if (pkt_buf == NULL) {
+			rte_panic("bad buffer: segment %u has an "
+				  "invalid address %p\n", i, buf);
+		}
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		if (pkt_data == NULL)
+			rte_panic("bad buffer: segment %u has a "
+				  "NULL data pointer\n", i);
+		if (pkt_buf->data_len == 0)
+			rte_panic("bad buffer: segment %u has "
+				  "0 data length\n", i);
+		pkt_len += pkt_buf->data_len;
+		nb_segs--;
+		i++;
+
+	} while (nb_segs && (buf = pkt_buf->next) != NULL);
+
+	if (nb_segs != 0) {
+		rte_panic("bad buffer: expected %u segments found %u\n",
+			  first_buf->nb_segs, (first_buf->nb_segs - nb_segs));
+	}
+	if (pkt_len != first_buf->pkt_len) {
+		rte_panic("bad buffer: expected length %u found %u\n",
+			  first_buf->pkt_len, pkt_len);
+	}
+}
+
+#define avp_dev_buffer_sanity_check(a, b) \
+	__avp_dev_buffer_sanity_check((a), (b))
+
+#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
+
+#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+
+#endif
+
+/*
+ * Copy a host buffer chain to a set of mbufs.  This function assumes that
+ * there are exactly enough mbufs available to copy all of the source bytes.
+ */
+static inline struct rte_mbuf *
+avp_dev_copy_from_buffers(struct avp_dev *avp,
+			  struct rte_avp_desc *buf,
+			  struct rte_mbuf **mbufs,
+			  unsigned count)
+{
+	struct rte_mbuf *m_previous = NULL;
+	struct rte_avp_desc *pkt_buf;
+	unsigned total_length = 0;
+	unsigned copy_length;
+	unsigned src_offset;
+	struct rte_mbuf *m;
+	uint16_t ol_flags;
+	uint16_t vlan_tci;
+	void *pkt_data;
+	unsigned i;
+
+	avp_dev_buffer_sanity_check(avp, buf);
+
+	/* setup the first source buffer */
+	pkt_buf = avp_dev_translate_buffer(avp, buf);
+	pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+	total_length = pkt_buf->pkt_len;
+	src_offset = 0;
+
+	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+		ol_flags = PKT_RX_VLAN_PKT;
+		vlan_tci = pkt_buf->vlan_tci;
+	} else {
+		ol_flags = 0;
+		vlan_tci = 0;
+	}
+
+	for (i = 0; (i < count) && (buf != NULL); i++) {
+		/* fill each destination buffer */
+		m = mbufs[i];
+
+		if (m_previous != NULL)
+			m_previous->next = m;
+
+		m_previous = m;
+
+		do {
+			/*
+			 * Copy as many source buffers as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->guest_mbuf_size -
+					       rte_pktmbuf_data_len(m)),
+					      (pkt_buf->data_len -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       rte_pktmbuf_data_len(m)),
+				   RTE_PTR_ADD(pkt_data, src_offset),
+				   copy_length);
+			rte_pktmbuf_data_len(m) += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == pkt_buf->data_len)) {
+				/* need a new source buffer */
+				buf = pkt_buf->next;
+				if (buf != NULL) {
+					pkt_buf = avp_dev_translate_buffer(
+						avp, buf);
+					pkt_data = avp_dev_translate_buffer(
+						avp, pkt_buf->data);
+					src_offset = 0;
+				}
+			}
+
+			if (unlikely(rte_pktmbuf_data_len(m) ==
+				     avp->guest_mbuf_size)) {
+				/* need a new destination mbuf */
+				break;
+			}
+
+		} while (buf != NULL);
+	}
+
+	m = mbufs[0];
+	m->ol_flags = ol_flags;
+	m->nb_segs = count;
+	rte_pktmbuf_pkt_len(m) = total_length;
+	m->vlan_tci = vlan_tci;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	return m;
+}
+
+static uint16_t
+avp_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[RTE_AVP_MAX_RX_BURST];
+	struct rte_mbuf *mbufs[RTE_AVP_MAX_MBUF_SEGMENTS];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	struct rte_avp_desc *buf;
+	unsigned count, avail, n;
+	unsigned guest_mbuf_size;
+	struct rte_mbuf *m;
+	unsigned required;
+	unsigned buf_len;
+	unsigned port_id;
+	unsigned i;
+
+	if (unlikely(avp->flags & RTE_AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
+	guest_mbuf_size = avp->guest_mbuf_size;
+	port_id = avp->port_id;
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned)RTE_AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+
+		/* prefetch next entry while processing current one */
+		if (i+1 < n) {
+			pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i+1]);
+			rte_prefetch0(pkt_buf);
+		}
+		buf = avp_bufs[i];
+
+		/* Peek into the first buffer to determine the total length */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		buf_len = pkt_buf->pkt_len;
+
+		/* Allocate enough mbufs to receive the entire packet */
+		required = (buf_len + guest_mbuf_size - 1) / guest_mbuf_size;
+		if (rte_pktmbuf_alloc_bulk(avp->pool, mbufs, required)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* Copy the data from the buffers to our mbufs */
+		m = avp_dev_copy_from_buffers(avp, buf, mbufs, required);
+
+		/* finalize mbuf */
+		m->port = port_id;
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		RTE_AVP_STATS_ADD(rxq, bytes, buf_len);
+	}
+
+	RTE_AVP_STATS_ADD(rxq, packets, count);
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
+
+static uint16_t
+avp_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[RTE_AVP_MAX_RX_BURST];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	unsigned count, avail, n;
+	struct rte_mbuf *m;
+	unsigned pkt_len;
+	char *pkt_data;
+	unsigned i;
+
+	if (unlikely(avp->flags & RTE_AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned)RTE_AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+
+		/* prefetch next entry while processing current one */
+		if (i < n-1) {
+			pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i+1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust host pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = pkt_buf->pkt_len;
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+			     (pkt_buf->nb_segs > 1))) {
+			/*
+			 * application should be using the scattered receive
+			 * function
+			 */
+			RTE_AVP_STATS_INC(rxq, errors);
+			continue;
+		}
+
+		/* allocate a new mbuf for the received packet */
+		m = rte_pktmbuf_alloc(avp->pool);
+		if (unlikely(m == NULL)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* copy data out of the host buffer to our buffer */
+		m->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_memcpy(rte_pktmbuf_mtod(m, void *), pkt_data, pkt_len);
+
+		/* initialize the local mbuf */
+		rte_pktmbuf_data_len(m) = pkt_len;
+		rte_pktmbuf_pkt_len(m) = pkt_len;
+		m->port = avp->port_id;
+
+		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+			m->ol_flags = PKT_RX_VLAN_PKT;
+			m->vlan_tci = pkt_buf->vlan_tci;
+		}
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		RTE_AVP_STATS_ADD(rxq, bytes, pkt_len);
+	}
+
+	RTE_AVP_STATS_ADD(rxq, packets, count);
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH 12/16] net/avp: packet transmit functions
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (10 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 11/16] net/avp: packet receive functions Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 13/16] net/avp: device statistics operations Allain Legacy
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for packet transmit functions so that an application can send
packets to the host application via an AVP device queue.  Both the simple
and scattered transmit functions are supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 347 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 347 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 36bb9c0..ffb6eeb 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -92,12 +92,22 @@ static uint16_t avp_recv_scattered_pkts(void *rx_queue,
 static uint16_t avp_recv_pkts(void *rx_queue,
 			      struct rte_mbuf **rx_pkts,
 			      uint16_t nb_pkts);
+
+static uint16_t avp_xmit_scattered_pkts(void *tx_queue,
+					struct rte_mbuf **tx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_xmit_pkts(void *tx_queue,
+			      struct rte_mbuf **tx_pkts,
+			      uint16_t nb_pkts);
+
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
 #define RTE_AVP_MAX_RX_BURST 64
+#define RTE_AVP_MAX_TX_BURST 64
 #define RTE_AVP_MAX_MAC_ADDRS 1
 #define RTE_AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -962,6 +972,7 @@ struct avp_queue {
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &avp_recv_pkts;
+	eth_dev->tx_pkt_burst = &avp_xmit_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -975,6 +986,7 @@ struct avp_queue {
 				    "AVP device configured "
 				    "for chained mbufs\n");
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 		return 0;
 	}
@@ -1123,6 +1135,7 @@ struct avp_queue {
 				    "for chained mbufs\n");
 			eth_dev->data->scattered_rx = 1;
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 	}
 
@@ -1600,6 +1613,340 @@ struct avp_queue {
 	return count;
 }
 
+/*
+ * Copy a chained mbuf to a set of host buffers.  This function assumes that
+ * there are sufficient destination buffers to contain the entire source
+ * packet.
+ */
+static inline uint16_t
+avp_dev_copy_to_buffers(struct avp_dev *avp,
+			struct rte_mbuf *mbuf,
+			struct rte_avp_desc **buffers,
+			unsigned count)
+{
+	struct rte_avp_desc *previous_buf = NULL;
+	struct rte_avp_desc *first_buf = NULL;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_desc *buf;
+	size_t total_length;
+	struct rte_mbuf *m;
+	size_t copy_length;
+	size_t src_offset;
+	char *pkt_data;
+	unsigned i;
+
+	__rte_mbuf_sanity_check(mbuf, 1);
+
+	m = mbuf;
+	src_offset = 0;
+	total_length = rte_pktmbuf_pkt_len(m);
+	for (i = 0; (i < count) && (m != NULL); i++) {
+		/* fill each destination buffer */
+		buf = buffers[i];
+
+		if (i < count - 1) {
+			/* prefetch next entry while processing this one */
+			pkt_buf = avp_dev_translate_buffer(avp, buffers[i+1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+
+		/* setup the buffer chain */
+		if (previous_buf != NULL)
+			previous_buf->next = buf;
+		else
+			first_buf = pkt_buf;
+
+		previous_buf = pkt_buf;
+
+		do {
+			/*
+			 * copy as many source mbuf segments as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->host_mbuf_size -
+					       pkt_buf->data_len),
+					      (rte_pktmbuf_data_len(m) -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(pkt_data, pkt_buf->data_len),
+				   RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       src_offset),
+				   copy_length);
+			pkt_buf->data_len += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == rte_pktmbuf_data_len(m))) {
+				/* need a new source buffer */
+				m = m->next;
+				src_offset = 0;
+			}
+
+			if (unlikely(pkt_buf->data_len ==
+				     avp->host_mbuf_size)) {
+				/* need a new destination buffer */
+				break;
+			}
+
+		} while (m != NULL);
+	}
+
+	first_buf->nb_segs = count;
+	first_buf->pkt_len = total_length;
+
+	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+		first_buf->vlan_tci = mbuf->vlan_tci;
+	}
+
+	avp_dev_buffer_sanity_check(avp, buffers[0]);
+
+	return total_length;
+}
+
+
+static uint16_t
+avp_xmit_scattered_pkts(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_avp_desc *avp_bufs[(RTE_AVP_MAX_TX_BURST *
+				       RTE_AVP_MAX_MBUF_SEGMENTS)];
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *tx_bufs[RTE_AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned count, avail, n;
+	unsigned orig_nb_pkts;
+	struct rte_mbuf *m;
+	unsigned required;
+	unsigned segments;
+	unsigned tx_bytes;
+	unsigned i;
+
+	orig_nb_pkts = nb_pkts;
+	if (unlikely(avp->flags & RTE_AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop? */
+		RTE_AVP_STATS_ADD(txq, errors, nb_pkts);
+		return 0;
+	}
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > RTE_AVP_MAX_TX_BURST))
+		nb_pkts = RTE_AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+	if (unlikely(avail > (RTE_AVP_MAX_TX_BURST *
+			      RTE_AVP_MAX_MBUF_SEGMENTS)))
+		avail = RTE_AVP_MAX_TX_BURST * RTE_AVP_MAX_MBUF_SEGMENTS;
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	nb_pkts = RTE_MIN(count, nb_pkts);
+
+	/* determine how many packets will fit in the available buffers */
+	count = 0;
+	segments = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		if (likely(i < (unsigned)nb_pkts - 1)) {
+			/* prefetch next entry while processing this one */
+			rte_prefetch0(tx_pkts[i+1]);
+		}
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		if (unlikely((required == 0) ||
+			     (required > RTE_AVP_MAX_MBUF_SEGMENTS)))
+			break;
+		else if (unlikely(required + segments > avail))
+			break;
+		segments += required;
+		count++;
+	}
+	nb_pkts = count;
+
+	if (unlikely(nb_pkts == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		RTE_AVP_STATS_ADD(txq, errors, orig_nb_pkts);
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   nb_pkts, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, segments);
+	if (unlikely(n != segments)) {
+		PMD_TX_LOG(DEBUG, "Failed to allocate buffers "
+			   "n=%u, segments=%u, orig=%u\n",
+			   n, segments, orig_nb_pkts);
+		RTE_AVP_STATS_ADD(txq, errors, orig_nb_pkts);
+		return 0;
+	}
+
+	tx_bytes = 0;
+	count = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* determine how many buffers are required for this packet */
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		tx_bytes += avp_dev_copy_to_buffers(avp, m,
+						    &avp_bufs[count], required);
+		tx_bufs[i] = avp_bufs[count];
+		count += required;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	RTE_AVP_STATS_ADD(txq, packets, nb_pkts);
+	RTE_AVP_STATS_ADD(txq, bytes, tx_bytes);
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+	for (i = 0; i < nb_pkts; i++)
+		avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+#endif
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&tx_bufs[0], nb_pkts);
+	if (unlikely(n != orig_nb_pkts))
+		RTE_AVP_STATS_ADD(txq, errors, (orig_nb_pkts - n));
+
+	return n;
+}
+
+
+static uint16_t
+avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *avp_bufs[RTE_AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned count, avail, n;
+	struct rte_mbuf *m;
+	unsigned pkt_len;
+	unsigned tx_bytes;
+	char *pkt_data;
+	unsigned i;
+
+	if (unlikely(avp->flags & RTE_AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop?! */
+		RTE_AVP_STATS_INC(txq, errors);
+		return 0;
+	}
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > RTE_AVP_MAX_TX_BURST))
+		nb_pkts = RTE_AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+
+	if (unlikely(count == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		RTE_AVP_STATS_INC(txq, errors);
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   count, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, count);
+	if (unlikely(n != count)) {
+		RTE_AVP_STATS_INC(txq, errors);
+		return 0;
+	}
+
+	tx_bytes = 0;
+	for (i = 0; i < count; i++) {
+
+		/* prefetch next entry while processing the current one */
+		if (i < count-1) {
+			pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i+1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = rte_pktmbuf_pkt_len(m);
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+					 (pkt_len > avp->host_mbuf_size))) {
+			/*
+			 * application should be using the scattered transmit
+			 * function; send it truncated to avoid the performance
+			 * hit of having to manage returning the already
+			 * allocated buffer to the free list.  This should not
+			 * happen since the application should have set the
+			 * max_rx_pkt_len based on its MTU and it should be
+			 * policing its own packet sizes.
+			 */
+			RTE_AVP_STATS_INC(txq, errors);
+			pkt_len = RTE_MIN(avp->guest_mbuf_size,
+					  avp->host_mbuf_size);
+		}
+
+		/* copy data out of our mbuf and into the AVP buffer */
+		rte_memcpy(pkt_data, rte_pktmbuf_mtod(m, void *), pkt_len);
+		pkt_buf->pkt_len = pkt_len;
+		pkt_buf->data_len = pkt_len;
+		pkt_buf->nb_segs = 1;
+		pkt_buf->next = NULL;
+
+		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+			pkt_buf->vlan_tci = m->vlan_tci;
+		}
+
+		tx_bytes += pkt_len;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	RTE_AVP_STATS_ADD(txq, packets, count);
+	RTE_AVP_STATS_ADD(txq, bytes, tx_bytes);
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&avp_bufs[0], count);
+
+	return n;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1


* [PATCH 13/16] net/avp: device statistics operations
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (11 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 12/16] net/avp: packet transmit functions Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 14/16] net/avp: device promiscuous functions Allain Legacy
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for device statistics get and reset operations so that an
application can query and clear packet counters on an AVP device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base           |  1 +
 drivers/net/avp/avp_ethdev.c | 81 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+)

diff --git a/config/common_base b/config/common_base
index fe8363d..02dac41 100644
--- a/config/common_base
+++ b/config/common_base
@@ -355,6 +355,7 @@ CONFIG_RTE_LIBRTE_AVP_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_AVP_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_AVP_DEBUG_DRIVER=y
 CONFIG_RTE_LIBRTE_AVP_DEBUG_BUFFERS=n
+CONFIG_RTE_LIBRTE_AVP_STATS=y
 
 #
 # Compile the TAP PMD
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index ffb6eeb..61861fc 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -103,6 +103,12 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
+
+static void avp_dev_stats_get(struct rte_eth_dev *dev,
+			      struct rte_eth_stats *stats);
+static void avp_dev_stats_reset(struct rte_eth_dev *dev);
+
+
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -149,6 +155,8 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 	.dev_configure       = avp_dev_configure,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
+	.stats_get           = avp_dev_stats_get,
+	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
@@ -2095,6 +2103,79 @@ struct avp_queue {
 	}
 }
 
+static void
+avp_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+#ifdef RTE_LIBRTE_AVP_STATS
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned i;
+
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			stats->ipackets += rxq->packets;
+			stats->ibytes += rxq->bytes;
+			stats->ierrors += rxq->errors;
+
+			stats->q_ipackets[i] += rxq->packets;
+			stats->q_ibytes[i] += rxq->bytes;
+			stats->q_errors[i] += rxq->errors;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			stats->opackets += txq->packets;
+			stats->obytes += txq->bytes;
+			stats->oerrors += txq->errors;
+
+			stats->q_opackets[i] += txq->packets;
+			stats->q_obytes[i] += txq->bytes;
+			stats->q_errors[i] += txq->errors;
+		}
+	}
+#else
+	(void)eth_dev;
+	(void)stats;
+#endif
+}
+
+static void
+avp_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+#ifdef RTE_LIBRTE_AVP_STATS
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			rxq->bytes = 0;
+			rxq->packets = 0;
+			rxq->errors = 0;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			txq->bytes = 0;
+			txq->packets = 0;
+			txq->errors = 0;
+		}
+	}
+#else
+	(void)eth_dev;
+#endif
+}
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1


* [PATCH 14/16] net/avp: device promiscuous functions
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (12 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 13/16] net/avp: device statistics operations Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 15/16] net/avp: device start and stop operations Allain Legacy
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for setting and clearing promiscuous mode on an AVP device.
When enabled, the _avp_mac_filter function allows packets destined to any
MAC address to be processed by the receive functions.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 61861fc..618deaf 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -72,6 +72,9 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
+static void avp_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avp_dev_promiscuous_disable(struct rte_eth_dev *dev);
+
 static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
 				  uint16_t rx_queue_id,
 				  uint16_t nb_rx_desc,
@@ -158,6 +161,8 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
 	.stats_get           = avp_dev_stats_get,
 	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
+	.promiscuous_enable  = avp_dev_promiscuous_enable,
+	.promiscuous_disable = avp_dev_promiscuous_disable,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
 	.tx_queue_setup      = avp_dev_tx_queue_setup,
@@ -2055,6 +2060,35 @@ struct avp_queue {
 	return -1;
 }
 
+static void
+avp_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	rte_spinlock_lock(&avp->lock);
+	if ((avp->flags & RTE_AVP_F_PROMISC) == 0) {
+		avp->flags |= RTE_AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+	rte_spinlock_unlock(&avp->lock);
+}
+
+static void
+avp_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	rte_spinlock_lock(&avp->lock);
+	if ((avp->flags & RTE_AVP_F_PROMISC) != 0) {
+		avp->flags &= ~RTE_AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+	rte_spinlock_unlock(&avp->lock);
+}
 
 static void
 avp_dev_info_get(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1


* [PATCH 15/16] net/avp: device start and stop operations
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (13 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 14/16] net/avp: device promiscuous functions Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-25  1:23 ` [PATCH 16/16] doc: adds information related to the AVP PMD Allain Legacy
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for device start and stop functions.  This allows an
application to control the administrative state of an AVP device.  Stopping
the device will notify the host application to stop sending packets on that
device's receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 157 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 157 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 618deaf..8006ed9 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -67,6 +67,9 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
 
 static int avp_dev_configure(struct rte_eth_dev *dev);
+static int avp_dev_start(struct rte_eth_dev *dev);
+static void avp_dev_stop(struct rte_eth_dev *dev);
+static void avp_dev_close(struct rte_eth_dev *dev);
 static void avp_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
@@ -156,6 +159,9 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
  */
 static const struct eth_dev_ops avp_eth_dev_ops = {
 	.dev_configure       = avp_dev_configure,
+	.dev_start           = avp_dev_start,
+	.dev_stop            = avp_dev_stop,
+	.dev_close           = avp_dev_close,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.stats_get           = avp_dev_stats_get,
@@ -343,6 +349,24 @@ struct avp_queue {
 }
 
 static int
+avp_dev_ctrl_set_link_state(struct rte_eth_dev *eth_dev, unsigned state)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a link state change request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_NETWORK_IF;
+	request.if_up = state;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
 avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
 			struct rte_avp_device_config *config)
 {
@@ -795,6 +819,31 @@ struct avp_queue {
 }
 
 static int
+avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return 0;
+
+	/* inform the device that all interrupts are disabled */
+	RTE_AVP_WRITE32(RTE_AVP_NO_INTERRUPTS_MASK,
+			RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	/* disable UIO interrupt handling */
+	ret = rte_intr_disable(&(pci_dev->intr_handle));
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to enable UIO interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
@@ -2044,6 +2093,114 @@ struct avp_queue {
 	return ret;
 }
 
+static int
+avp_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR,
+			    "Operation not supported during "
+			    "VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
+	/* disable features that we do not support */
+	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_extend = 0;
+	eth_dev->data->dev_conf.rxmode.hw_strip_crc = 0;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 1);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Link state change failed by host, ret=%d\n", ret);
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags |= RTE_AVP_F_LINKUP;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR,
+			    "Operation not supported during "
+			    "VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags &= ~RTE_AVP_F_LINKUP;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 0);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Link state change failed by host, ret=%d\n", ret);
+		goto unlock;
+	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return;
+}
+
+static void
+avp_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR,
+			    "Operation not supported during "
+			    "VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags &= ~RTE_AVP_F_LINKUP;
+	avp->flags &= ~RTE_AVP_F_CONFIGURED;
+
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts\n");
+		/* continue */
+	}
+
+	/* update device state */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Device shutdown failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return;
+}
 
 static int
 avp_dev_link_update(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1


* [PATCH 16/16] doc: adds information related to the AVP PMD
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (14 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 15/16] net/avp: device start and stop operations Allain Legacy
@ 2017-02-25  1:23 ` Allain Legacy
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-25  1:23 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Updates the documentation and feature lists for the AVP PMD device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 MAINTAINERS                      |  1 +
 doc/guides/nics/avp.rst          | 99 ++++++++++++++++++++++++++++++++++++++++
 doc/guides/nics/features/avp.ini | 17 +++++++
 doc/guides/nics/index.rst        |  1 +
 4 files changed, 118 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 992ffa5..9dd6e7e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -427,6 +427,7 @@ Wind River AVP PMD
 M: Allain Legacy <allain.legacy@windriver.com>
 M: Matt Peters <matt.peters@windriver.com>
 F: drivers/net/avp
+F: doc/guides/nics/avp.rst
 
 
 Crypto Drivers
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
new file mode 100644
index 0000000..24bb4b6
--- /dev/null
+++ b/doc/guides/nics/avp.rst
@@ -0,0 +1,99 @@
+..  BSD LICENSE
+    Copyright(c) 2017 Wind River Systems, Inc.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AVP Poll Mode Driver
+====================
+
+The Accelerated Virtual Port (AVP) device is a shared memory based device
+available on the `virtualization platforms <http://www.windriver.com/products/titanium-cloud/>`_
+from Wind River Systems.  It is based on an earlier implementation of the DPDK
+KNI device and made available to VM instances via a mechanism based on an early
+implementation of qemu-kvm ivshmem.
+
+The driver binds to PCI devices that are exported by the hypervisor DPDK
+application via the ivshmem-like mechanism.  The definition of the device
+structure and configuration options are defined in rte_avp_common.h and
+rte_avp_fifo.h.  These two header files are made available as part of the PMD
+implementation in order to share the device definitions between the guest
+implementation (i.e., the PMD) and the host implementation (i.e., the
+hypervisor DPDK application).
+
+
+Features and Limitations of the AVP PMD
+---------------------------------------
+
+The AVP PMD driver provides the following functionality:
+
+*   Receive and transmit of both simple and chained mbuf packets
+
+*   Chained mbufs may include up to 5 chained segments
+
+*   Up to 8 receive and transmit queues per device
+
+*   Only a single MAC address is supported
+
+*   The MAC address cannot be modified
+
+*   The maximum receive packet length is 9238 bytes
+
+*   VLAN header stripping and inserting
+
+*   Promiscuous mode
+
+*   VM live-migration
+
+*   PCI hotplug insertion and removal
+
+
+Prerequisites
+-------------
+
+The following prerequisites apply:
+
+*   A virtual machine running in a Wind River Systems virtualization
+    environment and configured with at least one neutron port defined with a
+    vif-model set to "avp".
+
+
+Launching a VM with an AVP type network attachment
+--------------------------------------------------
+
+The following example launches a VM with three network attachments.  The
+first attachment has the default vif-model of "virtio".  The next two
+attachments have a vif-model of "avp" and may be used with a DPDK
+application that is built to include the AVP PMD driver.
+
+.. code-block:: console
+
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1
diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
new file mode 100644
index 0000000..64bf42e
--- /dev/null
+++ b/doc/guides/nics/features/avp.ini
@@ -0,0 +1,17 @@
+;
+; Supported features of the 'AVP' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Link status          = Y
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+Promiscuous mode     = Y
+Unicast MAC filter   = Y
+VLAN offload         = Y
+Basic stats          = Y
+Stats per queue      = Y
+Linux UIO            = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 87f9334..0ddcea5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -36,6 +36,7 @@ Network Interface Controller Drivers
     :numbered:
 
     overview
+    avp
     bnx2x
     bnxt
     cxgbe
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 00/16] Wind River Systems AVP PMD
  2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
                   ` (15 preceding siblings ...)
  2017-02-25  1:23 ` [PATCH 16/16] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-02-26 19:08 ` Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 01/15] config: adds attributes for the " Allain Legacy
                     ` (16 more replies)
  16 siblings, 17 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

This patch series submits an initial version of the AVP PMD from Wind River
Systems.  The series includes shared header files, driver implementation,
and changes to documentation files in support of this new driver.  The AVP
driver is a shared memory based device.  It is intended to be used as a PMD
within a virtual machine running on a Wind River virtualization platform.
See: http://www.windriver.com/products/titanium-cloud/

v2:
* Fixed coding style violations that slipped in accidentally because of an
  out-of-date checkpatch.pl from an older kernel.

Allain Legacy (16):
  config: added attributes for the AVP PMD
  net/avp: added public header files
  maintainers: claim responsibility for AVP PMD
  net/avp: added PMD version map file
  net/avp: added log macros
  drivers/net: added driver makefiles
  net/avp: driver registration
  net/avp: device initialization
  net/avp: device configuration
  net/avp: queue setup and release
  net/avp: packet receive functions
  net/avp: packet transmit functions
  net/avp: device statistics operations
  net/avp: device promiscuous functions
  net/avp: device start and stop operations
  doc: added information related to the AVP PMD

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 config/common_linuxapp                  |    1 +
 doc/guides/nics/avp.rst                 |   99 ++
 doc/guides/nics/features/avp.ini        |   17 +
 doc/guides/nics/index.rst               |    1 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avp/Makefile                |   61 +
 drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
 drivers/net/avp/avp_logs.h              |   59 +
 drivers/net/avp/rte_avp_common.h        |  427 ++++++
 drivers/net/avp/rte_avp_fifo.h          |  157 ++
 drivers/net/avp/rte_pmd_avp_version.map |    4 +
 mk/rte.app.mk                           |    1 +
 14 files changed, 3215 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini
 create mode 100644 drivers/net/avp/Makefile
 create mode 100644 drivers/net/avp/avp_ethdev.c
 create mode 100644 drivers/net/avp/avp_logs.h
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

-- 
1.8.3.1


* [PATCH v2 01/15] config: adds attributes for the AVP PMD
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 02/15] net/avp: public header files Allain Legacy
                     ` (15 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Updates the common base configuration file to include a top-level config
attribute for the AVP PMD.
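
As with other PMD options, building the driver is a matter of flipping this
attribute from "n" to "y".  A minimal sketch of that edit follows; it is
illustrative only and is demonstrated on a temporary copy of the line rather
than a real config/common_base file:

```shell
# Illustrative only: enable the AVP PMD by flipping the config attribute
# from "n" to "y", shown here on a temporary copy of the config line.
cfg=$(mktemp)
printf 'CONFIG_RTE_LIBRTE_AVP_PMD=n\n' > "$cfg"
sed -i 's/^CONFIG_RTE_LIBRTE_AVP_PMD=n$/CONFIG_RTE_LIBRTE_AVP_PMD=y/' "$cfg"
grep CONFIG_RTE_LIBRTE_AVP_PMD "$cfg"
rm -f "$cfg"
```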

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/config/common_base b/config/common_base
index aeee13e..912bc68 100644
--- a/config/common_base
+++ b/config/common_base
@@ -348,6 +348,11 @@ CONFIG_RTE_LIBRTE_QEDE_FW=""
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 
 #
+# Compile WRS accelerated virtual port (AVP) guest PMD driver
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
+
+#
 # Compile the TAP PMD
 # It is enabled by default for Linux only.
 #
-- 
1.8.3.1


* [PATCH v2 02/15] net/avp: public header files
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 01/15] config: adds attributes for the " Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-28 11:49     ` Jerin Jacob
  2017-02-26 19:08   ` [PATCH v2 03/15] maintainers: claim responsibility for AVP PMD Allain Legacy
                     ` (14 subsequent siblings)
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds public/exported header files for the AVP PMD.  The AVP device is a
shared memory based device.  The structures and constants that define the
method of operation of the device must be visible by both the PMD and the
host DPDK application.  They must not change without proper version
controls and updates to both the hypervisor DPDK application and the PMD.

The hypervisor DPDK application is a Wind River Systems proprietary
virtual switch.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/rte_avp_common.h | 424 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avp/rte_avp_fifo.h   | 157 +++++++++++++++
 2 files changed, 581 insertions(+)
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h

diff --git a/drivers/net/avp/rte_avp_common.h b/drivers/net/avp/rte_avp_common.h
new file mode 100644
index 0000000..579d5ac
--- /dev/null
+++ b/drivers/net/avp/rte_avp_common.h
@@ -0,0 +1,424 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2015 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2016 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_COMMON_H_
+#define _RTE_AVP_COMMON_H_
+
+#ifdef __KERNEL__
+#include <linux/if.h>
+#endif
+
+/**
+ * AVP name is part of network device name.
+ */
+#define RTE_AVP_NAMESIZE 32
+
+/**
+ * AVP alias is a user-defined value used for lookups from secondary
+ * processes.  Typically, this is a UUID.
+ */
+#define RTE_AVP_ALIASSIZE 128
+
+/**
+ * Memory alignment (cache aligned)
+ */
+#ifndef RTE_AVP_ALIGNMENT
+#define RTE_AVP_ALIGNMENT 64
+#endif
+
+/*
+ * Request id.
+ */
+enum rte_avp_req_id {
+	RTE_AVP_REQ_UNKNOWN = 0,
+	RTE_AVP_REQ_CHANGE_MTU,
+	RTE_AVP_REQ_CFG_NETWORK_IF,
+	RTE_AVP_REQ_CFG_DEVICE,
+	RTE_AVP_REQ_SHUTDOWN_DEVICE,
+	RTE_AVP_REQ_MAX,
+};
+
+/**@{ AVP device driver types */
+#define RTE_AVP_DRIVER_TYPE_UNKNOWN 0
+#define RTE_AVP_DRIVER_TYPE_DPDK 1
+#define RTE_AVP_DRIVER_TYPE_KERNEL 2
+#define RTE_AVP_DRIVER_TYPE_QEMU 3
+/**@} */
+
+/**@{ AVP device operational modes */
+#define RTE_AVP_MODE_HOST 0 /**< AVP interface created in host */
+#define RTE_AVP_MODE_GUEST 1 /**< AVP interface created for export to guest */
+#define RTE_AVP_MODE_TRACE 2 /**< AVP interface created for packet tracing */
+/**@} */
+
+/*
+ * Structure for AVP queue configuration query request/result
+ */
+struct rte_avp_device_config {
+	uint64_t device_id;	/**< Unique system identifier */
+	uint32_t driver_type; /**< Device Driver type */
+	uint32_t driver_version; /**< Device Driver version */
+	uint32_t features; /**< Negotiated features */
+	uint16_t num_tx_queues;	/**< Number of active transmit queues */
+	uint16_t num_rx_queues;	/**< Number of active receive queues */
+	uint8_t if_up; /**< 1: interface up, 0: interface down */
+} __attribute__ ((__packed__));
+
+/*
+ * Structure for AVP request.
+ */
+struct rte_avp_request {
+	uint32_t req_id; /**< Request id */
+	union {
+		uint32_t new_mtu; /**< New MTU */
+		uint8_t if_up;	/**< 1: interface up, 0: interface down */
+		struct rte_avp_device_config config; /**< Queue configuration */
+	};
+	int32_t result;	/**< Result for processing request */
+} __attribute__ ((__packed__));
+
+/*
+ * FIFO struct mapped in a shared memory. It describes a circular buffer FIFO
+ * Write and read should wrap around. FIFO is empty when write == read
+ * Writing should never overwrite the read position
+ */
+struct rte_avp_fifo {
+	volatile unsigned int write; /**< Next position to be written */
+	volatile unsigned int read; /**< Next position to be read */
+	unsigned int len; /**< Circular buffer length */
+	unsigned int elem_size; /**< Pointer size - for 32/64 bit OS */
+	void *volatile buffer[0]; /**< The buffer contains mbuf pointers */
+};
+
+
+/*
+ * AVP packet buffer header used to define the exchange of packet data.
+ */
+struct rte_avp_desc {
+	uint64_t pad0;
+	void *pkt_mbuf; /**< Reference to packet mbuf */
+	uint8_t pad1[14];
+	uint16_t ol_flags; /**< Offload features. */
+	void *next;	/**< Reference to next buffer in chain */
+	void *data;	/**< Start address of data in segment buffer. */
+	uint16_t data_len; /**< Amount of data in segment buffer. */
+	uint8_t nb_segs; /**< Number of segments */
+	uint8_t pad2;
+	uint16_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+	uint32_t pad3;
+	uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order). */
+	uint32_t pad4;
+} __attribute__ ((__aligned__(RTE_AVP_ALIGNMENT), __packed__));
+
+
+/**@{ AVP device features */
+#define RTE_AVP_FEATURE_VLAN_OFFLOAD (1 << 0) /**< Emulated HW VLAN offload */
+/**@} */
+
+
+/**@{ Offload feature flags */
+#define RTE_AVP_TX_VLAN_PKT 0x0001 /**< TX packet is a 802.1q VLAN packet. */
+#define RTE_AVP_RX_VLAN_PKT 0x0800 /**< RX packet is a 802.1q VLAN packet. */
+/**@} */
+
+
+/**@{ AVP PCI identifiers */
+#define RTE_AVP_PCI_VENDOR_ID   0x1af4
+#define RTE_AVP_PCI_DEVICE_ID   0x1110
+/**@} */
+
+/**@{ AVP PCI subsystem identifiers */
+#define RTE_AVP_PCI_SUB_VENDOR_ID RTE_AVP_PCI_VENDOR_ID
+#define RTE_AVP_PCI_SUB_DEVICE_ID 0x1104
+/**@} */
+
+/**@{ AVP PCI BAR definitions */
+#define RTE_AVP_PCI_MMIO_BAR   0
+#define RTE_AVP_PCI_MSIX_BAR   1
+#define RTE_AVP_PCI_MEMORY_BAR 2
+#define RTE_AVP_PCI_MEMMAP_BAR 4
+#define RTE_AVP_PCI_DEVICE_BAR 5
+#define RTE_AVP_PCI_MAX_BAR    6
+/**@} */
+
+/**@{ AVP PCI BAR name definitions */
+#define RTE_AVP_MMIO_BAR_NAME   "avp-mmio"
+#define RTE_AVP_MSIX_BAR_NAME   "avp-msix"
+#define RTE_AVP_MEMORY_BAR_NAME "avp-memory"
+#define RTE_AVP_MEMMAP_BAR_NAME "avp-memmap"
+#define RTE_AVP_DEVICE_BAR_NAME "avp-device"
+/**@} */
+
+/**@{ AVP PCI MSI-X vectors */
+#define RTE_AVP_MIGRATION_MSIX_VECTOR 0	/**< Migration interrupts */
+#define RTE_AVP_MAX_MSIX_VECTORS 1
+/**@} */
+
+/**@{ AVP Migration status/ack register values */
+#define RTE_AVP_MIGRATION_NONE      0 /**< Migration never executed */
+#define RTE_AVP_MIGRATION_DETACHED  1 /**< Device attached during migration */
+#define RTE_AVP_MIGRATION_ATTACHED  2 /**< Device reattached during migration */
+#define RTE_AVP_MIGRATION_ERROR     3 /**< Device failed to attach/detach */
+/**@} */
+
+/**@{ AVP MMIO Register Offsets */
+#define RTE_AVP_REGISTER_BASE 0
+#define RTE_AVP_INTERRUPT_MASK_OFFSET (RTE_AVP_REGISTER_BASE + 0)
+#define RTE_AVP_INTERRUPT_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 4)
+#define RTE_AVP_MIGRATION_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 8)
+#define RTE_AVP_MIGRATION_ACK_OFFSET (RTE_AVP_REGISTER_BASE + 12)
+/**@} */
+
+/**@{ AVP Interrupt Status Mask */
+#define RTE_AVP_MIGRATION_INTERRUPT_MASK (1 << 1)
+#define RTE_AVP_APP_INTERRUPTS_MASK      (0xFFFFFFFF)
+#define RTE_AVP_NO_INTERRUPTS_MASK       (0)
+/**@} */
+
+/*
+ * Maximum number of memory regions to export
+ */
+#define RTE_AVP_MAX_MAPS  2048
+
+/*
+ * Description of a single memory region
+ */
+struct rte_avp_memmap {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * AVP memory mapping validation marker
+ */
+#define RTE_AVP_MEMMAP_MAGIC (0x20131969)
+
+/**@{  AVP memory map versions */
+#define RTE_AVP_MEMMAP_VERSION_1 1
+#define RTE_AVP_MEMMAP_VERSION RTE_AVP_MEMMAP_VERSION_1
+/**@} */
+
+/*
+ * Defines a list of memory regions exported from the host to the guest
+ */
+struct rte_avp_memmap_info {
+	uint32_t magic; /**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+	uint32_t nb_maps;
+	struct rte_avp_memmap maps[RTE_AVP_MAX_MAPS];
+};
+
+/*
+ * AVP device memory validation marker
+ */
+#define RTE_AVP_DEVICE_MAGIC (0x20131975)
+
+/**@{  AVP device map versions
+ * WARNING:  do not change the format or names of these variables.  They are
+ * automatically parsed from the build system to generate the SDK package
+ * name.
+ **/
+#define RTE_AVP_RELEASE_VERSION_1 1
+#define RTE_AVP_RELEASE_VERSION RTE_AVP_RELEASE_VERSION_1
+#define RTE_AVP_MAJOR_VERSION_0 0
+#define RTE_AVP_MAJOR_VERSION_1 1
+#define RTE_AVP_MAJOR_VERSION_2 2
+#define RTE_AVP_MAJOR_VERSION RTE_AVP_MAJOR_VERSION_2
+#define RTE_AVP_MINOR_VERSION_0 0
+#define RTE_AVP_MINOR_VERSION_1 1
+#define RTE_AVP_MINOR_VERSION_13 13
+#define RTE_AVP_MINOR_VERSION RTE_AVP_MINOR_VERSION_13
+/**@} */
+
+
+/**
+ * Generates a 32-bit version number from the specified version number
+ * components
+ */
+#define RTE_AVP_MAKE_VERSION(_release, _major, _minor) \
+((((_release) & 0xffff) << 16) | (((_major) & 0xff) << 8) | ((_minor) & 0xff))
+
+
+/**
+ * Represents the current version of the AVP host driver
+ * WARNING:  in the current development branch the host and guest driver
+ * version should always be the same.  When patching guest features back to
+ * GA releases the host version number should not be updated unless there was
+ * an actual change made to the host driver.
+ */
+#define RTE_AVP_CURRENT_HOST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_0, \
+		     RTE_AVP_MINOR_VERSION_1)
+
+
+/**
+ * Represents the current version of the AVP guest drivers
+ */
+#define RTE_AVP_CURRENT_GUEST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_2, \
+		     RTE_AVP_MINOR_VERSION_13)
+
+/**@{
+ * Access AVP device version values
+ */
+#define RTE_AVP_GET_RELEASE_VERSION(_version) (((_version) >> 16) & 0xffff)
+#define RTE_AVP_GET_MAJOR_VERSION(_version) (((_version) >> 8) & 0xff)
+#define RTE_AVP_GET_MINOR_VERSION(_version) ((_version) & 0xff)
+/**@}*/
+
+
+/**
+ * Remove the minor version number so that only the release and major versions
+ * are used for comparisons.
+ */
+#define RTE_AVP_STRIP_MINOR_VERSION(_version) ((_version) >> 8)
+
+
+/**
+ * Defines the number of mbuf pools supported per device (1 per socket)
+ * @note This value should be equal to RTE_MAX_NUMA_NODES
+ */
+#define RTE_AVP_MAX_MEMPOOLS (8)
+
+/*
+ * Defines address translation parameters for each supported mbuf pool
+ */
+struct rte_avp_mempool_info {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * Struct used to create an AVP device. Passed to the kernel in an IOCTL
+ * call or via inter-VM shared memory when used in a guest.
+ */
+struct rte_avp_device_info {
+	uint32_t magic;	/**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+
+	char ifname[RTE_AVP_NAMESIZE];	/**< Network device name for AVP */
+
+	phys_addr_t tx_phys;
+	phys_addr_t rx_phys;
+	phys_addr_t alloc_phys;
+	phys_addr_t free_phys;
+
+	uint32_t features; /**< Supported feature bitmap */
+	uint8_t min_rx_queues; /**< Minimum supported receive/free queues */
+	uint8_t num_rx_queues; /**< Recommended number of receive/free queues */
+	uint8_t max_rx_queues; /**< Maximum supported receive/free queues */
+	uint8_t min_tx_queues; /**< Minimum supported transmit/alloc queues */
+	uint8_t num_tx_queues;
+	/**< Recommended number of transmit/alloc queues */
+	uint8_t max_tx_queues; /**< Maximum supported transmit/alloc queues */
+
+	uint32_t tx_size; /**< Size of each transmit queue */
+	uint32_t rx_size; /**< Size of each receive queue */
+	uint32_t alloc_size; /**< Size of each alloc queue */
+	uint32_t free_size;	/**< Size of each free queue */
+
+	/* Used by Ethtool */
+	phys_addr_t req_phys;
+	phys_addr_t resp_phys;
+	phys_addr_t sync_phys;
+	void *sync_va;
+
+	/* mbuf mempool (used when a single memory area is supported) */
+	void *mbuf_va;
+	phys_addr_t mbuf_phys;
+
+	/* mbuf mempools */
+	struct rte_avp_mempool_info pool[RTE_AVP_MAX_MEMPOOLS];
+
+#ifdef __KERNEL__
+	/* Ethernet info */
+	char ethaddr[ETH_ALEN];
+#else
+	char ethaddr[ETHER_ADDR_LEN];
+#endif
+
+	uint8_t mode; /**< device mode, i.e., guest, host, trace */
+
+	/* mbuf size */
+	unsigned int mbuf_size;
+
+	/*
+	 * unique id to differentiate between two instantiations of the same
+	 * AVP device (i.e., the guest needs to know if the device has been
+	 * deleted and recreated).
+	 */
+	uint64_t device_id;
+
+	uint32_t max_rx_pkt_len; /**< Maximum receive unit size */
+};
+
+#define RTE_AVP_MAX_QUEUES (8) /**< Maximum number of queues per device */
+
+/** Maximum number of chained mbufs in a packet */
+#define RTE_AVP_MAX_MBUF_SEGMENTS (5)
+
+#define RTE_AVP_DEVICE "avp"
+
+#define RTE_AVP_IOCTL_TEST    _IOWR(0, 1, int)
+#define RTE_AVP_IOCTL_CREATE  _IOWR(0, 2, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_RELEASE _IOWR(0, 3, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_QUERY   _IOWR(0, 4, struct rte_avp_device_config)
+
+#endif /* _RTE_AVP_COMMON_H_ */
diff --git a/drivers/net/avp/rte_avp_fifo.h b/drivers/net/avp/rte_avp_fifo.h
new file mode 100644
index 0000000..1a475de
--- /dev/null
+++ b/drivers/net/avp/rte_avp_fifo.h
@@ -0,0 +1,157 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_FIFO_H_
+#define _RTE_AVP_FIFO_H_
+
+#ifdef __KERNEL__
+/* Write memory barrier for kernel compiles */
+#define AVP_WMB() smp_wmb()
+/* Read memory barrier for kernel compiles */
+#define AVP_RMB() smp_rmb()
+#else
+/* Write memory barrier for userspace compiles */
+#define AVP_WMB() rte_wmb()
+/* Read memory barrier for userspace compiles */
+#define AVP_RMB() rte_rmb()
+#endif
+
+#ifndef __KERNEL__
+/**
+ * Initializes the avp fifo structure
+ */
+static inline void
+avp_fifo_init(struct rte_avp_fifo *fifo, unsigned int size)
+{
+	/* Ensure size is power of 2 */
+	if (size & (size - 1))
+		rte_panic("AVP fifo size must be power of 2\n");
+
+	fifo->write = 0;
+	fifo->read = 0;
+	fifo->len = size;
+	fifo->elem_size = sizeof(void *);
+}
+#endif
+
+/**
+ * Adds num elements into the fifo.  Returns the number actually written.
+ */
+static inline unsigned int
+avp_fifo_put(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int fifo_write = fifo->write;
+	unsigned int fifo_read = fifo->read;
+	unsigned int new_write = fifo_write;
+
+	for (i = 0; i < num; i++) {
+		new_write = (new_write + 1) & (fifo->len - 1);
+
+		if (new_write == fifo_read)
+			break;
+		fifo->buffer[fifo_write] = data[i];
+		fifo_write = new_write;
+	}
+	AVP_WMB();
+	fifo->write = fifo_write;
+	return i;
+}
+
+/**
+ * Gets up to num elements from the fifo.  Returns the number actually read.
+ */
+static inline unsigned int
+avp_fifo_get(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int new_read = fifo->read;
+	unsigned int fifo_write = fifo->write;
+
+	if (new_read == fifo_write)
+		return 0; /* empty */
+
+	for (i = 0; i < num; i++) {
+		if (new_read == fifo_write)
+			break;
+
+		data[i] = fifo->buffer[new_read];
+		new_read = (new_read + 1) & (fifo->len - 1);
+	}
+	AVP_RMB();
+	fifo->read = new_read;
+	return i;
+}
+
+/**
+ * Gets the number of elements currently in the fifo.
+ */
+static inline unsigned int
+avp_fifo_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->len + fifo->write - fifo->read) & (fifo->len - 1);
+}
+
+/**
+ * Gets the number of free slots available in the fifo.
+ */
+static inline unsigned int
+avp_fifo_free_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->read - fifo->write - 1) & (fifo->len - 1);
+}
+
+#endif /* _RTE_AVP_FIFO_H_ */
-- 
1.8.3.1


* [PATCH v2 03/15] maintainers: claim responsibility for AVP PMD
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 01/15] config: adds attributes for the " Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 02/15] net/avp: public header files Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 04/15] net/avp: add PMD version map file Allain Legacy
                     ` (13 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Updates the MAINTAINERS file to claim the AVP PMD driver on behalf of Wind
River Systems, Inc.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 MAINTAINERS | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 24e0eff..992ffa5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -423,6 +423,11 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+Wind River AVP PMD
+M: Allain Legacy <allain.legacy@windriver.com>
+M: Matt Peters <matt.peters@windriver.com>
+F: drivers/net/avp
+
 
 Crypto Drivers
 --------------
-- 
1.8.3.1


* [PATCH v2 04/15] net/avp: add PMD version map file
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (2 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 03/15] maintainers: claim responsibility for AVP PMD Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 05/15] net/avp: debug log macros Allain Legacy
                     ` (12 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds a default ABI version file for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/rte_pmd_avp_version.map | 4 ++++
 1 file changed, 4 insertions(+)
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
new file mode 100644
index 0000000..af8f3f4
--- /dev/null
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+    local: *;
+};
-- 
1.8.3.1


* [PATCH v2 05/15] net/avp: debug log macros
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (3 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 04/15] net/avp: add PMD version map file Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 06/15] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
                     ` (11 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds a header file with log macros for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base         |  4 ++++
 drivers/net/avp/avp_logs.h | 59 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)
 create mode 100644 drivers/net/avp/avp_logs.h

diff --git a/config/common_base b/config/common_base
index 912bc68..fe8363d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -351,6 +351,10 @@ CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 # Compile WRS accelerated virtual port (AVP) guest PMD driver
 #
 CONFIG_RTE_LIBRTE_AVP_PMD=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_DRIVER=y
+CONFIG_RTE_LIBRTE_AVP_DEBUG_BUFFERS=n
 
 #
 # Compile the TAP PMD
diff --git a/drivers/net/avp/avp_logs.h b/drivers/net/avp/avp_logs.h
new file mode 100644
index 0000000..252cab7
--- /dev/null
+++ b/drivers/net/avp/avp_logs.h
@@ -0,0 +1,59 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (c) 2013-2015, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVP_LOGS_H_
+#define _AVP_LOGS_H_
+
+#include <rte_log.h>
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() rx: " fmt, __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() tx: " fmt, __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _AVP_LOGS_H_ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 06/15] drivers/net: adds driver makefiles for AVP PMD
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (4 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 05/15] net/avp: debug log macros Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 07/15] net/avp: driver registration Allain Legacy
                     ` (10 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds a default Makefile to the driver directory but does not include any
source files.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/Makefile     |  1 +
 drivers/net/avp/Makefile | 52 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)
 create mode 100644 drivers/net/avp/Makefile

diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 40fc333..592383e 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -32,6 +32,7 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
+DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
new file mode 100644
index 0000000..68a0fa5
--- /dev/null
+++ b/drivers/net/avp/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2013-2017, Wind River Systems, Inc. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Wind River Systems nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avp.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+
+EXPORT_MAP := rte_pmd_avp_version.map
+
+LIBABIVER := 1
+
+# install public header files to enable compilation of the hypervisor level
+# dpdk application
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 07/15] net/avp: driver registration
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (5 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 06/15] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-27 16:47     ` Stephen Hemminger
  2017-02-27 16:53     ` Stephen Hemminger
  2017-02-26 19:08   ` [PATCH v2 08/15] net/avp: device initialization Allain Legacy
                     ` (9 subsequent siblings)
  16 siblings, 2 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds the initial framework for registering the driver against the supported
PCI device identifiers.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_linuxapp                       |   1 +
 config/defconfig_i686-native-linuxapp-gcc    |   5 +
 config/defconfig_i686-native-linuxapp-icc    |   5 +
 config/defconfig_x86_x32-native-linuxapp-gcc |   5 +
 drivers/net/avp/Makefile                     |   8 +
 drivers/net/avp/avp_ethdev.c                 | 232 +++++++++++++++++++++++++++
 mk/rte.app.mk                                |   1 +
 7 files changed, 257 insertions(+)
 create mode 100644 drivers/net/avp/avp_ethdev.c

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 00ebaac..8690a00 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -43,6 +43,7 @@ CONFIG_RTE_LIBRTE_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
 CONFIG_RTE_LIBRTE_PMD_TAP=y
+CONFIG_RTE_LIBRTE_AVP_PMD=y
 CONFIG_RTE_LIBRTE_NFP_PMD=y
 CONFIG_RTE_LIBRTE_POWER=y
 CONFIG_RTE_VIRTIO_USER=y
diff --git a/config/defconfig_i686-native-linuxapp-gcc b/config/defconfig_i686-native-linuxapp-gcc
index 745c401..9847bdb 100644
--- a/config/defconfig_i686-native-linuxapp-gcc
+++ b/config/defconfig_i686-native-linuxapp-gcc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_i686-native-linuxapp-icc b/config/defconfig_i686-native-linuxapp-icc
index 50a3008..269e88e 100644
--- a/config/defconfig_i686-native-linuxapp-icc
+++ b/config/defconfig_i686-native-linuxapp-icc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_x86_x32-native-linuxapp-gcc b/config/defconfig_x86_x32-native-linuxapp-gcc
index 3e55c5c..19573cb 100644
--- a/config/defconfig_x86_x32-native-linuxapp-gcc
+++ b/config/defconfig_x86_x32-native-linuxapp-gcc
@@ -50,3 +50,8 @@ CONFIG_RTE_LIBRTE_KNI=n
 # Solarflare PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_SFC_EFX_PMD=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 68a0fa5..9cf0449 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -49,4 +49,12 @@ LIBABIVER := 1
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
 
+#
+# all source files are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
new file mode 100644
index 0000000..6522555
--- /dev/null
+++ b/drivers/net/avp/avp_ethdev.c
@@ -0,0 +1,232 @@
+/*
+ *   BSD LICENSE
+ *
+ * Copyright (c) 2013-2016, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/io.h>
+
+#include <rte_ethdev.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_dev.h>
+#include <rte_memory.h>
+#include <rte_eal.h>
+
+#include "rte_avp_common.h"
+#include "rte_avp_fifo.h"
+
+#include "avp_logs.h"
+
+
+
+static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
+static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
+
+
+#define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
+
+
+#define RTE_AVP_MAX_MAC_ADDRS 1
+#define RTE_AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
+
+
+/*
+ * Defines the number of microseconds to wait before checking the response
+ * queue for completion.
+ */
+#define RTE_AVP_REQUEST_DELAY_USECS (5000)
+
+/*
+ * Defines the number of times to check the response queue for completion before
+ * declaring a timeout.
+ */
+#define RTE_AVP_MAX_REQUEST_RETRY (100)
+
+/* Defines the current PCI driver version number */
+#define RTE_AVP_DPDK_DRIVER_VERSION RTE_AVP_CURRENT_GUEST_VERSION
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static struct rte_pci_id pci_id_avp_map[] = {
+	{ .vendor_id = RTE_AVP_PCI_VENDOR_ID,
+	  .device_id = RTE_AVP_PCI_DEVICE_ID,
+	  .subsystem_vendor_id = RTE_AVP_PCI_SUB_VENDOR_ID,
+	  .subsystem_device_id = RTE_AVP_PCI_SUB_DEVICE_ID,
+	  .class_id = RTE_CLASS_ANY_ID,
+	},
+
+	{ .vendor_id = 0, /* sentinel */
+	},
+};
+
+
+/*
+ * Defines the AVP device attributes which are attached to an RTE ethernet
+ * device
+ */
+struct avp_dev {
+	uint32_t magic; /**< Memory validation marker */
+	uint64_t device_id; /**< Unique system identifier */
+	struct ether_addr ethaddr; /**< Host specified MAC address */
+	struct rte_eth_dev_data *dev_data;
+	/**< Back pointer to ethernet device data */
+	volatile uint32_t flags; /**< Device operational flags */
+	uint8_t port_id; /**< Ethernet port identifier */
+	struct rte_mempool *pool; /**< pkt mbuf mempool */
+	unsigned int guest_mbuf_size; /**< local pool mbuf size */
+	unsigned int host_mbuf_size; /**< host mbuf size */
+	unsigned int max_rx_pkt_len; /**< maximum receive unit */
+	uint32_t host_features; /**< Supported feature bitmap */
+	uint32_t features; /**< Enabled feature bitmap */
+	unsigned int num_tx_queues; /**< Negotiated number of transmit queues */
+	unsigned int max_tx_queues; /**< Maximum number of transmit queues */
+	unsigned int num_rx_queues; /**< Negotiated number of receive queues */
+	unsigned int max_rx_queues; /**< Maximum number of receive queues */
+
+	struct rte_avp_fifo *tx_q[RTE_AVP_MAX_QUEUES]; /**< TX queue */
+	struct rte_avp_fifo *rx_q[RTE_AVP_MAX_QUEUES]; /**< RX queue */
+	struct rte_avp_fifo *alloc_q[RTE_AVP_MAX_QUEUES];
+	/**< Allocated mbufs queue */
+	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
+	/**< To be freed mbufs queue */
+
+	/* mutual exclusion over the 'flag' and 'resp_q/req_q' fields */
+	rte_spinlock_t lock;
+
+	/* For request & response */
+	struct rte_avp_fifo *req_q; /**< Request queue */
+	struct rte_avp_fifo *resp_q; /**< Response queue */
+	void *host_sync_addr; /**< (host) Req/Resp Mem address */
+	void *sync_addr; /**< Req/Resp Mem address */
+	void *host_mbuf_addr; /**< (host) MBUF pool start address */
+	void *mbuf_addr; /**< MBUF pool start address */
+} __rte_cache_aligned;
+
+/* RTE ethernet private data */
+struct avp_adapter {
+	struct avp_dev avp;
+} __rte_cache_aligned;
+
+/* Macro to cast the ethernet device private data to an AVP object */
+#define RTE_AVP_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avp_adapter *)adapter)->avp)
+
+/*
+ * This function is based on probe() function in avp_pci.c
+ * It returns 0 on success.
+ */
+static int
+eth_avp_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pci_device *pci_dev;
+
+	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/*
+		 * no setup required on secondary processes.  All data is saved
+		 * in dev_private by the primary process. All resources should
+		 * be mapped to the same virtual address so all pointers should
+		 * be valid.
+		 */
+		return 0;
+	}
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate %d bytes "
+			    "needed to store MAC addresses\n",
+			    ETHER_ADDR_LEN);
+		return -ENOMEM;
+	}
+
+	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
+
+	return 0;
+}
+
+static int
+eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (eth_dev->data == NULL)
+		return 0;
+
+	if (eth_dev->data->mac_addrs != NULL) {
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
+	}
+
+	return 0;
+}
+
+
+static struct eth_driver rte_avp_pmd = {
+	{
+		.id_table = pci_id_avp_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_eth_dev_pci_probe,
+		.remove = rte_eth_dev_pci_remove,
+	},
+	.eth_dev_init = eth_avp_dev_init,
+	.eth_dev_uninit = eth_avp_dev_uninit,
+	.dev_private_size = sizeof(struct avp_adapter),
+};
+
+
+
+RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 92f3635..c8654f3 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -104,6 +104,7 @@ ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 08/15] net/avp: device initialization
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (6 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 07/15] net/avp: driver registration Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-28 11:57     ` Jerin Jacob
  2017-02-26 19:08   ` [PATCH v2 09/15] net/avp: device configuration Allain Legacy
                     ` (8 subsequent siblings)
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for initializing newly probed AVP PCI devices.  Initial
queue translations are set up in preparation for device configuration.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 757 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 757 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 6522555..3bbff33 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -60,6 +60,8 @@
 #include "avp_logs.h"
 
 
+static int avp_dev_create(struct rte_pci_device *pci_dev,
+			  struct rte_eth_dev *eth_dev);
 
 static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -103,6 +105,16 @@
 };
 
 
+/**@{ AVP device flags */
+#define RTE_AVP_F_PROMISC (1 << 1)
+#define RTE_AVP_F_CONFIGURED (1 << 2)
+#define RTE_AVP_F_LINKUP (1 << 3)
+#define RTE_AVP_F_DETACHED (1 << 4)
+/**@} */
+
+/* Ethernet device validation marker */
+#define RTE_AVP_ETHDEV_MAGIC 0x92972862
+
 /*
  * Defines the AVP device attributes which are attached to an RTE ethernet
  * device
@@ -150,11 +162,726 @@ struct avp_adapter {
 	struct avp_dev avp;
 } __rte_cache_aligned;
 
+
+/* 32-bit MMIO register write */
+#define RTE_AVP_WRITE32(_value, _addr) ((*(uint32_t *)_addr) = (_value))
+
+/* 32-bit MMIO register read */
+#define RTE_AVP_READ32(_addr) (*(uint32_t *)(_addr))
+
 /* Macro to cast the ethernet device private data to an AVP object */
 #define RTE_AVP_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct avp_adapter *)adapter)->avp)
 
 /*
+ * Defines the structure of an AVP device queue for the purpose of handling the
+ * receive and transmit burst callback functions
+ */
+struct avp_queue {
+	struct rte_eth_dev_data *dev_data;
+	/**< Backpointer to ethernet device data */
+	struct avp_dev *avp; /**< Backpointer to AVP device */
+	uint16_t queue_id;
+	/**< Queue identifier used for indexing current queue */
+	uint16_t queue_base;
+	/**< Base queue identifier for queue servicing */
+	uint16_t queue_limit;
+	/**< Maximum queue identifier for queue servicing */
+
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
+/* send a request and wait for a response
+ *
+ * @warning must be called while holding the avp->lock spinlock.
+ */
+static int
+avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
+{
+	unsigned int retry = RTE_AVP_MAX_REQUEST_RETRY;
+	void *resp_addr = NULL;
+	unsigned int count;
+	int ret;
+
+	PMD_DRV_LOG(DEBUG, "Sending request %u to host\n", request->req_id);
+
+	request->result = -ENOTSUP;
+
+	/* Discard any stale responses before starting a new request */
+	while (avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1))
+		PMD_DRV_LOG(DEBUG, "Discarding stale response\n");
+
+	rte_memcpy(avp->sync_addr, request, sizeof(*request));
+	count = avp_fifo_put(avp->req_q, &avp->host_sync_addr, 1);
+	if (count < 1) {
+		PMD_DRV_LOG(ERR, "Cannot send request %u to host\n",
+			    request->req_id);
+		ret = -EBUSY;
+		goto done;
+	}
+
+	while (retry--) {
+		/* wait for a response */
+		usleep(RTE_AVP_REQUEST_DELAY_USECS);
+
+		count = avp_fifo_count(avp->resp_q);
+		if (count >= 1) {
+			/* response received */
+			break;
+		}
+
+		if ((count < 1) && (retry == 0)) {
+			PMD_DRV_LOG(ERR,
+				    "Timeout while waiting "
+				    "for a response for %u\n",
+				    request->req_id);
+			ret = -ETIME;
+			goto done;
+		}
+	}
+
+	/* retrieve the response */
+	count = avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1);
+	if ((count != 1) || (resp_addr != avp->host_sync_addr)) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid response from host, "
+			    "count=%u resp=%p host_sync_addr=%p\n",
+			    count, resp_addr, avp->host_sync_addr);
+		ret = -ENODATA;
+		goto done;
+	}
+
+	/* copy to user buffer */
+	rte_memcpy(request, avp->sync_addr, sizeof(*request));
+	ret = 0;
+
+	PMD_DRV_LOG(DEBUG, "Result %d received for request %u\n",
+		    request->result, request->req_id);
+
+done:
+	return ret;
+}
+
+static int
+avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
+			struct rte_avp_device_config *config)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a configure request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_DEVICE;
+	memcpy(&request.config, config, sizeof(request.config));
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
+avp_dev_ctrl_shutdown(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a shutdown request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_SHUTDOWN_DEVICE;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+/* translate from host physical address to guest virtual address */
+static void *
+avp_dev_translate_address(struct rte_eth_dev *eth_dev,
+			  phys_addr_t host_phys_addr)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_mem_resource *resource;
+	struct rte_avp_memmap_info *info;
+	struct rte_avp_memmap *map;
+	off_t offset;
+	void *addr;
+	unsigned int i;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_MEMORY_BAR].addr;
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_MEMMAP_BAR];
+	info = (struct rte_avp_memmap_info *)resource->addr;
+
+	offset = 0;
+	for (i = 0; i < info->nb_maps; i++) {
+		/* search all segments looking for a matching address */
+		map = &info->maps[i];
+
+		if ((host_phys_addr >= map->phys_addr) &&
+			(host_phys_addr < (map->phys_addr + map->length))) {
+			/* address is within this segment */
+			offset += (host_phys_addr - map->phys_addr);
+			addr = RTE_PTR_ADD(addr, offset);
+
+			PMD_DRV_LOG(DEBUG,
+				    "Translating host physical 0x%" PRIx64 " "
+				    "to guest virtual 0x%p\n",
+				    host_phys_addr, addr);
+
+			return addr;
+		}
+		offset += map->length;
+	}
+
+	return NULL;
+}
+
+/* verify that the incoming device version is compatible with our version */
+static int
+avp_dev_version_check(uint32_t version)
+{
+	uint32_t driver =
+		RTE_AVP_STRIP_MINOR_VERSION(RTE_AVP_DPDK_DRIVER_VERSION);
+	uint32_t device = RTE_AVP_STRIP_MINOR_VERSION(version);
+
+	if (device <= driver) {
+		/* the host driver version is less than or equal to ours */
+		return 0;
+	}
+
+	return 1;
+}
+
+/* verify that memory regions have expected version and validation markers */
+static int
+avp_dev_check_regions(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_avp_memmap_info *memmap;
+	struct rte_avp_device_info *info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	/* Dump resource info for debug */
+	for (i = 0; i < PCI_MAX_RESOURCE; i++) {
+		resource = &pci_dev->mem_resource[i];
+		if ((resource->phys_addr == 0) || (resource->len == 0))
+			continue;
+
+		PMD_DRV_LOG(DEBUG, "resource[%u]: phys=0x%" PRIx64 " "
+			    "len=%" PRIu64 " addr=%p\n",
+			    i, resource->phys_addr,
+			    resource->len, resource->addr);
+
+		switch (i) {
+		case RTE_AVP_PCI_MEMMAP_BAR:
+			memmap = (struct rte_avp_memmap_info *)resource->addr;
+			if ((memmap->magic != RTE_AVP_MEMMAP_MAGIC) ||
+			    (memmap->version != RTE_AVP_MEMMAP_VERSION)) {
+				PMD_DRV_LOG(ERR,
+					    "Invalid memmap magic 0x%08x "
+					    "and version %u\n",
+					    memmap->magic, memmap->version);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_DEVICE_BAR:
+			info = (struct rte_avp_device_info *)resource->addr;
+			if ((info->magic != RTE_AVP_DEVICE_MAGIC) ||
+			    avp_dev_version_check(info->version)) {
+				PMD_DRV_LOG(ERR,
+					    "Invalid device info magic 0x%08x "
+					    "or version 0x%08x > 0x%08x\n",
+					    info->magic, info->version,
+					    RTE_AVP_DPDK_DRIVER_VERSION);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MEMORY_BAR:
+		case RTE_AVP_PCI_MMIO_BAR:
+			if (resource->addr == NULL) {
+				PMD_DRV_LOG(ERR,
+					    "Missing address space "
+					    "for BAR%u\n", i);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MSIX_BAR:
+		default:
+			/* no validation required */
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avp_dev_detach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Detaching port %u from AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(NOTICE, "port %u already detached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* shutdown the device first so the host stops sending us packets. */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to send/recv shutdown to host, "
+			    "ret=%d\n", ret);
+		avp->flags &= ~RTE_AVP_F_DETACHED;
+		goto unlock;
+	}
+
+	avp->flags |= RTE_AVP_F_DETACHED;
+	rte_wmb();
+
+	/* wait for queues to acknowledge the presence of the detach flag */
+	rte_delay_ms(1);
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+_avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *rxq;
+	uint16_t queue_count;
+	uint16_t remainder;
+
+	rxq = (struct avp_queue *)eth_dev->data->rx_queues[rx_queue_id];
+
+	/*
+	 * Must map all AVP fifos as evenly as possible between the configured
+	 * device queues.  Each device queue will service a subset of the AVP
+	 * fifos.  If the fifo count does not divide evenly, the first
+	 * device queues will each service one extra AVP fifo.
+	 */
+	queue_count = avp->num_rx_queues / eth_dev->data->nb_rx_queues;
+	remainder = avp->num_rx_queues % eth_dev->data->nb_rx_queues;
+	if (rx_queue_id < remainder) {
+		/* these queues must service one extra FIFO */
+		rxq->queue_base = rx_queue_id * (queue_count + 1);
+		rxq->queue_limit = rxq->queue_base + (queue_count + 1) - 1;
+	} else {
+		/* these queues service the regular number of FIFOs */
+		rxq->queue_base = ((remainder * (queue_count + 1)) +
+				   ((rx_queue_id - remainder) * queue_count));
+		rxq->queue_limit = rxq->queue_base + queue_count - 1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "rxq %u at %p base %u limit %u\n",
+		    rx_queue_id, rxq, rxq->queue_base, rxq->queue_limit);
+
+	rxq->queue_id = rxq->queue_base;
+}
+
+static void
+_avp_set_queue_counts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	void *addr;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/*
+	 * the transmit direction is not negotiated beyond respecting the max
+	 * number of queues because the host can handle arbitrary guest tx
+	 * queues (host rx queues).
+	 */
+	avp->num_tx_queues = eth_dev->data->nb_tx_queues;
+
+	/*
+	 * the receive direction is more restrictive.  The host requires a
+	 * minimum number of guest rx queues (host tx queues) therefore
+	 * negotiate a value that is at least as large as the host minimum
+	 * requirement.  If the host and guest values are not identical then a
+	 * mapping will be established in the receive_queue_setup function.
+	 */
+	avp->num_rx_queues = RTE_MAX(host_info->min_rx_queues,
+				     eth_dev->data->nb_rx_queues);
+
+	PMD_DRV_LOG(DEBUG, "Requesting %u Tx and %u Rx queues from host\n",
+		    avp->num_tx_queues, avp->num_rx_queues);
+}
+
+static int
+avp_dev_attach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_config config;
+	unsigned int i;
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Attaching port %u to AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (!(avp->flags & RTE_AVP_F_DETACHED)) {
+		PMD_DRV_LOG(NOTICE, "port %u already attached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/*
+	 * make sure that the detached flag is set prior to reconfiguring the
+	 * queues.
+	 */
+	avp->flags |= RTE_AVP_F_DETACHED;
+	rte_wmb();
+
+	/*
+	 * re-run the device create utility which will parse the new host info
+	 * and setup the AVP device queue pointers.
+	 */
+	ret = avp_dev_create(AVP_DEV_TO_PCI(eth_dev), eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to re-create AVP device, ret=%d\n", ret);
+		goto unlock;
+	}
+
+	if (avp->flags & RTE_AVP_F_CONFIGURED) {
+		/*
+		 * Update the receive queue mapping to handle cases where the
+		 * source and destination hosts have different queue
+		 * requirements.  As long as the DETACHED flag is asserted the
+		 * queue table should not be referenced so it should be safe to
+		 * update it.
+		 */
+		_avp_set_queue_counts(eth_dev);
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+			_avp_set_rx_queue_mappings(eth_dev, i);
+
+		/*
+		 * Update the host with our config details so that it knows the
+		 * device is active.
+		 */
+		memset(&config, 0, sizeof(config));
+		config.device_id = avp->device_id;
+		config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+		config.driver_version = RTE_AVP_DPDK_DRIVER_VERSION;
+		config.features = avp->features;
+		config.num_tx_queues = avp->num_tx_queues;
+		config.num_rx_queues = avp->num_rx_queues;
+		config.if_up = !!(avp->flags & RTE_AVP_F_LINKUP);
+
+		ret = avp_dev_ctrl_set_config(eth_dev, &config);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Config request failed by host, "
+				    "ret=%d\n", ret);
+			goto unlock;
+		}
+	}
+
+	rte_wmb();
+	avp->flags &= ~RTE_AVP_F_DETACHED;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_interrupt_handler(struct rte_intr_handle *intr_handle,
+						  void *data)
+{
+	struct rte_eth_dev *eth_dev = data;
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t status, value;
+	int ret;
+
+	if (registers == NULL)
+		rte_panic("no mapped MMIO register space\n");
+
+	/* read the interrupt status register
+	 * note: this register clears on read so all raised interrupts must be
+	 *    handled or remembered for later processing
+	 */
+	status = RTE_AVP_READ32(
+		RTE_PTR_ADD(registers,
+			    RTE_AVP_INTERRUPT_STATUS_OFFSET));
+
+	if (status & RTE_AVP_MIGRATION_INTERRUPT_MASK) {
+		/* handle interrupt based on current status */
+		value = RTE_AVP_READ32(
+			RTE_PTR_ADD(registers,
+				    RTE_AVP_MIGRATION_STATUS_OFFSET));
+		switch (value) {
+		case RTE_AVP_MIGRATION_DETACHED:
+			ret = avp_dev_detach(eth_dev);
+			break;
+		case RTE_AVP_MIGRATION_ATTACHED:
+			ret = avp_dev_attach(eth_dev);
+			break;
+		default:
+			PMD_DRV_LOG(ERR,
+				    "unexpected migration status, status=%u\n",
+				    value);
+			ret = -EINVAL;
+		}
+
+		/* acknowledge the request by writing out our current status */
+		value = (ret == 0 ? value : RTE_AVP_MIGRATION_ERROR);
+		RTE_AVP_WRITE32(value,
+				RTE_PTR_ADD(registers,
+					    RTE_AVP_MIGRATION_ACK_OFFSET));
+
+		PMD_DRV_LOG(NOTICE, "AVP migration interrupt handled\n");
+	}
+
+	if (status & ~RTE_AVP_MIGRATION_INTERRUPT_MASK)
+		PMD_DRV_LOG(WARNING,
+			    "AVP unexpected interrupt, status=0x%08x\n",
+			    status);
+
+	/* re-enable UIO interrupt handling */
+	ret = rte_intr_enable(intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to re-enable UIO interrupts, "
+			    "ret=%d\n", ret);
+		/* continue */
+	}
+}
+
+static int
+avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return -EINVAL;
+
+	/* enable UIO interrupt handling */
+	ret = rte_intr_enable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to enable UIO interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* inform the device that all interrupts are enabled */
+	RTE_AVP_WRITE32(RTE_AVP_APP_INTERRUPTS_MASK,
+			RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	return 0;
+}
+
+static int
+avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	int ret;
+
+	/* register a callback handler with UIO for interrupt notifications */
+	ret = rte_intr_callback_register(&pci_dev->intr_handle,
+					 avp_dev_interrupt_handler,
+					 (void *)eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to register UIO interrupt callback, "
+			    "ret=%d\n", ret);
+		return ret;
+	}
+
+	/* enable interrupt processing */
+	return avp_dev_enable_interrupts(eth_dev);
+}
+
+static int
+avp_dev_migration_pending(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t value;
+
+	if (registers == NULL)
+		return 0;
+
+	value = RTE_AVP_READ32(RTE_PTR_ADD(registers,
+					   RTE_AVP_MIGRATION_STATUS_OFFSET));
+	if (value == RTE_AVP_MIGRATION_DETACHED) {
+		/* migration is in progress; ack it if we have not already */
+		RTE_AVP_WRITE32(value,
+				RTE_PTR_ADD(registers,
+					    RTE_AVP_MIGRATION_ACK_OFFSET));
+		return 1;
+	}
+	return 0;
+}
+
+/*
+ * create an AVP device using the supplied device info by first translating it
+ * to guest address space(s).
+ */
+static int
+avp_dev_create(struct rte_pci_device *pci_dev,
+	       struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR];
+	if (resource->addr == NULL) {
+		PMD_DRV_LOG(ERR, "BAR%u is not mapped\n",
+			    RTE_AVP_PCI_DEVICE_BAR);
+		return -EFAULT;
+	}
+	host_info = (struct rte_avp_device_info *)resource->addr;
+
+	if ((host_info->magic != RTE_AVP_DEVICE_MAGIC) ||
+		avp_dev_version_check(host_info->version)) {
+		PMD_DRV_LOG(ERR, "Invalid AVP PCI device, magic 0x%08x "
+			    "version 0x%08x > 0x%08x\n",
+			    host_info->magic, host_info->version,
+			    RTE_AVP_DPDK_DRIVER_VERSION);
+		return -EINVAL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host device is v%u.%u.%u\n",
+		    RTE_AVP_GET_RELEASE_VERSION(host_info->version),
+		    RTE_AVP_GET_MAJOR_VERSION(host_info->version),
+		    RTE_AVP_GET_MINOR_VERSION(host_info->version));
+
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u TX queue(s)\n",
+		    host_info->min_tx_queues, host_info->max_tx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u RX queue(s)\n",
+		    host_info->min_rx_queues, host_info->max_rx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports features 0x%08x\n",
+		    host_info->features);
+
+	if (avp->magic != RTE_AVP_ETHDEV_MAGIC) {
+		/*
+		 * First time initialization (i.e., not during a VM
+		 * migration)
+		 */
+		memset(avp, 0, sizeof(*avp));
+		avp->magic = RTE_AVP_ETHDEV_MAGIC;
+		avp->dev_data = eth_dev->data;
+		avp->port_id = eth_dev->data->port_id;
+		avp->host_mbuf_size = host_info->mbuf_size;
+		avp->host_features = host_info->features;
+		rte_spinlock_init(&avp->lock);
+		memcpy(&avp->ethaddr.addr_bytes[0],
+		       host_info->ethaddr, ETHER_ADDR_LEN);
+		/* adjust max values to not exceed our max */
+		avp->max_tx_queues =
+			RTE_MIN(host_info->max_tx_queues, RTE_AVP_MAX_QUEUES);
+		avp->max_rx_queues =
+			RTE_MIN(host_info->max_rx_queues, RTE_AVP_MAX_QUEUES);
+	} else {
+		/* Re-attaching during migration */
+
+		/* TODO... requires validation of host values */
+		if ((host_info->features & avp->features) != avp->features) {
+			PMD_DRV_LOG(ERR, "AVP host features mismatched; "
+				    "guest=0x%08x, host=0x%08x\n",
+				    avp->features, host_info->features);
+			/* this should not be possible; continue for now */
+		}
+	}
+
+	/* the device id is allowed to change over migrations */
+	avp->device_id = host_info->device_id;
+
+	/* translate incoming host addresses to guest address space */
+	PMD_DRV_LOG(DEBUG, "AVP first host tx queue at 0x%" PRIx64 "\n",
+		    host_info->tx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host alloc queue at 0x%" PRIx64 "\n",
+		    host_info->alloc_phys);
+	for (i = 0; i < avp->max_tx_queues; i++) {
+		avp->tx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->tx_phys + (i * host_info->tx_size));
+
+		avp->alloc_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->alloc_phys + (i * host_info->alloc_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP first host rx queue at 0x%" PRIx64 "\n",
+		    host_info->rx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host free queue at 0x%" PRIx64 "\n",
+		    host_info->free_phys);
+	for (i = 0; i < avp->max_rx_queues; i++) {
+		avp->rx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->rx_phys + (i * host_info->rx_size));
+		avp->free_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->free_phys + (i * host_info->free_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host request queue at 0x%" PRIx64 "\n",
+		    host_info->req_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host response queue at 0x%" PRIx64 "\n",
+		    host_info->resp_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host sync address at 0x%" PRIx64 "\n",
+		    host_info->sync_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host mbuf address at 0x%" PRIx64 "\n",
+		    host_info->mbuf_phys);
+	avp->req_q = avp_dev_translate_address(eth_dev, host_info->req_phys);
+	avp->resp_q = avp_dev_translate_address(eth_dev, host_info->resp_phys);
+	avp->sync_addr =
+		avp_dev_translate_address(eth_dev, host_info->sync_phys);
+	avp->mbuf_addr =
+		avp_dev_translate_address(eth_dev, host_info->mbuf_phys);
+
+	/*
+	 * store the host mbuf virtual address so that we can calculate
+	 * relative offsets for each mbuf as they are processed
+	 */
+	avp->host_mbuf_addr = host_info->mbuf_va;
+	avp->host_sync_addr = host_info->sync_va;
+
+	/*
+	 * store the maximum packet length that is supported by the host.
+	 */
+	avp->max_rx_pkt_len = host_info->max_rx_pkt_len;
+	PMD_DRV_LOG(DEBUG, "AVP host max receive packet length is %u\n",
+				host_info->max_rx_pkt_len);
+
+	return 0;
+}
+
+/*
  * This function is based on probe() function in avp_pci.c
  * It returns 0 on success.
  */
@@ -164,6 +891,7 @@ struct avp_adapter {
 	struct avp_dev *avp =
 		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_pci_device *pci_dev;
+	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 
@@ -181,6 +909,34 @@ struct avp_adapter {
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check current migration status */
+	if (avp_dev_migration_pending(eth_dev)) {
+		PMD_DRV_LOG(ERR, "VM live migration operation in progress\n");
+		return -EBUSY;
+	}
+
+	/* Check BAR resources */
+	ret = avp_dev_check_regions(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to validate BAR resources, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* Enable interrupts */
+	ret = avp_dev_setup_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* Handle each subtype */
+	ret = avp_dev_create(pci_dev, eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to create device, ret=%d\n", ret);
+		return ret;
+	}
+
 	/* Allocate memory for storing MAC addresses */
 	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
@@ -191,6 +947,7 @@ struct avp_adapter {
 		return -ENOMEM;
 	}
 
+	/* Get a mac from device config */
 	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
 
 	return 0;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 09/15] net/avp: device configuration
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (7 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 08/15] net/avp: device initialization Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 10/15] net/avp: queue setup and release Allain Legacy
                     ` (7 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for "dev_configure" operations to allow an application to
configure the device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 136 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 136 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 3bbff33..71a6927 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -66,6 +66,12 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
 
+static int avp_dev_configure(struct rte_eth_dev *dev);
+static void avp_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
+static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avp_dev_link_update(struct rte_eth_dev *dev,
+			       __rte_unused int wait_to_complete);
 
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
@@ -104,6 +110,15 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 	},
 };
 
+/*
+ * dev_ops for avp, bare necessities for basic operation
+ */
+static const struct eth_dev_ops avp_eth_dev_ops = {
+	.dev_configure       = avp_dev_configure,
+	.dev_infos_get       = avp_dev_info_get,
+	.vlan_offload_set    = avp_vlan_offload_set,
+	.link_update         = avp_dev_link_update,
+};
 
 /**@{ AVP device flags */
 #define RTE_AVP_F_PROMISC (1 << 1)
@@ -894,6 +909,7 @@ struct avp_queue {
 	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	eth_dev->dev_ops = &avp_eth_dev_ops;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -984,6 +1000,126 @@ struct avp_queue {
 };
 
 
+static int
+avp_dev_configure(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_avp_device_config config;
+	int mask = 0;
+	void *addr;
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR,
+			    "Operation not supported during "
+			    "VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/* Setup required number of queues */
+	_avp_set_queue_counts(eth_dev);
+
+	mask = (ETH_VLAN_STRIP_MASK |
+		ETH_VLAN_FILTER_MASK |
+		ETH_VLAN_EXTEND_MASK);
+	avp_vlan_offload_set(eth_dev, mask);
+
+	/* update device config */
+	memset(&config, 0, sizeof(config));
+	config.device_id = host_info->device_id;
+	config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+	config.driver_version = RTE_AVP_DPDK_DRIVER_VERSION;
+	config.features = avp->features;
+	config.num_tx_queues = avp->num_tx_queues;
+	config.num_rx_queues = avp->num_rx_queues;
+
+	ret = avp_dev_ctrl_set_config(eth_dev, &config);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Config request failed by host, ret=%d\n", ret);
+		goto unlock;
+	}
+
+	avp->flags |= RTE_AVP_F_CONFIGURED;
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+
+static int
+avp_dev_link_update(struct rte_eth_dev *eth_dev,
+					__rte_unused int wait_to_complete)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_eth_link *link = &eth_dev->data->dev_link;
+
+	link->link_speed = ETH_SPEED_NUM_10G;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = !!(avp->flags & RTE_AVP_F_LINKUP);
+
+	return -1;
+}
+
+
+static void
+avp_dev_info_get(struct rte_eth_dev *eth_dev,
+		 struct rte_eth_dev_info *dev_info)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	dev_info->driver_name = "rte_avp_pmd";
+	dev_info->max_rx_queues = avp->max_rx_queues;
+	dev_info->max_tx_queues = avp->max_tx_queues;
+	dev_info->min_rx_bufsize = RTE_AVP_MIN_RX_BUFSIZE;
+	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
+	dev_info->max_mac_addrs = RTE_AVP_MAX_MAC_ADDRS;
+	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+	}
+}
+
+static void
+avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+			if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
+			else
+				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
+		} else {
+			PMD_DRV_LOG(ERR, "VLAN strip offload not supported\n");
+		}
+	}
+
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_filter)
+			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_extend)
+			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
+	}
+}
+
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 10/15] net/avp: queue setup and release
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (8 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 09/15] net/avp: device configuration Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-26 19:08   ` [PATCH v2 11/15] net/avp: packet receive functions Allain Legacy
                     ` (6 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds queue management operations so that an application can set up and
release the transmit and receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 148 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 147 insertions(+), 1 deletion(-)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 71a6927..0f14fef 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -72,7 +72,21 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
-
+static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t rx_queue_id,
+				  uint16_t nb_rx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_rxconf *rx_conf,
+				  struct rte_mempool *pool);
+
+static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t tx_queue_id,
+				  uint16_t nb_tx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_txconf *tx_conf);
+
+static void avp_dev_rx_queue_release(void *rxq);
+static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -118,6 +132,10 @@ static int avp_dev_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.link_update         = avp_dev_link_update,
+	.rx_queue_setup      = avp_dev_rx_queue_setup,
+	.rx_queue_release    = avp_dev_rx_queue_release,
+	.tx_queue_setup      = avp_dev_tx_queue_setup,
+	.tx_queue_release    = avp_dev_tx_queue_release,
 };
 
 /**@{ AVP device flags */
@@ -1001,6 +1019,134 @@ struct avp_queue {
 
 
 static int
+avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t rx_queue_id,
+		       uint16_t nb_rx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *pool)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct avp_queue *rxq;
+
+	if (rx_queue_id >= eth_dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue id is out of range: "
+			    "rx_queue_id=%u, nb_rx_queues=%u\n",
+			    rx_queue_id, eth_dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	/* Save mbuf pool pointer */
+	avp->pool = pool;
+
+	/* Save the local mbuf size */
+	mbp_priv = rte_mempool_get_priv(pool);
+	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
+	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
+
+	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
+		    avp->max_rx_pkt_len,
+		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+		    avp->host_mbuf_size,
+		    avp->guest_mbuf_size);
+
+	/* allocate a queue object */
+	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Rx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* save back pointers to AVP and Ethernet devices */
+	rxq->avp = avp;
+	rxq->dev_data = eth_dev->data;
+	eth_dev->data->rx_queues[rx_queue_id] = (void *)rxq;
+
+	/* setup the queue receive mapping for the current queue. */
+	_avp_set_rx_queue_mappings(eth_dev, rx_queue_id);
+
+	PMD_DRV_LOG(DEBUG, "Rx queue %u setup at %p\n", rx_queue_id, rxq);
+
+	(void)nb_rx_desc;
+	(void)rx_conf;
+	return 0;
+}
+
+static int
+avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t tx_queue_id,
+		       uint16_t nb_tx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *txq;
+
+	if (tx_queue_id >= eth_dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue id is out of range: "
+			    "tx_queue_id=%u, nb_tx_queues=%u\n",
+			    tx_queue_id, eth_dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	/* allocate a queue object */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Tx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* only the configured set of transmit queues are used */
+	txq->queue_id = tx_queue_id;
+	txq->queue_base = tx_queue_id;
+	txq->queue_limit = tx_queue_id;
+
+	/* save back pointers to AVP and Ethernet devices */
+	txq->avp = avp;
+	txq->dev_data = eth_dev->data;
+	eth_dev->data->tx_queues[tx_queue_id] = (void *)txq;
+
+	PMD_DRV_LOG(DEBUG, "Tx queue %u setup at %p\n", tx_queue_id, txq);
+
+	(void)nb_tx_desc;
+	(void)tx_conf;
+	return 0;
+}
+
+static void
+avp_dev_rx_queue_release(void *rx_queue)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct avp_dev *avp = rxq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		if (data->rx_queues[i] == rxq)
+			data->rx_queues[i] = NULL;
+	}
+}
+
+static void
+avp_dev_tx_queue_release(void *tx_queue)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct avp_dev *avp = txq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		if (data->tx_queues[i] == txq)
+			data->tx_queues[i] = NULL;
+	}
+}
+
+static int
 avp_dev_configure(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 11/15] net/avp: packet receive functions
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (9 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 10/15] net/avp: queue setup and release Allain Legacy
@ 2017-02-26 19:08   ` Allain Legacy
  2017-02-27 16:46     ` Stephen Hemminger
  2017-02-26 19:09   ` [PATCH v2 12/15] net/avp: packet transmit functions Allain Legacy
                     ` (5 subsequent siblings)
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:08 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds the functions required for receiving packets from the host application
via AVP device queues.  Both the simple and scattered receive functions are
supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/Makefile     |   1 +
 drivers/net/avp/avp_ethdev.c | 468 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 469 insertions(+)

diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 9cf0449..3013cd1 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -56,5 +56,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_mempool lib/librte_mbuf
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 0f14fef..cd0b0c0 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -85,11 +85,19 @@ static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
 				  unsigned int socket_id,
 				  const struct rte_eth_txconf *tx_conf);
 
+static uint16_t avp_recv_scattered_pkts(void *rx_queue,
+					struct rte_mbuf **rx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_recv_pkts(void *rx_queue,
+			      struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts);
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
+#define RTE_AVP_MAX_RX_BURST 64
 #define RTE_AVP_MAX_MAC_ADDRS 1
 #define RTE_AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -333,6 +341,15 @@ struct avp_queue {
 	return ret == 0 ? request.result : ret;
 }
 
+/* translate from host mbuf virtual address to guest virtual address */
+static inline void *
+avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
+{
+	return RTE_PTR_ADD(RTE_PTR_SUB(host_mbuf_address,
+				       (uintptr_t)avp->host_mbuf_addr),
+			   (uintptr_t)avp->mbuf_addr);
+}
+
 /* translate from host physical address to guest virtual address */
 static void *
 avp_dev_translate_address(struct rte_eth_dev *eth_dev,
@@ -928,6 +945,7 @@ struct avp_queue {
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avp_recv_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -936,6 +954,12 @@ struct avp_queue {
 		 * be mapped to the same virtual address so all pointers should
 		 * be valid.
 		 */
+		if (eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "AVP device configured "
+				    "for chained mbufs\n");
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
 		return 0;
 	}
 
@@ -1019,6 +1043,38 @@ struct avp_queue {
 
 
 static int
+avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
+			 struct avp_dev *avp)
+{
+	unsigned int max_rx_pkt_len;
+
+	max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+
+	if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the guest MTU is greater than either the host or guest
+		 * buffers then chained mbufs have to be enabled in the TX
+		 * direction.  It is assumed that the application will not need
+		 * to send packets larger than their max_rx_pkt_len (MRU).
+		 */
+		return 1;
+	}
+
+	if ((avp->max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (avp->max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the host MRU is greater than its own mbuf size or the
+		 * guest mbuf size then chained mbufs have to be enabled in the
+		 * RX direction.
+		 */
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 		       uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc,
@@ -1046,6 +1102,16 @@ struct avp_queue {
 	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
 	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
 
+	if (avp_dev_enable_scattered(eth_dev, avp)) {
+		if (!eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "AVP device configured "
+				    "for chained mbufs\n");
+			eth_dev->data->scattered_rx = 1;
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
+	}
+
 	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
 		    avp->max_rx_pkt_len,
 		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
@@ -1118,6 +1184,408 @@ struct avp_queue {
 	return 0;
 }
 
+static inline int
+_avp_cmp_ether_addr(struct ether_addr *a, struct ether_addr *b)
+{
+	uint16_t *_a = (uint16_t *)&a->addr_bytes[0];
+	uint16_t *_b = (uint16_t *)&b->addr_bytes[0];
+	return (_a[0] ^ _b[0]) | (_a[1] ^ _b[1]) | (_a[2] ^ _b[2]);
+}
+
+static inline int
+_avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
+{
+	struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+		/* allow all packets destined to our address */
+		return 0;
+	}
+
+	if (likely(is_broadcast_ether_addr(&eth->d_addr))) {
+		/* allow all broadcast packets */
+		return 0;
+	}
+
+	if (likely(is_multicast_ether_addr(&eth->d_addr))) {
+		/* allow all multicast packets */
+		return 0;
+	}
+
+	if (avp->flags & RTE_AVP_F_PROMISC) {
+		/* allow all packets when in promiscuous mode */
+		return 0;
+	}
+
+	return -1;
+}
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+static inline void
+__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+{
+	struct rte_avp_desc *first_buf;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int pkt_len;
+	unsigned int nb_segs;
+	void *pkt_data;
+	unsigned int i;
+
+	first_buf = avp_dev_translate_buffer(avp, buf);
+
+	i = 0;
+	pkt_len = 0;
+	nb_segs = first_buf->nb_segs;
+	do {
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		if (pkt_buf == NULL) {
+			rte_panic("bad buffer: segment %u has an "
+				  "invalid address %p\n", i, buf);
+		}
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		if (pkt_data == NULL)
+			rte_panic("bad buffer: segment %u has a "
+				  "NULL data pointer\n", i);
+		if (pkt_buf->data_len == 0)
+			rte_panic("bad buffer: segment %u has "
+				  "0 data length\n", i);
+		pkt_len += pkt_buf->data_len;
+		nb_segs--;
+		i++;
+
+	} while (nb_segs && (buf = pkt_buf->next) != NULL);
+
+	if (nb_segs != 0) {
+		rte_panic("bad buffer: expected %u segments found %u\n",
+			  first_buf->nb_segs, (first_buf->nb_segs - nb_segs));
+	}
+	if (pkt_len != first_buf->pkt_len) {
+		rte_panic("bad buffer: expected length %u found %u\n",
+			  first_buf->pkt_len, pkt_len);
+	}
+}
+
+#define avp_dev_buffer_sanity_check(a, b) \
+	__avp_dev_buffer_sanity_check((a), (b))
+
+#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
+
+#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+
+#endif
+
+/*
+ * Copy a host buffer chain to a set of mbufs.  This function assumes that
+ * there are exactly the required number of mbufs available to copy all
+ * source bytes.
+ */
+static inline struct rte_mbuf *
+avp_dev_copy_from_buffers(struct avp_dev *avp,
+			  struct rte_avp_desc *buf,
+			  struct rte_mbuf **mbufs,
+			  unsigned int count)
+{
+	struct rte_mbuf *m_previous = NULL;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int total_length = 0;
+	unsigned int copy_length;
+	unsigned int src_offset;
+	struct rte_mbuf *m;
+	uint16_t ol_flags;
+	uint16_t vlan_tci;
+	void *pkt_data;
+	unsigned int i;
+
+	avp_dev_buffer_sanity_check(avp, buf);
+
+	/* setup the first source buffer */
+	pkt_buf = avp_dev_translate_buffer(avp, buf);
+	pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+	total_length = pkt_buf->pkt_len;
+	src_offset = 0;
+
+	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+		ol_flags = PKT_RX_VLAN_PKT;
+		vlan_tci = pkt_buf->vlan_tci;
+	} else {
+		ol_flags = 0;
+		vlan_tci = 0;
+	}
+
+	for (i = 0; (i < count) && (buf != NULL); i++) {
+		/* fill each destination buffer */
+		m = mbufs[i];
+
+		if (m_previous != NULL)
+			m_previous->next = m;
+
+		m_previous = m;
+
+		do {
+			/*
+			 * Copy as many source buffers as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->guest_mbuf_size -
+					       rte_pktmbuf_data_len(m)),
+					      (pkt_buf->data_len -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       rte_pktmbuf_data_len(m)),
+				   RTE_PTR_ADD(pkt_data, src_offset),
+				   copy_length);
+			rte_pktmbuf_data_len(m) += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == pkt_buf->data_len)) {
+				/* need a new source buffer */
+				buf = pkt_buf->next;
+				if (buf != NULL) {
+					pkt_buf = avp_dev_translate_buffer(
+						avp, buf);
+					pkt_data = avp_dev_translate_buffer(
+						avp, pkt_buf->data);
+					src_offset = 0;
+				}
+			}
+
+			if (unlikely(rte_pktmbuf_data_len(m) ==
+				     avp->guest_mbuf_size)) {
+				/* need a new destination mbuf */
+				break;
+			}
+
+		} while (buf != NULL);
+	}
+
+	m = mbufs[0];
+	m->ol_flags = ol_flags;
+	m->nb_segs = count;
+	rte_pktmbuf_pkt_len(m) = total_length;
+	m->vlan_tci = vlan_tci;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	return m;
+}
+
+static uint16_t
+avp_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[RTE_AVP_MAX_RX_BURST];
+	struct rte_mbuf *mbufs[RTE_AVP_MAX_MBUF_SEGMENTS];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	struct rte_avp_desc *buf;
+	unsigned int count, avail, n;
+	unsigned int guest_mbuf_size;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int buf_len;
+	unsigned int port_id;
+	unsigned int i;
+
+	if (unlikely(avp->flags & RTE_AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
+	guest_mbuf_size = avp->guest_mbuf_size;
+	port_id = avp->port_id;
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)RTE_AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i + 1 < n) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+		buf = avp_bufs[i];
+
+		/* Peek into the first buffer to determine the total length */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		buf_len = pkt_buf->pkt_len;
+
+		/* Allocate enough mbufs to receive the entire packet */
+		required = (buf_len + guest_mbuf_size - 1) / guest_mbuf_size;
+		if (rte_pktmbuf_alloc_bulk(avp->pool, mbufs, required)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* Copy the data from the buffers to our mbufs */
+		m = avp_dev_copy_from_buffers(avp, buf, mbufs, required);
+
+		/* finalize mbuf */
+		m->port = port_id;
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += buf_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
+
+static uint16_t
+avp_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[RTE_AVP_MAX_RX_BURST];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	unsigned int count, avail, n;
+	unsigned int pkt_len;
+	struct rte_mbuf *m;
+	char *pkt_data;
+	unsigned int i;
+
+	if (unlikely(avp->flags & RTE_AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)RTE_AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i < n - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust host pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = pkt_buf->pkt_len;
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+			     (pkt_buf->nb_segs > 1))) {
+			/*
+			 * application should be using the scattered receive
+			 * function
+			 */
+			rxq->errors++;
+			continue;
+		}
+
+		/* allocate a new mbuf for the received packet */
+		m = rte_pktmbuf_alloc(avp->pool);
+		if (unlikely(m == NULL)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* copy data out of the host buffer to our buffer */
+		m->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_memcpy(rte_pktmbuf_mtod(m, void *), pkt_data, pkt_len);
+
+		/* initialize the local mbuf */
+		rte_pktmbuf_data_len(m) = pkt_len;
+		rte_pktmbuf_pkt_len(m) = pkt_len;
+		m->port = avp->port_id;
+
+		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+			m->ol_flags = PKT_RX_VLAN_PKT;
+			m->vlan_tci = pkt_buf->vlan_tci;
+		}
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += pkt_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 12/15] net/avp: packet transmit functions
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (10 preceding siblings ...)
  2017-02-26 19:08   ` [PATCH v2 11/15] net/avp: packet receive functions Allain Legacy
@ 2017-02-26 19:09   ` Allain Legacy
  2017-02-26 22:18     ` Legacy, Allain
  2017-02-26 19:09   ` [PATCH v2 13/15] net/avp: device promiscuous functions Allain Legacy
                     ` (4 subsequent siblings)
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:09 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for packet transmit functions so that an application can send
packets to the host application via an AVP device queue.  Both the simple
and scattered functions are supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 419 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 419 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index cd0b0c0..514e27d 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -92,12 +92,28 @@ static uint16_t avp_recv_scattered_pkts(void *rx_queue,
 static uint16_t avp_recv_pkts(void *rx_queue,
 			      struct rte_mbuf **rx_pkts,
 			      uint16_t nb_pkts);
+
+static uint16_t avp_xmit_scattered_pkts(void *tx_queue,
+					struct rte_mbuf **tx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_xmit_pkts(void *tx_queue,
+			      struct rte_mbuf **tx_pkts,
+			      uint16_t nb_pkts);
+
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
+
+static void avp_dev_stats_get(struct rte_eth_dev *dev,
+			      struct rte_eth_stats *stats);
+static void avp_dev_stats_reset(struct rte_eth_dev *dev);
+
+
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
 #define RTE_AVP_MAX_RX_BURST 64
+#define RTE_AVP_MAX_TX_BURST 64
 #define RTE_AVP_MAX_MAC_ADDRS 1
 #define RTE_AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -139,6 +155,8 @@ static uint16_t avp_recv_pkts(void *rx_queue,
 	.dev_configure       = avp_dev_configure,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
+	.stats_get           = avp_dev_stats_get,
+	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
@@ -946,6 +964,7 @@ struct avp_queue {
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &avp_recv_pkts;
+	eth_dev->tx_pkt_burst = &avp_xmit_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -959,6 +978,7 @@ struct avp_queue {
 				    "AVP device configured "
 				    "for chained mbufs\n");
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 		return 0;
 	}
@@ -1109,6 +1129,7 @@ struct avp_queue {
 				    "for chained mbufs\n");
 			eth_dev->data->scattered_rx = 1;
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 	}
 
@@ -1586,6 +1607,340 @@ struct avp_queue {
 	return count;
 }
 
+/*
+ * Copy a chained mbuf to a set of host buffers.  This function assumes that
+ * there are sufficient destination buffers to contain the entire source
+ * packet.
+ */
+static inline uint16_t
+avp_dev_copy_to_buffers(struct avp_dev *avp,
+			struct rte_mbuf *mbuf,
+			struct rte_avp_desc **buffers,
+			unsigned int count)
+{
+	struct rte_avp_desc *previous_buf = NULL;
+	struct rte_avp_desc *first_buf = NULL;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_desc *buf;
+	size_t total_length;
+	struct rte_mbuf *m;
+	size_t copy_length;
+	size_t src_offset;
+	char *pkt_data;
+	unsigned int i;
+
+	__rte_mbuf_sanity_check(mbuf, 1);
+
+	m = mbuf;
+	src_offset = 0;
+	total_length = rte_pktmbuf_pkt_len(m);
+	for (i = 0; (i < count) && (m != NULL); i++) {
+		/* fill each destination buffer */
+		buf = buffers[i];
+
+		if (i < count - 1) {
+			/* prefetch next entry while processing this one */
+			pkt_buf = avp_dev_translate_buffer(avp, buffers[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+
+		/* setup the buffer chain */
+		if (previous_buf != NULL)
+			previous_buf->next = buf;
+		else
+			first_buf = pkt_buf;
+
+		previous_buf = pkt_buf;
+
+		do {
+			/*
+			 * copy as many source mbuf segments as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->host_mbuf_size -
+					       pkt_buf->data_len),
+					      (rte_pktmbuf_data_len(m) -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(pkt_data, pkt_buf->data_len),
+				   RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       src_offset),
+				   copy_length);
+			pkt_buf->data_len += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == rte_pktmbuf_data_len(m))) {
+				/* need a new source buffer */
+				m = m->next;
+				src_offset = 0;
+			}
+
+			if (unlikely(pkt_buf->data_len ==
+				     avp->host_mbuf_size)) {
+				/* need a new destination buffer */
+				break;
+			}
+
+		} while (m != NULL);
+	}
+
+	first_buf->nb_segs = count;
+	first_buf->pkt_len = total_length;
+
+	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+		first_buf->vlan_tci = mbuf->vlan_tci;
+	}
+
+	avp_dev_buffer_sanity_check(avp, buffers[0]);
+
+	return total_length;
+}
+
+
+static uint16_t
+avp_xmit_scattered_pkts(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_avp_desc *avp_bufs[(RTE_AVP_MAX_TX_BURST *
+				       RTE_AVP_MAX_MBUF_SEGMENTS)];
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *tx_bufs[RTE_AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	unsigned int orig_nb_pkts;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int segments;
+	unsigned int tx_bytes;
+	unsigned int i;
+
+	orig_nb_pkts = nb_pkts;
+	if (unlikely(avp->flags & RTE_AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop? */
+		txq->errors += nb_pkts;
+		return 0;
+	}
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > RTE_AVP_MAX_TX_BURST))
+		nb_pkts = RTE_AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+	if (unlikely(avail > (RTE_AVP_MAX_TX_BURST *
+			      RTE_AVP_MAX_MBUF_SEGMENTS)))
+		avail = RTE_AVP_MAX_TX_BURST * RTE_AVP_MAX_MBUF_SEGMENTS;
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	nb_pkts = RTE_MIN(count, nb_pkts);
+
+	/* determine how many packets will fit in the available buffers */
+	count = 0;
+	segments = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		if (likely(i < (unsigned int)nb_pkts - 1)) {
+			/* prefetch next entry while processing this one */
+			rte_prefetch0(tx_pkts[i + 1]);
+		}
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		if (unlikely((required == 0) ||
+			     (required > RTE_AVP_MAX_MBUF_SEGMENTS)))
+			break;
+		else if (unlikely(required + segments > avail))
+			break;
+		segments += required;
+		count++;
+	}
+	nb_pkts = count;
+
+	if (unlikely(nb_pkts == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   nb_pkts, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, segments);
+	if (unlikely(n != segments)) {
+		PMD_TX_LOG(DEBUG, "Failed to allocate buffers "
+			   "n=%u, segments=%u, orig=%u\n",
+			   n, segments, orig_nb_pkts);
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	count = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* determine how many buffers are required for this packet */
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		tx_bytes += avp_dev_copy_to_buffers(avp, m,
+						    &avp_bufs[count], required);
+		tx_bufs[i] = avp_bufs[count];
+		count += required;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += nb_pkts;
+	txq->bytes += tx_bytes;
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+	for (i = 0; i < nb_pkts; i++)
+		avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+#endif
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&tx_bufs[0], nb_pkts);
+	if (unlikely(n != orig_nb_pkts))
+		txq->errors += (orig_nb_pkts - n);
+
+	return n;
+}
+
+
+static uint16_t
+avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *avp_bufs[RTE_AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	struct rte_mbuf *m;
+	unsigned int pkt_len;
+	unsigned int tx_bytes;
+	char *pkt_data;
+	unsigned int i;
+
+	if (unlikely(avp->flags & RTE_AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop?! */
+		txq->errors++;
+		return 0;
+	}
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > RTE_AVP_MAX_TX_BURST))
+		nb_pkts = RTE_AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+
+	if (unlikely(count == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors++;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   count, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, count);
+	if (unlikely(n != count)) {
+		txq->errors++;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	for (i = 0; i < count; i++) {
+		/* prefetch next entry while processing the current one */
+		if (i < count - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = rte_pktmbuf_pkt_len(m);
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+					 (pkt_len > avp->host_mbuf_size))) {
+			/*
+			 * application should be using the scattered transmit
+			 * function; send it truncated to avoid the performance
+			 * hit of having to manage returning the already
+			 * allocated buffer to the free list.  This should not
+			 * happen since the application should have set the
+			 * max_rx_pkt_len based on its MTU and it should be
+			 * policing its own packet sizes.
+			 */
+			txq->errors++;
+			pkt_len = RTE_MIN(avp->guest_mbuf_size,
+					  avp->host_mbuf_size);
+		}
+
+		/* copy data out of our mbuf and into the AVP buffer */
+		rte_memcpy(pkt_data, rte_pktmbuf_mtod(m, void *), pkt_len);
+		pkt_buf->pkt_len = pkt_len;
+		pkt_buf->data_len = pkt_len;
+		pkt_buf->nb_segs = 1;
+		pkt_buf->next = NULL;
+
+		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+			pkt_buf->vlan_tci = m->vlan_tci;
+		}
+
+		tx_bytes += pkt_len;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += count;
+	txq->bytes += tx_bytes;
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&avp_bufs[0], count);
+
+	return n;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
@@ -1734,6 +2089,70 @@ struct avp_queue {
 	}
 }
 
+static void
+avp_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			stats->ipackets += rxq->packets;
+			stats->ibytes += rxq->bytes;
+			stats->ierrors += rxq->errors;
+
+			stats->q_ipackets[i] += rxq->packets;
+			stats->q_ibytes[i] += rxq->bytes;
+			stats->q_errors[i] += rxq->errors;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			stats->opackets += txq->packets;
+			stats->obytes += txq->bytes;
+			stats->oerrors += txq->errors;
+
+			stats->q_opackets[i] += txq->packets;
+			stats->q_obytes[i] += txq->bytes;
+			stats->q_errors[i] += txq->errors;
+		}
+	}
+}
+
+static void
+avp_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			rxq->bytes = 0;
+			rxq->packets = 0;
+			rxq->errors = 0;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			txq->bytes = 0;
+			txq->packets = 0;
+			txq->errors = 0;
+		}
+	}
+}
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 13/15] net/avp: device promiscuous functions
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (11 preceding siblings ...)
  2017-02-26 19:09   ` [PATCH v2 12/15] net/avp: packet transmit functions Allain Legacy
@ 2017-02-26 19:09   ` Allain Legacy
  2017-02-26 19:09   ` [PATCH v2 14/15] net/avp: device start and stop operations Allain Legacy
                     ` (3 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:09 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for setting and clearing promiscuous mode on an AVP device.
When enabled, the _avp_mac_filter function will allow packets destined to any
MAC address to be processed by the receive functions.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 514e27d..dc08737 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -72,6 +72,9 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
+static void avp_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avp_dev_promiscuous_disable(struct rte_eth_dev *dev);
+
 static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
 				  uint16_t rx_queue_id,
 				  uint16_t nb_rx_desc,
@@ -158,6 +161,8 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
 	.stats_get           = avp_dev_stats_get,
 	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
+	.promiscuous_enable  = avp_dev_promiscuous_enable,
+	.promiscuous_disable = avp_dev_promiscuous_disable,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
 	.tx_queue_setup      = avp_dev_tx_queue_setup,
@@ -2041,6 +2046,35 @@ struct avp_queue {
 	return -1;
 }
 
+static void
+avp_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	rte_spinlock_lock(&avp->lock);
+	if ((avp->flags & RTE_AVP_F_PROMISC) == 0) {
+		avp->flags |= RTE_AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+	rte_spinlock_unlock(&avp->lock);
+}
+
+static void
+avp_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	rte_spinlock_lock(&avp->lock);
+	if ((avp->flags & RTE_AVP_F_PROMISC) != 0) {
+		avp->flags &= ~RTE_AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+	rte_spinlock_unlock(&avp->lock);
+}
 
 static void
 avp_dev_info_get(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 14/15] net/avp: device start and stop operations
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (12 preceding siblings ...)
  2017-02-26 19:09   ` [PATCH v2 13/15] net/avp: device promiscuous functions Allain Legacy
@ 2017-02-26 19:09   ` Allain Legacy
  2017-02-26 19:09   ` [PATCH v2 15/15] doc: adds information related to the AVP PMD Allain Legacy
                     ` (2 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:09 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Adds support for device start and stop functions.  This allows an
application to control the administrative state of an AVP device.  Stopping
the device will notify the host application to stop sending packets on that
device's receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 155 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 155 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index dc08737..9f7436d 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -67,6 +67,9 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
 
 static int avp_dev_configure(struct rte_eth_dev *dev);
+static int avp_dev_start(struct rte_eth_dev *dev);
+static void avp_dev_stop(struct rte_eth_dev *dev);
+static void avp_dev_close(struct rte_eth_dev *dev);
 static void avp_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
@@ -156,6 +159,9 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
  */
 static const struct eth_dev_ops avp_eth_dev_ops = {
 	.dev_configure       = avp_dev_configure,
+	.dev_start           = avp_dev_start,
+	.dev_stop            = avp_dev_stop,
+	.dev_close           = avp_dev_close,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.stats_get           = avp_dev_stats_get,
@@ -329,6 +335,24 @@ struct avp_queue {
 }
 
 static int
+avp_dev_ctrl_set_link_state(struct rte_eth_dev *eth_dev, unsigned int state)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a link state change request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_NETWORK_IF;
+	request.if_up = state;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
 avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
 			struct rte_avp_device_config *config)
 {
@@ -779,6 +803,31 @@ struct avp_queue {
 }
 
 static int
+avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return 0;
+
+	/* inform the device that all interrupts are disabled */
+	RTE_AVP_WRITE32(RTE_AVP_NO_INTERRUPTS_MASK,
+			RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	/* disable UIO interrupt handling */
+	ret = rte_intr_disable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to disable UIO interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
@@ -2030,6 +2079,112 @@ struct avp_queue {
 	return ret;
 }
 
+static int
+avp_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR,
+			    "Operation not supported during "
+			    "VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
+	/* disable features that we do not support */
+	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_extend = 0;
+	eth_dev->data->dev_conf.rxmode.hw_strip_crc = 0;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 1);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Link state change failed by host, ret=%d\n", ret);
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags |= RTE_AVP_F_LINKUP;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR,
+			    "Operation not supported during "
+			    "VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags &= ~RTE_AVP_F_LINKUP;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 0);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR,
+			    "Link state change failed by host, ret=%d\n", ret);
+		goto unlock;
+	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+}
+
+static void
+avp_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		RTE_AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & RTE_AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR,
+			    "Operation not supported during "
+			    "VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags &= ~RTE_AVP_F_LINKUP;
+	avp->flags &= ~RTE_AVP_F_CONFIGURED;
+
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts\n");
+		/* continue */
+	}
+
+	/* update device state */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Device shutdown failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+}
 
 static int
 avp_dev_link_update(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v2 15/15] doc: adds information related to the AVP PMD
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (13 preceding siblings ...)
  2017-02-26 19:09   ` [PATCH v2 14/15] net/avp: device start and stop operations Allain Legacy
@ 2017-02-26 19:09   ` Allain Legacy
  2017-02-27 17:04     ` Mcnamara, John
  2017-02-27  8:54   ` [PATCH v2 00/16] Wind River Systems " Vincent JARDIN
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-02-26 19:09 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

Updates the documentation and feature lists for the AVP PMD device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 MAINTAINERS                      |  1 +
 doc/guides/nics/avp.rst          | 99 ++++++++++++++++++++++++++++++++++++++++
 doc/guides/nics/features/avp.ini | 17 +++++++
 doc/guides/nics/index.rst        |  1 +
 4 files changed, 118 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 992ffa5..9dd6e7e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -427,6 +427,7 @@ Wind River AVP PMD
 M: Allain Legacy <allain.legacy@windriver.com>
 M: Matt Peters <matt.peters@windriver.com>
 F: drivers/net/avp
+F: doc/guides/nics/avp.rst
 
 
 Crypto Drivers
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
new file mode 100644
index 0000000..24bb4b6
--- /dev/null
+++ b/doc/guides/nics/avp.rst
@@ -0,0 +1,99 @@
+..  BSD LICENSE
+    Copyright(c) 2017 Wind River Systems, Inc.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AVP Poll Mode Driver
+====================
+
+The Accelerated Virtual Port (AVP) device is a shared memory based device
+available on the `virtualization platforms <http://www.windriver.com/products/titanium-cloud/>`_
+from Wind River Systems.  It is based on an earlier implementation of the DPDK
+KNI device and made available to VM instances via a mechanism based on an early
+implementation of qemu-kvm ivshmem.
+
+The driver binds to PCI devices that are exported by the hypervisor DPDK
+application via the ivshmem-like mechanism.  The definition of the device
+structure and configuration options are defined in rte_avp_common.h and
+rte_avp_fifo.h.  These two header files are made available as part of the PMD
+implementation in order to share the device definitions between the guest
+implementation (i.e., the PMD) and the host implementation (i.e., the
+hypervisor DPDK application).
+
+
+Features and Limitations of the AVP PMD
+---------------------------------------
+
+The AVP PMD provides the following functionality:
+
+*   Receive and transmit of both simple and chained mbuf packets
+
+*   Chained mbufs may include up to 5 chained segments
+
+*   Up to 8 receive and transmit queues per device
+
+*   Only a single MAC address is supported
+
+*   The MAC address cannot be modified
+
+*   The maximum receive packet length is 9238 bytes
+
+*   VLAN header stripping and insertion
+
+*   Promiscuous mode
+
+*   VM live-migration
+
+*   PCI hotplug insertion and removal
+
+
+Prerequisites
+-------------
+
+The following prerequisites apply:
+
+*   A virtual machine running in a Wind River Systems virtualization
+    environment and configured with at least one neutron port defined with a
+    vif-model set to "avp".
+
+
+Launching a VM with an AVP type network attachment
+--------------------------------------------------
+
+The following example will launch a VM with three network attachments.  The
+first attachment will have a default vif-model of "virtio".  The next two
+network attachments will have a vif-model of "avp" and may be used with a
+DPDK application that is built to include the AVP PMD.
+
+.. code-block:: console
+
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1
diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
new file mode 100644
index 0000000..64bf42e
--- /dev/null
+++ b/doc/guides/nics/features/avp.ini
@@ -0,0 +1,17 @@
+;
+; Supported features of the 'AVP' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Link status          = Y
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+Promiscuous mode     = Y
+Unicast MAC filter   = Y
+VLAN offload         = Y
+Basic stats          = Y
+Stats per queue      = Y
+Linux UIO            = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 87f9334..0ddcea5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -36,6 +36,7 @@ Network Interface Controller Drivers
     :numbered:
 
     overview
+    avp
     bnx2x
     bnxt
     cxgbe
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 12/15] net/avp: packet transmit functions
  2017-02-26 19:09   ` [PATCH v2 12/15] net/avp: packet transmit functions Allain Legacy
@ 2017-02-26 22:18     ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-02-26 22:18 UTC (permalink / raw)
  To: YIGIT, FERRUH; +Cc: dev

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Allain Legacy
> Sent: Sunday, February 26, 2017 2:09 PM
> To: YIGIT, FERRUH
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 12/15] net/avp: packet transmit functions
> 
> Adds support for packet transmit functions so that an application can send
> packets to the host application via an AVP device queue.  Both the simple
> and scattered functions are supported.
> 
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> Signed-off-by: Matt Peters <matt.peters@windriver.com>
> ---
>  drivers/net/avp/avp_ethdev.c | 419
> +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 419 insertions(+)
> 
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index cd0b0c0..514e27d 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -92,12 +92,28 @@ static uint16_t avp_recv_scattered_pkts(void
> *rx_queue,
>  static uint16_t avp_recv_pkts(void *rx_queue,
>  			      struct rte_mbuf **rx_pkts,
>  			      uint16_t nb_pkts);
> +
> +static uint16_t avp_xmit_scattered_pkts(void *tx_queue,
> +					struct rte_mbuf **tx_pkts,
> +					uint16_t nb_pkts);
> +
> +static uint16_t avp_xmit_pkts(void *tx_queue,
> +			      struct rte_mbuf **tx_pkts,
> +			      uint16_t nb_pkts);
> +
>  static void avp_dev_rx_queue_release(void *rxq);
>  static void avp_dev_tx_queue_release(void *txq);
> +
> +static void avp_dev_stats_get(struct rte_eth_dev *dev,
> +			      struct rte_eth_stats *stats);
> +static void avp_dev_stats_reset(struct rte_eth_dev *dev);
> +
> +

The previous patchset (v1) included a separate patch for the stats
functions.  While creating this patchset (v2) the stats commit was
accidentally squashed.  I will restore it on the next iteration.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 00/16] Wind River Systems AVP PMD
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (14 preceding siblings ...)
  2017-02-26 19:09   ` [PATCH v2 15/15] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-02-27  8:54   ` Vincent JARDIN
  2017-02-27 12:15     ` Legacy, Allain
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
  16 siblings, 1 reply; 172+ messages in thread
From: Vincent JARDIN @ 2017-02-27  8:54 UTC (permalink / raw)
  To: Allain Legacy; +Cc: ferruh.yigit, dev, thomas.monjalon, Olivier MATZ

Le 26/02/2017 à 20:08, Allain Legacy a écrit :
> This patch series submits an initial version of the AVP PMD from Wind River
> Systems.  The series includes shared header files, driver implementation,
> and changes to documentation files in support of this new driver.  The AVP
> driver is a shared memory based device.  It is intended to be used as a PMD
> within a virtual machine running on a Wind River virtualization platform.
> See: http://www.windriver.com/products/titanium-cloud/

Allain,

I think it needs more information and performance details about this AVP 
vs virtio. I understand from your link that Titanium is a Qemu/Linux 
based solution, so virtio could be used, correct?

So, before spending too much time in reviewing your long series, I think
that proper statements are needed. In the past, other NICs using Qemu
have been sent, but they were avoided since virtio solved all the issues.

Thank you,
   Vincent

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 00/16] Wind River Systems AVP PMD
  2017-02-27  8:54   ` [PATCH v2 00/16] Wind River Systems " Vincent JARDIN
@ 2017-02-27 12:15     ` Legacy, Allain
  2017-02-27 15:17       ` Wiles, Keith
  0 siblings, 1 reply; 172+ messages in thread
From: Legacy, Allain @ 2017-02-27 12:15 UTC (permalink / raw)
  To: Vincent JARDIN; +Cc: YIGIT, FERRUH, dev, thomas.monjalon, Olivier MATZ

> -----Original Message-----
> From: Vincent JARDIN [mailto:vincent.jardin@6wind.com]
...
> So, before spending too much time in reviewing your long series, I think that
> proper statements are needed. In the past, other NICs using Qemu have
> been sent, but they were avoided since virtio solved all the issues.
> 
Ok, I'll put together some additional information.  Should I resubmit the
patch series with an updated cover letter, update the NIC guide doc, or
just reply to this thread?

So is the intent to only have a single qemu based device/driver then?   If so, why?

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 00/16] Wind River Systems AVP PMD
  2017-02-27 12:15     ` Legacy, Allain
@ 2017-02-27 15:17       ` Wiles, Keith
  0 siblings, 0 replies; 172+ messages in thread
From: Wiles, Keith @ 2017-02-27 15:17 UTC (permalink / raw)
  To: Legacy, Allain (Wind River)
  Cc: Vincent JARDIN, Yigit, Ferruh, dev, thomas.monjalon, Olivier MATZ


> On Feb 27, 2017, at 4:15 AM, Legacy, Allain <Allain.Legacy@windriver.com> wrote:
> 
>> -----Original Message-----
>> From: Vincent JARDIN [mailto:vincent.jardin@6wind.com]
> ...
>> So, before spending too much time in reviewing your long series, I think that
>> proper statements are needed. In the past, other NICs using Qemu have
>> been sent, but they were avoided since virtio solved all the issues.
>> 
> Ok, I'll put together some additional information.  Should I resubmit the patch series with an updated cover letter, update the NIC guide doc, or just reply to this thread?
> 
> So is the intent to only have a single qemu based device/driver then?   If so, why?

Allain, I think the most useful information is not really raw performance,
although that is a good way to prove its worth. If you can list the
advantages of and differences with AVP, it would help us understand why
someone would want to use it.

Even if the performance was same it may have many other advantages that would be good to include in DPDK.

All that said, I do not think we should be limiting PMDs in DPDK just
because they look similar to some other PMD. They are taking ownership of
the maintenance of the PMD, and if for some reason the PMD becomes
unmaintained then we can discuss removing it later. We have a number of
PMDs now that perform the same function, only in a different way, which
seems just fine with everyone. Just my $0.02 worth.

> 

Regards,
Keith

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 11/15] net/avp: packet receive functions
  2017-02-26 19:08   ` [PATCH v2 11/15] net/avp: packet receive functions Allain Legacy
@ 2017-02-27 16:46     ` Stephen Hemminger
  2017-02-27 17:06       ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Stephen Hemminger @ 2017-02-27 16:46 UTC (permalink / raw)
  To: Allain Legacy; +Cc: ferruh.yigit, dev

On Sun, 26 Feb 2017 14:08:59 -0500
Allain Legacy <allain.legacy@windriver.com> wrote:

> +		if (eth_dev->data->scattered_rx) {
> +			PMD_DRV_LOG(NOTICE,
> +				    "AVP device configured "
> +				    "for chained mbufs\n");

Try not to break error messages onto two lines, it makes it harder when a user
is trying to find the location of the error message with search tools

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 07/15] net/avp: driver registration
  2017-02-26 19:08   ` [PATCH v2 07/15] net/avp: driver registration Allain Legacy
@ 2017-02-27 16:47     ` Stephen Hemminger
  2017-02-27 17:10       ` Legacy, Allain
  2017-02-27 16:53     ` Stephen Hemminger
  1 sibling, 1 reply; 172+ messages in thread
From: Stephen Hemminger @ 2017-02-27 16:47 UTC (permalink / raw)
  To: Allain Legacy; +Cc: ferruh.yigit, dev

On Sun, 26 Feb 2017 14:08:55 -0500
Allain Legacy <allain.legacy@windriver.com> wrote:

> +static struct rte_pci_id pci_id_avp_map[] = {
> +	{ .vendor_id = RTE_AVP_PCI_VENDOR_ID,
> +	  .device_id = RTE_AVP_PCI_DEVICE_ID,
> +	  .subsystem_vendor_id = RTE_AVP_PCI_SUB_VENDOR_ID,
> +	  .subsystem_device_id = RTE_AVP_PCI_SUB_DEVICE_ID,
> +	  .class_id = RTE_CLASS_ANY_ID,
> +	},
> +
> +	{ .vendor_id = 0, /* sentinel */
> +	},
> +};
> +

PCI table should be const

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 07/15] net/avp: driver registration
  2017-02-26 19:08   ` [PATCH v2 07/15] net/avp: driver registration Allain Legacy
  2017-02-27 16:47     ` Stephen Hemminger
@ 2017-02-27 16:53     ` Stephen Hemminger
  2017-02-27 17:09       ` Legacy, Allain
  1 sibling, 1 reply; 172+ messages in thread
From: Stephen Hemminger @ 2017-02-27 16:53 UTC (permalink / raw)
  To: Allain Legacy; +Cc: ferruh.yigit, dev

On Sun, 26 Feb 2017 14:08:55 -0500
Allain Legacy <allain.legacy@windriver.com> wrote:

> +struct avp_dev {
> +	uint32_t magic; /**< Memory validation marker */
> +	uint64_t device_id; /**< Unique system identifier */
> +	struct ether_addr ethaddr; /**< Host specified MAC address */
> +	struct rte_eth_dev_data *dev_data;
> +	/**< Back pointer to ethernet device data */
> +	volatile uint32_t flags; /**< Device operational flags */
> +	uint8_t port_id; /**< Ethernet port identifier */
> +	struct rte_mempool *pool; /**< pkt mbuf mempool */
> +	unsigned int guest_mbuf_size; /**< local pool mbuf size */
> +	unsigned int host_mbuf_size; /**< host mbuf size */
> +	unsigned int max_rx_pkt_len; /**< maximum receive unit */
> +	uint32_t host_features; /**< Supported feature bitmap */
> +	uint32_t features; /**< Enabled feature bitmap */
> +	unsigned int num_tx_queues; /**< Negotiated number of transmit queues */
> +	unsigned int max_tx_queues; /**< Maximum number of transmit queues */
> +	unsigned int num_rx_queues; /**< Negotiated number of receive queues */
> +	unsigned int max_rx_queues; /**< Maximum number of receive queues */
> +
> +	struct rte_avp_fifo *tx_q[RTE_AVP_MAX_QUEUES]; /**< TX queue */
> +	struct rte_avp_fifo *rx_q[RTE_AVP_MAX_QUEUES]; /**< RX queue */
> +	struct rte_avp_fifo *alloc_q[RTE_AVP_MAX_QUEUES];
> +	/**< Allocated mbufs queue */
> +	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
> +	/**< To be freed mbufs queue */
> +
> +	/* mutual exclusion over the 'flag' and 'resp_q/req_q' fields */
> +	rte_spinlock_t lock;

What exactly is the spinlock protecting?  The control operations in DPDK
are defined to be not thread safe. I.e it is responsibility of caller to synchronize.
Therefore is lock really needed?

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 15/15] doc: adds information related to the AVP PMD
  2017-02-26 19:09   ` [PATCH v2 15/15] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-02-27 17:04     ` Mcnamara, John
  2017-02-27 17:07       ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Mcnamara, John @ 2017-02-27 17:04 UTC (permalink / raw)
  To: Legacy, Allain (Wind River), Yigit, Ferruh; +Cc: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Allain Legacy
> Sent: Sunday, February 26, 2017 7:09 PM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 15/15] doc: adds information related to the
> AVP PMD
> 
> Updates the documentation and feature lists for the AVP PMD device.
> 
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> Signed-off-by: Matt Peters <matt.peters@windriver.com>


Thanks for that.

The patchset should also include an update to the release notes:

    doc/guides/rel_notes/release_17_05.rst

But apart from that.


Acked-by: John McNamara <john.mcnamara@intel.com>

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 11/15] net/avp: packet receive functions
  2017-02-27 16:46     ` Stephen Hemminger
@ 2017-02-27 17:06       ` Legacy, Allain
  2017-02-28 10:27         ` Bruce Richardson
  0 siblings, 1 reply; 172+ messages in thread
From: Legacy, Allain @ 2017-02-27 17:06 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: YIGIT, FERRUH, dev


> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> On Sun, 26 Feb 2017 14:08:59 -0500
> Allain Legacy <allain.legacy@windriver.com> wrote:
> 
> Try not to break error messages onto two lines, it makes it harder when a
> user is trying to find the location of the error message with search tools
That's our normal practice, but checkpatches.sh flagged them as warnings.
If these are acceptable exceptions to the tool, are there plans to publish
an exhaustive list of checkpatches.sh errors/warnings/checks that can be
ignored?  Alternatively, are there plans to change the tool so that it
ignores these automatically and does not flag them as issues?  As a
contributor it is difficult to tell (in some cases) what will be deemed
acceptable and what will not, so the default position is to fix all issues
before submission.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 15/15] doc: adds information related to the AVP PMD
  2017-02-27 17:04     ` Mcnamara, John
@ 2017-02-27 17:07       ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-02-27 17:07 UTC (permalink / raw)
  To: MCNAMARA, JOHN, YIGIT, FERRUH; +Cc: dev

> -----Original Message-----
> From: Mcnamara, John [mailto:john.mcnamara@intel.com]
> 
> The patchset should also include an update to the release notes:
> 
>     doc/guides/rel_notes/release_17_05.rst
> 
Ok.  will do.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 07/15] net/avp: driver registration
  2017-02-27 16:53     ` Stephen Hemminger
@ 2017-02-27 17:09       ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-02-27 17:09 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: YIGIT, FERRUH, dev

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> 
> What exactly is the spinlock protecting?  The control operations in DPDK are
> defined to be not thread safe. I.e it is responsibility of caller to synchronize.
> Therefore is lock really needed?
The implementation assumes that interrupts (for VM live-migration) may be
serviced on a thread other than the thread used for normal device
operations (i.e., configure, start, stop, etc.).  We use the spinlock to
protect the device flags and the request/response queues against
concurrent access from the management thread and the interrupt thread.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 07/15] net/avp: driver registration
  2017-02-27 16:47     ` Stephen Hemminger
@ 2017-02-27 17:10       ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-02-27 17:10 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: YIGIT, FERRUH, dev

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> 
> PCI table should be const
Ok. Will do.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 11/15] net/avp: packet receive functions
  2017-02-27 17:06       ` Legacy, Allain
@ 2017-02-28 10:27         ` Bruce Richardson
  2017-03-01 13:23           ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Bruce Richardson @ 2017-02-28 10:27 UTC (permalink / raw)
  To: Legacy, Allain; +Cc: Stephen Hemminger, YIGIT, FERRUH, dev

On Mon, Feb 27, 2017 at 05:06:24PM +0000, Legacy, Allain wrote:
> 
> > -----Original Message-----
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > On Sun, 26 Feb 2017 14:08:59 -0500
> > Allain Legacy <allain.legacy@windriver.com> wrote:
> > 
> > Try not to break error messages onto two lines, it makes it harder when a
> > user is trying to find the location of the error message with search tools
> That's our normal practice, but checkpatches.sh flagged them as warnings.
> If these are acceptable exceptions to the tool, are there plans to publish
> an exhaustive list of checkpatches.sh errors/warnings/checks that can be
> ignored?  Alternatively, are there plans to change the tool so that it
> ignores these automatically and does not flag them as issues?  As a
> contributor it is difficult to tell (in some cases) what will be deemed
> acceptable and what will not, so the default position is to fix all issues
> before submission.
>
In my experience, checkpatch ignores long lines that are due to error
messages. Perhaps you need to put the error message on a separate line,
if other things before the message are of significant size.

/Bruce

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 02/15] net/avp: public header files
  2017-02-26 19:08   ` [PATCH v2 02/15] net/avp: public header files Allain Legacy
@ 2017-02-28 11:49     ` Jerin Jacob
  2017-03-01 13:25       ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Jerin Jacob @ 2017-02-28 11:49 UTC (permalink / raw)
  To: Allain Legacy; +Cc: ferruh.yigit, dev

On Sun, Feb 26, 2017 at 02:08:50PM -0500, Allain Legacy wrote:
> Adds public/exported header files for the AVP PMD.  The AVP device is a
> shared memory based device.  The structures and constants that define the
> method of operation of the device must be visible by both the PMD and the
> host DPDK application.  They must not change without proper version
> controls and updates to both the hypervisor DPDK application and the PMD.
> 
> The hypervisor DPDK application is a Wind River Systems proprietary
> virtual switch.
> 
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> Signed-off-by: Matt Peters <matt.peters@windriver.com>
> ---
>  drivers/net/avp/rte_avp_common.h | 424 +++++++++++++++++++++++++++++++++++++++
>  drivers/net/avp/rte_avp_fifo.h   | 157 +++++++++++++++
>  2 files changed, 581 insertions(+)
>  create mode 100644 drivers/net/avp/rte_avp_common.h
>  create mode 100644 drivers/net/avp/rte_avp_fifo.h
> 
> diff --git a/drivers/net/avp/rte_avp_common.h b/drivers/net/avp/rte_avp_common.h
> new file mode 100644
> index 0000000..579d5ac
> --- /dev/null
> +++ b/drivers/net/avp/rte_avp_common.h
> @@ -0,0 +1,424 @@
> +/*-
> + *   This file is provided under a dual BSD/LGPLv2 license.  When using or
> + *   redistributing this file, you may do so under either license.
> + *
> + *   GNU LESSER GENERAL PUBLIC LICENSE
> + *
> + *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
> + *   Copyright(c) 2014-2015 Wind River Systems, Inc. All rights reserved.
> + *
> + *   This program is free software; you can redistribute it and/or modify
> + *   it under the terms of version 2.1 of the GNU Lesser General Public License
> + *   as published by the Free Software Foundation.
> + *
> + *   This program is distributed in the hope that it will be useful, but
> + *   WITHOUT ANY WARRANTY; without even the implied warranty of
> + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + *   Lesser General Public License for more details.
> + *
> + *   Contact Information:
> + *   Wind River Systems, Inc.
> + *
> + *
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
> + *   Copyright(c) 2014-2016 Wind River Systems, Inc. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *   * Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> + *   * Redistributions in binary form must reproduce the above copyright
> + *     notice, this list of conditions and the following disclaimer in
> + *     the documentation and/or other materials provided with the
> + *     distribution.
> + *   * Neither the name of Intel Corporation nor the names of its
> + *     contributors may be used to endorse or promote products derived
> + *     from this software without specific prior written permission.
> + *
> + *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + *
> + */
> +
> +#ifndef _RTE_AVP_COMMON_H_
> +#define _RTE_AVP_COMMON_H_
> +
> +#ifdef __KERNEL__
> +#include <linux/if.h>
> +#endif
> +
> +/**
> + * AVP name is part of network device name.
> + */
> +#define RTE_AVP_NAMESIZE 32
> +
> +/**
> + * AVP alias is a user-defined value used for lookups from secondary
> + * processes.  Typically, this is a UUID.
> + */
> +#define RTE_AVP_ALIASSIZE 128
> +
> +/**
> + * Memory alignment (cache aligned)
> + */
> +#ifndef RTE_AVP_ALIGNMENT
> +#define RTE_AVP_ALIGNMENT 64

I think we use RTE_CACHE_LINE_SIZE here? PPC and ThunderX1 targets are
cache line size of 128B

> +#endif
> +
> +/*
> + * Request id.
> + */

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 08/15] net/avp: device initialization
  2017-02-26 19:08   ` [PATCH v2 08/15] net/avp: device initialization Allain Legacy
@ 2017-02-28 11:57     ` Jerin Jacob
  2017-03-01 13:29       ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Jerin Jacob @ 2017-02-28 11:57 UTC (permalink / raw)
  To: Allain Legacy; +Cc: ferruh.yigit, dev

On Sun, Feb 26, 2017 at 02:08:56PM -0500, Allain Legacy wrote:
> Adds support for initializing newly probed AVP PCI devices.  Initial
> queue translations are setup in preparation for device configuration.
> 
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> Signed-off-by: Matt Peters <matt.peters@windriver.com>
> ---
>  drivers/net/avp/avp_ethdev.c | 757 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 757 insertions(+)
> 
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index 6522555..3bbff33 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -60,6 +60,8 @@
>  #include "avp_logs.h"
>  
>  
> +static int avp_dev_create(struct rte_pci_device *pci_dev,
> +			  struct rte_eth_dev *eth_dev);
>  
>  static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
>  static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
> @@ -103,6 +105,16 @@
>  };
>  
>  
> +/**@{ AVP device flags */
> +#define RTE_AVP_F_PROMISC (1 << 1)
> +#define RTE_AVP_F_CONFIGURED (1 << 2)
> +#define RTE_AVP_F_LINKUP (1 << 3)
> +#define RTE_AVP_F_DETACHED (1 << 4)
> +/**@} */
> +
> +/* Ethernet device validation marker */
> +#define RTE_AVP_ETHDEV_MAGIC 0x92972862

I think, we don't need to add RTE_ for internal flags and PMD APIs etc.
> +
>  /*
>   * Defines the AVP device attributes which are attached to an RTE ethernet
>   * device
> @@ -150,11 +162,726 @@ struct avp_adapter {
>  	struct avp_dev avp;
>  } __rte_cache_aligned;
>  
> +
> +/* 32-bit MMIO register write */
> +#define RTE_AVP_WRITE32(_value, _addr) ((*(uint32_t *)_addr) = (_value))
> +
> +/* 32-bit MMIO register read */
> +#define RTE_AVP_READ32(_addr) (*(uint32_t *)(_addr))

Use rte_write32 and rte_read32 API instead.

> +
>  /* Macro to cast the ethernet device private data to a AVP object */
>  #define RTE_AVP_DEV_PRIVATE_TO_HW(adapter) \
>  	(&((struct avp_adapter *)adapter)->avp)
>  
>  /*
> + * Defines the structure of a AVP device queue for the purpose of handling the
> + * receive and transmit burst callback functions
> + */

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v2 11/15] net/avp: packet receive functions
  2017-02-28 10:27         ` Bruce Richardson
@ 2017-03-01 13:23           ` Legacy, Allain
  2017-03-01 14:14             ` Thomas Monjalon
  0 siblings, 1 reply; 172+ messages in thread
From: Legacy, Allain @ 2017-03-01 13:23 UTC (permalink / raw)
  To: RICHARDSON, BRUCE; +Cc: Stephen Hemminger, YIGIT, FERRUH, dev

> -----Original Message-----
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> In my experience, checkpatch ignores long lines that are due to error
> messages. Perhaps you need to put the error message on a separate line,
> if other things before the message are of significant size.
I have gone through the entire patchset and reverted my previous changes
that reduced the line length of debug log strings.  I kept the error
message on the first line and all input variables on subsequent lines (and
no longer than 80 characters).  checkpatches.sh flags most of them as
warnings, but since unbroken strings seem more important I will submit my
next patchset version (v3) like this.

WARNING:LONG_LINE_STRING: line over 80 characters
#120: FILE: drivers/net/avp/avp_ethdev.c:236:
+			PMD_DRV_LOG(ERR, "Timeout while waiting for a response for %u\n",


* Re: [PATCH v2 02/15] net/avp: public header files
  2017-02-28 11:49     ` Jerin Jacob
@ 2017-03-01 13:25       ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-01 13:25 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: YIGIT, FERRUH, dev

> > +#ifndef RTE_AVP_ALIGNMENT
> > +#define RTE_AVP_ALIGNMENT 64
> 
> I think we use RTE_CACHE_LINE_SIZE here? PPC and ThunderX1 targets are
> cache line size of 128B
We need this to stay aligned with our host compile environment, so we are going to retain this as a local value instead of relying on the RTE define.


* Re: [PATCH v2 08/15] net/avp: device initialization
  2017-02-28 11:57     ` Jerin Jacob
@ 2017-03-01 13:29       ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-01 13:29 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: YIGIT, FERRUH, dev

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > +/* Ethernet device validation marker */ #define
> RTE_AVP_ETHDEV_MAGIC
> > +0x92972862
> 
> I think we don't need to add RTE_ for internal flags, PMD APIs, etc.
Ok, will rename.

> > +/* 32-bit MMIO register read */
> > +#define RTE_AVP_READ32(_addr) (*(uint32_t *)(_addr))
> 
> Use rte_write32 and rte_read32 API instead.
Ok, will replace with standard API.


* Re: [PATCH v2 11/15] net/avp: packet receive functions
  2017-03-01 13:23           ` Legacy, Allain
@ 2017-03-01 14:14             ` Thomas Monjalon
  2017-03-01 14:54               ` Legacy, Allain
  2017-03-01 15:10               ` Stephen Hemminger
  0 siblings, 2 replies; 172+ messages in thread
From: Thomas Monjalon @ 2017-03-01 14:14 UTC (permalink / raw)
  To: Legacy, Allain; +Cc: dev, RICHARDSON, BRUCE, Stephen Hemminger, YIGIT, FERRUH

2017-03-01 13:23, Legacy, Allain:
> > -----Original Message-----
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > In my experience, checkpatch ignores long lines that are due to error
> > messages. Perhaps you need to put the error message on a separate line,
> > if other things before the message are of significant size.
> I have gone through the entire patchset and reverted my previous changes to reduce line length on any occurrences of debug log strings. I kept the error message on the first line and all input variables on subsequent lines (no longer than 80 characters). checkpatches.sh flags most of them as warnings, but since unbroken strings seem more important I will submit my next patchset version (v3) like this.
> 
> WARNING:LONG_LINE_STRING: line over 80 characters
> #120: FILE: drivers/net/avp/avp_ethdev.c:236:
> +			PMD_DRV_LOG(ERR, "Timeout while waiting for a response for %u\n",
> 

Maybe there is something to fix in the checkpatches.sh options.
Could you please look at SPLIT_STRING and LONG_LINE_STRING?


* Re: [PATCH v2 11/15] net/avp: packet receive functions
  2017-03-01 14:14             ` Thomas Monjalon
@ 2017-03-01 14:54               ` Legacy, Allain
  2017-03-01 15:10               ` Stephen Hemminger
  1 sibling, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-01 14:54 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, RICHARDSON, BRUCE, Stephen Hemminger, YIGIT, FERRUH

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > WARNING:LONG_LINE_STRING: line over 80 characters
> > #120: FILE: drivers/net/avp/avp_ethdev.c:236:
> > +			PMD_DRV_LOG(ERR, "Timeout while waiting for a
> response for %u\n",
> >
> 
> Maybe there is something to fix in the checkpatches.sh options.
> Could you please look at SPLIT_STRING and LONG_LINE_STRING?
Looks like we are missing LONG_LINE_STRING in our options.  Would you like me to add it via a patch request?


* Re: [PATCH v2 11/15] net/avp: packet receive functions
  2017-03-01 14:14             ` Thomas Monjalon
  2017-03-01 14:54               ` Legacy, Allain
@ 2017-03-01 15:10               ` Stephen Hemminger
  2017-03-01 15:40                 ` Legacy, Allain
  1 sibling, 1 reply; 172+ messages in thread
From: Stephen Hemminger @ 2017-03-01 15:10 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Legacy, Allain, dev, RICHARDSON, BRUCE, YIGIT, FERRUH

On Wed, 01 Mar 2017 15:14:57 +0100
Thomas Monjalon <thomas.monjalon@6wind.com> wrote:

> 2017-03-01 13:23, Legacy, Allain:
> > > -----Original Message-----
> > > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > > In my experience, checkpatch ignores long lines that are due to error
> > > messages. Perhaps you need to put the error message on a separate line,
> > > if other things before the message are of significant size.  
> > I have gone through the entire patchset and reverted my previous changes to reduce line length on any occurrences of debug log strings. I kept the error message on the first line and all input variables on subsequent lines (no longer than 80 characters). checkpatches.sh flags most of them as warnings, but since unbroken strings seem more important I will submit my next patchset version (v3) like this.
> > 
> > WARNING:LONG_LINE_STRING: line over 80 characters
> > #120: FILE: drivers/net/avp/avp_ethdev.c:236:
> > +			PMD_DRV_LOG(ERR, "Timeout while waiting for a response for %u\n",
> >   
> 
> Maybe there is something to fix in the checkpatches.sh options.
> Could you please look at SPLIT_STRING and LONG_LINE_STRING?

In the checkpatch source there is a regex to identify logging functions and
special exceptions for long lines, etc. But those logging functions are the
kernel ones (printk, etc.), not the DPDK logging functions, so the messages are
incorrect.

Maybe there is some way to extend checkpatch to handle rte_log?


* Re: [PATCH v2 11/15] net/avp: packet receive functions
  2017-03-01 15:10               ` Stephen Hemminger
@ 2017-03-01 15:40                 ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-01 15:40 UTC (permalink / raw)
  To: Stephen Hemminger, Thomas Monjalon; +Cc: dev, RICHARDSON, BRUCE, YIGIT, FERRUH

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> 
> In the checkpatch source there is a regex to identify logging functions and
> special exceptions for long lines, etc. But those logging functions are the
> kernel ones (printk, etc.), not the DPDK logging functions, so the messages
> are incorrect.
> 
> Maybe there is some way to extend checkpatch to handle rte_log?

Adding LONG_LINE_STRING helped, but we still get a small number of warnings for concatenated strings like the one below, so it must be a quirk in how checkpatch handles those.

WARNING:LONG_LINE: line over 80 characters
#217: FILE: drivers/net/avp/avp_ethdev.c:326:
+			PMD_DRV_LOG(DEBUG, "Translating host physical 0x%" PRIx64 " to guest virtual 0x%p\n",


* [PATCH v3 00/16] Wind River Systems AVP PMD
  2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
                     ` (15 preceding siblings ...)
  2017-02-27  8:54   ` [PATCH v2 00/16] Wind River Systems " Vincent JARDIN
@ 2017-03-02  0:19   ` Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 01/16] config: adds attributes for the " Allain Legacy
                       ` (16 more replies)
  16 siblings, 17 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:19 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

This patch series submits an initial version of the AVP PMD from Wind River
Systems.  The series includes shared header files, driver implementation,
and changes to documentation files in support of this new driver.  The AVP
driver is a shared memory based device.  It is intended to be used as a PMD
within a virtual machine running on a Wind River virtualization platform.
See: http://www.windriver.com/products/titanium-cloud/

It enables optimized packet throughput without requiring any packet
processing in qemu. This has allowed us to provide our customers with a
significant performance increase for both DPDK and non-DPDK applications
in the VM. Since our AVP implementation supports VM live-migration, it
is viewed as a better alternative to PCI passthrough or PCI SR-IOV, since
neither of those supports VM live-migration without manual intervention
or significant performance penalties.

Since the initial implementation of AVP devices, vhost-user has become part
of the qemu offering with a significant performance increase over the
original virtio implementation.  However, vhost-user still does not achieve
the level of performance that the AVP device can provide to our customers
for DPDK based guests.

A number of our customers have requested that we upstream the driver to
dpdk.org.

v2:
* Fixed coding style violations that slipped in accidentally because of an
  out-of-date checkpatch.pl from an older kernel.

v3:
* Updated 17.05 release notes to add a section for this new PMD
* Added additional info to the AVP nic guide document to clarify the
  benefit of using AVP over virtio.
* Fixed spelling error in debug log missed by local checkpatch.pl version
* Split the transmit patch to separate the stats functions as they
  accidentally got squashed in the last patchset.
* Fixed debug log strings so that they exceed 80 characters rather than
  span multiple lines.
* Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_* instead
* Replaced usage of RTE_WRITE32 and RTE_READ32 with rte_write32_relaxed
  and rte_read32_relaxed.
* Declared rte_pci_id table as const


Allain Legacy (16):
  config: added attributes for the AVP PMD
  net/avp: added public header files
  maintainers: claim responsibility for AVP PMD
  net/avp: added PMD version map file
  net/avp: added log macros
  drivers/net: added driver makefiles
  net/avp: driver registration
  net/avp: device initialization
  net/avp: device configuration
  net/avp: queue setup and release
  net/avp: packet receive functions
  net/avp: packet transmit functions
  net/avp: device statistics operations
  net/avp: device promiscuous functions
  net/avp: device start and stop operations
  doc: added information related to the AVP PMD

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 config/common_linuxapp                  |    1 +
 doc/guides/nics/avp.rst                 |   99 ++
 doc/guides/nics/features/avp.ini        |   17 +
 doc/guides/nics/index.rst               |    1 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avp/Makefile                |   61 +
 drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
 drivers/net/avp/avp_logs.h              |   59 +
 drivers/net/avp/rte_avp_common.h        |  427 ++++++
 drivers/net/avp/rte_avp_fifo.h          |  157 ++
 drivers/net/avp/rte_pmd_avp_version.map |    4 +
 mk/rte.app.mk                           |    1 +
 14 files changed, 3215 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini
 create mode 100644 drivers/net/avp/Makefile
 create mode 100644 drivers/net/avp/avp_ethdev.c
 create mode 100644 drivers/net/avp/avp_logs.h
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

-- 
1.8.3.1


* [PATCH v3 01/16] config: adds attributes for the AVP PMD
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
@ 2017-03-02  0:19     ` Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 02/16] net/avp: public header files Allain Legacy
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:19 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Updates the common base configuration file to include a top level config
attribute for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/config/common_base b/config/common_base
index aeee13e..912bc68 100644
--- a/config/common_base
+++ b/config/common_base
@@ -348,6 +348,11 @@ CONFIG_RTE_LIBRTE_QEDE_FW=""
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 
 #
+# Compile WRS accelerated virtual port (AVP) guest PMD driver
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
+
+#
 # Compile the TAP PMD
 # It is enabled by default for Linux only.
 #
-- 
1.8.3.1


* [PATCH v3 02/16] net/avp: public header files
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 01/16] config: adds attributes for the " Allain Legacy
@ 2017-03-02  0:19     ` Allain Legacy
  2017-03-03 14:37       ` Chas Williams
  2017-03-02  0:19     ` [PATCH v3 03/16] maintainers: claim responsibility for AVP PMD Allain Legacy
                       ` (14 subsequent siblings)
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:19 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds public/exported header files for the AVP PMD.  The AVP device is a
shared memory based device.  The structures and constants that define the
method of operation of the device must be visible by both the PMD and the
host DPDK application.  They must not change without proper version
controls and updates to both the hypervisor DPDK application and the PMD.

The hypervisor DPDK application is a Wind River Systems proprietary
virtual switch.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/rte_avp_common.h | 424 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avp/rte_avp_fifo.h   | 157 +++++++++++++++
 2 files changed, 581 insertions(+)
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h

diff --git a/drivers/net/avp/rte_avp_common.h b/drivers/net/avp/rte_avp_common.h
new file mode 100644
index 0000000..579d5ac
--- /dev/null
+++ b/drivers/net/avp/rte_avp_common.h
@@ -0,0 +1,424 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2015 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2016 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_COMMON_H_
+#define _RTE_AVP_COMMON_H_
+
+#ifdef __KERNEL__
+#include <linux/if.h>
+#endif
+
+/**
+ * AVP name is part of network device name.
+ */
+#define RTE_AVP_NAMESIZE 32
+
+/**
+ * AVP alias is a user-defined value used for lookups from secondary
+ * processes.  Typically, this is a UUID.
+ */
+#define RTE_AVP_ALIASSIZE 128
+
+/**
+ * Memory alignment (cache aligned)
+ */
+#ifndef RTE_AVP_ALIGNMENT
+#define RTE_AVP_ALIGNMENT 64
+#endif
+
+/*
+ * Request id.
+ */
+enum rte_avp_req_id {
+	RTE_AVP_REQ_UNKNOWN = 0,
+	RTE_AVP_REQ_CHANGE_MTU,
+	RTE_AVP_REQ_CFG_NETWORK_IF,
+	RTE_AVP_REQ_CFG_DEVICE,
+	RTE_AVP_REQ_SHUTDOWN_DEVICE,
+	RTE_AVP_REQ_MAX,
+};
+
+/**@{ AVP device driver types */
+#define RTE_AVP_DRIVER_TYPE_UNKNOWN 0
+#define RTE_AVP_DRIVER_TYPE_DPDK 1
+#define RTE_AVP_DRIVER_TYPE_KERNEL 2
+#define RTE_AVP_DRIVER_TYPE_QEMU 3
+/**@} */
+
+/**@{ AVP device operational modes */
+#define RTE_AVP_MODE_HOST 0 /**< AVP interface created in host */
+#define RTE_AVP_MODE_GUEST 1 /**< AVP interface created for export to guest */
+#define RTE_AVP_MODE_TRACE 2 /**< AVP interface created for packet tracing */
+/**@} */
+
+/*
+ * Structure for AVP queue configuration query request/result
+ */
+struct rte_avp_device_config {
+	uint64_t device_id;	/**< Unique system identifier */
+	uint32_t driver_type; /**< Device Driver type */
+	uint32_t driver_version; /**< Device Driver version */
+	uint32_t features; /**< Negotiated features */
+	uint16_t num_tx_queues;	/**< Number of active transmit queues */
+	uint16_t num_rx_queues;	/**< Number of active receive queues */
+	uint8_t if_up; /**< 1: interface up, 0: interface down */
+} __attribute__ ((__packed__));
+
+/*
+ * Structure for AVP request.
+ */
+struct rte_avp_request {
+	uint32_t req_id; /**< Request id */
+	union {
+		uint32_t new_mtu; /**< New MTU */
+		uint8_t if_up;	/**< 1: interface up, 0: interface down */
+		struct rte_avp_device_config config; /**< Queue configuration */
+	};
+	int32_t result;	/**< Result for processing request */
+} __attribute__ ((__packed__));
+
+/*
+ * FIFO struct mapped in a shared memory. It describes a circular buffer FIFO
+ * Write and read should wrap around. FIFO is empty when write == read
+ * Writing should never overwrite the read position
+ */
+struct rte_avp_fifo {
+	volatile unsigned int write; /**< Next position to be written */
+	volatile unsigned int read; /**< Next position to be read */
+	unsigned int len; /**< Circular buffer length */
+	unsigned int elem_size; /**< Pointer size - for 32/64 bit OS */
+	void *volatile buffer[0]; /**< The buffer contains mbuf pointers */
+};
+
+
+/*
+ * AVP packet buffer header used to define the exchange of packet data.
+ */
+struct rte_avp_desc {
+	uint64_t pad0;
+	void *pkt_mbuf; /**< Reference to packet mbuf */
+	uint8_t pad1[14];
+	uint16_t ol_flags; /**< Offload features. */
+	void *next;	/**< Reference to next buffer in chain */
+	void *data;	/**< Start address of data in segment buffer. */
+	uint16_t data_len; /**< Amount of data in segment buffer. */
+	uint8_t nb_segs; /**< Number of segments */
+	uint8_t pad2;
+	uint16_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+	uint32_t pad3;
+	uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order). */
+	uint32_t pad4;
+} __attribute__ ((__aligned__(RTE_AVP_ALIGNMENT), __packed__));
+
+
+/**@{ AVP device features */
+#define RTE_AVP_FEATURE_VLAN_OFFLOAD (1 << 0) /**< Emulated HW VLAN offload */
+/**@} */
+
+
+/**@{ Offload feature flags */
+#define RTE_AVP_TX_VLAN_PKT 0x0001 /**< TX packet is a 802.1q VLAN packet. */
+#define RTE_AVP_RX_VLAN_PKT 0x0800 /**< RX packet is a 802.1q VLAN packet. */
+/**@} */
+
+
+/**@{ AVP PCI identifiers */
+#define RTE_AVP_PCI_VENDOR_ID   0x1af4
+#define RTE_AVP_PCI_DEVICE_ID   0x1110
+/**@} */
+
+/**@{ AVP PCI subsystem identifiers */
+#define RTE_AVP_PCI_SUB_VENDOR_ID RTE_AVP_PCI_VENDOR_ID
+#define RTE_AVP_PCI_SUB_DEVICE_ID 0x1104
+/**@} */
+
+/**@{ AVP PCI BAR definitions */
+#define RTE_AVP_PCI_MMIO_BAR   0
+#define RTE_AVP_PCI_MSIX_BAR   1
+#define RTE_AVP_PCI_MEMORY_BAR 2
+#define RTE_AVP_PCI_MEMMAP_BAR 4
+#define RTE_AVP_PCI_DEVICE_BAR 5
+#define RTE_AVP_PCI_MAX_BAR    6
+/**@} */
+
+/**@{ AVP PCI BAR name definitions */
+#define RTE_AVP_MMIO_BAR_NAME   "avp-mmio"
+#define RTE_AVP_MSIX_BAR_NAME   "avp-msix"
+#define RTE_AVP_MEMORY_BAR_NAME "avp-memory"
+#define RTE_AVP_MEMMAP_BAR_NAME "avp-memmap"
+#define RTE_AVP_DEVICE_BAR_NAME "avp-device"
+/**@} */
+
+/**@{ AVP PCI MSI-X vectors */
+#define RTE_AVP_MIGRATION_MSIX_VECTOR 0	/**< Migration interrupts */
+#define RTE_AVP_MAX_MSIX_VECTORS 1
+/**@} */
+
+/**@{ AVP Migration status/ack register values */
+#define RTE_AVP_MIGRATION_NONE      0 /**< Migration never executed */
+#define RTE_AVP_MIGRATION_DETACHED  1 /**< Device attached during migration */
+#define RTE_AVP_MIGRATION_ATTACHED  2 /**< Device reattached during migration */
+#define RTE_AVP_MIGRATION_ERROR     3 /**< Device failed to attach/detach */
+/**@} */
+
+/**@{ AVP MMIO Register Offsets */
+#define RTE_AVP_REGISTER_BASE 0
+#define RTE_AVP_INTERRUPT_MASK_OFFSET (RTE_AVP_REGISTER_BASE + 0)
+#define RTE_AVP_INTERRUPT_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 4)
+#define RTE_AVP_MIGRATION_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 8)
+#define RTE_AVP_MIGRATION_ACK_OFFSET (RTE_AVP_REGISTER_BASE + 12)
+/**@} */
+
+/**@{ AVP Interrupt Status Mask */
+#define RTE_AVP_MIGRATION_INTERRUPT_MASK (1 << 1)
+#define RTE_AVP_APP_INTERRUPTS_MASK      (0xFFFFFFFF)
+#define RTE_AVP_NO_INTERRUPTS_MASK       (0)
+/**@} */
+
+/*
+ * Maximum number of memory regions to export
+ */
+#define RTE_AVP_MAX_MAPS  2048
+
+/*
+ * Description of a single memory region
+ */
+struct rte_avp_memmap {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * AVP memory mapping validation marker
+ */
+#define RTE_AVP_MEMMAP_MAGIC (0x20131969)
+
+/**@{  AVP memory map versions */
+#define RTE_AVP_MEMMAP_VERSION_1 1
+#define RTE_AVP_MEMMAP_VERSION RTE_AVP_MEMMAP_VERSION_1
+/**@} */
+
+/*
+ * Defines a list of memory regions exported from the host to the guest
+ */
+struct rte_avp_memmap_info {
+	uint32_t magic; /**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+	uint32_t nb_maps;
+	struct rte_avp_memmap maps[RTE_AVP_MAX_MAPS];
+};
+
+/*
+ * AVP device memory validation marker
+ */
+#define RTE_AVP_DEVICE_MAGIC (0x20131975)
+
+/**@{  AVP device map versions
+ * WARNING:  do not change the format or names of these variables.  They are
+ * automatically parsed from the build system to generate the SDK package
+ * name.
+ **/
+#define RTE_AVP_RELEASE_VERSION_1 1
+#define RTE_AVP_RELEASE_VERSION RTE_AVP_RELEASE_VERSION_1
+#define RTE_AVP_MAJOR_VERSION_0 0
+#define RTE_AVP_MAJOR_VERSION_1 1
+#define RTE_AVP_MAJOR_VERSION_2 2
+#define RTE_AVP_MAJOR_VERSION RTE_AVP_MAJOR_VERSION_2
+#define RTE_AVP_MINOR_VERSION_0 0
+#define RTE_AVP_MINOR_VERSION_1 1
+#define RTE_AVP_MINOR_VERSION_13 13
+#define RTE_AVP_MINOR_VERSION RTE_AVP_MINOR_VERSION_13
+/**@} */
+
+
+/**
+ * Generates a 32-bit version number from the specified version number
+ * components
+ */
+#define RTE_AVP_MAKE_VERSION(_release, _major, _minor) \
+((((_release) & 0xffff) << 16) | (((_major) & 0xff) << 8) | ((_minor) & 0xff))
+
+
+/**
+ * Represents the current version of the AVP host driver
+ * WARNING:  in the current development branch the host and guest driver
+ * version should always be the same.  When patching guest features back to
+ * GA releases the host version number should not be updated unless there was
+ * an actual change made to the host driver.
+ */
+#define RTE_AVP_CURRENT_HOST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_0, \
+		     RTE_AVP_MINOR_VERSION_1)
+
+
+/**
+ * Represents the current version of the AVP guest drivers
+ */
+#define RTE_AVP_CURRENT_GUEST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_2, \
+		     RTE_AVP_MINOR_VERSION_13)
+
+/**
+ * Access AVP device version values
+ */
+#define RTE_AVP_GET_RELEASE_VERSION(_version) (((_version) >> 16) & 0xffff)
+#define RTE_AVP_GET_MAJOR_VERSION(_version) (((_version) >> 8) & 0xff)
+#define RTE_AVP_GET_MINOR_VERSION(_version) ((_version) & 0xff)
+/**@} */
+
+
+/**
+ * Remove the minor version number so that only the release and major versions
+ * are used for comparisons.
+ */
+#define RTE_AVP_STRIP_MINOR_VERSION(_version) ((_version) >> 8)
+
+
+/**
+ * Defines the number of mbuf pools supported per device (1 per socket)
+ * @note This value should be equal to RTE_MAX_NUMA_NODES
+ */
+#define RTE_AVP_MAX_MEMPOOLS (8)
+
+/*
+ * Defines address translation parameters for each supported mbuf pool
+ */
+struct rte_avp_mempool_info {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * Struct used to create an AVP device. Passed to the kernel in an IOCTL call or
+ * via inter-VM shared memory when used in a guest.
+ */
+struct rte_avp_device_info {
+	uint32_t magic;	/**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+
+	char ifname[RTE_AVP_NAMESIZE];	/**< Network device name for AVP */
+
+	phys_addr_t tx_phys;
+	phys_addr_t rx_phys;
+	phys_addr_t alloc_phys;
+	phys_addr_t free_phys;
+
+	uint32_t features; /**< Supported feature bitmap */
+	uint8_t min_rx_queues; /**< Minimum supported receive/free queues */
+	uint8_t num_rx_queues; /**< Recommended number of receive/free queues */
+	uint8_t max_rx_queues; /**< Maximum supported receive/free queues */
+	uint8_t min_tx_queues; /**< Minimum supported transmit/alloc queues */
+	uint8_t num_tx_queues;
+	/**< Recommended number of transmit/alloc queues */
+	uint8_t max_tx_queues; /**< Maximum supported transmit/alloc queues */
+
+	uint32_t tx_size; /**< Size of each transmit queue */
+	uint32_t rx_size; /**< Size of each receive queue */
+	uint32_t alloc_size; /**< Size of each alloc queue */
+	uint32_t free_size;	/**< Size of each free queue */
+
+	/* Used by Ethtool */
+	phys_addr_t req_phys;
+	phys_addr_t resp_phys;
+	phys_addr_t sync_phys;
+	void *sync_va;
+
+	/* mbuf mempool (used when a single memory area is supported) */
+	void *mbuf_va;
+	phys_addr_t mbuf_phys;
+
+	/* mbuf mempools */
+	struct rte_avp_mempool_info pool[RTE_AVP_MAX_MEMPOOLS];
+
+#ifdef __KERNEL__
+	/* Ethernet info */
+	char ethaddr[ETH_ALEN];
+#else
+	char ethaddr[ETHER_ADDR_LEN];
+#endif
+
+	uint8_t mode; /**< device mode, i.e. guest, host, trace */
+
+	/* mbuf size */
+	unsigned int mbuf_size;
+
+	/*
+	 * unique id to differentiate between two instantiations of the same
+	 * AVP device (i.e., the guest needs to know if the device has been
+	 * deleted and recreated).
+	 */
+	uint64_t device_id;
+
+	uint32_t max_rx_pkt_len; /**< Maximum receive unit size */
+};
+
+#define RTE_AVP_MAX_QUEUES (8) /**< Maximum number of queues per device */
+
+/** Maximum number of chained mbufs in a packet */
+#define RTE_AVP_MAX_MBUF_SEGMENTS (5)
+
+#define RTE_AVP_DEVICE "avp"
+
+#define RTE_AVP_IOCTL_TEST    _IOWR(0, 1, int)
+#define RTE_AVP_IOCTL_CREATE  _IOWR(0, 2, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_RELEASE _IOWR(0, 3, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_QUERY   _IOWR(0, 4, struct rte_avp_device_config)
+
+#endif /* _RTE_AVP_COMMON_H_ */
diff --git a/drivers/net/avp/rte_avp_fifo.h b/drivers/net/avp/rte_avp_fifo.h
new file mode 100644
index 0000000..1a475de
--- /dev/null
+++ b/drivers/net/avp/rte_avp_fifo.h
@@ -0,0 +1,157 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_FIFO_H_
+#define _RTE_AVP_FIFO_H_
+
+#ifdef __KERNEL__
+/* Write memory barrier for kernel compiles */
+#define AVP_WMB() smp_wmb()
+/* Read memory barrier for kernel compiles */
+#define AVP_RMB() smp_rmb()
+#else
+/* Write memory barrier for userspace compiles */
+#define AVP_WMB() rte_wmb()
+/* Read memory barrier for userspace compiles */
+#define AVP_RMB() rte_rmb()
+#endif
+
+#ifndef __KERNEL__
+/**
+ * Initializes the AVP fifo structure.
+ */
+static inline void
+avp_fifo_init(struct rte_avp_fifo *fifo, unsigned int size)
+{
+	/* Ensure size is power of 2 */
+	if (size & (size - 1))
+		rte_panic("AVP fifo size must be power of 2\n");
+
+	fifo->write = 0;
+	fifo->read = 0;
+	fifo->len = size;
+	fifo->elem_size = sizeof(void *);
+}
+#endif
+
+/**
+ * Adds up to num elements to the fifo. Returns the number actually written.
+ */
+static inline unsigned int
+avp_fifo_put(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int fifo_write = fifo->write;
+	unsigned int fifo_read = fifo->read;
+	unsigned int new_write = fifo_write;
+
+	for (i = 0; i < num; i++) {
+		new_write = (new_write + 1) & (fifo->len - 1);
+
+		if (new_write == fifo_read)
+			break;
+		fifo->buffer[fifo_write] = data[i];
+		fifo_write = new_write;
+	}
+	AVP_WMB();
+	fifo->write = fifo_write;
+	return i;
+}
+
+/**
+ * Get up to num elements from the fifo. Returns the number actually read.
+ */
+static inline unsigned int
+avp_fifo_get(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int new_read = fifo->read;
+	unsigned int fifo_write = fifo->write;
+
+	if (new_read == fifo_write)
+		return 0; /* empty */
+
+	for (i = 0; i < num; i++) {
+		if (new_read == fifo_write)
+			break;
+
+		data[i] = fifo->buffer[new_read];
+		new_read = (new_read + 1) & (fifo->len - 1);
+	}
+	AVP_RMB();
+	fifo->read = new_read;
+	return i;
+}
+
+/**
+ * Get the number of elements currently in the fifo.
+ */
+static inline unsigned int
+avp_fifo_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->len + fifo->write - fifo->read) & (fifo->len - 1);
+}
+
+/**
+ * Get the number of free slots available in the fifo.
+ */
+static inline unsigned int
+avp_fifo_free_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->read - fifo->write - 1) & (fifo->len - 1);
+}
+
+#endif /* _RTE_AVP_FIFO_H_ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v3 03/16] maintainers: claim responsibility for AVP PMD
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 01/16] config: adds attributes for the " Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 02/16] net/avp: public header files Allain Legacy
@ 2017-03-02  0:19     ` Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 04/16] net/avp: add PMD version map file Allain Legacy
                       ` (13 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:19 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Update the MAINTAINERS file to claim maintainership of the AVP PMD on
behalf of Wind River Systems, Inc.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 MAINTAINERS | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 5030c1c..fef23a0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -423,6 +423,11 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+Wind River AVP PMD
+M: Allain Legacy <allain.legacy@windriver.com>
+M: Matt Peters <matt.peters@windriver.com>
+F: drivers/net/avp/
+
 
 Crypto Drivers
 --------------
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v3 04/16] net/avp: add PMD version map file
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (2 preceding siblings ...)
  2017-03-02  0:19     ` [PATCH v3 03/16] maintainers: claim responsibility for AVP PMD Allain Legacy
@ 2017-03-02  0:19     ` Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 05/16] net/avp: debug log macros Allain Legacy
                       ` (12 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:19 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds a default ABI version file for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/rte_pmd_avp_version.map | 4 ++++
 1 file changed, 4 insertions(+)
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
new file mode 100644
index 0000000..af8f3f4
--- /dev/null
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+    local: *;
+};
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
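For context, the map above deliberately exports nothing: `local: *;` hides every symbol, which is correct for a PMD whose entry points are registered through constructor macros rather than called by name. If the driver later grew a public API, the symbol would be listed in a `global:` stanza before the catch-all, along these lines (the function name here is purely hypothetical):

```
DPDK_17.05 {
	global:

	rte_pmd_avp_example_fn;  /* hypothetical exported function */

	local: *;
};
```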

* [PATCH v3 05/16] net/avp: debug log macros
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (3 preceding siblings ...)
  2017-03-02  0:19     ` [PATCH v3 04/16] net/avp: add PMD version map file Allain Legacy
@ 2017-03-02  0:19     ` Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 06/16] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
                       ` (11 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:19 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds a header file with debug log macros for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base         |  4 ++++
 drivers/net/avp/avp_logs.h | 59 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)
 create mode 100644 drivers/net/avp/avp_logs.h

diff --git a/config/common_base b/config/common_base
index 912bc68..fe8363d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -351,6 +351,10 @@ CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 # Compile WRS accelerated virtual port (AVP) guest PMD driver
 #
 CONFIG_RTE_LIBRTE_AVP_PMD=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_DRIVER=y
+CONFIG_RTE_LIBRTE_AVP_DEBUG_BUFFERS=n
 
 #
 # Compile the TAP PMD
diff --git a/drivers/net/avp/avp_logs.h b/drivers/net/avp/avp_logs.h
new file mode 100644
index 0000000..252cab7
--- /dev/null
+++ b/drivers/net/avp/avp_logs.h
@@ -0,0 +1,59 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (c) 2013-2015, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVP_LOGS_H_
+#define _AVP_LOGS_H_
+
+#include <rte_log.h>
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() rx: " fmt, __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() tx: " fmt, __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _AVP_LOGS_H_ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v3 06/16] drivers/net: adds driver makefiles for AVP PMD
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (4 preceding siblings ...)
  2017-03-02  0:19     ` [PATCH v3 05/16] net/avp: debug log macros Allain Legacy
@ 2017-03-02  0:19     ` Allain Legacy
  2017-03-02  0:19     ` [PATCH v3 07/16] net/avp: driver registration Allain Legacy
                       ` (10 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:19 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds a default Makefile to the driver directory but does not include any
source files.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/Makefile     |  1 +
 drivers/net/avp/Makefile | 52 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)
 create mode 100644 drivers/net/avp/Makefile

diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 40fc333..592383e 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -32,6 +32,7 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
+DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
new file mode 100644
index 0000000..68a0fa5
--- /dev/null
+++ b/drivers/net/avp/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2013-2017, Wind River Systems, Inc. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Wind River Systems nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avp.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+
+EXPORT_MAP := rte_pmd_avp_version.map
+
+LIBABIVER := 1
+
+# install public header files to enable compilation of the hypervisor-level
+# DPDK application
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v3 07/16] net/avp: driver registration
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (5 preceding siblings ...)
  2017-03-02  0:19     ` [PATCH v3 06/16] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
@ 2017-03-02  0:19     ` Allain Legacy
  2017-03-02  0:20     ` [PATCH v3 08/16] net/avp: device initialization Allain Legacy
                       ` (9 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:19 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds the initial framework for registering the driver against the supported
PCI device identifiers.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_linuxapp                       |   1 +
 config/defconfig_i686-native-linuxapp-gcc    |   5 +
 config/defconfig_i686-native-linuxapp-icc    |   5 +
 config/defconfig_x86_x32-native-linuxapp-gcc |   5 +
 drivers/net/avp/Makefile                     |   8 +
 drivers/net/avp/avp_ethdev.c                 | 230 +++++++++++++++++++++++++++
 mk/rte.app.mk                                |   1 +
 7 files changed, 255 insertions(+)
 create mode 100644 drivers/net/avp/avp_ethdev.c

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 00ebaac..8690a00 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -43,6 +43,7 @@ CONFIG_RTE_LIBRTE_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
 CONFIG_RTE_LIBRTE_PMD_TAP=y
+CONFIG_RTE_LIBRTE_AVP_PMD=y
 CONFIG_RTE_LIBRTE_NFP_PMD=y
 CONFIG_RTE_LIBRTE_POWER=y
 CONFIG_RTE_VIRTIO_USER=y
diff --git a/config/defconfig_i686-native-linuxapp-gcc b/config/defconfig_i686-native-linuxapp-gcc
index 745c401..9847bdb 100644
--- a/config/defconfig_i686-native-linuxapp-gcc
+++ b/config/defconfig_i686-native-linuxapp-gcc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_i686-native-linuxapp-icc b/config/defconfig_i686-native-linuxapp-icc
index 50a3008..269e88e 100644
--- a/config/defconfig_i686-native-linuxapp-icc
+++ b/config/defconfig_i686-native-linuxapp-icc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_x86_x32-native-linuxapp-gcc b/config/defconfig_x86_x32-native-linuxapp-gcc
index 3e55c5c..19573cb 100644
--- a/config/defconfig_x86_x32-native-linuxapp-gcc
+++ b/config/defconfig_x86_x32-native-linuxapp-gcc
@@ -50,3 +50,8 @@ CONFIG_RTE_LIBRTE_KNI=n
 # Solarflare PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_SFC_EFX_PMD=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 68a0fa5..9cf0449 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -49,4 +49,12 @@ LIBABIVER := 1
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
 
+#
+# all source files are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
new file mode 100644
index 0000000..f0bf862
--- /dev/null
+++ b/drivers/net/avp/avp_ethdev.c
@@ -0,0 +1,230 @@
+/*
+ *   BSD LICENSE
+ *
+ * Copyright (c) 2013-2016, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/io.h>
+
+#include <rte_ethdev.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_dev.h>
+#include <rte_memory.h>
+#include <rte_eal.h>
+
+#include "rte_avp_common.h"
+#include "rte_avp_fifo.h"
+
+#include "avp_logs.h"
+
+static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
+static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
+
+
+#define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
+
+
+#define AVP_MAX_MAC_ADDRS 1
+#define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
+
+
+/*
+ * Defines the number of microseconds to wait before checking the response
+ * queue for completion.
+ */
+#define AVP_REQUEST_DELAY_USECS (5000)
+
+/*
+ * Defines the number times to check the response queue for completion before
+ * declaring a timeout.
+ */
+#define AVP_MAX_REQUEST_RETRY (100)
+
+/* Defines the current PCI driver version number */
+#define AVP_DPDK_DRIVER_VERSION RTE_AVP_CURRENT_GUEST_VERSION
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_avp_map[] = {
+	{ .vendor_id = RTE_AVP_PCI_VENDOR_ID,
+	  .device_id = RTE_AVP_PCI_DEVICE_ID,
+	  .subsystem_vendor_id = RTE_AVP_PCI_SUB_VENDOR_ID,
+	  .subsystem_device_id = RTE_AVP_PCI_SUB_DEVICE_ID,
+	  .class_id = RTE_CLASS_ANY_ID,
+	},
+
+	{ .vendor_id = 0, /* sentinel */
+	},
+};
+
+
+/*
+ * Defines the AVP device attributes which are attached to an RTE ethernet
+ * device
+ */
+struct avp_dev {
+	uint32_t magic; /**< Memory validation marker */
+	uint64_t device_id; /**< Unique system identifier */
+	struct ether_addr ethaddr; /**< Host specified MAC address */
+	struct rte_eth_dev_data *dev_data;
+	/**< Back pointer to ethernet device data */
+	volatile uint32_t flags; /**< Device operational flags */
+	uint8_t port_id; /**< Ethernet port identifier */
+	struct rte_mempool *pool; /**< pkt mbuf mempool */
+	unsigned int guest_mbuf_size; /**< local pool mbuf size */
+	unsigned int host_mbuf_size; /**< host mbuf size */
+	unsigned int max_rx_pkt_len; /**< maximum receive unit */
+	uint32_t host_features; /**< Supported feature bitmap */
+	uint32_t features; /**< Enabled feature bitmap */
+	unsigned int num_tx_queues; /**< Negotiated number of transmit queues */
+	unsigned int max_tx_queues; /**< Maximum number of transmit queues */
+	unsigned int num_rx_queues; /**< Negotiated number of receive queues */
+	unsigned int max_rx_queues; /**< Maximum number of receive queues */
+
+	struct rte_avp_fifo *tx_q[RTE_AVP_MAX_QUEUES]; /**< TX queue */
+	struct rte_avp_fifo *rx_q[RTE_AVP_MAX_QUEUES]; /**< RX queue */
+	struct rte_avp_fifo *alloc_q[RTE_AVP_MAX_QUEUES];
+	/**< Allocated mbufs queue */
+	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
+	/**< To be freed mbufs queue */
+
+	/* mutual exclusion over the 'flag' and 'resp_q/req_q' fields */
+	rte_spinlock_t lock;
+
+	/* For request & response */
+	struct rte_avp_fifo *req_q; /**< Request queue */
+	struct rte_avp_fifo *resp_q; /**< Response queue */
+	void *host_sync_addr; /**< (host) Req/Resp Mem address */
+	void *sync_addr; /**< Req/Resp Mem address */
+	void *host_mbuf_addr; /**< (host) MBUF pool start address */
+	void *mbuf_addr; /**< MBUF pool start address */
+} __rte_cache_aligned;
+
+/* RTE ethernet private data */
+struct avp_adapter {
+	struct avp_dev avp;
+} __rte_cache_aligned;
+
+/* Macro to cast the ethernet device private data to an AVP object */
+#define AVP_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avp_adapter *)adapter)->avp)
+
+/*
+ * This function is based on probe() function in avp_pci.c
+ * It returns 0 on success.
+ */
+static int
+eth_avp_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pci_device *pci_dev;
+
+	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/*
+		 * No setup is required on secondary processes.  All data is
+		 * saved in dev_private by the primary process. All resources
+		 * should be mapped to the same virtual address so all pointers
+		 * be valid.
+		 */
+		return 0;
+	}
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate %d bytes needed to store MAC addresses\n",
+			    ETHER_ADDR_LEN);
+		return -ENOMEM;
+	}
+
+	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
+
+	return 0;
+}
+
+static int
+eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (eth_dev->data == NULL)
+		return 0;
+
+	if (eth_dev->data->mac_addrs != NULL) {
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
+	}
+
+	return 0;
+}
+
+
+static struct eth_driver rte_avp_pmd = {
+	{
+		.id_table = pci_id_avp_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_eth_dev_pci_probe,
+		.remove = rte_eth_dev_pci_remove,
+	},
+	.eth_dev_init = eth_avp_dev_init,
+	.eth_dev_uninit = eth_avp_dev_uninit,
+	.dev_private_size = sizeof(struct avp_adapter),
+};
+
+RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index d46a33e..9d66257 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -104,6 +104,7 @@ ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v3 08/16] net/avp: device initialization
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (6 preceding siblings ...)
  2017-03-02  0:19     ` [PATCH v3 07/16] net/avp: driver registration Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-03 15:04       ` Chas Williams
  2017-03-02  0:20     ` [PATCH v3 09/16] net/avp: device configuration Allain Legacy
                       ` (8 subsequent siblings)
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds support for initializing newly probed AVP PCI devices.  Initial
queue translations are set up in preparation for device configuration.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 733 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 733 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index f0bf862..467cec8 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -53,6 +53,7 @@
 #include <rte_dev.h>
 #include <rte_memory.h>
 #include <rte_eal.h>
+#include <rte_io.h>
 
 #include "rte_avp_common.h"
 #include "rte_avp_fifo.h"
@@ -60,6 +61,8 @@
 #include "avp_logs.h"
 
 
+static int avp_dev_create(struct rte_pci_device *pci_dev,
+			  struct rte_eth_dev *eth_dev);
 
 static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -103,6 +106,16 @@
 };
 
 
+/**@{ AVP device flags */
+#define AVP_F_PROMISC (1 << 1)
+#define AVP_F_CONFIGURED (1 << 2)
+#define AVP_F_LINKUP (1 << 3)
+#define AVP_F_DETACHED (1 << 4)
+/**@} */
+
+/* Ethernet device validation marker */
+#define AVP_ETHDEV_MAGIC 0x92972862
+
 /*
  * Defines the AVP device attributes which are attached to an RTE ethernet
  * device
@@ -150,11 +163,701 @@ struct avp_adapter {
 	struct avp_dev avp;
 } __rte_cache_aligned;
 
+
+/* 32-bit MMIO register write */
+#define AVP_WRITE32(_value, _addr) rte_write32_relaxed((_value), (_addr))
+
+/* 32-bit MMIO register read */
+#define AVP_READ32(_addr) rte_read32_relaxed((_addr))
+
 /* Macro to cast the ethernet device private data to an AVP object */
 #define AVP_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct avp_adapter *)adapter)->avp)
 
 /*
+ * Defines the structure of a AVP device queue for the purpose of handling the
+ * receive and transmit burst callback functions
+ */
+struct avp_queue {
+	struct rte_eth_dev_data *dev_data;
+	/**< Backpointer to ethernet device data */
+	struct avp_dev *avp; /**< Backpointer to AVP device */
+	uint16_t queue_id;
+	/**< Queue identifier used for indexing current queue */
+	uint16_t queue_base;
+	/**< Base queue identifier for queue servicing */
+	uint16_t queue_limit;
+	/**< Maximum queue identifier for queue servicing */
+
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
+/* send a request and wait for a response
+ *
+ * @warning must be called while holding the avp->lock spinlock.
+ */
+static int
+avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
+{
+	unsigned int retry = AVP_MAX_REQUEST_RETRY;
+	void *resp_addr = NULL;
+	unsigned int count;
+	int ret;
+
+	PMD_DRV_LOG(DEBUG, "Sending request %u to host\n", request->req_id);
+
+	request->result = -ENOTSUP;
+
+	/* Discard any stale responses before starting a new request */
+	while (avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1))
+		PMD_DRV_LOG(DEBUG, "Discarding stale response\n");
+
+	rte_memcpy(avp->sync_addr, request, sizeof(*request));
+	count = avp_fifo_put(avp->req_q, &avp->host_sync_addr, 1);
+	if (count < 1) {
+		PMD_DRV_LOG(ERR, "Cannot send request %u to host\n",
+			    request->req_id);
+		ret = -EBUSY;
+		goto done;
+	}
+
+	while (retry--) {
+		/* wait for a response */
+		usleep(AVP_REQUEST_DELAY_USECS);
+
+		count = avp_fifo_count(avp->resp_q);
+		if (count >= 1) {
+			/* response received */
+			break;
+		}
+
+		if ((count < 1) && (retry == 0)) {
+			PMD_DRV_LOG(ERR, "Timeout while waiting for a response for %u\n",
+				    request->req_id);
+			ret = -ETIME;
+			goto done;
+		}
+	}
+
+	/* retrieve the response */
+	count = avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1);
+	if ((count != 1) || (resp_addr != avp->host_sync_addr)) {
+		PMD_DRV_LOG(ERR, "Invalid response from host, count=%u resp=%p host_sync_addr=%p\n",
+			    count, resp_addr, avp->host_sync_addr);
+		ret = -ENODATA;
+		goto done;
+	}
+
+	/* copy to user buffer */
+	rte_memcpy(request, avp->sync_addr, sizeof(*request));
+	ret = 0;
+
+	PMD_DRV_LOG(DEBUG, "Result %d received for request %u\n",
+		    request->result, request->req_id);
+
+done:
+	return ret;
+}
+
+static int
+avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
+			struct rte_avp_device_config *config)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a configure request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_DEVICE;
+	memcpy(&request.config, config, sizeof(request.config));
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
+avp_dev_ctrl_shutdown(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a shutdown request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_SHUTDOWN_DEVICE;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+/* translate from host physical address to guest virtual address */
+static void *
+avp_dev_translate_address(struct rte_eth_dev *eth_dev,
+			  phys_addr_t host_phys_addr)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_mem_resource *resource;
+	struct rte_avp_memmap_info *info;
+	struct rte_avp_memmap *map;
+	off_t offset;
+	void *addr;
+	unsigned int i;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_MEMORY_BAR].addr;
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_MEMMAP_BAR];
+	info = (struct rte_avp_memmap_info *)resource->addr;
+
+	offset = 0;
+	for (i = 0; i < info->nb_maps; i++) {
+		/* search all segments looking for a matching address */
+		map = &info->maps[i];
+
+		if ((host_phys_addr >= map->phys_addr) &&
+			(host_phys_addr < (map->phys_addr + map->length))) {
+			/* address is within this segment */
+			offset += (host_phys_addr - map->phys_addr);
+			addr = RTE_PTR_ADD(addr, offset);
+
+			PMD_DRV_LOG(DEBUG, "Translating host physical 0x%" PRIx64 " to guest virtual 0x%p\n",
+				    host_phys_addr, addr);
+
+			return addr;
+		}
+		offset += map->length;
+	}
+
+	return NULL;
+}
+
+/* verify that the incoming device version is compatible with our version */
+static int
+avp_dev_version_check(uint32_t version)
+{
+	uint32_t driver = RTE_AVP_STRIP_MINOR_VERSION(AVP_DPDK_DRIVER_VERSION);
+	uint32_t device = RTE_AVP_STRIP_MINOR_VERSION(version);
+
+	if (device <= driver) {
+		/* the host driver version is less than or equal to ours */
+		return 0;
+	}
+
+	return 1;
+}
+
+/* verify that memory regions have expected version and validation markers */
+static int
+avp_dev_check_regions(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_avp_memmap_info *memmap;
+	struct rte_avp_device_info *info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	/* Dump resource info for debug */
+	for (i = 0; i < PCI_MAX_RESOURCE; i++) {
+		resource = &pci_dev->mem_resource[i];
+		if ((resource->phys_addr == 0) || (resource->len == 0))
+			continue;
+
+		PMD_DRV_LOG(DEBUG, "resource[%u]: phys=0x%" PRIx64 " len=%" PRIu64 " addr=%p\n",
+			    i, resource->phys_addr,
+			    resource->len, resource->addr);
+
+		switch (i) {
+		case RTE_AVP_PCI_MEMMAP_BAR:
+			memmap = (struct rte_avp_memmap_info *)resource->addr;
+			if ((memmap->magic != RTE_AVP_MEMMAP_MAGIC) ||
+			    (memmap->version != RTE_AVP_MEMMAP_VERSION)) {
+				PMD_DRV_LOG(ERR, "Invalid memmap magic 0x%08x and version %u\n",
+					    memmap->magic, memmap->version);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_DEVICE_BAR:
+			info = (struct rte_avp_device_info *)resource->addr;
+			if ((info->magic != RTE_AVP_DEVICE_MAGIC) ||
+			    avp_dev_version_check(info->version)) {
+				PMD_DRV_LOG(ERR, "Invalid device info magic 0x%08x or version 0x%08x > 0x%08x\n",
+					    info->magic, info->version,
+					    AVP_DPDK_DRIVER_VERSION);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MEMORY_BAR:
+		case RTE_AVP_PCI_MMIO_BAR:
+			if (resource->addr == NULL) {
+				PMD_DRV_LOG(ERR, "Missing address space for BAR%u\n",
+					    i);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MSIX_BAR:
+		default:
+			/* no validation required */
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avp_dev_detach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Detaching port %u from AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(NOTICE, "port %u already detached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* shut down the device first so the host stops sending us packets. */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to send/recv shutdown to host, ret=%d\n",
+			    ret);
+		avp->flags &= ~AVP_F_DETACHED;
+		goto unlock;
+	}
+
+	avp->flags |= AVP_F_DETACHED;
+	rte_wmb();
+
+	/* wait for queues to acknowledge the presence of the detach flag */
+	rte_delay_ms(1);
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+_avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
+{
+	struct avp_dev *avp =
+		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *rxq;
+	uint16_t queue_count;
+	uint16_t remainder;
+
+	rxq = (struct avp_queue *)eth_dev->data->rx_queues[rx_queue_id];
+
+	/*
+	 * Must map all AVP fifos as evenly as possible between the configured
+	 * device queues.  Each device queue will service a subset of the AVP
+	 * fifos.  If the number of AVP fifos is not an even multiple of the
+	 * number of device queues then the first device queues will service
+	 * one extra AVP fifo each.
+	 */
+	queue_count = avp->num_rx_queues / eth_dev->data->nb_rx_queues;
+	remainder = avp->num_rx_queues % eth_dev->data->nb_rx_queues;
+	if (rx_queue_id < remainder) {
+		/* these queues must service one extra FIFO */
+		rxq->queue_base = rx_queue_id * (queue_count + 1);
+		rxq->queue_limit = rxq->queue_base + (queue_count + 1) - 1;
+	} else {
+		/* these queues service the regular number of FIFOs */
+		rxq->queue_base = ((remainder * (queue_count + 1)) +
+				   ((rx_queue_id - remainder) * queue_count));
+		rxq->queue_limit = rxq->queue_base + queue_count - 1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "rxq %u at %p base %u limit %u\n",
+		    rx_queue_id, rxq, rxq->queue_base, rxq->queue_limit);
+
+	rxq->queue_id = rxq->queue_base;
+}
+
+static void
+_avp_set_queue_counts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	void *addr;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/*
+	 * the transmit direction is not negotiated beyond respecting the max
+	 * number of queues because the host can handle arbitrary guest tx
+	 * queues (host rx queues).
+	 */
+	avp->num_tx_queues = eth_dev->data->nb_tx_queues;
+
+	/*
+	 * the receive direction is more restrictive.  The host requires a
+	 * minimum number of guest rx queues (host tx queues) therefore
+	 * negotiate a value that is at least as large as the host minimum
+	 * requirement.  If the host and guest values are not identical then a
+	 * mapping will be established in the receive_queue_setup function.
+	 */
+	avp->num_rx_queues = RTE_MAX(host_info->min_rx_queues,
+				     eth_dev->data->nb_rx_queues);
+
+	PMD_DRV_LOG(DEBUG, "Requesting %u Tx and %u Rx queues from host\n",
+		    avp->num_tx_queues, avp->num_rx_queues);
+}
+
+static int
+avp_dev_attach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_config config;
+	unsigned int i;
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Attaching port %u to AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (!(avp->flags & AVP_F_DETACHED)) {
+		PMD_DRV_LOG(NOTICE, "port %u already attached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/*
+	 * make sure that the detached flag is set prior to reconfiguring the
+	 * queues.
+	 */
+	avp->flags |= AVP_F_DETACHED;
+	rte_wmb();
+
+	/*
+	 * re-run the device create utility which will parse the new host info
+	 * and setup the AVP device queue pointers.
+	 */
+	ret = avp_dev_create(AVP_DEV_TO_PCI(eth_dev), eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to re-create AVP device, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	if (avp->flags & AVP_F_CONFIGURED) {
+		/*
+		 * Update the receive queue mapping to handle cases where the
+		 * source and destination hosts have different queue
+		 * requirements.  As long as the DETACHED flag is asserted the
+		 * queue table should not be referenced so it should be safe to
+		 * update it.
+		 */
+		_avp_set_queue_counts(eth_dev);
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+			_avp_set_rx_queue_mappings(eth_dev, i);
+
+		/*
+		 * Update the host with our config details so that it knows the
+		 * device is active.
+		 */
+		memset(&config, 0, sizeof(config));
+		config.device_id = avp->device_id;
+		config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+		config.driver_version = AVP_DPDK_DRIVER_VERSION;
+		config.features = avp->features;
+		config.num_tx_queues = avp->num_tx_queues;
+		config.num_rx_queues = avp->num_rx_queues;
+		config.if_up = !!(avp->flags & AVP_F_LINKUP);
+
+		ret = avp_dev_ctrl_set_config(eth_dev, &config);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Config request failed by host, ret=%d\n",
+				    ret);
+			goto unlock;
+		}
+	}
+
+	rte_wmb();
+	avp->flags &= ~AVP_F_DETACHED;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_interrupt_handler(struct rte_intr_handle *intr_handle,
+						  void *data)
+{
+	struct rte_eth_dev *eth_dev = data;
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t status, value;
+	int ret;
+
+	if (registers == NULL)
+		rte_panic("no mapped MMIO register space\n");
+
+	/* read the interrupt status register
+	 * note: this register clears on read so all raised interrupts must be
+	 *    handled or remembered for later processing
+	 */
+	status = AVP_READ32(
+		RTE_PTR_ADD(registers,
+			    RTE_AVP_INTERRUPT_STATUS_OFFSET));
+
+	if (status & RTE_AVP_MIGRATION_INTERRUPT_MASK) {
+		/* handle interrupt based on current status */
+		value = AVP_READ32(
+			RTE_PTR_ADD(registers,
+				    RTE_AVP_MIGRATION_STATUS_OFFSET));
+		switch (value) {
+		case RTE_AVP_MIGRATION_DETACHED:
+			ret = avp_dev_detach(eth_dev);
+			break;
+		case RTE_AVP_MIGRATION_ATTACHED:
+			ret = avp_dev_attach(eth_dev);
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "unexpected migration status, status=%u\n",
+				    value);
+			ret = -EINVAL;
+		}
+
+		/* acknowledge the request by writing out our current status */
+		value = (ret == 0 ? value : RTE_AVP_MIGRATION_ERROR);
+		AVP_WRITE32(value,
+			    RTE_PTR_ADD(registers,
+					RTE_AVP_MIGRATION_ACK_OFFSET));
+
+		PMD_DRV_LOG(NOTICE, "AVP migration interrupt handled\n");
+	}
+
+	if (status & ~RTE_AVP_MIGRATION_INTERRUPT_MASK)
+		PMD_DRV_LOG(WARNING, "AVP unexpected interrupt, status=0x%08x\n",
+			    status);
+
+	/* re-enable UIO interrupt handling */
+	ret = rte_intr_enable(intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
+			    ret);
+		/* continue */
+	}
+}
+
+static int
+avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return -EINVAL;
+
+	/* enable UIO interrupt handling */
+	ret = rte_intr_enable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* inform the device that all interrupts are enabled */
+	AVP_WRITE32(RTE_AVP_APP_INTERRUPTS_MASK,
+		    RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	return 0;
+}
+
+static int
+avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	int ret;
+
+	/* register a callback handler with UIO for interrupt notifications */
+	ret = rte_intr_callback_register(&pci_dev->intr_handle,
+					 avp_dev_interrupt_handler,
+					 (void *)eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to register UIO interrupt callback, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* enable interrupt processing */
+	return avp_dev_enable_interrupts(eth_dev);
+}
+
+static int
+avp_dev_migration_pending(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t value;
+
+	if (registers == NULL)
+		return 0;
+
+	value = AVP_READ32(RTE_PTR_ADD(registers,
+				       RTE_AVP_MIGRATION_STATUS_OFFSET));
+	if (value == RTE_AVP_MIGRATION_DETACHED) {
+		/* migration is in progress; ack it if we have not already */
+		AVP_WRITE32(value,
+			    RTE_PTR_ADD(registers,
+					RTE_AVP_MIGRATION_ACK_OFFSET));
+		return 1;
+	}
+	return 0;
+}
+
+/*
+ * create an AVP device using the supplied device info by first translating
+ * its host addresses into guest address space(s).
+ */
+static int
+avp_dev_create(struct rte_pci_device *pci_dev,
+	       struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR];
+	if (resource->addr == NULL) {
+		PMD_DRV_LOG(ERR, "BAR%u is not mapped\n",
+			    RTE_AVP_PCI_DEVICE_BAR);
+		return -EFAULT;
+	}
+	host_info = (struct rte_avp_device_info *)resource->addr;
+
+	if ((host_info->magic != RTE_AVP_DEVICE_MAGIC) ||
+		avp_dev_version_check(host_info->version)) {
+		PMD_DRV_LOG(ERR, "Invalid AVP PCI device, magic 0x%08x version 0x%08x > 0x%08x\n",
+			    host_info->magic, host_info->version,
+			    AVP_DPDK_DRIVER_VERSION);
+		return -EINVAL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host device is v%u.%u.%u\n",
+		    RTE_AVP_GET_RELEASE_VERSION(host_info->version),
+		    RTE_AVP_GET_MAJOR_VERSION(host_info->version),
+		    RTE_AVP_GET_MINOR_VERSION(host_info->version));
+
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u TX queue(s)\n",
+		    host_info->min_tx_queues, host_info->max_tx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u RX queue(s)\n",
+		    host_info->min_rx_queues, host_info->max_rx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports features 0x%08x\n",
+		    host_info->features);
+
+	if (avp->magic != AVP_ETHDEV_MAGIC) {
+		/*
+		 * First time initialization (i.e., not during a VM
+		 * migration)
+		 */
+		memset(avp, 0, sizeof(*avp));
+		avp->magic = AVP_ETHDEV_MAGIC;
+		avp->dev_data = eth_dev->data;
+		avp->port_id = eth_dev->data->port_id;
+		avp->host_mbuf_size = host_info->mbuf_size;
+		avp->host_features = host_info->features;
+		rte_spinlock_init(&avp->lock);
+		memcpy(&avp->ethaddr.addr_bytes[0],
+		       host_info->ethaddr, ETHER_ADDR_LEN);
+		/* adjust max values to not exceed our max */
+		avp->max_tx_queues =
+			RTE_MIN(host_info->max_tx_queues, RTE_AVP_MAX_QUEUES);
+		avp->max_rx_queues =
+			RTE_MIN(host_info->max_rx_queues, RTE_AVP_MAX_QUEUES);
+	} else {
+		/* Re-attaching during migration */
+
+		/* TODO... requires validation of host values */
+		if ((host_info->features & avp->features) != avp->features) {
+			PMD_DRV_LOG(ERR, "AVP host features mismatched; 0x%08x, host=0x%08x\n",
+				    avp->features, host_info->features);
+			/* this should not be possible; continue for now */
+		}
+	}
+
+	/* the device id is allowed to change over migrations */
+	avp->device_id = host_info->device_id;
+
+	/* translate incoming host addresses to guest address space */
+	PMD_DRV_LOG(DEBUG, "AVP first host tx queue at 0x%" PRIx64 "\n",
+		    host_info->tx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host alloc queue at 0x%" PRIx64 "\n",
+		    host_info->alloc_phys);
+	for (i = 0; i < avp->max_tx_queues; i++) {
+		avp->tx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->tx_phys + (i * host_info->tx_size));
+
+		avp->alloc_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->alloc_phys + (i * host_info->alloc_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP first host rx queue at 0x%" PRIx64 "\n",
+		    host_info->rx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host free queue at 0x%" PRIx64 "\n",
+		    host_info->free_phys);
+	for (i = 0; i < avp->max_rx_queues; i++) {
+		avp->rx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->rx_phys + (i * host_info->rx_size));
+		avp->free_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->free_phys + (i * host_info->free_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host request queue at 0x%" PRIx64 "\n",
+		    host_info->req_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host response queue at 0x%" PRIx64 "\n",
+		    host_info->resp_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host sync address at 0x%" PRIx64 "\n",
+		    host_info->sync_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host mbuf address at 0x%" PRIx64 "\n",
+		    host_info->mbuf_phys);
+	avp->req_q = avp_dev_translate_address(eth_dev, host_info->req_phys);
+	avp->resp_q = avp_dev_translate_address(eth_dev, host_info->resp_phys);
+	avp->sync_addr =
+		avp_dev_translate_address(eth_dev, host_info->sync_phys);
+	avp->mbuf_addr =
+		avp_dev_translate_address(eth_dev, host_info->mbuf_phys);
+
+	/*
+	 * store the host mbuf virtual address so that we can calculate
+	 * relative offsets for each mbuf as they are processed
+	 */
+	avp->host_mbuf_addr = host_info->mbuf_va;
+	avp->host_sync_addr = host_info->sync_va;
+
+	/*
+	 * store the maximum packet length that is supported by the host.
+	 */
+	avp->max_rx_pkt_len = host_info->max_rx_pkt_len;
+	PMD_DRV_LOG(DEBUG, "AVP host max receive packet length is %u\n",
+				host_info->max_rx_pkt_len);
+
+	return 0;
+}
+
+/*
  * This function is based on probe() function in avp_pci.c
  * It returns 0 on success.
  */
@@ -164,6 +867,7 @@ struct avp_adapter {
 	struct avp_dev *avp =
 		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_pci_device *pci_dev;
+	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 
@@ -181,6 +885,34 @@ struct avp_adapter {
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check current migration status */
+	if (avp_dev_migration_pending(eth_dev)) {
+		PMD_DRV_LOG(ERR, "VM live migration operation in progress\n");
+		return -EBUSY;
+	}
+
+	/* Check BAR resources */
+	ret = avp_dev_check_regions(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to validate BAR resources, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* Enable interrupts */
+	ret = avp_dev_setup_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* Create the AVP device by parsing the host device info */
+	ret = avp_dev_create(pci_dev, eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to create device, ret=%d\n", ret);
+		return ret;
+	}
+
 	/* Allocate memory for storing MAC addresses */
 	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
@@ -189,6 +921,7 @@ struct avp_adapter {
 		return -ENOMEM;
 	}
 
+	/* Get the MAC address from the device config */
 	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
 
 	return 0;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
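The address translation added in this patch walks the memory map published in
the MEMMAP BAR: the MEMORY BAR maps the host segments back to back in guest
virtual space, so a host physical address resolves to the BAR base plus the
accumulated lengths of all earlier segments plus the offset into the matching
segment.  A standalone sketch of that walk (the names `struct memmap` and
`translate` are illustrative, not the driver's):

```c
#include <stddef.h>
#include <stdint.h>

struct memmap {
	uint64_t phys_addr;	/* host physical start of the segment */
	uint64_t length;	/* segment length in bytes */
};

/* Resolve a host physical address against the ordered segment table;
 * 'base' is the guest virtual address of the MEMORY BAR, which holds
 * the segments contiguously in table order.  Returns NULL when no
 * segment covers the address. */
static void *
translate(void *base, const struct memmap *maps, unsigned int nb_maps,
	  uint64_t host_phys)
{
	uint64_t offset = 0;
	unsigned int i;

	for (i = 0; i < nb_maps; i++) {
		if (host_phys >= maps[i].phys_addr &&
		    host_phys < maps[i].phys_addr + maps[i].length)
			return (char *)base + offset +
				(host_phys - maps[i].phys_addr);
		offset += maps[i].length;	/* skip this segment's span */
	}
	return NULL;
}
```

For example, with segments of 0x1000 and 0x2000 bytes, an address 0x10 bytes
into the second segment resolves to base + 0x1000 + 0x10.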

* [PATCH v3 09/16] net/avp: device configuration
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (7 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 08/16] net/avp: device initialization Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-02  0:20     ` [PATCH v3 10/16] net/avp: queue setup and release Allain Legacy
                       ` (7 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds support for "dev_configure" operations to allow an application to
configure the device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 131 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 131 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 467cec8..e9c67ce 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -67,6 +67,12 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
 
+static int avp_dev_configure(struct rte_eth_dev *dev);
+static void avp_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
+static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avp_dev_link_update(struct rte_eth_dev *dev,
+			       __rte_unused int wait_to_complete);
 
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
@@ -105,6 +111,15 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 	},
 };
 
+/*
+ * dev_ops for avp, bare necessities for basic operation
+ */
+static const struct eth_dev_ops avp_eth_dev_ops = {
+	.dev_configure       = avp_dev_configure,
+	.dev_infos_get       = avp_dev_info_get,
+	.vlan_offload_set    = avp_vlan_offload_set,
+	.link_update         = avp_dev_link_update,
+};
 
 /**@{ AVP device flags */
 #define AVP_F_PROMISC (1 << 1)
@@ -870,6 +885,7 @@ struct avp_queue {
 	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	eth_dev->dev_ops = &avp_eth_dev_ops;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -958,6 +974,121 @@ struct avp_queue {
 };
 
 
+static int
+avp_dev_configure(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_avp_device_config config;
+	int mask = 0;
+	void *addr;
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/* Setup required number of queues */
+	_avp_set_queue_counts(eth_dev);
+
+	mask = (ETH_VLAN_STRIP_MASK |
+		ETH_VLAN_FILTER_MASK |
+		ETH_VLAN_EXTEND_MASK);
+	avp_vlan_offload_set(eth_dev, mask);
+
+	/* update device config */
+	memset(&config, 0, sizeof(config));
+	config.device_id = host_info->device_id;
+	config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+	config.driver_version = AVP_DPDK_DRIVER_VERSION;
+	config.features = avp->features;
+	config.num_tx_queues = avp->num_tx_queues;
+	config.num_rx_queues = avp->num_rx_queues;
+
+	ret = avp_dev_ctrl_set_config(eth_dev, &config);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Config request failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	avp->flags |= AVP_F_CONFIGURED;
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+
+static int
+avp_dev_link_update(struct rte_eth_dev *eth_dev,
+					__rte_unused int wait_to_complete)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_eth_link *link = &eth_dev->data->dev_link;
+
+	link->link_speed = ETH_SPEED_NUM_10G;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = !!(avp->flags & AVP_F_LINKUP);
+
+	return -1;
+}
+
+
+static void
+avp_dev_info_get(struct rte_eth_dev *eth_dev,
+		 struct rte_eth_dev_info *dev_info)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	dev_info->driver_name = "rte_avp_pmd";
+	dev_info->pci_dev = RTE_DEV_TO_PCI(eth_dev->device);
+	dev_info->max_rx_queues = avp->max_rx_queues;
+	dev_info->max_tx_queues = avp->max_tx_queues;
+	dev_info->min_rx_bufsize = AVP_MIN_RX_BUFSIZE;
+	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
+	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
+	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+	}
+}
+
+static void
+avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+			if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
+			else
+				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
+		} else {
+			PMD_DRV_LOG(ERR, "VLAN strip offload not supported\n");
+		}
+	}
+
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_filter)
+			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_extend)
+			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
+	}
+}
+
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
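The `config.driver_version` sent above is `AVP_DPDK_DRIVER_VERSION`;
compatibility in the other direction is decided by `avp_dev_version_check()`
from the initialization patch, which strips the minor number before comparing
so that minor revisions remain compatible.  A sketch of that rule, assuming
(hypothetically) that the minor number occupies the low byte of the version
word:

```c
#include <stdint.h>

/* Hypothetical stand-in for RTE_AVP_STRIP_MINOR_VERSION: assume the
 * minor number is the low byte.  Minor revisions are wire-compatible,
 * so they are ignored by the comparison. */
#define STRIP_MINOR(v)	((uint32_t)(v) >> 8)

/* Returns 0 when compatible (host not newer than the driver) and 1
 * otherwise, matching avp_dev_version_check()'s convention. */
static int
version_check(uint32_t driver_version, uint32_t device_version)
{
	return STRIP_MINOR(device_version) <= STRIP_MINOR(driver_version)
		? 0 : 1;
}
```

Under this layout a host at 1.1.x attaches to a 1.2.x driver, but a 2.x host
is rejected as too new.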

* [PATCH v3 10/16] net/avp: queue setup and release
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (8 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 09/16] net/avp: device configuration Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-02  0:20     ` [PATCH v3 11/16] net/avp: packet receive functions Allain Legacy
                       ` (6 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds queue management operations so that an application can set up and
release the transmit and receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 144 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 143 insertions(+), 1 deletion(-)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index e9c67ce..b0c5ae4 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -73,7 +73,21 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
-
+static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t rx_queue_id,
+				  uint16_t nb_rx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_rxconf *rx_conf,
+				  struct rte_mempool *pool);
+
+static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t tx_queue_id,
+				  uint16_t nb_tx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_txconf *tx_conf);
+
+static void avp_dev_rx_queue_release(void *rxq);
+static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -119,6 +133,10 @@ static int avp_dev_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.link_update         = avp_dev_link_update,
+	.rx_queue_setup      = avp_dev_rx_queue_setup,
+	.rx_queue_release    = avp_dev_rx_queue_release,
+	.tx_queue_setup      = avp_dev_tx_queue_setup,
+	.tx_queue_release    = avp_dev_tx_queue_release,
 };
 
 /**@{ AVP device flags */
@@ -975,6 +993,130 @@ struct avp_queue {
 
 
 static int
+avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t rx_queue_id,
+		       uint16_t nb_rx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *pool)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct avp_queue *rxq;
+
+	if (rx_queue_id >= eth_dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue id is out of range: rx_queue_id=%u, nb_rx_queues=%u\n",
+			    rx_queue_id, eth_dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	/* Save mbuf pool pointer */
+	avp->pool = pool;
+
+	/* Save the local mbuf size */
+	mbp_priv = rte_mempool_get_priv(pool);
+	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
+	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
+
+	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
+		    avp->max_rx_pkt_len,
+		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+		    avp->host_mbuf_size,
+		    avp->guest_mbuf_size);
+
+	/* allocate a queue object */
+	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Rx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* save back pointers to AVP and Ethernet devices */
+	rxq->avp = avp;
+	rxq->dev_data = eth_dev->data;
+	eth_dev->data->rx_queues[rx_queue_id] = (void *)rxq;
+
+	/* setup the queue receive mapping for the current queue. */
+	_avp_set_rx_queue_mappings(eth_dev, rx_queue_id);
+
+	PMD_DRV_LOG(DEBUG, "Rx queue %u setup at %p\n", rx_queue_id, rxq);
+
+	(void)nb_rx_desc;
+	(void)rx_conf;
+	return 0;
+}
+
+static int
+avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t tx_queue_id,
+		       uint16_t nb_tx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *txq;
+
+	if (tx_queue_id >= eth_dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue id is out of range: tx_queue_id=%u, nb_tx_queues=%u\n",
+			    tx_queue_id, eth_dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	/* allocate a queue object */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Tx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* only the configured set of transmit queues are used */
+	txq->queue_id = tx_queue_id;
+	txq->queue_base = tx_queue_id;
+	txq->queue_limit = tx_queue_id;
+
+	/* save back pointers to AVP and Ethernet devices */
+	txq->avp = avp;
+	txq->dev_data = eth_dev->data;
+	eth_dev->data->tx_queues[tx_queue_id] = (void *)txq;
+
+	PMD_DRV_LOG(DEBUG, "Tx queue %u setup at %p\n", tx_queue_id, txq);
+
+	(void)nb_tx_desc;
+	(void)tx_conf;
+	return 0;
+}
+
+static void
+avp_dev_rx_queue_release(void *rx_queue)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct avp_dev *avp = rxq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		if (data->rx_queues[i] == rxq)
+			data->rx_queues[i] = NULL;
+	}
+}
+
+static void
+avp_dev_tx_queue_release(void *tx_queue)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct avp_dev *avp = txq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		if (data->tx_queues[i] == txq)
+			data->tx_queues[i] = NULL;
+	}
+}
+
+static int
 avp_dev_configure(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
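`avp_dev_rx_queue_setup()` above finishes by calling
`_avp_set_rx_queue_mappings()`, which spreads the negotiated AVP FIFOs over
the configured ethdev queues: each queue services `nb_fifos / nb_queues`
FIFOs and the first `nb_fifos % nb_queues` queues take one extra.  The
arithmetic, extracted into standalone form (names are illustrative, and
`nb_queues` is assumed non-zero as the ethdev layer guarantees):

```c
#include <stdint.h>

struct fifo_range {
	uint16_t base;	/* first AVP FIFO serviced by this queue */
	uint16_t limit;	/* last AVP FIFO serviced by this queue */
};

/* Distribute nb_fifos AVP FIFOs across nb_queues device queues as
 * evenly as possible; queues below the remainder service one extra. */
static struct fifo_range
map_rx_queue(uint16_t nb_fifos, uint16_t nb_queues, uint16_t queue_id)
{
	uint16_t count = nb_fifos / nb_queues;
	uint16_t remainder = nb_fifos % nb_queues;
	struct fifo_range r;

	if (queue_id < remainder) {
		r.base = queue_id * (count + 1);
		r.limit = r.base + count;	/* services count + 1 FIFOs */
	} else {
		r.base = remainder * (count + 1) +
			(queue_id - remainder) * count;
		r.limit = r.base + count - 1;	/* services count FIFOs */
	}
	return r;
}
```

With 5 FIFOs over 2 queues, queue 0 services FIFOs 0-2 and queue 1 services
FIFOs 3-4, matching the "first queues get the extras" rule in the driver
comment.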

* [PATCH v3 11/16] net/avp: packet receive functions
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (9 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 10/16] net/avp: queue setup and release Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-02  0:20     ` [PATCH v3 12/16] net/avp: packet transmit functions Allain Legacy
                       ` (5 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds the functions required for receiving packets from the host application
via AVP device queues.  Both the simple and scattered receive functions are
supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/Makefile     |   1 +
 drivers/net/avp/avp_ethdev.c | 461 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 462 insertions(+)

diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 9cf0449..3013cd1 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -56,5 +56,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_mempool lib/librte_mbuf
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index b0c5ae4..836d4e4 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -86,11 +86,19 @@ static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
 				  unsigned int socket_id,
 				  const struct rte_eth_txconf *tx_conf);
 
+static uint16_t avp_recv_scattered_pkts(void *rx_queue,
+					struct rte_mbuf **rx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_recv_pkts(void *rx_queue,
+			      struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts);
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
+#define AVP_MAX_RX_BURST 64
 #define AVP_MAX_MAC_ADDRS 1
 #define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -328,6 +336,15 @@ struct avp_queue {
 	return ret == 0 ? request.result : ret;
 }
 
+/* translate from host mbuf virtual address to guest virtual address */
+static inline void *
+avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
+{
+	return RTE_PTR_ADD(RTE_PTR_SUB(host_mbuf_address,
+				       (uintptr_t)avp->host_mbuf_addr),
+			   (uintptr_t)avp->mbuf_addr);
+}
+
 /* translate from host physical address to guest virtual address */
 static void *
 avp_dev_translate_address(struct rte_eth_dev *eth_dev,
@@ -904,6 +921,7 @@ struct avp_queue {
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avp_recv_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -912,6 +930,10 @@ struct avp_queue {
 		 * be mapped to the same virtual address so all pointers should
 		 * be valid.
 		 */
+		if (eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
 		return 0;
 	}
 
@@ -993,6 +1015,38 @@ struct avp_queue {
 
 
 static int
+avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
+			 struct avp_dev *avp)
+{
+	unsigned int max_rx_pkt_len;
+
+	max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+
+	if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the guest MTU is greater than either the host or guest
+		 * buffers then chained mbufs have to be enabled in the TX
+		 * direction.  It is assumed that the application will not need
+		 * to send packets larger than its max_rx_pkt_len (MRU).
+		 */
+		return 1;
+	}
+
+	if ((avp->max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (avp->max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the host MRU is greater than its own mbuf size or the
+		 * guest mbuf size then chained mbufs have to be enabled in the
+		 * RX direction.
+		 */
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 		       uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc,
@@ -1018,6 +1072,14 @@ struct avp_queue {
 	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
 	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
 
+	if (avp_dev_enable_scattered(eth_dev, avp)) {
+		if (!eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
+			eth_dev->data->scattered_rx = 1;
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
+	}
+
 	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
 		    avp->max_rx_pkt_len,
 		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
@@ -1088,6 +1150,405 @@ struct avp_queue {
 	return 0;
 }
 
+static inline int
+_avp_cmp_ether_addr(struct ether_addr *a, struct ether_addr *b)
+{
+	uint16_t *_a = (uint16_t *)&a->addr_bytes[0];
+	uint16_t *_b = (uint16_t *)&b->addr_bytes[0];
+	return (_a[0] ^ _b[0]) | (_a[1] ^ _b[1]) | (_a[2] ^ _b[2]);
+}
+
+static inline int
+_avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
+{
+	struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+		/* allow all packets destined to our address */
+		return 0;
+	}
+
+	if (likely(is_broadcast_ether_addr(&eth->d_addr))) {
+		/* allow all broadcast packets */
+		return 0;
+	}
+
+	if (likely(is_multicast_ether_addr(&eth->d_addr))) {
+		/* allow all multicast packets */
+		return 0;
+	}
+
+	if (avp->flags & AVP_F_PROMISC) {
+		/* allow all packets when in promiscuous mode */
+		return 0;
+	}
+
+	return -1;
+}
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+static inline void
+__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+{
+	struct rte_avp_desc *first_buf;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int pkt_len;
+	unsigned int nb_segs;
+	void *pkt_data;
+	unsigned int i;
+
+	first_buf = avp_dev_translate_buffer(avp, buf);
+
+	i = 0;
+	pkt_len = 0;
+	nb_segs = first_buf->nb_segs;
+	do {
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		if (pkt_buf == NULL)
+			rte_panic("bad buffer: segment %u has an invalid address %p\n",
+				  i, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		if (pkt_data == NULL)
+			rte_panic("bad buffer: segment %u has a NULL data pointer\n",
+				  i);
+		if (pkt_buf->data_len == 0)
+			rte_panic("bad buffer: segment %u has 0 data length\n",
+				  i);
+		pkt_len += pkt_buf->data_len;
+		nb_segs--;
+		i++;
+
+	} while (nb_segs && (buf = pkt_buf->next) != NULL);
+
+	if (nb_segs != 0)
+		rte_panic("bad buffer: expected %u segments found %u\n",
+			  first_buf->nb_segs, (first_buf->nb_segs - nb_segs));
+	if (pkt_len != first_buf->pkt_len)
+		rte_panic("bad buffer: expected length %u found %u\n",
+			  first_buf->pkt_len, pkt_len);
+}
+
+#define avp_dev_buffer_sanity_check(a, b) \
+	__avp_dev_buffer_sanity_check((a), (b))
+
+#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
+
+#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+
+#endif
+
+/*
+ * Copy a host buffer chain to a set of mbufs.  This function assumes that
+ * there are exactly the required number of mbufs to copy all source bytes.
+ */
+static inline struct rte_mbuf *
+avp_dev_copy_from_buffers(struct avp_dev *avp,
+			  struct rte_avp_desc *buf,
+			  struct rte_mbuf **mbufs,
+			  unsigned int count)
+{
+	struct rte_mbuf *m_previous = NULL;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int total_length = 0;
+	unsigned int copy_length;
+	unsigned int src_offset;
+	struct rte_mbuf *m;
+	uint16_t ol_flags;
+	uint16_t vlan_tci;
+	void *pkt_data;
+	unsigned int i;
+
+	avp_dev_buffer_sanity_check(avp, buf);
+
+	/* setup the first source buffer */
+	pkt_buf = avp_dev_translate_buffer(avp, buf);
+	pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+	total_length = pkt_buf->pkt_len;
+	src_offset = 0;
+
+	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+		ol_flags = PKT_RX_VLAN_PKT;
+		vlan_tci = pkt_buf->vlan_tci;
+	} else {
+		ol_flags = 0;
+		vlan_tci = 0;
+	}
+
+	for (i = 0; (i < count) && (buf != NULL); i++) {
+		/* fill each destination buffer */
+		m = mbufs[i];
+
+		if (m_previous != NULL)
+			m_previous->next = m;
+
+		m_previous = m;
+
+		do {
+			/*
+			 * Copy as many source buffers as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->guest_mbuf_size -
+					       rte_pktmbuf_data_len(m)),
+					      (pkt_buf->data_len -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       rte_pktmbuf_data_len(m)),
+				   RTE_PTR_ADD(pkt_data, src_offset),
+				   copy_length);
+			rte_pktmbuf_data_len(m) += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == pkt_buf->data_len)) {
+				/* need a new source buffer */
+				buf = pkt_buf->next;
+				if (buf != NULL) {
+					pkt_buf = avp_dev_translate_buffer(
+						avp, buf);
+					pkt_data = avp_dev_translate_buffer(
+						avp, pkt_buf->data);
+					src_offset = 0;
+				}
+			}
+
+			if (unlikely(rte_pktmbuf_data_len(m) ==
+				     avp->guest_mbuf_size)) {
+				/* need a new destination mbuf */
+				break;
+			}
+
+		} while (buf != NULL);
+	}
+
+	m = mbufs[0];
+	m->ol_flags = ol_flags;
+	m->nb_segs = count;
+	rte_pktmbuf_pkt_len(m) = total_length;
+	m->vlan_tci = vlan_tci;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	return m;
+}
+
+static uint16_t
+avp_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_RX_BURST];
+	struct rte_mbuf *mbufs[RTE_AVP_MAX_MBUF_SEGMENTS];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	struct rte_avp_desc *buf;
+	unsigned int count, avail, n;
+	unsigned int guest_mbuf_size;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int buf_len;
+	unsigned int port_id;
+	unsigned int i;
+
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
+	guest_mbuf_size = avp->guest_mbuf_size;
+	port_id = avp->port_id;
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i + 1 < n) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+		buf = avp_bufs[i];
+
+		/* Peek into the first buffer to determine the total length */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		buf_len = pkt_buf->pkt_len;
+
+		/* Allocate enough mbufs to receive the entire packet */
+		required = (buf_len + guest_mbuf_size - 1) / guest_mbuf_size;
+		if (rte_pktmbuf_alloc_bulk(avp->pool, mbufs, required)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* Copy the data from the buffers to our mbufs */
+		m = avp_dev_copy_from_buffers(avp, buf, mbufs, required);
+
+		/* finalize mbuf */
+		m->port = port_id;
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += buf_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
+
+static uint16_t
+avp_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_RX_BURST];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	unsigned int count, avail, n;
+	unsigned int pkt_len;
+	struct rte_mbuf *m;
+	char *pkt_data;
+	unsigned int i;
+
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i < n - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust host pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = pkt_buf->pkt_len;
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+			     (pkt_buf->nb_segs > 1))) {
+			/*
+			 * application should be using the scattered receive
+			 * function
+			 */
+			rxq->errors++;
+			continue;
+		}
+
+		/* allocate a new mbuf to hold the received packet */
+		m = rte_pktmbuf_alloc(avp->pool);
+		if (unlikely(m == NULL)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* copy data out of the host buffer to our buffer */
+		m->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_memcpy(rte_pktmbuf_mtod(m, void *), pkt_data, pkt_len);
+
+		/* initialize the local mbuf */
+		rte_pktmbuf_data_len(m) = pkt_len;
+		rte_pktmbuf_pkt_len(m) = pkt_len;
+		m->port = avp->port_id;
+
+		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+			m->ol_flags = PKT_RX_VLAN_PKT;
+			m->vlan_tci = pkt_buf->vlan_tci;
+		}
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += pkt_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
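The receive patch above leans on `avp_dev_translate_buffer()` to rebase host-side pointers into the guest's mapping of the shared region. As a minimal standalone sketch (not driver code): `host_base` and `guest_base` are hypothetical stand-ins for `avp->host_mbuf_addr` and `avp->mbuf_addr`, and addresses are modeled as plain integers.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the arithmetic in avp_dev_translate_buffer(): a host-side
 * pointer into the shared buffer region is rebased onto the guest's
 * mapping of the same region by preserving its offset from the base.
 */
static uintptr_t
translate_buffer(uintptr_t host_addr, uintptr_t host_base,
		 uintptr_t guest_base)
{
	return guest_base + (host_addr - host_base);
}
```

Because both sides map the same physical region, only the base differs; every descriptor and data pointer keeps its offset.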

* [PATCH v3 12/16] net/avp: packet transmit functions
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (10 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 11/16] net/avp: packet receive functions Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-02  0:20     ` [PATCH v3 13/16] net/avp: device statistics operations Allain Legacy
                       ` (4 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds support for packet transmit functions so that an application can send
packets to the host application via an AVP device queue.  Both the simple
and scattered functions are supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 349 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 349 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 836d4e4..bd7f7ec 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -93,12 +93,24 @@ static uint16_t avp_recv_scattered_pkts(void *rx_queue,
 static uint16_t avp_recv_pkts(void *rx_queue,
 			      struct rte_mbuf **rx_pkts,
 			      uint16_t nb_pkts);
+
+static uint16_t avp_xmit_scattered_pkts(void *tx_queue,
+					struct rte_mbuf **tx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_xmit_pkts(void *tx_queue,
+			      struct rte_mbuf **tx_pkts,
+			      uint16_t nb_pkts);
+
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
+
+
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
 #define AVP_MAX_RX_BURST 64
+#define AVP_MAX_TX_BURST 64
 #define AVP_MAX_MAC_ADDRS 1
 #define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -922,6 +934,7 @@ struct avp_queue {
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &avp_recv_pkts;
+	eth_dev->tx_pkt_burst = &avp_xmit_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -933,6 +946,7 @@ struct avp_queue {
 		if (eth_dev->data->scattered_rx) {
 			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 		return 0;
 	}
@@ -1077,6 +1091,7 @@ struct avp_queue {
 			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
 			eth_dev->data->scattered_rx = 1;
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 	}
 
@@ -1549,6 +1564,340 @@ struct avp_queue {
 	return count;
 }
 
+/*
+ * Copy a chained mbuf to a set of host buffers.  This function assumes that
+ * there are sufficient destination buffers to contain the entire source
+ * packet.
+ */
+static inline uint16_t
+avp_dev_copy_to_buffers(struct avp_dev *avp,
+			struct rte_mbuf *mbuf,
+			struct rte_avp_desc **buffers,
+			unsigned int count)
+{
+	struct rte_avp_desc *previous_buf = NULL;
+	struct rte_avp_desc *first_buf = NULL;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_desc *buf;
+	size_t total_length;
+	struct rte_mbuf *m;
+	size_t copy_length;
+	size_t src_offset;
+	char *pkt_data;
+	unsigned int i;
+
+	__rte_mbuf_sanity_check(mbuf, 1);
+
+	m = mbuf;
+	src_offset = 0;
+	total_length = rte_pktmbuf_pkt_len(m);
+	for (i = 0; (i < count) && (m != NULL); i++) {
+		/* fill each destination buffer */
+		buf = buffers[i];
+
+		if (i < count - 1) {
+			/* prefetch next entry while processing this one */
+			pkt_buf = avp_dev_translate_buffer(avp, buffers[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+
+		/* setup the buffer chain */
+		if (previous_buf != NULL)
+			previous_buf->next = buf;
+		else
+			first_buf = pkt_buf;
+
+		previous_buf = pkt_buf;
+
+		do {
+			/*
+			 * copy as many source mbuf segments as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->host_mbuf_size -
+					       pkt_buf->data_len),
+					      (rte_pktmbuf_data_len(m) -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(pkt_data, pkt_buf->data_len),
+				   RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       src_offset),
+				   copy_length);
+			pkt_buf->data_len += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == rte_pktmbuf_data_len(m))) {
+				/* need a new source buffer */
+				m = m->next;
+				src_offset = 0;
+			}
+
+			if (unlikely(pkt_buf->data_len ==
+				     avp->host_mbuf_size)) {
+				/* need a new destination buffer */
+				break;
+			}
+
+		} while (m != NULL);
+	}
+
+	first_buf->nb_segs = count;
+	first_buf->pkt_len = total_length;
+
+	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+		first_buf->vlan_tci = mbuf->vlan_tci;
+	}
+
+	avp_dev_buffer_sanity_check(avp, buffers[0]);
+
+	return total_length;
+}
+
+
+static uint16_t
+avp_xmit_scattered_pkts(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_avp_desc *avp_bufs[(AVP_MAX_TX_BURST *
+				       RTE_AVP_MAX_MBUF_SEGMENTS)];
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *tx_bufs[AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	unsigned int orig_nb_pkts;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int segments;
+	unsigned int tx_bytes;
+	unsigned int i;
+
+	orig_nb_pkts = nb_pkts;
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop? */
+		txq->errors += nb_pkts;
+		return 0;
+	}
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > AVP_MAX_TX_BURST))
+		nb_pkts = AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+	if (unlikely(avail > (AVP_MAX_TX_BURST *
+			      RTE_AVP_MAX_MBUF_SEGMENTS)))
+		avail = AVP_MAX_TX_BURST * RTE_AVP_MAX_MBUF_SEGMENTS;
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	nb_pkts = RTE_MIN(count, nb_pkts);
+
+	/* determine how many packets will fit in the available buffers */
+	count = 0;
+	segments = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		if (likely(i < (unsigned int)nb_pkts - 1)) {
+			/* prefetch next entry while processing this one */
+			rte_prefetch0(tx_pkts[i + 1]);
+		}
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		if (unlikely((required == 0) ||
+			     (required > RTE_AVP_MAX_MBUF_SEGMENTS)))
+			break;
+		else if (unlikely(required + segments > avail))
+			break;
+		segments += required;
+		count++;
+	}
+	nb_pkts = count;
+
+	if (unlikely(nb_pkts == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   nb_pkts, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, segments);
+	if (unlikely(n != segments)) {
+		PMD_TX_LOG(DEBUG, "Failed to allocate buffers "
+			   "n=%u, segments=%u, orig=%u\n",
+			   n, segments, orig_nb_pkts);
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	count = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* determine how many buffers are required for this packet */
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		tx_bytes += avp_dev_copy_to_buffers(avp, m,
+						    &avp_bufs[count], required);
+		tx_bufs[i] = avp_bufs[count];
+		count += required;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += nb_pkts;
+	txq->bytes += tx_bytes;
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+	for (i = 0; i < nb_pkts; i++)
+		avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+#endif
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&tx_bufs[0], nb_pkts);
+	if (unlikely(n != orig_nb_pkts))
+		txq->errors += (orig_nb_pkts - n);
+
+	return n;
+}
+
+
+static uint16_t
+avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	struct rte_mbuf *m;
+	unsigned int pkt_len;
+	unsigned int tx_bytes;
+	char *pkt_data;
+	unsigned int i;
+
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop?! */
+		txq->errors++;
+		return 0;
+	}
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > AVP_MAX_TX_BURST))
+		nb_pkts = AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+
+	if (unlikely(count == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors++;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   count, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, count);
+	if (unlikely(n != count)) {
+		txq->errors++;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	for (i = 0; i < count; i++) {
+		/* prefetch next entry while processing the current one */
+		if (i < count - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = rte_pktmbuf_pkt_len(m);
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+					 (pkt_len > avp->host_mbuf_size))) {
+			/*
+			 * application should be using the scattered transmit
+			 * function; send it truncated to avoid the performance
+			 * hit of having to manage returning the already
+			 * allocated buffer to the free list.  This should not
+			 * happen since the application should have set the
+			 * max_rx_pkt_len based on its MTU and it should be
+			 * policing its own packet sizes.
+			 */
+			txq->errors++;
+			pkt_len = RTE_MIN(avp->guest_mbuf_size,
+					  avp->host_mbuf_size);
+		}
+
+		/* copy data out of our mbuf and into the AVP buffer */
+		rte_memcpy(pkt_data, rte_pktmbuf_mtod(m, void *), pkt_len);
+		pkt_buf->pkt_len = pkt_len;
+		pkt_buf->data_len = pkt_len;
+		pkt_buf->nb_segs = 1;
+		pkt_buf->next = NULL;
+
+		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+			pkt_buf->vlan_tci = m->vlan_tci;
+		}
+
+		tx_bytes += pkt_len;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += count;
+	txq->bytes += tx_bytes;
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&avp_bufs[0], count);
+
+	return n;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
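The scattered transmit path above sizes each packet in whole host buffers with a ceiling division, then admits packets into the burst only while the running segment total still fits the buffers available on `alloc_q`. A standalone sketch of that arithmetic (`buffers_required` is a hypothetical helper name, not a driver symbol):

```c
#include <assert.h>

/*
 * ceil(pkt_len / buf_size): number of host buffers needed to hold one
 * packet, mirroring the "required" computation in avp_xmit_scattered_pkts().
 * A zero-length packet yields 0, which the driver's admission loop rejects.
 */
static unsigned int
buffers_required(unsigned int pkt_len, unsigned int buf_size)
{
	return (pkt_len + buf_size - 1) / buf_size;
}
```

In the driver, packets are also dropped from the burst when `required` exceeds `RTE_AVP_MAX_MBUF_SEGMENTS` or when `required + segments` would overrun the available buffer count.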

* [PATCH v3 13/16] net/avp: device statistics operations
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (11 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 12/16] net/avp: packet transmit functions Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-02  0:35       ` Stephen Hemminger
  2017-03-02  0:20     ` [PATCH v3 14/16] net/avp: device promiscuous functions Allain Legacy
                       ` (3 subsequent siblings)
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds device functions to query and reset statistics.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 68 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index bd7f7ec..81c6551 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -105,6 +105,10 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 
+static void avp_dev_stats_get(struct rte_eth_dev *dev,
+			      struct rte_eth_stats *stats);
+static void avp_dev_stats_reset(struct rte_eth_dev *dev);
+
 
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
@@ -152,6 +156,8 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 	.dev_configure       = avp_dev_configure,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
+	.stats_get           = avp_dev_stats_get,
+	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
@@ -2041,6 +2047,68 @@ struct avp_queue {
 	}
 }
 
+static void
+avp_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			stats->ipackets += rxq->packets;
+			stats->ibytes += rxq->bytes;
+			stats->ierrors += rxq->errors;
+
+			if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+				stats->q_ipackets[i] += rxq->packets;
+				stats->q_ibytes[i] += rxq->bytes;
+				stats->q_errors[i] += rxq->errors;
+			}
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			stats->opackets += txq->packets;
+			stats->obytes += txq->bytes;
+			stats->oerrors += txq->errors;
+
+			if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+				stats->q_opackets[i] += txq->packets;
+				stats->q_obytes[i] += txq->bytes;
+				stats->q_errors[i] += txq->errors;
+			}
+		}
+	}
+}
+
+static void
+avp_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			rxq->bytes = 0;
+			rxq->packets = 0;
+			rxq->errors = 0;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			txq->bytes = 0;
+			txq->packets = 0;
+			txq->errors = 0;
+		}
+	}
+}
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
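The stats callback above sums per-queue software counters into the `rte_eth_stats` totals. Its shape, reduced to a sketch with a hypothetical `q_counters` type standing in for the fields kept in `struct avp_queue`:

```c
#include <stdint.h>
#include <string.h>

/* hypothetical stand-in for the per-queue counters in struct avp_queue */
struct q_counters {
	uint64_t packets;
	uint64_t bytes;
	uint64_t errors;
};

/* zero the totals, then accumulate each queue's counters into them */
static void
counters_sum(const struct q_counters *q, unsigned int n,
	     struct q_counters *total)
{
	unsigned int i;

	memset(total, 0, sizeof(*total));
	for (i = 0; i < n; i++) {
		total->packets += q[i].packets;
		total->bytes += q[i].bytes;
		total->errors += q[i].errors;
	}
}
```

The reset callback is the same walk with assignment to zero instead of accumulation; note that the per-queue `q_*` arrays in `rte_eth_stats` only have `RTE_ETHDEV_QUEUE_STAT_CNTRS` slots, so queue indices must be bounded when filling them.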

* [PATCH v3 14/16] net/avp: device promiscuous functions
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (12 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 13/16] net/avp: device statistics operations Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-02  0:20     ` [PATCH v3 15/16] net/avp: device start and stop operations Allain Legacy
                       ` (2 subsequent siblings)
  16 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds support for setting and clearing promiscuous mode on an AVP device.
When enabled, the _avp_mac_filter function will allow packets destined to any
MAC address to be processed by the receive functions.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 81c6551..2fe1251 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -73,6 +73,9 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
+static void avp_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avp_dev_promiscuous_disable(struct rte_eth_dev *dev);
+
 static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
 				  uint16_t rx_queue_id,
 				  uint16_t nb_rx_desc,
@@ -159,6 +162,8 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
 	.stats_get           = avp_dev_stats_get,
 	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
+	.promiscuous_enable  = avp_dev_promiscuous_enable,
+	.promiscuous_disable = avp_dev_promiscuous_disable,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
 	.tx_queue_setup      = avp_dev_tx_queue_setup,
@@ -2000,6 +2005,33 @@ struct avp_queue {
 	return -1;
 }
 
+static void
+avp_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	rte_spinlock_lock(&avp->lock);
+	if ((avp->flags & AVP_F_PROMISC) == 0) {
+		avp->flags |= AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+	rte_spinlock_unlock(&avp->lock);
+}
+
+static void
+avp_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	rte_spinlock_lock(&avp->lock);
+	if ((avp->flags & AVP_F_PROMISC) != 0) {
+		avp->flags &= ~AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+	rte_spinlock_unlock(&avp->lock);
+}
 
 static void
 avp_dev_info_get(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
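The promiscuous flag's effect on the receive path is simply one more early-accept test in `_avp_mac_filter`. A reduced sketch of that accept decision, with hypothetical names (`accept_frame`, `F_PROMISC` standing in for `AVP_F_PROMISC`):

```c
#include <assert.h>

#define F_PROMISC 0x01	/* stands in for AVP_F_PROMISC */

/*
 * Mirror of the decision order in _avp_mac_filter(): unicast to our own
 * MAC, broadcast, and multicast are always accepted; any other
 * destination is accepted only while promiscuous mode is enabled.
 */
static int
accept_frame(int dst_is_ours, int dst_is_bcast, int dst_is_mcast,
	     unsigned int flags)
{
	if (dst_is_ours || dst_is_bcast || dst_is_mcast)
		return 1;
	return (flags & F_PROMISC) ? 1 : 0;
}
```

In the driver a rejected frame is silently freed rather than returned to the caller, so enabling promiscuous mode changes only which frames survive this test.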

* [PATCH v3 15/16] net/avp: device start and stop operations
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (13 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 14/16] net/avp: device promiscuous functions Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-02  0:37       ` Stephen Hemminger
  2017-03-02  0:20     ` [PATCH v3 16/16] doc: adds information related to the AVP PMD Allain Legacy
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Adds support for device start and stop functions.  This allows an
application to control the administrative state of an AVP device.  Stopping
the device will notify the host application to stop sending packets on that
device's receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 145 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 145 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 2fe1251..660233a 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -68,6 +68,9 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
 
 static int avp_dev_configure(struct rte_eth_dev *dev);
+static int avp_dev_start(struct rte_eth_dev *dev);
+static void avp_dev_stop(struct rte_eth_dev *dev);
+static void avp_dev_close(struct rte_eth_dev *dev);
 static void avp_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
@@ -157,6 +160,9 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
  */
 static const struct eth_dev_ops avp_eth_dev_ops = {
 	.dev_configure       = avp_dev_configure,
+	.dev_start           = avp_dev_start,
+	.dev_stop            = avp_dev_stop,
+	.dev_close           = avp_dev_close,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.stats_get           = avp_dev_stats_get,
@@ -326,6 +332,23 @@ struct avp_queue {
 }
 
 static int
+avp_dev_ctrl_set_link_state(struct rte_eth_dev *eth_dev, unsigned int state)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a link state change request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_NETWORK_IF;
+	request.if_up = state;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
 avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
 			struct rte_avp_device_config *config)
 {
@@ -759,6 +782,31 @@ struct avp_queue {
 }
 
 static int
+avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return 0;
+
+	/* inform the device that all interrupts are disabled */
+	AVP_WRITE32(RTE_AVP_NO_INTERRUPTS_MASK,
+		    RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	/* disable UIO interrupt handling */
+	ret = rte_intr_disable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
@@ -1990,6 +2038,103 @@ struct avp_queue {
 	return ret;
 }
 
+static int
+avp_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
+	/* disable features that we do not support */
+	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_extend = 0;
+	eth_dev->data->dev_conf.rxmode.hw_strip_crc = 0;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 1);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags |= AVP_F_LINKUP;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags &= ~AVP_F_LINKUP;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 0);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+}
+
+static void
+avp_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags &= ~AVP_F_LINKUP;
+	avp->flags &= ~AVP_F_CONFIGURED;
+
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts\n");
+		/* continue */
+	}
+
+	/* update device state */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Device shutdown failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+}
 
 static int
 avp_dev_link_update(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v3 16/16] doc: adds information related to the AVP PMD
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (14 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 15/16] net/avp: device start and stop operations Allain Legacy
@ 2017-03-02  0:20     ` Allain Legacy
  2017-03-03 16:21       ` Vincent JARDIN
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
  16 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-02  0:20 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Updates the documentation and feature lists for the AVP PMD device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 MAINTAINERS                            |   1 +
 doc/guides/nics/avp.rst                | 112 +++++++++++++++++++++++++++++++++
 doc/guides/nics/features/avp.ini       |  17 +++++
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_17_05.rst |   5 ++
 5 files changed, 136 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index fef23a0..4a14945 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -427,6 +427,7 @@ Wind River AVP PMD
 M: Allain Legacy <allain.legacy@windriver.com>
 M: Matt Peters <matt.peters@windriver.com>
 F: drivers/net/avp
+F: doc/guides/nics/avp.rst
 
 
 Crypto Drivers
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
new file mode 100644
index 0000000..af6d04d
--- /dev/null
+++ b/doc/guides/nics/avp.rst
@@ -0,0 +1,112 @@
+..  BSD LICENSE
+    Copyright(c) 2017 Wind River Systems, Inc.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Wind River Systems nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AVP Poll Mode Driver
+=================================================================
+
+The Accelerated Virtual Port (AVP) device is a shared memory based device
+available on the `virtualization platforms <http://www.windriver.com/products/titanium-cloud/>`_
+from Wind River Systems.  It is based on an earlier implementation of the DPDK
+KNI device and made available to VM instances via a mechanism based on an early
+implementation of qemu-kvm ivshmem.
+
+It enables optimized packet throughput without requiring any packet processing
+in qemu. This provides our customers with a significant performance increase
+for DPDK applications in the VM.  Because our AVP implementation supports VM
+live-migration, it is viewed as a better alternative to PCI passthrough or PCI
+SR-IOV, since neither of those supports VM live-migration without manual
+intervention or significant performance penalties.
+
+Since the initial implementation of AVP devices, vhost-user has become
+part of the qemu offering with a significant performance increase over
+the original virtio implementation.  However, vhost-user still does
+not achieve the level of performance that the AVP device can provide
+to our customers for DPDK based VM instances.
+
+The driver binds to PCI devices that are exported by the hypervisor DPDK
+application via the ivshmem-like mechanism.  The definition of the device
+structure and configuration options are defined in rte_avp_common.h and
+rte_avp_fifo.h.  These two header files are made available as part of the PMD
+implementation in order to share the device definitions between the guest
+implementation (i.e., the PMD) and the host implementation (i.e., the
+hypervisor DPDK vswitch application).
+
+
+Features and Limitations of the AVP PMD
+---------------------------------------
+
+The AVP PMD provides the following functionality:
+
+*   Receive and transmit of both simple and chained mbuf packets
+
+*   Chained mbufs may include up to 5 chained segments
+
+*   Up to 8 receive and transmit queues per device
+
+*   Only a single MAC address is supported
+
+*   The MAC address cannot be modified
+
+*   The maximum receive packet length is 9238 bytes
+
+*   VLAN header stripping and inserting
+
+*   Promiscuous mode
+
+*   VM live-migration
+
+*   PCI hotplug insertion and removal
+
+
+Prerequisites
+-------------
+
+The following prerequisites apply:
+
+*   A virtual machine running in a Wind River Systems virtualization
+    environment and configured with at least one neutron port defined with a
+    vif-model set to "avp".
+
+
+Launching a VM with an AVP type network attachment
+--------------------------------------------------
+
+The following example will launch a VM with three network attachments.  The
+first attachment will have a default vif-model of "virtio".  The next two
+network attachments will have a vif-model of "avp" and may be used with a DPDK
+application built to include the AVP PMD.
+
+.. code-block:: console
+
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1
diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
new file mode 100644
index 0000000..64bf42e
--- /dev/null
+++ b/doc/guides/nics/features/avp.ini
@@ -0,0 +1,17 @@
+;
+; Supported features of the 'AVP' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Link status          = Y
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+Promiscuous mode     = Y
+Unicast MAC filter   = Y
+VLAN offload         = Y
+Basic stats          = Y
+Stats per queue      = Y
+Linux UIO            = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 87f9334..0ddcea5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -36,6 +36,7 @@ Network Interface Controller Drivers
     :numbered:
 
     overview
+    avp
     bnx2x
     bnxt
     cxgbe
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index e25ea9f..3accbac 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -41,6 +41,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added support for the Wind River Systems AVP PMD.**
+
+  Added a new networking driver for the AVP device type. These devices are
+  specific to the Wind River Systems virtualization platforms.
+
 
 Resolved Issues
 ---------------
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 13/16] net/avp: device statistics operations
  2017-03-02  0:20     ` [PATCH v3 13/16] net/avp: device statistics operations Allain Legacy
@ 2017-03-02  0:35       ` Stephen Hemminger
  2017-03-09 13:48         ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Stephen Hemminger @ 2017-03-02  0:35 UTC (permalink / raw)
  To: Allain Legacy
  Cc: ferruh.yigit, ian.jolliffe, jerin.jacob, thomas.monjalon, dev

On Wed, 1 Mar 2017 19:20:05 -0500
Allain Legacy <allain.legacy@windriver.com> wrote:

> +static void
> +avp_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
> +{
> +	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
> +	unsigned int i;
> +
> +	memset(stats, 0, sizeof(*stats));

Memset here is unnecessary since the only caller is rte_eth_stats_get(),
which already did the memset:

int
rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats)
{
	struct rte_eth_dev *dev;

	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);

	dev = &rte_eth_devices[port_id];
	memset(stats, 0, sizeof(*stats));

	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
	(*dev->dev_ops->stats_get)(dev, stats);

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 15/16] net/avp: device start and stop operations
  2017-03-02  0:20     ` [PATCH v3 15/16] net/avp: device start and stop operations Allain Legacy
@ 2017-03-02  0:37       ` Stephen Hemminger
  2017-03-09 13:49         ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Stephen Hemminger @ 2017-03-02  0:37 UTC (permalink / raw)
  To: Allain Legacy
  Cc: ferruh.yigit, ian.jolliffe, jerin.jacob, thomas.monjalon, dev

On Wed, 1 Mar 2017 19:20:07 -0500
Allain Legacy <allain.legacy@windriver.com> wrote:

> +
> +static void
> +avp_dev_close(struct rte_eth_dev *eth_dev)
> +{
> +	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
> +	int ret;
> +
> +	rte_spinlock_lock(&avp->lock);
> +	if (avp->flags & AVP_F_DETACHED) {
> +		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
> +		goto unlock;
> +	}
> +
> +	/* remember current link state */
> +	avp->flags &= ~AVP_F_LINKUP;
> +	avp->flags &= ~AVP_F_CONFIGURED;
> +
> +	ret = avp_dev_disable_interrupts(eth_dev);
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR, "Failed to disable interrupts\n");
> +		/* continue */
> +	}
> +
> +	/* update device state */
> +	ret = avp_dev_ctrl_shutdown(eth_dev);
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR, "Device shutdown failed by host, ret=%d\n",
> +			    ret);
> +		goto unlock;
> +	}
> +
> +unlock:
> +	rte_spinlock_unlock(&avp->lock);
> +}

The second goto is unnecessary.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 02/16] net/avp: public header files
  2017-03-02  0:19     ` [PATCH v3 02/16] net/avp: public header files Allain Legacy
@ 2017-03-03 14:37       ` Chas Williams
  2017-03-03 15:35         ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Chas Williams @ 2017-03-03 14:37 UTC (permalink / raw)
  To: Allain Legacy, ferruh.yigit
  Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

On Wed, 2017-03-01 at 19:19 -0500, Allain Legacy wrote:
> +
> +/**
> + * Memory aligment (cache aligned)

Spelling -- alignment.

> + */
> +#ifndef RTE_AVP_ALIGNMENT
> +#define RTE_AVP_ALIGNMENT 64
> +#endif

This is already provided by DPDK as CONFIG_RTE_CACHE_LINE_SIZE

> + * Defines the number of mbuf pools supported per device (1 per socket)
> + * @note This value should be equal to RTE_MAX_NUMA_NODES
> + */
> +#define RTE_AVP_MAX_MEMPOOLS (8)

Perhaps it should be RTE_MAX_NUMA_NODES then?

> +#define RTE_AVP_MAX_QUEUES (8) /**< Maximum number of queues per device */
> +
> +/** Maximum number of chained mbufs in a packet */
> +#define RTE_AVP_MAX_MBUF_SEGMENTS (5)

The ()'s around constants aren't really necessary.  You aren't going to
get side effects.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 08/16] net/avp: device initialization
  2017-03-02  0:20     ` [PATCH v3 08/16] net/avp: device initialization Allain Legacy
@ 2017-03-03 15:04       ` Chas Williams
  2017-03-09 14:03         ` Legacy, Allain
  2017-03-09 14:48         ` Legacy, Allain
  0 siblings, 2 replies; 172+ messages in thread
From: Chas Williams @ 2017-03-03 15:04 UTC (permalink / raw)
  To: Allain Legacy, ferruh.yigit
  Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

On Wed, 2017-03-01 at 19:20 -0500, Allain Legacy wrote:
> +	/* Check current migration status */
> +	if (avp_dev_migration_pending(eth_dev)) {
> +		PMD_DRV_LOG(ERR, "VM live migration operation in progress\n");
> +		return -EBUSY;
> +	}
> +
> +	/* Check BAR resources */
> +	ret = avp_dev_check_regions(eth_dev);
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR, "Failed to validate BAR resources, ret=%d\n",
> +			    ret);
> +		return ret;
> +	}
> +
> +	/* Enable interrupts */
> +	ret = avp_dev_setup_interrupts(eth_dev);
> +	if (ret < 0) {
> +		PMD_DRV_LOG(ERR, "Failed to enable interrupts, ret=%d\n", ret);
> +		return ret;
> +	}

I don't see the other side of this to unregister the callback.  It's also
a bit confusing with this here and the other parts in part 15.  It looks
like you enable the interrupts on .dev_create but disable on .dev_stop?
If that's the case, you likely want to just do the setup here and the
enable in .dev_start.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 02/16] net/avp: public header files
  2017-03-03 14:37       ` Chas Williams
@ 2017-03-03 15:35         ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-03 15:35 UTC (permalink / raw)
  To: Chas Williams, YIGIT, FERRUH
  Cc: Jolliffe, Ian, jerin.jacob, stephen, thomas.monjalon, dev

> -----Original Message-----
> From: Chas Williams [mailto:3chas3@gmail.com]
> > + */
> > +#ifndef RTE_AVP_ALIGNMENT
> > +#define RTE_AVP_ALIGNMENT 64
> > +#endif
> 
> This is already provided by DPDK as CONFIG_RTE_CACHE_LINE_SIZE

We took another look at our usage of this.  We need it for another component's compile, but we'll refactor this out so that it is not needed in this file. 

> 
> > + * Defines the number of mbuf pools supported per device (1 per socket)
> > + * @note This value should be equal to RTE_MAX_NUMA_NODES
> > + */
> > +#define RTE_AVP_MAX_MEMPOOLS (8)
> 
> Perhaps it should be RTE_MAX_NUMA_NODES then?
I'll remove the comment because it is more about this aligning with the host max rather than the local max. 

> The ()'s around constants aren't really necessary.  You aren't going to
> get side effects.
Ok.  Will remove.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 16/16] doc: adds information related to the AVP PMD
  2017-03-02  0:20     ` [PATCH v3 16/16] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-03-03 16:21       ` Vincent JARDIN
  2017-03-13 19:17         ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-03 16:21 UTC (permalink / raw)
  To: Allain Legacy, ferruh.yigit
  Cc: ian.jolliffe, jerin.jacob, stephen, thomas.monjalon, dev

Le 02/03/2017 à 01:20, Allain Legacy a écrit :
> +Since the initial implementation of AVP devices, vhost-user has become
> +part of the qemu offering with a significant performance increase over
> +the original virtio implementation.  However, vhost-user still does
> +not achieve the level of performance that the AVP device can provide
> +to our customers for DPDK based VM instances.

Allain,

please, can you be more explicit: why is virtio not fast enough?

Moreover, why should we get another PMD for Qemu/kvm which is not
virtio? There is no argument in your doc about it.
NEC, before vhost-user, made a memnic proposal too because 
virtio/vhost-user was not available.
Now, we all agree that vhost-user is the right way to support VMs, as it
avoids duplication of maintenance.

Please add some arguments that explain why virtio should not be used,
and why alternatives like memnic or avp should be.

Regarding,
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1

I do not see how to get it working with vanilla nova.  I think you
should rather show the example with qemu or virsh.

Then, there is no such AVP netdevice in the upstream Linux kernel.  Before
adding any AVP support, it should be added to the legacy upstream so we
can be sure that the APIs will be solid and won't need to be updated
because of some kernel constraints.

Thank you,
   Vincent

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 13/16] net/avp: device statistics operations
  2017-03-02  0:35       ` Stephen Hemminger
@ 2017-03-09 13:48         ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-09 13:48 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: YIGIT, FERRUH, Jolliffe, Ian, jerin.jacob, thomas.monjalon, dev

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Memset here is unnecessary since only caller is rte_eth_stats_get() which
> already did memset
> 
Ok.  Will remove.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 15/16] net/avp: device start and stop operations
  2017-03-02  0:37       ` Stephen Hemminger
@ 2017-03-09 13:49         ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-09 13:49 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: YIGIT, FERRUH, Jolliffe, Ian, jerin.jacob, thomas.monjalon, dev

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> 
> The second goto is unnecessary.
Ok.  Will remove.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 08/16] net/avp: device initialization
  2017-03-03 15:04       ` Chas Williams
@ 2017-03-09 14:03         ` Legacy, Allain
  2017-03-09 14:48         ` Legacy, Allain
  1 sibling, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-09 14:03 UTC (permalink / raw)
  To: Chas Williams, YIGIT, FERRUH
  Cc: Jolliffe, Ian, jerin.jacob, stephen, thomas.monjalon, dev

> -----Original Message-----
> From: Chas Williams [mailto:3chas3@gmail.com]
> I don't see the other side of this to unregister the callback.  It's also a bit
> confusing with this here and the other parts in part 15.  It looks like you
> enable the interrupts on .dev_create but disable on .dev_stop?
> If that's the case, you likely want to just do the setup here and the enable in
> .dev_start.
Agreed.  This is not symmetric.  I will do the setup in create, enable in start, and disable in stop.


^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v3 08/16] net/avp: device initialization
  2017-03-03 15:04       ` Chas Williams
  2017-03-09 14:03         ` Legacy, Allain
@ 2017-03-09 14:48         ` Legacy, Allain
  1 sibling, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-09 14:48 UTC (permalink / raw)
  To: Chas Williams, YIGIT, FERRUH
  Cc: Jolliffe, Ian, jerin.jacob, stephen, thomas.monjalon, dev

> -----Original Message-----
> From: Legacy, Allain
> > From: Chas Williams [mailto:3chas3@gmail.com]
> > I don't see the other side of this to unregister the callback.  It's also a bit
> > confusing with this here and the other parts in part 15.  It looks like you
> > enable the interrupts on .dev_create but disable on .dev_stop?
> > If that's the case, you likely want to just do the setup here and the enable
> in
> > .dev_start.
> Agreed.  This is not symmetric.  I will setup in create, enable in start, disable
> in stop.
I was mistaken.  The interrupts need to be set up and enabled when the device 
is created, and then only disabled if the device is closed in preparation for 
removal.  The start/stop functions should not be involved.

So:

avp_dev_create(), will do:
    avp_dev_setup_interrupts()
    avp_enable_interrupts()

and

avp_dev_close(), will do:
    avp_disable_interrupts().

Currently, avp_dev_create() and avp_dev_close() are in separate patches.  
 I'll see how much trouble it would be to move the dev_close() to be in the 
same patch as the dev_create() but there is little value in doing that since 
functionally there is no net effect.  If the effort is too high I will be inclined 
to leave it as is. 


^ permalink raw reply	[flat|nested] 172+ messages in thread

* [PATCH v4 00/17] Wind River Systems AVP PMD
  2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
                       ` (15 preceding siblings ...)
  2017-03-02  0:20     ` [PATCH v3 16/16] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-03-13 19:16     ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 01/17] config: adds attributes for the " Allain Legacy
                         ` (18 more replies)
  16 siblings, 19 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

This patch series submits an initial version of the AVP PMD from Wind River
Systems.  The series includes shared header files, driver implementation,
and changes to documentation files in support of this new driver.  The AVP
driver is a shared memory based device.  It is intended to be used as a PMD
within a virtual machine running on a Wind River virtualization platform.
See: http://www.windriver.com/products/titanium-cloud/

It enables optimized packet throughput without requiring any packet
processing in qemu. This allowed us to provide our customers with a
significant performance increase for both DPDK and non-DPDK applications
in the VM.   Since our AVP implementation supports VM live-migration it
is viewed as a better alternative to PCI passthrough or PCI SRIOV since
neither of those support VM live-migration without manual intervention
or significant performance penalties.

Since the initial implementation of AVP devices, vhost-user has become part
of the qemu offering with a significant performance increase over the
original virtio implementation.  However, vhost-user still does not achieve
the level of performance that the AVP device can provide to our customers
for DPDK based guests.

A number of our customers have requested that we upstream the driver to
dpdk.org.

v2:
* Fixed coding style violations that slipped in accidentally because of an
  out of date checkpatch.pl from an older kernel.

v3:
* Updated 17.05 release notes to add a section for this new PMD
* Added additional info to the AVP nic guide document to clarify the
  benefit of using AVP over virtio.
* Fixed spelling error in debug log missed by local checkpatch.pl version
* Split the transmit patch to separate the stats functions as they
  accidentally got squashed in the last patchset.
* Fixed debug log strings so that they exceed 80 characters rather than
  span multiple lines.
* Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_* instead
* Replaced usage of RTE_WRITE32 and RTE_READ32 with rte_write32_relaxed
  and rte_read32_relaxed.
* Declared rte_pci_id table as const

v4:
* Split our interrupt handlers to a separate patch and moved to the end
  of the series.
* Removed memset() from stats_get API
* Removed usage of RTE_AVP_ALIGNMENT
* Removed unnecessary parentheses in rte_avp_common.h
* Removed unneeded "goto unlock" where there are no statements in between
  the goto and the end of the function.
* Re-tested with pktgen and found that rte_eth_tx_burst() is being called
  with 0 packets even before starting traffic which resulted in
  incrementing oerrors; fixed in transmit patch.

Allain Legacy (17):
  config: added attributes for the AVP PMD
  net/avp: added public header files
  maintainers: claim responsibility for AVP PMD
  net/avp: added PMD version map file
  net/avp: added log macros
  drivers/net: added driver makefiles
  net/avp: driver registration
  net/avp: device initialization
  net/avp: device configuration
  net/avp: queue setup and release
  net/avp: packet receive functions
  net/avp: packet transmit functions
  net/avp: device statistics operations
  net/avp: device promiscuous functions
  net/avp: device start and stop operations
  net/avp: migration interrupt handling
  doc: added information related to the AVP PMD

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 config/common_linuxapp                  |    1 +
 doc/guides/nics/avp.rst                 |   99 ++
 doc/guides/nics/features/avp.ini        |   17 +
 doc/guides/nics/index.rst               |    1 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avp/Makefile                |   61 +
 drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
 drivers/net/avp/avp_logs.h              |   59 +
 drivers/net/avp/rte_avp_common.h        |  427 ++++++
 drivers/net/avp/rte_avp_fifo.h          |  157 ++
 drivers/net/avp/rte_pmd_avp_version.map |    4 +
 mk/rte.app.mk                           |    1 +
 14 files changed, 3215 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini
 create mode 100644 drivers/net/avp/Makefile
 create mode 100644 drivers/net/avp/avp_ethdev.c
 create mode 100644 drivers/net/avp/avp_logs.h
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 172+ messages in thread

* [PATCH v4 01/17] config: adds attributes for the AVP PMD
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 02/17] net/avp: public header files Allain Legacy
                         ` (17 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Updates the common base configuration file to include a top level config
attribute for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/config/common_base b/config/common_base
index aeee13e..912bc68 100644
--- a/config/common_base
+++ b/config/common_base
@@ -348,6 +348,11 @@ CONFIG_RTE_LIBRTE_QEDE_FW=""
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 
 #
+# Compile WRS accelerated virtual port (AVP) guest PMD driver
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
+
+#
 # Compile the TAP PMD
 # It is enabled by default for Linux only.
 #
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 02/17] net/avp: public header files
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 01/17] config: adds attributes for the " Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 03/17] maintainers: claim responsibility for AVP PMD Allain Legacy
                         ` (16 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds public/exported header files for the AVP PMD.  The AVP device is a
shared memory based device.  The structures and constants that define its
method of operation must be visible to both the PMD and the hypervisor
DPDK application.  They must not change without proper version controls
and updates to both the hypervisor DPDK application and the PMD.

The hypervisor DPDK application is a Wind River Systems proprietary
virtual switch.
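
The shared FIFO defined in rte_avp_fifo.h relies on a power-of-two length
and index masking: the ring holds at most len - 1 elements so that
write == read always means "empty".  A minimal userspace sketch of that
arithmetic (fifo_sketch, fifo_put and fifo_count are illustrative names,
and the memory barriers the real header issues before publishing the
indices are omitted here):

```c
#include <assert.h>
#include <stddef.h>

#define FIFO_LEN 8 /* must be a power of two, as the driver requires */

struct fifo_sketch {
	unsigned int write; /* next slot to be written */
	unsigned int read;  /* next slot to be read */
	void *buffer[FIFO_LEN];
};

/* Mirrors avp_fifo_put(): stop one slot short of the read index so the
 * ring never fills completely and write == read stays unambiguous. */
static unsigned int
fifo_put(struct fifo_sketch *f, void **data, unsigned int num)
{
	unsigned int i, w = f->write;

	for (i = 0; i < num; i++) {
		unsigned int next = (w + 1) & (FIFO_LEN - 1);

		if (next == f->read)
			break; /* full */
		f->buffer[w] = data[i];
		w = next;
	}
	f->write = w;
	return i;
}

/* Mirrors avp_fifo_count(): occupancy via modular arithmetic. */
static unsigned int
fifo_count(const struct fifo_sketch *f)
{
	return (FIFO_LEN + f->write - f->read) & (FIFO_LEN - 1);
}
```

With FIFO_LEN of 8, attempting to enqueue 10 elements accepts only 7,
the usable capacity of the ring.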

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/rte_avp_common.h | 416 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avp/rte_avp_fifo.h   | 157 +++++++++++++++
 2 files changed, 573 insertions(+)
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h

diff --git a/drivers/net/avp/rte_avp_common.h b/drivers/net/avp/rte_avp_common.h
new file mode 100644
index 0000000..3c12252
--- /dev/null
+++ b/drivers/net/avp/rte_avp_common.h
@@ -0,0 +1,416 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2015 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2016 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_COMMON_H_
+#define _RTE_AVP_COMMON_H_
+
+#ifdef __KERNEL__
+#include <linux/if.h>
+#endif
+
+/**
+ * AVP name is part of network device name.
+ */
+#define RTE_AVP_NAMESIZE 32
+
+/**
+ * AVP alias is a user-defined value used for lookups from secondary
+ * processes.  Typically, this is a UUID.
+ */
+#define RTE_AVP_ALIASSIZE 128
+
+/*
+ * Request id.
+ */
+enum rte_avp_req_id {
+	RTE_AVP_REQ_UNKNOWN = 0,
+	RTE_AVP_REQ_CHANGE_MTU,
+	RTE_AVP_REQ_CFG_NETWORK_IF,
+	RTE_AVP_REQ_CFG_DEVICE,
+	RTE_AVP_REQ_SHUTDOWN_DEVICE,
+	RTE_AVP_REQ_MAX,
+};
+
+/**@{ AVP device driver types */
+#define RTE_AVP_DRIVER_TYPE_UNKNOWN 0
+#define RTE_AVP_DRIVER_TYPE_DPDK 1
+#define RTE_AVP_DRIVER_TYPE_KERNEL 2
+#define RTE_AVP_DRIVER_TYPE_QEMU 3
+/**@} */
+
+/**@{ AVP device operational modes */
+#define RTE_AVP_MODE_HOST 0 /**< AVP interface created in host */
+#define RTE_AVP_MODE_GUEST 1 /**< AVP interface created for export to guest */
+#define RTE_AVP_MODE_TRACE 2 /**< AVP interface created for packet tracing */
+/**@} */
+
+/*
+ * Structure for AVP queue configuration query request/result
+ */
+struct rte_avp_device_config {
+	uint64_t device_id;	/**< Unique system identifier */
+	uint32_t driver_type; /**< Device Driver type */
+	uint32_t driver_version; /**< Device Driver version */
+	uint32_t features; /**< Negotiated features */
+	uint16_t num_tx_queues;	/**< Number of active transmit queues */
+	uint16_t num_rx_queues;	/**< Number of active receive queues */
+	uint8_t if_up; /**< 1: interface up, 0: interface down */
+} __attribute__ ((__packed__));
+
+/*
+ * Structure for AVP request.
+ */
+struct rte_avp_request {
+	uint32_t req_id; /**< Request id */
+	union {
+		uint32_t new_mtu; /**< New MTU */
+		uint8_t if_up;	/**< 1: interface up, 0: interface down */
+		struct rte_avp_device_config config; /**< Queue configuration */
+	};
+	int32_t result;	/**< Result for processing request */
+} __attribute__ ((__packed__));
+
+/*
+ * FIFO struct mapped in a shared memory. It describes a circular buffer FIFO
+ * Write and read should wrap around. FIFO is empty when write == read
+ * Writing should never overwrite the read position
+ */
+struct rte_avp_fifo {
+	volatile unsigned int write; /**< Next position to be written */
+	volatile unsigned int read; /**< Next position to be read */
+	unsigned int len; /**< Circular buffer length */
+	unsigned int elem_size; /**< Pointer size - for 32/64 bit OS */
+	void *volatile buffer[0]; /**< The buffer contains mbuf pointers */
+};
+
+
+/*
+ * AVP packet buffer header used to define the exchange of packet data.
+ */
+struct rte_avp_desc {
+	uint64_t pad0;
+	void *pkt_mbuf; /**< Reference to packet mbuf */
+	uint8_t pad1[14];
+	uint16_t ol_flags; /**< Offload features. */
+	void *next;	/**< Reference to next buffer in chain */
+	void *data;	/**< Start address of data in segment buffer. */
+	uint16_t data_len; /**< Amount of data in segment buffer. */
+	uint8_t nb_segs; /**< Number of segments */
+	uint8_t pad2;
+	uint16_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+	uint32_t pad3;
+	uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order). */
+	uint32_t pad4;
+} __attribute__ ((__aligned__(RTE_CACHE_LINE_SIZE), __packed__));
+
+
+/**@{ AVP device features */
+#define RTE_AVP_FEATURE_VLAN_OFFLOAD (1 << 0) /**< Emulated HW VLAN offload */
+/**@} */
+
+
+/**@{ Offload feature flags */
+#define RTE_AVP_TX_VLAN_PKT 0x0001 /**< TX packet is a 802.1q VLAN packet. */
+#define RTE_AVP_RX_VLAN_PKT 0x0800 /**< RX packet is a 802.1q VLAN packet. */
+/**@} */
+
+
+/**@{ AVP PCI identifiers */
+#define RTE_AVP_PCI_VENDOR_ID   0x1af4
+#define RTE_AVP_PCI_DEVICE_ID   0x1110
+/**@} */
+
+/**@{ AVP PCI subsystem identifiers */
+#define RTE_AVP_PCI_SUB_VENDOR_ID RTE_AVP_PCI_VENDOR_ID
+#define RTE_AVP_PCI_SUB_DEVICE_ID 0x1104
+/**@} */
+
+/**@{ AVP PCI BAR definitions */
+#define RTE_AVP_PCI_MMIO_BAR   0
+#define RTE_AVP_PCI_MSIX_BAR   1
+#define RTE_AVP_PCI_MEMORY_BAR 2
+#define RTE_AVP_PCI_MEMMAP_BAR 4
+#define RTE_AVP_PCI_DEVICE_BAR 5
+#define RTE_AVP_PCI_MAX_BAR    6
+/**@} */
+
+/**@{ AVP PCI BAR name definitions */
+#define RTE_AVP_MMIO_BAR_NAME   "avp-mmio"
+#define RTE_AVP_MSIX_BAR_NAME   "avp-msix"
+#define RTE_AVP_MEMORY_BAR_NAME "avp-memory"
+#define RTE_AVP_MEMMAP_BAR_NAME "avp-memmap"
+#define RTE_AVP_DEVICE_BAR_NAME "avp-device"
+/**@} */
+
+/**@{ AVP PCI MSI-X vectors */
+#define RTE_AVP_MIGRATION_MSIX_VECTOR 0	/**< Migration interrupts */
+#define RTE_AVP_MAX_MSIX_VECTORS 1
+/**@} */
+
+/**@{ AVP Migration status/ack register values */
+#define RTE_AVP_MIGRATION_NONE      0 /**< Migration never executed */
+#define RTE_AVP_MIGRATION_DETACHED  1 /**< Device attached during migration */
+#define RTE_AVP_MIGRATION_ATTACHED  2 /**< Device reattached during migration */
+#define RTE_AVP_MIGRATION_ERROR     3 /**< Device failed to attach/detach */
+/**@} */
+
+/**@{ AVP MMIO Register Offsets */
+#define RTE_AVP_REGISTER_BASE 0
+#define RTE_AVP_INTERRUPT_MASK_OFFSET (RTE_AVP_REGISTER_BASE + 0)
+#define RTE_AVP_INTERRUPT_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 4)
+#define RTE_AVP_MIGRATION_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 8)
+#define RTE_AVP_MIGRATION_ACK_OFFSET (RTE_AVP_REGISTER_BASE + 12)
+/**@} */
+
+/**@{ AVP Interrupt Status Mask */
+#define RTE_AVP_MIGRATION_INTERRUPT_MASK (1 << 1)
+#define RTE_AVP_APP_INTERRUPTS_MASK      0xFFFFFFFF
+#define RTE_AVP_NO_INTERRUPTS_MASK       0
+/**@} */
+
+/*
+ * Maximum number of memory regions to export
+ */
+#define RTE_AVP_MAX_MAPS  2048
+
+/*
+ * Description of a single memory region
+ */
+struct rte_avp_memmap {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * AVP memory mapping validation marker
+ */
+#define RTE_AVP_MEMMAP_MAGIC 0x20131969
+
+/**@{  AVP memory map versions */
+#define RTE_AVP_MEMMAP_VERSION_1 1
+#define RTE_AVP_MEMMAP_VERSION RTE_AVP_MEMMAP_VERSION_1
+/**@} */
+
+/*
+ * Defines a list of memory regions exported from the host to the guest
+ */
+struct rte_avp_memmap_info {
+	uint32_t magic; /**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+	uint32_t nb_maps; /**< Number of memory regions */
+	struct rte_avp_memmap maps[RTE_AVP_MAX_MAPS];
+};
+
+/*
+ * AVP device memory validation marker
+ */
+#define RTE_AVP_DEVICE_MAGIC 0x20131975
+
+/**@{  AVP device map versions
+ * WARNING:  do not change the format or names of these variables.  They are
+ * automatically parsed by the build system to generate the SDK package
+ * name.
+ */
+#define RTE_AVP_RELEASE_VERSION_1 1
+#define RTE_AVP_RELEASE_VERSION RTE_AVP_RELEASE_VERSION_1
+#define RTE_AVP_MAJOR_VERSION_0 0
+#define RTE_AVP_MAJOR_VERSION_1 1
+#define RTE_AVP_MAJOR_VERSION_2 2
+#define RTE_AVP_MAJOR_VERSION RTE_AVP_MAJOR_VERSION_2
+#define RTE_AVP_MINOR_VERSION_0 0
+#define RTE_AVP_MINOR_VERSION_1 1
+#define RTE_AVP_MINOR_VERSION_13 13
+#define RTE_AVP_MINOR_VERSION RTE_AVP_MINOR_VERSION_13
+/**@} */
+
+
+/**
+ * Generates a 32-bit version number from the specified version number
+ * components
+ */
+#define RTE_AVP_MAKE_VERSION(_release, _major, _minor) \
+((((_release) & 0xffff) << 16) | (((_major) & 0xff) << 8) | ((_minor) & 0xff))
+
+
+/**
+ * Represents the current version of the AVP host driver
+ * WARNING:  in the current development branch the host and guest driver
+ * version should always be the same.  When patching guest features back to
+ * GA releases the host version number should not be updated unless there was
+ * an actual change made to the host driver.
+ */
+#define RTE_AVP_CURRENT_HOST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_0, \
+		     RTE_AVP_MINOR_VERSION_1)
+
+
+/**
+ * Represents the current version of the AVP guest drivers
+ */
+#define RTE_AVP_CURRENT_GUEST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_2, \
+		     RTE_AVP_MINOR_VERSION_13)
+
+/**@{
+ * Access AVP device version values
+ */
+#define RTE_AVP_GET_RELEASE_VERSION(_version) (((_version) >> 16) & 0xffff)
+#define RTE_AVP_GET_MAJOR_VERSION(_version) (((_version) >> 8) & 0xff)
+#define RTE_AVP_GET_MINOR_VERSION(_version) ((_version) & 0xff)
+/**@}*/
+
+
+/**
+ * Remove the minor version number so that only the release and major versions
+ * are used for comparisons.
+ */
+#define RTE_AVP_STRIP_MINOR_VERSION(_version) ((_version) >> 8)
+
+
+/**
+ * Defines the number of mbuf pools supported per device (1 per socket)
+ */
+#define RTE_AVP_MAX_MEMPOOLS 8
+
+/*
+ * Defines address translation parameters for each supported mbuf pool
+ */
+struct rte_avp_mempool_info {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * Struct used to create an AVP device. Passed to the kernel in IOCTL call or
+ * via inter-VM shared memory when used in a guest.
+ */
+struct rte_avp_device_info {
+	uint32_t magic;	/**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+
+	char ifname[RTE_AVP_NAMESIZE];	/**< Network device name for AVP */
+
+	phys_addr_t tx_phys;
+	phys_addr_t rx_phys;
+	phys_addr_t alloc_phys;
+	phys_addr_t free_phys;
+
+	uint32_t features; /**< Supported feature bitmap */
+	uint8_t min_rx_queues; /**< Minimum supported receive/free queues */
+	uint8_t num_rx_queues; /**< Recommended number of receive/free queues */
+	uint8_t max_rx_queues; /**< Maximum supported receive/free queues */
+	uint8_t min_tx_queues; /**< Minimum supported transmit/alloc queues */
+	uint8_t num_tx_queues;
+	/**< Recommended number of transmit/alloc queues */
+	uint8_t max_tx_queues; /**< Maximum supported transmit/alloc queues */
+
+	uint32_t tx_size; /**< Size of each transmit queue */
+	uint32_t rx_size; /**< Size of each receive queue */
+	uint32_t alloc_size; /**< Size of each alloc queue */
+	uint32_t free_size;	/**< Size of each free queue */
+
+	/* Used by Ethtool */
+	phys_addr_t req_phys;
+	phys_addr_t resp_phys;
+	phys_addr_t sync_phys;
+	void *sync_va;
+
+	/* mbuf mempool (used when a single memory area is supported) */
+	void *mbuf_va;
+	phys_addr_t mbuf_phys;
+
+	/* mbuf mempools */
+	struct rte_avp_mempool_info pool[RTE_AVP_MAX_MEMPOOLS];
+
+#ifdef __KERNEL__
+	/* Ethernet info */
+	char ethaddr[ETH_ALEN];
+#else
+	char ethaddr[ETHER_ADDR_LEN];
+#endif
+
+	uint8_t mode; /**< device mode, i.e. guest, host, trace */
+
+	/* mbuf size */
+	unsigned int mbuf_size;
+
+	/*
+	 * unique id to differentiate between two instantiations of the same
+	 * AVP device (i.e., the guest needs to know if the device has been
+	 * deleted and recreated).
+	 */
+	uint64_t device_id;
+
+	uint32_t max_rx_pkt_len; /**< Maximum receive unit size */
+};
+
+#define RTE_AVP_MAX_QUEUES 8 /**< Maximum number of queues per device */
+
+/** Maximum number of chained mbufs in a packet */
+#define RTE_AVP_MAX_MBUF_SEGMENTS 5
+
+#define RTE_AVP_DEVICE "avp"
+
+#define RTE_AVP_IOCTL_TEST    _IOWR(0, 1, int)
+#define RTE_AVP_IOCTL_CREATE  _IOWR(0, 2, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_RELEASE _IOWR(0, 3, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_QUERY   _IOWR(0, 4, struct rte_avp_device_config)
+
+#endif /* _RTE_AVP_COMMON_H_ */
diff --git a/drivers/net/avp/rte_avp_fifo.h b/drivers/net/avp/rte_avp_fifo.h
new file mode 100644
index 0000000..1a475de
--- /dev/null
+++ b/drivers/net/avp/rte_avp_fifo.h
@@ -0,0 +1,157 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_FIFO_H_
+#define _RTE_AVP_FIFO_H_
+
+#ifdef __KERNEL__
+/* Write memory barrier for kernel compiles */
+#define AVP_WMB() smp_wmb()
+/* Read memory barrier for kernel compiles */
+#define AVP_RMB() smp_rmb()
+#else
+/* Write memory barrier for userspace compiles */
+#define AVP_WMB() rte_wmb()
+/* Read memory barrier for userspace compiles */
+#define AVP_RMB() rte_rmb()
+#endif
+
+#ifndef __KERNEL__
+/**
+ * Initializes the avp fifo structure
+ */
+static inline void
+avp_fifo_init(struct rte_avp_fifo *fifo, unsigned int size)
+{
+	/* Ensure size is power of 2 */
+	if (size & (size - 1))
+		rte_panic("AVP fifo size must be power of 2\n");
+
+	fifo->write = 0;
+	fifo->read = 0;
+	fifo->len = size;
+	fifo->elem_size = sizeof(void *);
+}
+#endif
+
+/**
+ * Adds num elements into the fifo. Return the number actually written
+ */
+static inline unsigned int
+avp_fifo_put(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int fifo_write = fifo->write;
+	unsigned int fifo_read = fifo->read;
+	unsigned int new_write = fifo_write;
+
+	for (i = 0; i < num; i++) {
+		new_write = (new_write + 1) & (fifo->len - 1);
+
+		if (new_write == fifo_read)
+			break;
+		fifo->buffer[fifo_write] = data[i];
+		fifo_write = new_write;
+	}
+	AVP_WMB();
+	fifo->write = fifo_write;
+	return i;
+}
+
+/**
+ * Get up to num elements from the fifo. Return the number actually read
+ */
+static inline unsigned int
+avp_fifo_get(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int new_read = fifo->read;
+	unsigned int fifo_write = fifo->write;
+
+	if (new_read == fifo_write)
+		return 0; /* empty */
+
+	for (i = 0; i < num; i++) {
+		if (new_read == fifo_write)
+			break;
+
+		data[i] = fifo->buffer[new_read];
+		new_read = (new_read + 1) & (fifo->len - 1);
+	}
+	AVP_RMB();
+	fifo->read = new_read;
+	return i;
+}
+
+/**
+ * Get the num of elements in the fifo
+ */
+static inline unsigned int
+avp_fifo_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->len + fifo->write - fifo->read) & (fifo->len - 1);
+}
+
+/**
+ * Get the num of available elements in the fifo
+ */
+static inline unsigned int
+avp_fifo_free_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->read - fifo->write - 1) & (fifo->len - 1);
+}
+
+#endif /* _RTE_AVP_FIFO_H_ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 03/17] maintainers: claim responsibility for AVP PMD
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 01/17] config: adds attributes for the " Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 02/17] net/avp: public header files Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 04/17] net/avp: add PMD version map file Allain Legacy
                         ` (15 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Updates the MAINTAINERS file to claim the AVP PMD on behalf of Wind River
Systems, Inc.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 MAINTAINERS | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 5030c1c..fef23a0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -423,6 +423,11 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+Wind River AVP PMD
+M: Allain Legacy <allain.legacy@windriver.com>
+M: Matt Peters <matt.peters@windriver.com>
+F: drivers/net/avp/
+
 
 Crypto Drivers
 --------------
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 04/17] net/avp: add PMD version map file
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (2 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 03/17] maintainers: claim responsibility for AVP PMD Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-16 14:52         ` Ferruh Yigit
  2017-03-13 19:16       ` [PATCH v4 05/17] net/avp: debug log macros Allain Legacy
                         ` (14 subsequent siblings)
  18 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds a default ABI version file for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/rte_pmd_avp_version.map | 4 ++++
 1 file changed, 4 insertions(+)
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
new file mode 100644
index 0000000..af8f3f4
--- /dev/null
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+    local: *;
+};
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 05/17] net/avp: debug log macros
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (3 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 04/17] net/avp: add PMD version map file Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 06/17] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
                         ` (13 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds a header file with log macros for the AVP PMD.
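
The pattern in avp_logs.h below is conditional compilation: each macro
expands to a logging call when its debug option is enabled, and to an
empty statement otherwise, so disabled log paths cost nothing at runtime.
A self-contained sketch of the same pattern (PMD_DRV_LOG_SKETCH and the
snprintf sink are illustrative stand-ins for the RTE_LOG-based macros):

```c
#include <stdio.h>
#include <string.h>

/* Stand-in sink so the sketch is testable without rte_log.h. */
static char log_buf[256];
#define SKETCH_LOG(fmt, ...) \
	snprintf(log_buf, sizeof(log_buf), fmt, ##__VA_ARGS__)

/* Mirrors the avp_logs.h pattern: compiled in only when the debug
 * option is set; otherwise an empty do/while statement. */
#define AVP_DEBUG_DRIVER_SKETCH 1

#if AVP_DEBUG_DRIVER_SKETCH
#define PMD_DRV_LOG_SKETCH(fmt, ...) \
	SKETCH_LOG("%s(): " fmt, __func__, ##__VA_ARGS__)
#else
#define PMD_DRV_LOG_SKETCH(fmt, ...) do { } while (0)
#endif

static void
probe_device(int id)
{
	/* __func__ gives the caller's name, as in the real macros */
	PMD_DRV_LOG_SKETCH("probing device %d\n", id);
}
```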

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base         |  4 ++++
 drivers/net/avp/avp_logs.h | 59 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)
 create mode 100644 drivers/net/avp/avp_logs.h

diff --git a/config/common_base b/config/common_base
index 912bc68..fe8363d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -351,6 +351,10 @@ CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 # Compile WRS accelerated virtual port (AVP) guest PMD driver
 #
 CONFIG_RTE_LIBRTE_AVP_PMD=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_DRIVER=y
+CONFIG_RTE_LIBRTE_AVP_DEBUG_BUFFERS=n
 
 #
 # Compile the TAP PMD
diff --git a/drivers/net/avp/avp_logs.h b/drivers/net/avp/avp_logs.h
new file mode 100644
index 0000000..252cab7
--- /dev/null
+++ b/drivers/net/avp/avp_logs.h
@@ -0,0 +1,59 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (c) 2013-2015, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVP_LOGS_H_
+#define _AVP_LOGS_H_
+
+#include <rte_log.h>
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() rx: " fmt, __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() tx: " fmt, __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _AVP_LOGS_H_ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 06/17] drivers/net: adds driver makefiles for AVP PMD
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (4 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 05/17] net/avp: debug log macros Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 07/17] net/avp: driver registration Allain Legacy
                         ` (12 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds a default Makefile to the driver directory but does not include any
source files.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/Makefile     |  1 +
 drivers/net/avp/Makefile | 52 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)
 create mode 100644 drivers/net/avp/Makefile

diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 40fc333..592383e 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -32,6 +32,7 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
+DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
new file mode 100644
index 0000000..68a0fa5
--- /dev/null
+++ b/drivers/net/avp/Makefile
@@ -0,0 +1,52 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2013-2017, Wind River Systems, Inc. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Wind River Systems nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avp.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+
+EXPORT_MAP := rte_pmd_avp_version.map
+
+LIBABIVER := 1
+
+# install public header files to enable compilation of the hypervisor level
+# dpdk application
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 07/17] net/avp: driver registration
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (5 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 06/17] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-16 14:53         ` Ferruh Yigit
  2017-03-13 19:16       ` [PATCH v4 08/17] net/avp: device initialization Allain Legacy
                         ` (11 subsequent siblings)
  18 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds the initial framework for registering the driver against the supported
PCI device identifiers.
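The driver matches probed PCI devices against a sentinel-terminated identifier table (pci_id_avp_map in the patch below). As a simplified, DPDK-independent sketch of how such a table is walked, assuming made-up example identifiers (struct pci_id, pci_id_match, and the 0x1af4/0x1110 values are illustrative only; the real values hide behind the RTE_AVP_PCI_* macros, which are not shown in this excerpt):

```c
#include <stdint.h>

/* Simplified stand-in for struct rte_pci_id: a table of supported
 * vendor/device pairs terminated by a zero-vendor sentinel entry. */
struct pci_id {
	uint16_t vendor_id;
	uint16_t device_id;
};

/* Walk the table until the sentinel; return 1 if the probed device
 * matches a supported identifier, 0 otherwise. */
static int pci_id_match(const struct pci_id *table,
			uint16_t vendor, uint16_t device)
{
	for (; table->vendor_id != 0; table++) {
		if (table->vendor_id == vendor &&
		    table->device_id == device)
			return 1;
	}
	return 0;
}
```

The sentinel convention lets the bus layer iterate the table without a separate length field, which is why the real pci_id_avp_map ends with a zeroed entry.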

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_linuxapp                       |   1 +
 config/defconfig_i686-native-linuxapp-gcc    |   5 +
 config/defconfig_i686-native-linuxapp-icc    |   5 +
 config/defconfig_x86_x32-native-linuxapp-gcc |   5 +
 drivers/net/avp/Makefile                     |   8 +
 drivers/net/avp/avp_ethdev.c                 | 227 +++++++++++++++++++++++++++
 mk/rte.app.mk                                |   1 +
 7 files changed, 252 insertions(+)
 create mode 100644 drivers/net/avp/avp_ethdev.c

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 00ebaac..8690a00 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -43,6 +43,7 @@ CONFIG_RTE_LIBRTE_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
 CONFIG_RTE_LIBRTE_PMD_TAP=y
+CONFIG_RTE_LIBRTE_AVP_PMD=y
 CONFIG_RTE_LIBRTE_NFP_PMD=y
 CONFIG_RTE_LIBRTE_POWER=y
 CONFIG_RTE_VIRTIO_USER=y
diff --git a/config/defconfig_i686-native-linuxapp-gcc b/config/defconfig_i686-native-linuxapp-gcc
index 745c401..9847bdb 100644
--- a/config/defconfig_i686-native-linuxapp-gcc
+++ b/config/defconfig_i686-native-linuxapp-gcc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_i686-native-linuxapp-icc b/config/defconfig_i686-native-linuxapp-icc
index 50a3008..269e88e 100644
--- a/config/defconfig_i686-native-linuxapp-icc
+++ b/config/defconfig_i686-native-linuxapp-icc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_x86_x32-native-linuxapp-gcc b/config/defconfig_x86_x32-native-linuxapp-gcc
index 3e55c5c..19573cb 100644
--- a/config/defconfig_x86_x32-native-linuxapp-gcc
+++ b/config/defconfig_x86_x32-native-linuxapp-gcc
@@ -50,3 +50,8 @@ CONFIG_RTE_LIBRTE_KNI=n
 # Solarflare PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_SFC_EFX_PMD=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 68a0fa5..9cf0449 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -49,4 +49,12 @@ LIBABIVER := 1
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
 
+#
+# all source files are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
new file mode 100644
index 0000000..ab710ea
--- /dev/null
+++ b/drivers/net/avp/avp_ethdev.c
@@ -0,0 +1,227 @@
+/*
+ *   BSD LICENSE
+ *
+ * Copyright (c) 2013-2016, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/io.h>
+
+#include <rte_ethdev.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_byteorder.h>
+#include <rte_dev.h>
+#include <rte_memory.h>
+#include <rte_eal.h>
+
+#include "rte_avp_common.h"
+#include "rte_avp_fifo.h"
+
+#include "avp_logs.h"
+
+
+
+static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
+static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
+
+
+#define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
+
+
+#define AVP_MAX_MAC_ADDRS 1
+#define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
+
+
+/*
+ * Defines the number of microseconds to wait before checking the response
+ * queue for completion.
+ */
+#define AVP_REQUEST_DELAY_USECS (5000)
+
+/*
+ * Defines the number of times to check the response queue for completion before
+ * declaring a timeout.
+ */
+#define AVP_MAX_REQUEST_RETRY (100)
+
+/* Defines the current PCI driver version number */
+#define AVP_DPDK_DRIVER_VERSION RTE_AVP_CURRENT_GUEST_VERSION
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_avp_map[] = {
+	{ .vendor_id = RTE_AVP_PCI_VENDOR_ID,
+	  .device_id = RTE_AVP_PCI_DEVICE_ID,
+	  .subsystem_vendor_id = RTE_AVP_PCI_SUB_VENDOR_ID,
+	  .subsystem_device_id = RTE_AVP_PCI_SUB_DEVICE_ID,
+	  .class_id = RTE_CLASS_ANY_ID,
+	},
+
+	{ .vendor_id = 0, /* sentinel */
+	},
+};
+
+
+/*
+ * Defines the AVP device attributes which are attached to an RTE ethernet
+ * device
+ */
+struct avp_dev {
+	uint32_t magic; /**< Memory validation marker */
+	uint64_t device_id; /**< Unique system identifier */
+	struct ether_addr ethaddr; /**< Host specified MAC address */
+	struct rte_eth_dev_data *dev_data;
+	/**< Back pointer to ethernet device data */
+	volatile uint32_t flags; /**< Device operational flags */
+	uint8_t port_id; /**< Ethernet port identifier */
+	struct rte_mempool *pool; /**< pkt mbuf mempool */
+	unsigned int guest_mbuf_size; /**< local pool mbuf size */
+	unsigned int host_mbuf_size; /**< host mbuf size */
+	unsigned int max_rx_pkt_len; /**< maximum receive unit */
+	uint32_t host_features; /**< Supported feature bitmap */
+	uint32_t features; /**< Enabled feature bitmap */
+	unsigned int num_tx_queues; /**< Negotiated number of transmit queues */
+	unsigned int max_tx_queues; /**< Maximum number of transmit queues */
+	unsigned int num_rx_queues; /**< Negotiated number of receive queues */
+	unsigned int max_rx_queues; /**< Maximum number of receive queues */
+
+	struct rte_avp_fifo *tx_q[RTE_AVP_MAX_QUEUES]; /**< TX queue */
+	struct rte_avp_fifo *rx_q[RTE_AVP_MAX_QUEUES]; /**< RX queue */
+	struct rte_avp_fifo *alloc_q[RTE_AVP_MAX_QUEUES];
+	/**< Allocated mbufs queue */
+	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
+	/**< To be freed mbufs queue */
+
+	/* For request & response */
+	struct rte_avp_fifo *req_q; /**< Request queue */
+	struct rte_avp_fifo *resp_q; /**< Response queue */
+	void *host_sync_addr; /**< (host) Req/Resp Mem address */
+	void *sync_addr; /**< Req/Resp Mem address */
+	void *host_mbuf_addr; /**< (host) MBUF pool start address */
+	void *mbuf_addr; /**< MBUF pool start address */
+} __rte_cache_aligned;
+
+/* RTE ethernet private data */
+struct avp_adapter {
+	struct avp_dev avp;
+} __rte_cache_aligned;
+
+/* Macro to cast the ethernet device private data to an AVP object */
+#define AVP_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avp_adapter *)adapter)->avp)
+
+/*
+ * This function is based on probe() function in avp_pci.c
+ * It returns 0 on success.
+ */
+static int
+eth_avp_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp =
+		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pci_device *pci_dev;
+
+	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/*
+		 * no setup required on secondary processes.  All data is saved
+		 * in dev_private by the primary process.  All resources should
+		 * be mapped to the same virtual addresses, so all pointers
+		 * should be valid.
+		 */
+		return 0;
+	}
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate %d bytes needed to store MAC addresses\n",
+			    ETHER_ADDR_LEN);
+		return -ENOMEM;
+	}
+
+	/* Get a mac from device config */
+	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
+
+	return 0;
+}
+
+static int
+eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (eth_dev->data == NULL)
+		return 0;
+
+	if (eth_dev->data->mac_addrs != NULL) {
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
+	}
+
+	return 0;
+}
+
+
+static struct eth_driver rte_avp_pmd = {
+	{
+		.id_table = pci_id_avp_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_eth_dev_pci_probe,
+		.remove = rte_eth_dev_pci_remove,
+	},
+	.eth_dev_init = eth_avp_dev_init,
+	.eth_dev_uninit = eth_avp_dev_uninit,
+	.dev_private_size = sizeof(struct avp_adapter),
+};
+
+
+
+RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index d46a33e..9d66257 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -104,6 +104,7 @@ ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 08/17] net/avp: device initialization
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (6 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 07/17] net/avp: driver registration Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 09/17] net/avp: device configuration Allain Legacy
                         ` (10 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for initializing newly probed AVP PCI devices.  Initial
queue translations are set up in preparation for device configuration.
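The queue translation relies on the walk implemented by avp_dev_translate_address() in the patch below: host segments described in the memmap BAR are laid out back-to-back in the guest's memory BAR, so a host physical address maps to the BAR base plus the cumulative length of preceding segments plus the offset within the matching one. A self-contained model of that walk (struct memmap_seg and translate() are simplified stand-ins, not the driver's types):

```c
#include <stdint.h>
#include <stddef.h>

/* One host memory segment exposed through the memmap BAR
 * (simplified model of struct rte_avp_memmap). */
struct memmap_seg {
	uint64_t phys_addr;	/* host physical base address */
	uint64_t length;	/* segment length in bytes */
};

/* Translate a host physical address to a guest virtual address.
 * Segments are packed contiguously starting at bar_base, so the
 * result is bar_base + (offset of all preceding segments) +
 * (offset within the matching segment).  Returns NULL when the
 * address falls inside no segment. */
static void *translate(void *bar_base, const struct memmap_seg *maps,
		       unsigned int nb_maps, uint64_t host_phys)
{
	uint64_t offset = 0;
	for (unsigned int i = 0; i < nb_maps; i++) {
		if (host_phys >= maps[i].phys_addr &&
		    host_phys < maps[i].phys_addr + maps[i].length)
			return (char *)bar_base + offset +
			       (host_phys - maps[i].phys_addr);
		offset += maps[i].length;
	}
	return NULL;
}
```

Because every queue pointer the host publishes (tx_phys, rx_phys, alloc_phys, free_phys, req_phys, resp_phys) goes through this translation, a host address outside the advertised segments yields NULL rather than a wild pointer.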

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 299 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 299 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index ab710ea..a7d270f 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -52,6 +52,7 @@
 #include <rte_dev.h>
 #include <rte_memory.h>
 #include <rte_eal.h>
+#include <rte_io.h>
 
 #include "rte_avp_common.h"
 #include "rte_avp_fifo.h"
@@ -59,6 +60,8 @@
 #include "avp_logs.h"
 
 
+static int avp_dev_create(struct rte_pci_device *pci_dev,
+			  struct rte_eth_dev *eth_dev);
 
 static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -102,6 +105,15 @@
 };
 
 
+/**@{ AVP device flags */
+#define AVP_F_PROMISC (1 << 1)
+#define AVP_F_CONFIGURED (1 << 2)
+#define AVP_F_LINKUP (1 << 3)
+/**@} */
+
+/* Ethernet device validation marker */
+#define AVP_ETHDEV_MAGIC 0x92972862
+
 /*
  * Defines the AVP device attributes which are attached to an RTE ethernet
  * device
@@ -146,11 +158,282 @@ struct avp_adapter {
 	struct avp_dev avp;
 } __rte_cache_aligned;
 
+
+/* 32-bit MMIO register write */
+#define AVP_WRITE32(_value, _addr) rte_write32_relaxed((_value), (_addr))
+
+/* 32-bit MMIO register read */
+#define AVP_READ32(_addr) rte_read32_relaxed((_addr))
+
 /* Macro to cast the ethernet device private data to an AVP object */
 #define AVP_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct avp_adapter *)adapter)->avp)
 
 /*
+ * Defines the structure of an AVP device queue for the purpose of handling the
+ * receive and transmit burst callback functions
+ */
+struct avp_queue {
+	struct rte_eth_dev_data *dev_data;
+	/**< Backpointer to ethernet device data */
+	struct avp_dev *avp; /**< Backpointer to AVP device */
+	uint16_t queue_id;
+	/**< Queue identifier used for indexing current queue */
+	uint16_t queue_base;
+	/**< Base queue identifier for queue servicing */
+	uint16_t queue_limit;
+	/**< Maximum queue identifier for queue servicing */
+
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
+/* translate from host physical address to guest virtual address */
+static void *
+avp_dev_translate_address(struct rte_eth_dev *eth_dev,
+			  phys_addr_t host_phys_addr)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_mem_resource *resource;
+	struct rte_avp_memmap_info *info;
+	struct rte_avp_memmap *map;
+	off_t offset;
+	void *addr;
+	unsigned int i;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_MEMORY_BAR].addr;
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_MEMMAP_BAR];
+	info = (struct rte_avp_memmap_info *)resource->addr;
+
+	offset = 0;
+	for (i = 0; i < info->nb_maps; i++) {
+		/* search all segments looking for a matching address */
+		map = &info->maps[i];
+
+		if ((host_phys_addr >= map->phys_addr) &&
+			(host_phys_addr < (map->phys_addr + map->length))) {
+			/* address is within this segment */
+			offset += (host_phys_addr - map->phys_addr);
+			addr = RTE_PTR_ADD(addr, offset);
+
+			PMD_DRV_LOG(DEBUG, "Translating host physical 0x%" PRIx64 " to guest virtual 0x%p\n",
+				    host_phys_addr, addr);
+
+			return addr;
+		}
+		offset += map->length;
+	}
+
+	return NULL;
+}
+
+/* verify that the incoming device version is compatible with our version */
+static int
+avp_dev_version_check(uint32_t version)
+{
+	uint32_t driver = RTE_AVP_STRIP_MINOR_VERSION(AVP_DPDK_DRIVER_VERSION);
+	uint32_t device = RTE_AVP_STRIP_MINOR_VERSION(version);
+
+	if (device <= driver) {
+		/* the host driver version is less than or equal to ours */
+		return 0;
+	}
+
+	return 1;
+}
+
+/* verify that memory regions have expected version and validation markers */
+static int
+avp_dev_check_regions(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_avp_memmap_info *memmap;
+	struct rte_avp_device_info *info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	/* Dump resource info for debug */
+	for (i = 0; i < PCI_MAX_RESOURCE; i++) {
+		resource = &pci_dev->mem_resource[i];
+		if ((resource->phys_addr == 0) || (resource->len == 0))
+			continue;
+
+		PMD_DRV_LOG(DEBUG, "resource[%u]: phys=0x%" PRIx64 " len=%" PRIu64 " addr=%p\n",
+			    i, resource->phys_addr,
+			    resource->len, resource->addr);
+
+		switch (i) {
+		case RTE_AVP_PCI_MEMMAP_BAR:
+			memmap = (struct rte_avp_memmap_info *)resource->addr;
+			if ((memmap->magic != RTE_AVP_MEMMAP_MAGIC) ||
+			    (memmap->version != RTE_AVP_MEMMAP_VERSION)) {
+				PMD_DRV_LOG(ERR, "Invalid memmap magic 0x%08x and version %u\n",
+					    memmap->magic, memmap->version);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_DEVICE_BAR:
+			info = (struct rte_avp_device_info *)resource->addr;
+			if ((info->magic != RTE_AVP_DEVICE_MAGIC) ||
+			    avp_dev_version_check(info->version)) {
+				PMD_DRV_LOG(ERR, "Invalid device info magic 0x%08x or version 0x%08x > 0x%08x\n",
+					    info->magic, info->version,
+					    AVP_DPDK_DRIVER_VERSION);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MEMORY_BAR:
+		case RTE_AVP_PCI_MMIO_BAR:
+			if (resource->addr == NULL) {
+				PMD_DRV_LOG(ERR, "Missing address space for BAR%u\n",
+					    i);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MSIX_BAR:
+		default:
+			/* no validation required */
+			break;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * create an AVP device using the supplied device info by first translating it
+ * to guest address space(s).
+ */
+static int
+avp_dev_create(struct rte_pci_device *pci_dev,
+	       struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR];
+	if (resource->addr == NULL) {
+		PMD_DRV_LOG(ERR, "BAR%u is not mapped\n",
+			    RTE_AVP_PCI_DEVICE_BAR);
+		return -EFAULT;
+	}
+	host_info = (struct rte_avp_device_info *)resource->addr;
+
+	if ((host_info->magic != RTE_AVP_DEVICE_MAGIC) ||
+		avp_dev_version_check(host_info->version)) {
+		PMD_DRV_LOG(ERR, "Invalid AVP PCI device, magic 0x%08x version 0x%08x > 0x%08x\n",
+			    host_info->magic, host_info->version,
+			    AVP_DPDK_DRIVER_VERSION);
+		return -EINVAL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host device is v%u.%u.%u\n",
+		    RTE_AVP_GET_RELEASE_VERSION(host_info->version),
+		    RTE_AVP_GET_MAJOR_VERSION(host_info->version),
+		    RTE_AVP_GET_MINOR_VERSION(host_info->version));
+
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u TX queue(s)\n",
+		    host_info->min_tx_queues, host_info->max_tx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u RX queue(s)\n",
+		    host_info->min_rx_queues, host_info->max_rx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports features 0x%08x\n",
+		    host_info->features);
+
+	if (avp->magic != AVP_ETHDEV_MAGIC) {
+		/*
+		 * First time initialization (i.e., not during a VM
+		 * migration)
+		 */
+		memset(avp, 0, sizeof(*avp));
+		avp->magic = AVP_ETHDEV_MAGIC;
+		avp->dev_data = eth_dev->data;
+		avp->port_id = eth_dev->data->port_id;
+		avp->host_mbuf_size = host_info->mbuf_size;
+		avp->host_features = host_info->features;
+		memcpy(&avp->ethaddr.addr_bytes[0],
+		       host_info->ethaddr, ETHER_ADDR_LEN);
+		/* adjust max values to not exceed our max */
+		avp->max_tx_queues =
+			RTE_MIN(host_info->max_tx_queues, RTE_AVP_MAX_QUEUES);
+		avp->max_rx_queues =
+			RTE_MIN(host_info->max_rx_queues, RTE_AVP_MAX_QUEUES);
+	} else {
+		/* Re-attaching during migration */
+
+		/* TODO... requires validation of host values */
+		if ((host_info->features & avp->features) != avp->features) {
+			PMD_DRV_LOG(ERR, "AVP host features mismatched; 0x%08x, host=0x%08x\n",
+				    avp->features, host_info->features);
+			/* this should not be possible; continue for now */
+		}
+	}
+
+	/* the device id is allowed to change over migrations */
+	avp->device_id = host_info->device_id;
+
+	/* translate incoming host addresses to guest address space */
+	PMD_DRV_LOG(DEBUG, "AVP first host tx queue at 0x%" PRIx64 "\n",
+		    host_info->tx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host alloc queue at 0x%" PRIx64 "\n",
+		    host_info->alloc_phys);
+	for (i = 0; i < avp->max_tx_queues; i++) {
+		avp->tx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->tx_phys + (i * host_info->tx_size));
+
+		avp->alloc_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->alloc_phys + (i * host_info->alloc_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP first host rx queue at 0x%" PRIx64 "\n",
+		    host_info->rx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host free queue at 0x%" PRIx64 "\n",
+		    host_info->free_phys);
+	for (i = 0; i < avp->max_rx_queues; i++) {
+		avp->rx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->rx_phys + (i * host_info->rx_size));
+		avp->free_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->free_phys + (i * host_info->free_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host request queue at 0x%" PRIx64 "\n",
+		    host_info->req_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host response queue at 0x%" PRIx64 "\n",
+		    host_info->resp_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host sync address at 0x%" PRIx64 "\n",
+		    host_info->sync_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host mbuf address at 0x%" PRIx64 "\n",
+		    host_info->mbuf_phys);
+	avp->req_q = avp_dev_translate_address(eth_dev, host_info->req_phys);
+	avp->resp_q = avp_dev_translate_address(eth_dev, host_info->resp_phys);
+	avp->sync_addr =
+		avp_dev_translate_address(eth_dev, host_info->sync_phys);
+	avp->mbuf_addr =
+		avp_dev_translate_address(eth_dev, host_info->mbuf_phys);
+
+	/*
+	 * store the host mbuf virtual address so that we can calculate
+	 * relative offsets for each mbuf as they are processed
+	 */
+	avp->host_mbuf_addr = host_info->mbuf_va;
+	avp->host_sync_addr = host_info->sync_va;
+
+	/*
+	 * store the maximum packet length that is supported by the host.
+	 */
+	avp->max_rx_pkt_len = host_info->max_rx_pkt_len;
+	PMD_DRV_LOG(DEBUG, "AVP host max receive packet length is %u\n",
+				host_info->max_rx_pkt_len);
+
+	return 0;
+}
+
+/*
  * This function is based on probe() function in avp_pci.c
  * It returns 0 on success.
  */
@@ -160,6 +443,7 @@ struct avp_adapter {
 	struct avp_dev *avp =
 		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_pci_device *pci_dev;
+	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 
@@ -177,6 +461,21 @@ struct avp_adapter {
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check BAR resources */
+	ret = avp_dev_check_regions(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to validate BAR resources, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* Handle each subtype */
+	ret = avp_dev_create(pci_dev, eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to create device, ret=%d\n", ret);
+		return ret;
+	}
+
 	/* Allocate memory for storing MAC addresses */
 	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 09/17] net/avp: device configuration
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (7 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 08/17] net/avp: device initialization Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 10/17] net/avp: queue setup and release Allain Legacy
                         ` (9 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for the "dev_configure" operation to allow an application to
configure the device.
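Configuration is pushed to the host through the control path added in this patch: avp_dev_process_request() drains stale responses, posts a sync token on the request FIFO, then polls the response FIFO up to AVP_MAX_REQUEST_RETRY times before declaring a timeout. A minimal, testable model of that handshake (the single-slot fifo, toy_host/dead_host callbacks, and the -1/-2/-3 return codes are stand-ins for the shared-memory rte_avp_fifo rings, the real host side, and the driver's -EBUSY/-ENODATA/-ETIME results):

```c
#include <stddef.h>

/* Single-slot "FIFO" standing in for the shared request/response
 * rings; the real queues live in a PCI BAR shared with the host. */
struct fifo { void *slot; int full; };

static int fifo_put(struct fifo *f, void *v)
{
	if (f->full)
		return 0;
	f->slot = v;
	f->full = 1;
	return 1;
}

static int fifo_get(struct fifo *f, void **v)
{
	if (!f->full)
		return 0;
	*v = f->slot;
	f->full = 0;
	return 1;
}

/* Sketch of the handshake: drain stale responses, post the request
 * token, poll for a response up to max_retry times, and verify the
 * returned token matches what was sent.  host_service() stands in
 * for the host running asynchronously between usleep() polls. */
static int process_request(struct fifo *req_q, struct fifo *resp_q,
			   void *sync_token, unsigned int max_retry,
			   void (*host_service)(struct fifo *, struct fifo *))
{
	void *resp;

	while (fifo_get(resp_q, &resp))
		;			/* discard stale responses */
	if (!fifo_put(req_q, sync_token))
		return -1;		/* -EBUSY in the driver */
	for (unsigned int retry = 0; retry < max_retry; retry++) {
		host_service(req_q, resp_q);
		if (fifo_get(resp_q, &resp))
			return (resp == sync_token) ? 0 : -2;
	}
	return -3;			/* -ETIME in the driver */
}

/* toy host: services one pending request by echoing its token */
static void toy_host(struct fifo *req_q, struct fifo *resp_q)
{
	void *v;
	if (fifo_get(req_q, &v))
		fifo_put(resp_q, v);
}

/* unresponsive host: never answers, forcing the timeout path */
static void dead_host(struct fifo *req_q, struct fifo *resp_q)
{
	(void)req_q;
	(void)resp_q;
}
```

The token comparison mirrors the driver's check that the response address equals host_sync_addr, which guards against a response belonging to some earlier, abandoned request.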

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 240 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 240 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index a7d270f..bb6ccdf 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -66,6 +66,12 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
 
+static int avp_dev_configure(struct rte_eth_dev *dev);
+static void avp_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
+static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avp_dev_link_update(struct rte_eth_dev *dev,
+			       __rte_unused int wait_to_complete);
 
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
@@ -104,6 +110,15 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 	},
 };
 
+/*
+ * dev_ops for avp, bare necessities for basic operation
+ */
+static const struct eth_dev_ops avp_eth_dev_ops = {
+	.dev_configure       = avp_dev_configure,
+	.dev_infos_get       = avp_dev_info_get,
+	.vlan_offload_set    = avp_vlan_offload_set,
+	.link_update         = avp_dev_link_update,
+};
 
 /**@{ AVP device flags */
 #define AVP_F_PROMISC (1 << 1)
@@ -189,6 +204,91 @@ struct avp_queue {
 	uint64_t errors;
 };
 
+/* send a request and wait for a response
+ *
+ * @warning must be called while holding the avp->lock spinlock.
+ */
+static int
+avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
+{
+	unsigned int retry = AVP_MAX_REQUEST_RETRY;
+	void *resp_addr = NULL;
+	unsigned int count;
+	int ret;
+
+	PMD_DRV_LOG(DEBUG, "Sending request %u to host\n", request->req_id);
+
+	request->result = -ENOTSUP;
+
+	/* Discard any stale responses before starting a new request */
+	while (avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1))
+		PMD_DRV_LOG(DEBUG, "Discarding stale response\n");
+
+	rte_memcpy(avp->sync_addr, request, sizeof(*request));
+	count = avp_fifo_put(avp->req_q, &avp->host_sync_addr, 1);
+	if (count < 1) {
+		PMD_DRV_LOG(ERR, "Cannot send request %u to host\n",
+			    request->req_id);
+		ret = -EBUSY;
+		goto done;
+	}
+
+	while (retry--) {
+		/* wait for a response */
+		usleep(AVP_REQUEST_DELAY_USECS);
+
+		count = avp_fifo_count(avp->resp_q);
+		if (count >= 1) {
+			/* response received */
+			break;
+		}
+
+		if ((count < 1) && (retry == 0)) {
+			PMD_DRV_LOG(ERR, "Timeout while waiting for a response for %u\n",
+				    request->req_id);
+			ret = -ETIME;
+			goto done;
+		}
+	}
+
+	/* retrieve the response */
+	count = avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1);
+	if ((count != 1) || (resp_addr != avp->host_sync_addr)) {
+		PMD_DRV_LOG(ERR, "Invalid response from host, count=%u resp=%p host_sync_addr=%p\n",
+			    count, resp_addr, avp->host_sync_addr);
+		ret = -ENODATA;
+		goto done;
+	}
+
+	/* copy to user buffer */
+	rte_memcpy(request, avp->sync_addr, sizeof(*request));
+	ret = 0;
+
+	PMD_DRV_LOG(DEBUG, "Result %d received for request %u\n",
+		    request->result, request->req_id);
+
+done:
+	return ret;
+}
+
+static int
+avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
+			struct rte_avp_device_config *config)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a configure request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_DEVICE;
+	memcpy(&request.config, config, sizeof(request.config));
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
 /* translate from host physical address to guest virtual address */
 static void *
 avp_dev_translate_address(struct rte_eth_dev *eth_dev,
@@ -304,6 +404,38 @@ struct avp_queue {
 	return 0;
 }
 
+static void
+_avp_set_queue_counts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	void *addr;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/*
+	 * the transmit direction is not negotiated beyond respecting the max
+	 * number of queues because the host can handle arbitrary guest tx
+	 * queues (host rx queues).
+	 */
+	avp->num_tx_queues = eth_dev->data->nb_tx_queues;
+
+	/*
+	 * the receive direction is more restrictive.  The host requires a
+	 * minimum number of guest rx queues (host tx queues); therefore,
+	 * negotiate a value that is at least as large as the host minimum
+	 * requirement.  If the host and guest values are not identical then a
+	 * mapping will be established in the receive_queue_setup function.
+	 */
+	avp->num_rx_queues = RTE_MAX(host_info->min_rx_queues,
+				     eth_dev->data->nb_rx_queues);
+
+	PMD_DRV_LOG(DEBUG, "Requesting %u Tx and %u Rx queues from host\n",
+		    avp->num_tx_queues, avp->num_rx_queues);
+}
+
 /*
  * create an AVP device using the supplied device info by first translating it
  * to guest address space(s).
@@ -446,6 +578,7 @@ struct avp_queue {
 	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	eth_dev->dev_ops = &avp_eth_dev_ops;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -521,6 +654,113 @@ struct avp_queue {
 };
 
 
+static int
+avp_dev_configure(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_avp_device_config config;
+	int mask = 0;
+	void *addr;
+	int ret;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/* Setup required number of queues */
+	_avp_set_queue_counts(eth_dev);
+
+	mask = (ETH_VLAN_STRIP_MASK |
+		ETH_VLAN_FILTER_MASK |
+		ETH_VLAN_EXTEND_MASK);
+	avp_vlan_offload_set(eth_dev, mask);
+
+	/* update device config */
+	memset(&config, 0, sizeof(config));
+	config.device_id = host_info->device_id;
+	config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+	config.driver_version = AVP_DPDK_DRIVER_VERSION;
+	config.features = avp->features;
+	config.num_tx_queues = avp->num_tx_queues;
+	config.num_rx_queues = avp->num_rx_queues;
+
+	ret = avp_dev_ctrl_set_config(eth_dev, &config);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Config request failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	avp->flags |= AVP_F_CONFIGURED;
+	ret = 0;
+
+unlock:
+	return ret;
+}
+
+
+static int
+avp_dev_link_update(struct rte_eth_dev *eth_dev,
+					__rte_unused int wait_to_complete)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_eth_link *link = &eth_dev->data->dev_link;
+
+	link->link_speed = ETH_SPEED_NUM_10G;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = !!(avp->flags & AVP_F_LINKUP);
+
+	return -1;
+}
+
+
+static void
+avp_dev_info_get(struct rte_eth_dev *eth_dev,
+		 struct rte_eth_dev_info *dev_info)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	dev_info->driver_name = "rte_avp_pmd";
+	dev_info->pci_dev = RTE_DEV_TO_PCI(eth_dev->device);
+	dev_info->max_rx_queues = avp->max_rx_queues;
+	dev_info->max_tx_queues = avp->max_tx_queues;
+	dev_info->min_rx_bufsize = AVP_MIN_RX_BUFSIZE;
+	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
+	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
+	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+	}
+}
+
+static void
+avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+			if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
+			else
+				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
+		} else {
+			PMD_DRV_LOG(ERR, "VLAN strip offload not supported\n");
+		}
+	}
+
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_filter)
+			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_extend)
+			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
+	}
+}
+
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 10/17] net/avp: queue setup and release
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (8 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 09/17] net/avp: device configuration Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 11/17] net/avp: packet receive functions Allain Legacy
                         ` (8 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds queue management operations so that an application can set up and
release the transmit and receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 180 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 179 insertions(+), 1 deletion(-)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index bb6ccdf..ad24217 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -72,7 +72,21 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
-
+static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t rx_queue_id,
+				  uint16_t nb_rx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_rxconf *rx_conf,
+				  struct rte_mempool *pool);
+
+static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t tx_queue_id,
+				  uint16_t nb_tx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_txconf *tx_conf);
+
+static void avp_dev_rx_queue_release(void *rxq);
+static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -118,6 +132,10 @@ static int avp_dev_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.link_update         = avp_dev_link_update,
+	.rx_queue_setup      = avp_dev_rx_queue_setup,
+	.rx_queue_release    = avp_dev_rx_queue_release,
+	.tx_queue_setup      = avp_dev_tx_queue_setup,
+	.tx_queue_release    = avp_dev_tx_queue_release,
 };
 
 /**@{ AVP device flags */
@@ -405,6 +423,42 @@ struct avp_queue {
 }
 
 static void
+_avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
+{
+	struct avp_dev *avp =
+		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *rxq;
+	uint16_t queue_count;
+	uint16_t remainder;
+
+	rxq = (struct avp_queue *)eth_dev->data->rx_queues[rx_queue_id];
+
+	/*
+	 * Must map all AVP fifos as evenly as possible between the configured
+	 * device queues.  Each device queue will service a subset of the AVP
+	 * fifos. If there is an odd number of device queues the first set of
+	 * device queues will get the extra AVP fifos.
+	 */
+	queue_count = avp->num_rx_queues / eth_dev->data->nb_rx_queues;
+	remainder = avp->num_rx_queues % eth_dev->data->nb_rx_queues;
+	if (rx_queue_id < remainder) {
+		/* these queues must service one extra FIFO */
+		rxq->queue_base = rx_queue_id * (queue_count + 1);
+		rxq->queue_limit = rxq->queue_base + (queue_count + 1) - 1;
+	} else {
+		/* these queues service the regular number of FIFOs */
+		rxq->queue_base = ((remainder * (queue_count + 1)) +
+				   ((rx_queue_id - remainder) * queue_count));
+		rxq->queue_limit = rxq->queue_base + queue_count - 1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "rxq %u at %p base %u limit %u\n",
+		    rx_queue_id, rxq, rxq->queue_base, rxq->queue_limit);
+
+	rxq->queue_id = rxq->queue_base;
+}
+
+static void
 _avp_set_queue_counts(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
@@ -655,6 +709,130 @@ struct avp_queue {
 
 
 static int
+avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t rx_queue_id,
+		       uint16_t nb_rx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *pool)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct avp_queue *rxq;
+
+	if (rx_queue_id >= eth_dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue id is out of range: rx_queue_id=%u, nb_rx_queues=%u\n",
+			    rx_queue_id, eth_dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	/* Save mbuf pool pointer */
+	avp->pool = pool;
+
+	/* Save the local mbuf size */
+	mbp_priv = rte_mempool_get_priv(pool);
+	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
+	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
+
+	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
+		    avp->max_rx_pkt_len,
+		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+		    avp->host_mbuf_size,
+		    avp->guest_mbuf_size);
+
+	/* allocate a queue object */
+	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Rx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* save back pointers to AVP and Ethernet devices */
+	rxq->avp = avp;
+	rxq->dev_data = eth_dev->data;
+	eth_dev->data->rx_queues[rx_queue_id] = (void *)rxq;
+
+	/* setup the queue receive mapping for the current queue. */
+	_avp_set_rx_queue_mappings(eth_dev, rx_queue_id);
+
+	PMD_DRV_LOG(DEBUG, "Rx queue %u setup at %p\n", rx_queue_id, rxq);
+
+	(void)nb_rx_desc;
+	(void)rx_conf;
+	return 0;
+}
+
+static int
+avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t tx_queue_id,
+		       uint16_t nb_tx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *txq;
+
+	if (tx_queue_id >= eth_dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue id is out of range: tx_queue_id=%u, nb_tx_queues=%u\n",
+			    tx_queue_id, eth_dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	/* allocate a queue object */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Tx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* only the configured set of transmit queues are used */
+	txq->queue_id = tx_queue_id;
+	txq->queue_base = tx_queue_id;
+	txq->queue_limit = tx_queue_id;
+
+	/* save back pointers to AVP and Ethernet devices */
+	txq->avp = avp;
+	txq->dev_data = eth_dev->data;
+	eth_dev->data->tx_queues[tx_queue_id] = (void *)txq;
+
+	PMD_DRV_LOG(DEBUG, "Tx queue %u setup at %p\n", tx_queue_id, txq);
+
+	(void)nb_tx_desc;
+	(void)tx_conf;
+	return 0;
+}
+
+static void
+avp_dev_rx_queue_release(void *rx_queue)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct avp_dev *avp = rxq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		if (data->rx_queues[i] == rxq)
+			data->rx_queues[i] = NULL;
+	}
+}
+
+static void
+avp_dev_tx_queue_release(void *tx_queue)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct avp_dev *avp = txq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		if (data->tx_queues[i] == txq)
+			data->tx_queues[i] = NULL;
+	}
+}
+
+static int
 avp_dev_configure(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 11/17] net/avp: packet receive functions
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (9 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 10/17] net/avp: queue setup and release Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 12/17] net/avp: packet transmit functions Allain Legacy
                         ` (7 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds the functions required for receiving packets from the host application
via AVP device queues.  Both the simple and scattered receive variants are
supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/Makefile     |   1 +
 drivers/net/avp/avp_ethdev.c | 451 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 452 insertions(+)

diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 9cf0449..3013cd1 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -56,5 +56,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_mempool lib/librte_mbuf
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index ad24217..65ae858 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -85,11 +85,19 @@ static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
 				  unsigned int socket_id,
 				  const struct rte_eth_txconf *tx_conf);
 
+static uint16_t avp_recv_scattered_pkts(void *rx_queue,
+					struct rte_mbuf **rx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_recv_pkts(void *rx_queue,
+			      struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts);
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
+#define AVP_MAX_RX_BURST 64
 #define AVP_MAX_MAC_ADDRS 1
 #define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -307,6 +315,15 @@ struct avp_queue {
 	return ret == 0 ? request.result : ret;
 }
 
+/* translate from host mbuf virtual address to guest virtual address */
+static inline void *
+avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
+{
+	return RTE_PTR_ADD(RTE_PTR_SUB(host_mbuf_address,
+				       (uintptr_t)avp->host_mbuf_addr),
+			   (uintptr_t)avp->mbuf_addr);
+}
+
 /* translate from host physical address to guest virtual address */
 static void *
 avp_dev_translate_address(struct rte_eth_dev *eth_dev,
@@ -633,6 +650,7 @@ struct avp_queue {
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avp_recv_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -641,6 +659,10 @@ struct avp_queue {
 		 * be mapped to the same virtual address so all pointers should
 		 * be valid.
 		 */
+		if (eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
 		return 0;
 	}
 
@@ -709,6 +731,38 @@ struct avp_queue {
 
 
 static int
+avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
+			 struct avp_dev *avp)
+{
+	unsigned int max_rx_pkt_len;
+
+	max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+
+	if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the guest MTU is greater than either the host or guest
+		 * buffers then chained mbufs have to be enabled in the TX
+		 * direction.  It is assumed that the application will not need
+		 * to send packets larger than its max_rx_pkt_len (MRU).
+		 */
+		return 1;
+	}
+
+	if ((avp->max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (avp->max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the host MRU is greater than its own mbuf size or the
+		 * guest mbuf size then chained mbufs have to be enabled in the
+		 * RX direction.
+		 */
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 		       uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc,
@@ -734,6 +788,14 @@ struct avp_queue {
 	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
 	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
 
+	if (avp_dev_enable_scattered(eth_dev, avp)) {
+		if (!eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
+			eth_dev->data->scattered_rx = 1;
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
+	}
+
 	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
 		    avp->max_rx_pkt_len,
 		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
@@ -804,6 +866,395 @@ struct avp_queue {
 	return 0;
 }
 
+static inline int
+_avp_cmp_ether_addr(struct ether_addr *a, struct ether_addr *b)
+{
+	uint16_t *_a = (uint16_t *)&a->addr_bytes[0];
+	uint16_t *_b = (uint16_t *)&b->addr_bytes[0];
+	return (_a[0] ^ _b[0]) | (_a[1] ^ _b[1]) | (_a[2] ^ _b[2]);
+}
+
+static inline int
+_avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
+{
+	struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+		/* allow all packets destined to our address */
+		return 0;
+	}
+
+	if (likely(is_broadcast_ether_addr(&eth->d_addr))) {
+		/* allow all broadcast packets */
+		return 0;
+	}
+
+	if (likely(is_multicast_ether_addr(&eth->d_addr))) {
+		/* allow all multicast packets */
+		return 0;
+	}
+
+	if (avp->flags & AVP_F_PROMISC) {
+		/* allow all packets when in promiscuous mode */
+		return 0;
+	}
+
+	return -1;
+}
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+static inline void
+__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+{
+	struct rte_avp_desc *first_buf;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int pkt_len;
+	unsigned int nb_segs;
+	void *pkt_data;
+	unsigned int i;
+
+	first_buf = avp_dev_translate_buffer(avp, buf);
+
+	i = 0;
+	pkt_len = 0;
+	nb_segs = first_buf->nb_segs;
+	do {
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		if (pkt_buf == NULL)
+			rte_panic("bad buffer: segment %u has an invalid address %p\n",
+				  i, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		if (pkt_data == NULL)
+			rte_panic("bad buffer: segment %u has a NULL data pointer\n",
+				  i);
+		if (pkt_buf->data_len == 0)
+			rte_panic("bad buffer: segment %u has 0 data length\n",
+				  i);
+		pkt_len += pkt_buf->data_len;
+		nb_segs--;
+		i++;
+
+	} while (nb_segs && (buf = pkt_buf->next) != NULL);
+
+	if (nb_segs != 0)
+		rte_panic("bad buffer: expected %u segments found %u\n",
+			  first_buf->nb_segs, (first_buf->nb_segs - nb_segs));
+	if (pkt_len != first_buf->pkt_len)
+		rte_panic("bad buffer: expected length %u found %u\n",
+			  first_buf->pkt_len, pkt_len);
+}
+
+#define avp_dev_buffer_sanity_check(a, b) \
+	__avp_dev_buffer_sanity_check((a), (b))
+
+#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
+
+#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+
+#endif
+
+/*
+ * Copy a host buffer chain to a set of mbufs.  This function assumes that
+ * there are exactly the required number of mbufs to copy all source bytes.
+ */
+static inline struct rte_mbuf *
+avp_dev_copy_from_buffers(struct avp_dev *avp,
+			  struct rte_avp_desc *buf,
+			  struct rte_mbuf **mbufs,
+			  unsigned int count)
+{
+	struct rte_mbuf *m_previous = NULL;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int total_length = 0;
+	unsigned int copy_length;
+	unsigned int src_offset;
+	struct rte_mbuf *m;
+	uint16_t ol_flags;
+	uint16_t vlan_tci;
+	void *pkt_data;
+	unsigned int i;
+
+	avp_dev_buffer_sanity_check(avp, buf);
+
+	/* setup the first source buffer */
+	pkt_buf = avp_dev_translate_buffer(avp, buf);
+	pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+	total_length = pkt_buf->pkt_len;
+	src_offset = 0;
+
+	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+		ol_flags = PKT_RX_VLAN_PKT;
+		vlan_tci = pkt_buf->vlan_tci;
+	} else {
+		ol_flags = 0;
+		vlan_tci = 0;
+	}
+
+	for (i = 0; (i < count) && (buf != NULL); i++) {
+		/* fill each destination buffer */
+		m = mbufs[i];
+
+		if (m_previous != NULL)
+			m_previous->next = m;
+
+		m_previous = m;
+
+		do {
+			/*
+			 * Copy as many source buffers as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->guest_mbuf_size -
+					       rte_pktmbuf_data_len(m)),
+					      (pkt_buf->data_len -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       rte_pktmbuf_data_len(m)),
+				   RTE_PTR_ADD(pkt_data, src_offset),
+				   copy_length);
+			rte_pktmbuf_data_len(m) += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == pkt_buf->data_len)) {
+				/* need a new source buffer */
+				buf = pkt_buf->next;
+				if (buf != NULL) {
+					pkt_buf = avp_dev_translate_buffer(
+						avp, buf);
+					pkt_data = avp_dev_translate_buffer(
+						avp, pkt_buf->data);
+					src_offset = 0;
+				}
+			}
+
+			if (unlikely(rte_pktmbuf_data_len(m) ==
+				     avp->guest_mbuf_size)) {
+				/* need a new destination mbuf */
+				break;
+			}
+
+		} while (buf != NULL);
+	}
+
+	m = mbufs[0];
+	m->ol_flags = ol_flags;
+	m->nb_segs = count;
+	rte_pktmbuf_pkt_len(m) = total_length;
+	m->vlan_tci = vlan_tci;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	return m;
+}
+
+static uint16_t
+avp_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_RX_BURST];
+	struct rte_mbuf *mbufs[RTE_AVP_MAX_MBUF_SEGMENTS];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	struct rte_avp_desc *buf;
+	unsigned int count, avail, n;
+	unsigned int guest_mbuf_size;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int buf_len;
+	unsigned int port_id;
+	unsigned int i;
+
+	guest_mbuf_size = avp->guest_mbuf_size;
+	port_id = avp->port_id;
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i + 1 < n) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+		buf = avp_bufs[i];
+
+		/* Peek into the first buffer to determine the total length */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		buf_len = pkt_buf->pkt_len;
+
+		/* Allocate enough mbufs to receive the entire packet */
+		required = (buf_len + guest_mbuf_size - 1) / guest_mbuf_size;
+		if (rte_pktmbuf_alloc_bulk(avp->pool, mbufs, required)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* Copy the data from the buffers to our mbufs */
+		m = avp_dev_copy_from_buffers(avp, buf, mbufs, required);
+
+		/* finalize mbuf */
+		m->port = port_id;
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += buf_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
+
+static uint16_t
+avp_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_RX_BURST];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	unsigned int count, avail, n;
+	unsigned int pkt_len;
+	struct rte_mbuf *m;
+	char *pkt_data;
+	unsigned int i;
+
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i < n - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust host pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = pkt_buf->pkt_len;
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+			     (pkt_buf->nb_segs > 1))) {
+			/*
+			 * application should be using the scattered receive
+			 * function
+			 */
+			rxq->errors++;
+			continue;
+		}
+
+		/* allocate a new mbuf for the received packet */
+		m = rte_pktmbuf_alloc(avp->pool);
+		if (unlikely(m == NULL)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* copy data out of the host buffer to our buffer */
+		m->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_memcpy(rte_pktmbuf_mtod(m, void *), pkt_data, pkt_len);
+
+		/* initialize the local mbuf */
+		rte_pktmbuf_data_len(m) = pkt_len;
+		rte_pktmbuf_pkt_len(m) = pkt_len;
+		m->port = avp->port_id;
+
+		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+			m->ol_flags = PKT_RX_VLAN_PKT;
+			m->vlan_tci = pkt_buf->vlan_tci;
+		}
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += pkt_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 12/17] net/avp: packet transmit functions
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (10 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 11/17] net/avp: packet receive functions Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 13/17] net/avp: device statistics operations Allain Legacy
                         ` (6 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for packet transmit functions so that an application can send
packets to the host application via an AVP device queue.  Both the simple
and scattered functions are supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 335 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 335 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 65ae858..78018f5 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -92,12 +92,24 @@ static uint16_t avp_recv_scattered_pkts(void *rx_queue,
 static uint16_t avp_recv_pkts(void *rx_queue,
 			      struct rte_mbuf **rx_pkts,
 			      uint16_t nb_pkts);
+
+static uint16_t avp_xmit_scattered_pkts(void *tx_queue,
+					struct rte_mbuf **tx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_xmit_pkts(void *tx_queue,
+			      struct rte_mbuf **tx_pkts,
+			      uint16_t nb_pkts);
+
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
+
+
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
 #define AVP_MAX_RX_BURST 64
+#define AVP_MAX_TX_BURST 64
 #define AVP_MAX_MAC_ADDRS 1
 #define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -651,6 +663,7 @@ struct avp_queue {
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &avp_recv_pkts;
+	eth_dev->tx_pkt_burst = &avp_xmit_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -662,6 +675,7 @@ struct avp_queue {
 		if (eth_dev->data->scattered_rx) {
 			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 		return 0;
 	}
@@ -793,6 +807,7 @@ struct avp_queue {
 			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
 			eth_dev->data->scattered_rx = 1;
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 	}
 
@@ -1255,6 +1270,326 @@ struct avp_queue {
 	return count;
 }
 
+/*
+ * Copy a chained mbuf to a set of host buffers.  This function assumes that
+ * there are sufficient destination buffers to contain the entire source
+ * packet.
+ */
+static inline uint16_t
+avp_dev_copy_to_buffers(struct avp_dev *avp,
+			struct rte_mbuf *mbuf,
+			struct rte_avp_desc **buffers,
+			unsigned int count)
+{
+	struct rte_avp_desc *previous_buf = NULL;
+	struct rte_avp_desc *first_buf = NULL;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_desc *buf;
+	size_t total_length;
+	struct rte_mbuf *m;
+	size_t copy_length;
+	size_t src_offset;
+	char *pkt_data;
+	unsigned int i;
+
+	__rte_mbuf_sanity_check(mbuf, 1);
+
+	m = mbuf;
+	src_offset = 0;
+	total_length = rte_pktmbuf_pkt_len(m);
+	for (i = 0; (i < count) && (m != NULL); i++) {
+		/* fill each destination buffer */
+		buf = buffers[i];
+
+		if (i < count - 1) {
+			/* prefetch next entry while processing this one */
+			pkt_buf = avp_dev_translate_buffer(avp, buffers[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+
+		/* setup the buffer chain */
+		if (previous_buf != NULL)
+			previous_buf->next = buf;
+		else
+			first_buf = pkt_buf;
+
+		previous_buf = pkt_buf;
+
+		do {
+			/*
+			 * copy as many source mbuf segments as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->host_mbuf_size -
+					       pkt_buf->data_len),
+					      (rte_pktmbuf_data_len(m) -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(pkt_data, pkt_buf->data_len),
+				   RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       src_offset),
+				   copy_length);
+			pkt_buf->data_len += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == rte_pktmbuf_data_len(m))) {
+				/* need a new source buffer */
+				m = m->next;
+				src_offset = 0;
+			}
+
+			if (unlikely(pkt_buf->data_len ==
+				     avp->host_mbuf_size)) {
+				/* need a new destination buffer */
+				break;
+			}
+
+		} while (m != NULL);
+	}
+
+	first_buf->nb_segs = count;
+	first_buf->pkt_len = total_length;
+
+	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+		first_buf->vlan_tci = mbuf->vlan_tci;
+	}
+
+	avp_dev_buffer_sanity_check(avp, buffers[0]);
+
+	return total_length;
+}
+
+
+static uint16_t
+avp_xmit_scattered_pkts(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_avp_desc *avp_bufs[(AVP_MAX_TX_BURST *
+				       RTE_AVP_MAX_MBUF_SEGMENTS)];
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *tx_bufs[AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	unsigned int orig_nb_pkts;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int segments;
+	unsigned int tx_bytes;
+	unsigned int i;
+
+	orig_nb_pkts = nb_pkts;
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > AVP_MAX_TX_BURST))
+		nb_pkts = AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+	if (unlikely(avail > (AVP_MAX_TX_BURST *
+			      RTE_AVP_MAX_MBUF_SEGMENTS)))
+		avail = AVP_MAX_TX_BURST * RTE_AVP_MAX_MBUF_SEGMENTS;
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	nb_pkts = RTE_MIN(count, nb_pkts);
+
+	/* determine how many packets will fit in the available buffers */
+	count = 0;
+	segments = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		if (likely(i < (unsigned int)nb_pkts - 1)) {
+			/* prefetch next entry while processing this one */
+			rte_prefetch0(tx_pkts[i + 1]);
+		}
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		if (unlikely((required == 0) ||
+			     (required > RTE_AVP_MAX_MBUF_SEGMENTS)))
+			break;
+		else if (unlikely(required + segments > avail))
+			break;
+		segments += required;
+		count++;
+	}
+	nb_pkts = count;
+
+	if (unlikely(nb_pkts == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   nb_pkts, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, segments);
+	if (unlikely(n != segments)) {
+		PMD_TX_LOG(DEBUG, "Failed to allocate buffers "
+			   "n=%u, segments=%u, orig=%u\n",
+			   n, segments, orig_nb_pkts);
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	count = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* determine how many buffers are required for this packet */
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		tx_bytes += avp_dev_copy_to_buffers(avp, m,
+						    &avp_bufs[count], required);
+		tx_bufs[i] = avp_bufs[count];
+		count += required;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += nb_pkts;
+	txq->bytes += tx_bytes;
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+	for (i = 0; i < nb_pkts; i++)
+		avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+#endif
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&tx_bufs[0], nb_pkts);
+	if (unlikely(n != orig_nb_pkts))
+		txq->errors += (orig_nb_pkts - n);
+
+	return n;
+}
+
+
+static uint16_t
+avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	struct rte_mbuf *m;
+	unsigned int pkt_len;
+	unsigned int tx_bytes;
+	char *pkt_data;
+	unsigned int i;
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > AVP_MAX_TX_BURST))
+		nb_pkts = AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+
+	if (unlikely(count == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors += nb_pkts;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   count, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, count);
+	if (unlikely(n != count)) {
+		txq->errors++;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	for (i = 0; i < count; i++) {
+		/* prefetch next entry while processing the current one */
+		if (i < count - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = rte_pktmbuf_pkt_len(m);
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+			     (pkt_len > avp->host_mbuf_size))) {
+			/*
+			 * application should be using the scattered transmit
+			 * function; send it truncated to avoid the performance
+			 * hit of having to manage returning the already
+			 * allocated buffer to the free list.  This should not
+			 * happen since the application should have set the
+			 * max_rx_pkt_len based on its MTU and it should be
+			 * policing its own packet sizes.
+			 */
+			txq->errors++;
+			pkt_len = RTE_MIN(avp->guest_mbuf_size,
+					  avp->host_mbuf_size);
+		}
+
+		/* copy data out of our mbuf and into the AVP buffer */
+		rte_memcpy(pkt_data, rte_pktmbuf_mtod(m, void *), pkt_len);
+		pkt_buf->pkt_len = pkt_len;
+		pkt_buf->data_len = pkt_len;
+		pkt_buf->nb_segs = 1;
+		pkt_buf->next = NULL;
+
+		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+			pkt_buf->vlan_tci = m->vlan_tci;
+		}
+
+		tx_bytes += pkt_len;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += count;
+	txq->bytes += tx_bytes;
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&avp_bufs[0], count);
+
+	return n;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread
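
The scattered transmit path above sizes each packet with a ceiling
division of the packet length over the host buffer size.  A standalone
sketch of that computation (plain C, no DPDK headers; the helper name is
illustrative):

```c
#include <stdint.h>

/* Number of fixed-size host buffers needed to hold pkt_len bytes, i.e.
 * the ceiling of pkt_len / buf_size, as computed in
 * avp_xmit_scattered_pkts.  buf_size must be non-zero. */
static unsigned int
required_buffers(uint32_t pkt_len, uint32_t buf_size)
{
	return (pkt_len + buf_size - 1) / buf_size;
}
```

Note that a zero-length packet yields zero required buffers, which the
driver treats as an error and stops queuing further packets.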

* [PATCH v4 13/17] net/avp: device statistics operations
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (11 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 12/17] net/avp: packet transmit functions Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 14/17] net/avp: device promiscuous functions Allain Legacy
                         ` (5 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds device functions to query and reset statistics.
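
The stats_get handler folds per-queue counters into the device-wide
totals, skipping queue slots that were never configured.  A minimal
standalone sketch of the rx-side aggregation (plain C; the struct and
function names are illustrative, not the driver's):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the driver's per-queue counters and the
 * rte_eth_stats totals; not the real DPDK structures. */
struct queue_stats {
	uint64_t packets;
	uint64_t bytes;
	uint64_t errors;
};

struct dev_stats {
	uint64_t ipackets;
	uint64_t ibytes;
	uint64_t ierrors;
};

/* Fold each active rx queue into the device totals, skipping NULL
 * slots just as avp_dev_stats_get skips unconfigured queues. */
static void
aggregate_rx_stats(struct queue_stats **rxq, unsigned int nb_queues,
		   struct dev_stats *stats)
{
	unsigned int i;

	for (i = 0; i < nb_queues; i++) {
		if (rxq[i] == NULL)
			continue;
		stats->ipackets += rxq[i]->packets;
		stats->ibytes += rxq[i]->bytes;
		stats->ierrors += rxq[i]->errors;
	}
}
```

The tx-side loop is symmetric, writing to the opackets/obytes/oerrors
counters instead.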

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 67 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 78018f5..c5039e2 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -104,6 +104,10 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 
+static void avp_dev_stats_get(struct rte_eth_dev *dev,
+			      struct rte_eth_stats *stats);
+static void avp_dev_stats_reset(struct rte_eth_dev *dev);
+
 
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
@@ -151,6 +155,8 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 	.dev_configure       = avp_dev_configure,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
+	.stats_get           = avp_dev_stats_get,
+	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
@@ -1725,6 +1731,67 @@ struct avp_queue {
 	}
 }
 
+static void
+avp_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			stats->ipackets += rxq->packets;
+			stats->ibytes += rxq->bytes;
+			stats->ierrors += rxq->errors;
+
+			stats->q_ipackets[i] += rxq->packets;
+			stats->q_ibytes[i] += rxq->bytes;
+			stats->q_errors[i] += rxq->errors;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			stats->opackets += txq->packets;
+			stats->obytes += txq->bytes;
+			stats->oerrors += txq->errors;
+
+			stats->q_opackets[i] += txq->packets;
+			stats->q_obytes[i] += txq->bytes;
+			stats->q_errors[i] += txq->errors;
+		}
+	}
+}
+
+static void
+avp_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			rxq->bytes = 0;
+			rxq->packets = 0;
+			rxq->errors = 0;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			txq->bytes = 0;
+			txq->packets = 0;
+			txq->errors = 0;
+		}
+	}
+}
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1


* [PATCH v4 14/17] net/avp: device promiscuous functions
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (12 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 13/17] net/avp: device statistics operations Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 15/17] net/avp: device start and stop operations Allain Legacy
                         ` (4 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for setting and clearing promiscuous mode on an AVP device.
When enabled, the _mac_filter function will allow packets destined to any
MAC address to be processed by the receive functions.
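
The enable/disable handlers reduce to idempotent bit-flag updates that
only act (and log) on an actual state change.  A minimal sketch of that
pattern (plain C; the struct and function names are illustrative
stand-ins, not the driver's):

```c
#include <stdint.h>

#define AVP_F_PROMISC (1U << 1)

/* Stand-in for struct avp_dev; only the flags word matters here. */
struct avp_state {
	uint32_t flags;
};

/* Returns 1 when the flag actually transitioned, 0 when it was already
 * in the requested state, mirroring the "only log on change" checks in
 * the promiscuous enable/disable handlers. */
static int
promisc_enable(struct avp_state *avp)
{
	if ((avp->flags & AVP_F_PROMISC) != 0)
		return 0;
	avp->flags |= AVP_F_PROMISC;
	return 1;
}

static int
promisc_disable(struct avp_state *avp)
{
	if ((avp->flags & AVP_F_PROMISC) == 0)
		return 0;
	avp->flags &= ~AVP_F_PROMISC;
	return 1;
}
```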

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index c5039e2..8883742 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -72,6 +72,9 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
+static void avp_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avp_dev_promiscuous_disable(struct rte_eth_dev *dev);
+
 static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
 				  uint16_t rx_queue_id,
 				  uint16_t nb_rx_desc,
@@ -158,6 +161,8 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
 	.stats_get           = avp_dev_stats_get,
 	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
+	.promiscuous_enable  = avp_dev_promiscuous_enable,
+	.promiscuous_disable = avp_dev_promiscuous_disable,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
 	.tx_queue_setup      = avp_dev_tx_queue_setup,
@@ -1684,6 +1689,29 @@ struct avp_queue {
 	return -1;
 }
 
+static void
+avp_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if ((avp->flags & AVP_F_PROMISC) == 0) {
+		avp->flags |= AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+}
+
+static void
+avp_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if ((avp->flags & AVP_F_PROMISC) != 0) {
+		avp->flags &= ~AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+}
 
 static void
 avp_dev_info_get(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1


* [PATCH v4 15/17] net/avp: device start and stop operations
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (13 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 14/17] net/avp: device promiscuous functions Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 16/17] net/avp: migration interrupt handling Allain Legacy
                         ` (3 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for device start and stop functions.  This allows an
application to control the administrative state of an AVP device.  Stopping
the device will notify the host application to stop sending packets on that
device's receive queues.
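
The start/stop handlers issue control requests whose return value is
resolved as "transport error if the request could not be delivered,
otherwise the host's result code".  A standalone sketch of that idiom
(plain C; the names are illustrative, not the driver's):

```c
/* Stand-in for the driver's control request/response exchange: the
 * request either fails in transport (non-zero transport_ret) or
 * completes and carries the host's result code.  Mirrors the
 * "ret == 0 ? request.result : ret" pattern used by
 * avp_dev_ctrl_set_link_state and avp_dev_ctrl_shutdown. */
struct ctrl_request {
	int result; /* host-side result code */
};

static int
resolve_request_status(int transport_ret, const struct ctrl_request *req)
{
	return transport_ret == 0 ? req->result : transport_ret;
}
```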

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 102 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 8883742..d37e137 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -67,6 +67,9 @@ static int avp_dev_create(struct rte_pci_device *pci_dev,
 static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);
 
 static int avp_dev_configure(struct rte_eth_dev *dev);
+static int avp_dev_start(struct rte_eth_dev *dev);
+static void avp_dev_stop(struct rte_eth_dev *dev);
+static void avp_dev_close(struct rte_eth_dev *dev);
 static void avp_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
@@ -156,6 +159,9 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
  */
 static const struct eth_dev_ops avp_eth_dev_ops = {
 	.dev_configure       = avp_dev_configure,
+	.dev_start           = avp_dev_start,
+	.dev_stop            = avp_dev_stop,
+	.dev_close           = avp_dev_close,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.stats_get           = avp_dev_stats_get,
@@ -321,6 +327,23 @@ struct avp_queue {
 }
 
 static int
+avp_dev_ctrl_set_link_state(struct rte_eth_dev *eth_dev, unsigned int state)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a link state change request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_NETWORK_IF;
+	request.if_up = state;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
 avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
 			struct rte_avp_device_config *config)
 {
@@ -338,6 +361,22 @@ struct avp_queue {
 	return ret == 0 ? request.result : ret;
 }
 
+static int
+avp_dev_ctrl_shutdown(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a shutdown request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_SHUTDOWN_DEVICE;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
 /* translate from host mbuf virtual address to guest virtual address */
 static inline void *
 avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
@@ -1674,6 +1713,69 @@ struct avp_queue {
 	return ret;
 }
 
+static int
+avp_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	/* disable features that we do not support */
+	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_extend = 0;
+	eth_dev->data->dev_conf.rxmode.hw_strip_crc = 0;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 1);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags |= AVP_F_LINKUP;
+
+	ret = 0;
+
+unlock:
+	return ret;
+}
+
+static void
+avp_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	avp->flags &= ~AVP_F_LINKUP;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 0);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
+			    ret);
+	}
+}
+
+static void
+avp_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	/* remember current link state */
+	avp->flags &= ~AVP_F_LINKUP;
+	avp->flags &= ~AVP_F_CONFIGURED;
+
+	/* update device state */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Device shutdown failed by host, ret=%d\n",
+			    ret);
+		/* continue */
+	}
+}
 
 static int
 avp_dev_link_update(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1


* [PATCH v4 16/17] net/avp: migration interrupt handling
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (14 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 15/17] net/avp: device start and stop operations Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-13 19:16       ` [PATCH v4 17/17] doc: adds information related to the AVP PMD Allain Legacy
                         ` (2 subsequent siblings)
  18 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

This commit introduces the changes required to support VM live-migration.
This is done by registering for, and responding to, interrupts from the
host which signal that the device memory is about to be invalidated and
replaced with a new memory zone on the destination compute node.

Enabling and disabling of the interrupts is handled outside of the
start/stop functions because the interrupts must remain enabled for the
lifetime of the device.  This ensures that host interrupts are serviced
and acknowledged even in cases where the application has stopped the
device.
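
The handler's dispatch tests the migration bit in the (clear-on-read)
status word and then branches on the reported migration state.  A
standalone sketch of that classification (plain C; the constant values
below are illustrative placeholders, not the real register values from
rte_avp_common.h):

```c
#include <stdint.h>

/* Illustrative placeholder values for the real MMIO constants. */
#define MIGRATION_INTERRUPT_MASK (1U << 1)
#define MIGRATION_DETACHED 2U
#define MIGRATION_ATTACHED 3U

/* Classify a raised interrupt: returns -1 when the migration bit is
 * not set in the status word, 0 for the detach path, 1 for the attach
 * path, and 2 for an unexpected state (acknowledged as an error). */
static int
classify_interrupt(uint32_t status, uint32_t migration_state)
{
	if ((status & MIGRATION_INTERRUPT_MASK) == 0)
		return -1;
	switch (migration_state) {
	case MIGRATION_DETACHED:
		return 0;
	case MIGRATION_ATTACHED:
		return 1;
	default:
		return 2;
	}
}
```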

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 370 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 370 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index d37e137..1c598cb 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -48,6 +48,7 @@
 #include <rte_ether.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
+#include <rte_spinlock.h>
 #include <rte_byteorder.h>
 #include <rte_dev.h>
 #include <rte_memory.h>
@@ -179,6 +180,7 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
 #define AVP_F_PROMISC (1 << 1)
 #define AVP_F_CONFIGURED (1 << 2)
 #define AVP_F_LINKUP (1 << 3)
+#define AVP_F_DETACHED (1 << 4)
 /**@} */
 
 /* Ethernet device validation marker */
@@ -214,6 +216,9 @@ struct avp_dev {
 	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
 	/**< To be freed mbufs queue */
 
+	/* mutual exclusion over the 'flags' and 'resp_q/req_q' fields */
+	rte_spinlock_t lock;
+
 	/* For request & response */
 	struct rte_avp_fifo *req_q; /**< Request queue */
 	struct rte_avp_fifo *resp_q; /**< Response queue */
@@ -501,6 +506,46 @@ struct avp_queue {
 	return 0;
 }
 
+static int
+avp_dev_detach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Detaching port %u from AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(NOTICE, "port %u already detached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* shutdown the device first so the host stops sending us packets. */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to send/recv shutdown to host, ret=%d\n",
+			    ret);
+		avp->flags &= ~AVP_F_DETACHED;
+		goto unlock;
+	}
+
+	avp->flags |= AVP_F_DETACHED;
+	rte_wmb();
+
+	/* wait for queues to acknowledge the presence of the detach flag */
+	rte_delay_ms(1);
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
 static void
 _avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 {
@@ -569,6 +614,240 @@ struct avp_queue {
 		    avp->num_tx_queues, avp->num_rx_queues);
 }
 
+static int
+avp_dev_attach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_config config;
+	unsigned int i;
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Attaching port %u to AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (!(avp->flags & AVP_F_DETACHED)) {
+		PMD_DRV_LOG(NOTICE, "port %u already attached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/*
+	 * make sure that the detached flag is set prior to reconfiguring the
+	 * queues.
+	 */
+	avp->flags |= AVP_F_DETACHED;
+	rte_wmb();
+
+	/*
+	 * re-run the device create utility which will parse the new host info
+	 * and setup the AVP device queue pointers.
+	 */
+	ret = avp_dev_create(AVP_DEV_TO_PCI(eth_dev), eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to re-create AVP device, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	if (avp->flags & AVP_F_CONFIGURED) {
+		/*
+		 * Update the receive queue mapping to handle cases where the
+		 * source and destination hosts have different queue
+		 * requirements.  As long as the DETACHED flag is asserted the
+		 * queue table should not be referenced so it should be safe to
+		 * update it.
+		 */
+		_avp_set_queue_counts(eth_dev);
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+			_avp_set_rx_queue_mappings(eth_dev, i);
+
+		/*
+		 * Update the host with our config details so that it knows the
+		 * device is active.
+		 */
+		memset(&config, 0, sizeof(config));
+		config.device_id = avp->device_id;
+		config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+		config.driver_version = AVP_DPDK_DRIVER_VERSION;
+		config.features = avp->features;
+		config.num_tx_queues = avp->num_tx_queues;
+		config.num_rx_queues = avp->num_rx_queues;
+		config.if_up = !!(avp->flags & AVP_F_LINKUP);
+
+		ret = avp_dev_ctrl_set_config(eth_dev, &config);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Config request failed by host, ret=%d\n",
+				    ret);
+			goto unlock;
+		}
+	}
+
+	rte_wmb();
+	avp->flags &= ~AVP_F_DETACHED;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_interrupt_handler(struct rte_intr_handle *intr_handle,
+			  void *data)
+{
+	struct rte_eth_dev *eth_dev = data;
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t status, value;
+	int ret;
+
+	if (registers == NULL)
+		rte_panic("no mapped MMIO register space\n");
+
+	/* read the interrupt status register
+	 * note: this register clears on read so all raised interrupts must be
+	 *    handled or remembered for later processing
+	 */
+	status = AVP_READ32(
+		RTE_PTR_ADD(registers,
+			    RTE_AVP_INTERRUPT_STATUS_OFFSET));
+
+	if (status & RTE_AVP_MIGRATION_INTERRUPT_MASK) {
+		/* handle interrupt based on current status */
+		value = AVP_READ32(
+			RTE_PTR_ADD(registers,
+				    RTE_AVP_MIGRATION_STATUS_OFFSET));
+		switch (value) {
+		case RTE_AVP_MIGRATION_DETACHED:
+			ret = avp_dev_detach(eth_dev);
+			break;
+		case RTE_AVP_MIGRATION_ATTACHED:
+			ret = avp_dev_attach(eth_dev);
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "unexpected migration status, status=%u\n",
+				    value);
+			ret = -EINVAL;
+		}
+
+		/* acknowledge the request by writing out our current status */
+		value = (ret == 0 ? value : RTE_AVP_MIGRATION_ERROR);
+		AVP_WRITE32(value,
+			    RTE_PTR_ADD(registers,
+					RTE_AVP_MIGRATION_ACK_OFFSET));
+
+		PMD_DRV_LOG(NOTICE, "AVP migration interrupt handled\n");
+	}
+
+	if (status & ~RTE_AVP_MIGRATION_INTERRUPT_MASK)
+		PMD_DRV_LOG(WARNING, "AVP unexpected interrupt, status=0x%08x\n",
+			    status);
+
+	/* re-enable UIO interrupt handling */
+	ret = rte_intr_enable(intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
+			    ret);
+		/* continue */
+	}
+}
+
+static int
+avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return -EINVAL;
+
+	/* enable UIO interrupt handling */
+	ret = rte_intr_enable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* inform the device that all interrupts are enabled */
+	AVP_WRITE32(RTE_AVP_APP_INTERRUPTS_MASK,
+		    RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	return 0;
+}
+
+static int
+avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return 0;
+
+	/* inform the device that all interrupts are disabled */
+	AVP_WRITE32(RTE_AVP_NO_INTERRUPTS_MASK,
+		    RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	/* disable UIO interrupt handling */
+	ret = rte_intr_disable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	int ret;
+
+	/* register a callback handler with UIO for interrupt notifications */
+	ret = rte_intr_callback_register(&pci_dev->intr_handle,
+					 avp_dev_interrupt_handler,
+					 (void *)eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to register UIO interrupt callback, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* enable interrupt processing */
+	return avp_dev_enable_interrupts(eth_dev);
+}
+
+static int
+avp_dev_migration_pending(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t value;
+
+	if (registers == NULL)
+		return 0;
+
+	value = AVP_READ32(RTE_PTR_ADD(registers,
+				       RTE_AVP_MIGRATION_STATUS_OFFSET));
+	if (value == RTE_AVP_MIGRATION_DETACHED) {
+		/* migration is in progress; ack it if we have not already */
+		AVP_WRITE32(value,
+			    RTE_PTR_ADD(registers,
+					RTE_AVP_MIGRATION_ACK_OFFSET));
+		return 1;
+	}
+	return 0;
+}
+
 /*
 * create an AVP device using the supplied device info by first translating it
  * to guest address space(s).
@@ -621,6 +900,7 @@ struct avp_queue {
 		avp->port_id = eth_dev->data->port_id;
 		avp->host_mbuf_size = host_info->mbuf_size;
 		avp->host_features = host_info->features;
+		rte_spinlock_init(&avp->lock);
 		memcpy(&avp->ethaddr.addr_bytes[0],
 		       host_info->ethaddr, ETHER_ADDR_LEN);
 		/* adjust max values to not exceed our max */
@@ -734,6 +1014,12 @@ struct avp_queue {
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check current migration status */
+	if (avp_dev_migration_pending(eth_dev)) {
+		PMD_DRV_LOG(ERR, "VM live migration operation in progress\n");
+		return -EBUSY;
+	}
+
 	/* Check BAR resources */
 	ret = avp_dev_check_regions(eth_dev);
 	if (ret < 0) {
@@ -742,6 +1028,13 @@ struct avp_queue {
 		return ret;
 	}
 
+	/* Enable interrupts */
+	ret = avp_dev_setup_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
 	/* Handle each subtype */
 	ret = avp_dev_create(pci_dev, eth_dev);
 	if (ret < 0) {
@@ -766,12 +1059,20 @@ struct avp_queue {
 static int
 eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
 {
+	int ret;
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
 
 	if (eth_dev->data == NULL)
 		return 0;
 
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
 	if (eth_dev->data->mac_addrs != NULL) {
 		rte_free(eth_dev->data->mac_addrs);
 		eth_dev->data->mac_addrs = NULL;
@@ -1134,6 +1435,11 @@ struct avp_queue {
 	unsigned int port_id;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
 	guest_mbuf_size = avp->guest_mbuf_size;
 	port_id = avp->port_id;
 	rx_q = avp->rx_q[rxq->queue_id];
@@ -1228,6 +1534,11 @@ struct avp_queue {
 	char *pkt_data;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
 	rx_q = avp->rx_q[rxq->queue_id];
 	free_q = avp->free_q[rxq->queue_id];
 
@@ -1435,6 +1746,13 @@ struct avp_queue {
 	unsigned int i;
 
 	orig_nb_pkts = nb_pkts;
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop? */
+		txq->errors += nb_pkts;
+		return 0;
+	}
+
 	tx_q = avp->tx_q[txq->queue_id];
 	alloc_q = avp->alloc_q[txq->queue_id];
 
@@ -1547,6 +1865,13 @@ struct avp_queue {
 	char *pkt_data;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop?! */
+		txq->errors++;
+		return 0;
+	}
+
 	tx_q = avp->tx_q[txq->queue_id];
 	alloc_q = avp->alloc_q[txq->queue_id];
 
@@ -1679,6 +2004,13 @@ struct avp_queue {
 	void *addr;
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
 	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
 	host_info = (struct rte_avp_device_info *)addr;
 
@@ -1710,6 +2042,7 @@ struct avp_queue {
 	ret = 0;
 
 unlock:
+	rte_spinlock_unlock(&avp->lock);
 	return ret;
 }
 
@@ -1719,6 +2052,13 @@ struct avp_queue {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
 	/* disable features that we do not support */
 	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
 	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
@@ -1739,6 +2079,7 @@ struct avp_queue {
 	ret = 0;
 
 unlock:
+	rte_spinlock_unlock(&avp->lock);
 	return ret;
 }
 
@@ -1748,6 +2089,13 @@ struct avp_queue {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
 	avp->flags &= ~AVP_F_LINKUP;
 
 	/* update link state */
@@ -1756,6 +2104,9 @@ struct avp_queue {
 		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
 			    ret);
 	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
@@ -1764,10 +2115,22 @@ struct avp_queue {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		goto unlock;
+	}
+
 	/* remember current link state */
 	avp->flags &= ~AVP_F_LINKUP;
 	avp->flags &= ~AVP_F_CONFIGURED;
 
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts\n");
+		/* continue */
+	}
+
 	/* update device state */
 	ret = avp_dev_ctrl_shutdown(eth_dev);
 	if (ret < 0) {
@@ -1775,6 +2138,9 @@ struct avp_queue {
 			    ret);
 		/* continue */
 	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static int
@@ -1796,11 +2162,13 @@ struct avp_queue {
 {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 
+	rte_spinlock_lock(&avp->lock);
 	if ((avp->flags & AVP_F_PROMISC) == 0) {
 		avp->flags |= AVP_F_PROMISC;
 		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
 			    eth_dev->data->port_id);
 	}
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
@@ -1808,11 +2176,13 @@ struct avp_queue {
 {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 
+	rte_spinlock_lock(&avp->lock);
 	if ((avp->flags & AVP_F_PROMISC) != 0) {
 		avp->flags &= ~AVP_F_PROMISC;
 		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
 			    eth_dev->data->port_id);
 	}
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v4 17/17] doc: adds information related to the AVP PMD
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (15 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 16/17] net/avp: migration interrupt handling Allain Legacy
@ 2017-03-13 19:16       ` Allain Legacy
  2017-03-16 14:53         ` Ferruh Yigit
  2017-03-14 17:37       ` [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? Vincent JARDIN
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
  18 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-13 19:16 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Updates the documentation and feature lists for the AVP PMD device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 MAINTAINERS                            |   1 +
 doc/guides/nics/avp.rst                | 112 +++++++++++++++++++++++++++++++++
 doc/guides/nics/features/avp.ini       |  17 +++++
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_17_05.rst |   5 ++
 5 files changed, 136 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index fef23a0..4a14945 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -427,6 +427,7 @@ Wind River AVP PMD
 M: Allain Legacy <allain.legacy@windriver.com>
 M: Matt Peters <matt.peters@windriver.com>
 F: drivers/net/avp
+F: doc/guides/nics/avp.rst
 
 
 Crypto Drivers
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
new file mode 100644
index 0000000..af6d04d
--- /dev/null
+++ b/doc/guides/nics/avp.rst
@@ -0,0 +1,112 @@
+..  BSD LICENSE
+    Copyright(c) 2017 Wind River Systems, Inc.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AVP Poll Mode Driver
+====================
+
+The Accelerated Virtual Port (AVP) device is a shared memory based device
+available on the `virtualization platforms <http://www.windriver.com/products/titanium-cloud/>`_
+from Wind River Systems.  It is based on an earlier implementation of the DPDK
+KNI device and made available to VM instances via a mechanism based on an early
+implementation of qemu-kvm ivshmem.
+
+It enables optimized packet throughput without requiring any packet processing
+in qemu. This provides our customers with a significant performance increase
+for DPDK applications in the VM.  Since our AVP implementation supports VM
+live-migration it is viewed as a better alternative to PCI passthrough or PCI
+SRIOV since neither of those support VM live-migration without manual
+intervention or significant performance penalties.
+
+Since the initial implementation of AVP devices, vhost-user has become
+part of the qemu offering with a significant performance increase over
+the original virtio implementation.  However, vhost-user still does
+not achieve the level of performance that the AVP device can provide
+to our customers for DPDK based VM instances.
+
+The driver binds to PCI devices that are exported by the hypervisor DPDK
+application via the ivshmem-like mechanism.  The definition of the device
+structure and configuration options are defined in rte_avp_common.h and
+rte_avp_fifo.h.  These two header files are made available as part of the PMD
+implementation in order to share the device definitions between the guest
+implementation (i.e., the PMD) and the host implementation (i.e., the
+hypervisor DPDK vswitch application).
+
+
+Features and Limitations of the AVP PMD
+---------------------------------------
+
+The AVP PMD driver provides the following functionality.
+
+*   Receive and transmit of both simple and chained mbuf packets
+
+*   Chained mbufs may include up to 5 chained segments
+
+*   Up to 8 receive and transmit queues per device
+
+*   Only a single MAC address is supported
+
+*   The MAC address cannot be modified
+
+*   The maximum receive packet length is 9238 bytes
+
+*   VLAN header stripping and inserting
+
+*   Promiscuous mode
+
+*   VM live-migration
+
+*   PCI hotplug insertion and removal
+
+
+Prerequisites
+-------------
+
+The following prerequisites apply:
+
+*   A virtual machine running in a Wind River Systems virtualization
+    environment and configured with at least one neutron port defined with a
+    vif-model set to "avp".
+
+
+Launching a VM with an AVP type network attachment
+--------------------------------------------------
+
+The following example will launch a VM with three network attachments.  The
+first attachment will have a default vif-model of "virtio".  The next two
+network attachments will have a vif-model of "avp" and may be used with a DPDK
+application which is built to include the AVP PMD driver.
+
+.. code-block:: console
+
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1
diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
new file mode 100644
index 0000000..64bf42e
--- /dev/null
+++ b/doc/guides/nics/features/avp.ini
@@ -0,0 +1,17 @@
+;
+; Supported features of the 'AVP' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Link status          = Y
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+Promiscuous mode     = Y
+Unicast MAC filter   = Y
+VLAN offload         = Y
+Basic stats          = Y
+Stats per queue      = Y
+Linux UIO            = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 87f9334..0ddcea5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -36,6 +36,7 @@ Network Interface Controller Drivers
     :numbered:
 
     overview
+    avp
     bnx2x
     bnxt
     cxgbe
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 4b90036..8d0e66b 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -41,6 +41,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added support for the Wind River Systems AVP PMD.**
+
+  Added a new networking driver for the AVP device type. These devices are
+  specific to the Wind River Systems virtualization platforms.
+
 
 * **Added powerpc support in pci probing for vfio-pci devices.**
 
-- 
1.8.3.1


* Re: [PATCH v3 16/16] doc: adds information related to the AVP PMD
  2017-03-03 16:21       ` Vincent JARDIN
@ 2017-03-13 19:17         ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-13 19:17 UTC (permalink / raw)
  To: Vincent JARDIN, YIGIT, FERRUH
  Cc: Jolliffe, Ian, jerin.jacob, stephen, thomas.monjalon, dev, WILES, ROGER

Vincent,
Perhaps you can help me understand why the performance or functionality of AVP vs. Virtio is relevant to the decision of accepting this driver.   There are many drivers in the DPDK; most of which provide the same functionality at comparable performance rates.  AVP is just another such driver.   The fact that it is virtual rather than physical, in my opinion, should not influence the decision of accepting this driver.   On the other hand, code quality/complexity or lack of a maintainer are reasonable reasons for rejecting.    If our driver is accepted we are committed to maintaining it and testing changes as required by any driver framework changes which may impact all drivers.

Along the same lines, I do not understand why upstreaming AVP in to the Linux kernel or qemu/kvm should be a prerequisite for inclusion in the DPDK.   Continuing my analogy from above, the AVP device is a commercial offering tied to the Wind River Systems Titanium product line.   It enables virtualized DPDK applications and increases DPDK adoption.   Similarly to how a driver from company XYX is tied to a commercial NIC that must be purchased by a customer, our AVP device is available to operators that choose to leverage our Titanium product to implement their Cloud solutions.    It is not our intention to upstream the qemu/kvm or host vswitch portion of the AVP device.   Our qemu/kvm extensions are GPL so they are available to our customers if they desire to rebuild qemu/kvm with their own proprietary extensions

Our AVP device was implemented in 2013 in response to the challenge of lower than required performance of qemu/virtio in both user space and DPDK applications in the VM.   Rather than making complex changes to qemu/virtio and continuously having to forward-port those as we upgraded to newer versions of qemu, we decided to decouple ourselves from that code base.   We developed the AVP device based on an evolution of KNI+ivshmem by enhancing both with features that would meet the needs of our customers; better performance, multi-queue support, live-migration support, hot-plug support.    As I said in my earlier response, since 2013, qemu/virtio has seen improved performance with the introduction of vhost-user.   However, vhost-user has still not achieved performance levels equal to our AVP PMD.   

I acknowledge that the AVP driver could exist as an out-of-tree driver loaded as a shared library at runtime.  In fact, 2 years ago we released our driver source on github for this very reason.  We provide instructions and support for building the AVP PMD as a shared library.   Some customers have adopted this method while many insist on an in-tree driver for several reasons.   

Most importantly, they want to eliminate the burden of needing to build and support an additional package into their product.   An in-tree driver would eliminate the need for a separate build/packaging process.   Also, they want an option that allows them to be able to develop directly on the bleeding edge of DPDK rather than waiting on us to update our out-of-tree driver based on stable releases of the DPDK.   In this regard, an in-tree driver would allow our customers to work directly on the latest DPDK. 

An in-tree driver provides obvious benefits to our customers, but keep in mind that this also provides a benefit to the DPDK.  If a customer must develop on a stable release because they must use an out-of-tree driver then they are less likely to contribute fixes/enhancements/testing upstream.  I know this first hand because I work with software from different sources on a daily basis and it is a significant burden to have to reproduce/test fixes on master when you build/ship on an older stable release.   Accepting this driver would increase the potential pool of developers available for contributions and reviews.

Again, we are committed to contributing to the DPDK community by supporting our driver and upstreaming other fixes/enhancements we develop along the way.   We feel that if the DPDK is limited to only a single virtual driver of any type then choice and innovation is also limited.   In the end if more variety and innovation increases DPDK adoption then this is a win for DPDK and everyone that is involved in the project.

Regards,
Allain

Allain Legacy, Software Developer
direct 613.270.2279  fax 613.492.7870 skype allain.legacy
 



> -----Original Message-----
> From: Vincent JARDIN [mailto:vincent.jardin@6wind.com]
> Sent: Friday, March 03, 2017 11:22 AM
> To: Legacy, Allain; YIGIT, FERRUH
> Cc: Jolliffe, Ian; jerin.jacob@caviumnetworks.com;
> stephen@networkplumber.org; thomas.monjalon@6wind.com;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 16/16] doc: adds information related to
> the AVP PMD
> 
> Le 02/03/2017 à 01:20, Allain Legacy a écrit :
> > +Since the initial implementation of AVP devices, vhost-user has become
> > +part of the qemu offering with a significant performance increase over
> > +the original virtio implementation.  However, vhost-user still does
> > +not achieve the level of performance that the AVP device can provide
> > +to our customers for DPDK based VM instances.
> 
> Allain,
> 
> please, can you be more explicit: why is virtio not fast enough?
> 
> Moreover, why should we get another PMD for Qemu/kvm which is not
> virtio? There is not argument into your doc about it.
> NEC, before vhost-user, made a memnic proposal too because
> virtio/vhost-user was not available.
> Now, we all agree that vhost-user is the right way to support VMs, it
> avoids duplication of maintenances.
> 
> Please add some arguments that explains why virtio should not be used,
> so others like memnic or avp should be.
> 
> Regarding,
> +    nova boot --flavor small --image my-image \
> +       --nic net-id=${NETWORK1_UUID} \
> +       --nic net-id=${NETWORK2_UUID},vif-model=avp \
> +       --nic net-id=${NETWORK3_UUID},vif-model=avp \
> +       --security-group default my-instance1
> 
> I do not see how to get it working with vanilla nova. Please, I think
> you should rather show with qemu or virsh.
> 
> Then, there is not such AVP netdevice into Linux kernel upstream. Before
> adding any AVP support, it should be added into legacy upstream so we
> can be sure that the APIs will be solid and won't need to be updated
> because of some kernel constraints.
> 
> Thank you,
>    Vincent


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (16 preceding siblings ...)
  2017-03-13 19:16       ` [PATCH v4 17/17] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-03-14 17:37       ` Vincent JARDIN
  2017-03-15  4:10         ` O'Driscoll, Tim
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
  18 siblings, 1 reply; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-14 17:37 UTC (permalink / raw)
  To: Allain Legacy
  Cc: ferruh.yigit, dev, ian.jolliffe, bruce.richardson, john.mcnamara,
	keith.wiles, thomas.monjalon, jerin.jacob, stephen, 3chas3

Allain,

see inline,
+ I did restore the thread from 
http://dpdk.org/ml/archives/dev/2017-March/060087.html into the same email.

To make it short, using ivshmem, you keep people unfocused from virtio.

> Vincent,
> Perhaps you can help me understand why the performance or functionality of
> AVP vs. Virtio is relevant to the decision of accepting this driver. There
> are many drivers in the DPDK; most of which provide the same functionality
> at comparable performance rates. AVP is just another such driver. The fact
> that it is virtual rather than physical, in my opinion, should not influence
> the decision of accepting this driver.

No, it is a different logic: your driver is about Qemu's vNIC support.
Other PMDs are here to support vendors and different hypervisors. For 
instance, in the case of vmxnet3, different versions, evolutions and 
optimisations can be managed inside vmxnet3. We need to avoid the 
proliferation of PMDs for the same hypervisor while there is already an 
efficient solution, virtio.

> On the other hand, code
> quality/complexity or lack of a maintainer are reasonable reasons for
> rejecting. If our driver is accepted we are committed to maintaining it and
> testing changes as required by any driver framework changes which may impact
> all drivers.

But, then it is unfocusing your capabilities. As a community, we need to 
be sure that we are all focused on improving existing solutions. Since 
virtio is the one, I would rather prefer to see more people working on 
improving the virtio's community instead of getting everyone unfocused.

>
> Along the same lines, I do not understand why upstreaming AVP in to the Linux
> kernel or qemu/kvm should be a prerequisite for inclusion in the DPDK.
> Continuing my analogy from above, the AVP device is a commercial offering
> tied to the Wind River Systems Titanium product line. It enables virtualized
> DPDK applications and increases DPDK adoption. Similarly to how a driver from
> company XYX is tied to a commercial NIC that must be purchased by a customer,
> our AVP device is available to operators that choose to leverage our Titanium
> product to implement their Cloud solutions. It is not our intention to
> upstream the qemu/kvm or host vswitch portion of the AVP device. Our qemu/kvm
> extensions are GPL so they are available to our customers if they desire to
> rebuild qemu/kvm with their own proprietary extensions

It was a solution before 2013, but now we are in 2017, vhost-user is 
here, show them the new proper path. Let's be focus, please.

> Our AVP device was implemented in 2013 in response to the challenge of lower
> than required performance of qemu/virtio in both user space and DPDK
> applications in the VM. Rather than making complex changes to qemu/virtio
> and continuously have to forward prop those as we upgraded to newer versions
> of qemu we decided to decouple ourselves from that code base. We developed the
> AVP device based on an evolution of KNI+ivshmem by enhancing both with
> features that would meet the needs of our customers; better performance,
> multi-queue support, live-migration support, hot-plug support. As I said in
> my earlier response, since 2013, qemu/virtio has seen improved performance
> with the introduction of vhost-user. The performance of vhost-user still has
> not yet achieved performance levels equal to our AVP PMD.

Frankly, I was in the same boat as you: I did strongly pitch the use of 
ivshmem for vNICs for its many benefits. But Redhat folks did push back to 
ask everyone to be focused on virtio. So, back in 2013, I would have 
supported your AVP approach, but now we are in 2017 and we must stay 
focused on a single and proper qemu solution => virtio/vhost.

>
> I acknowledge that the AVP driver could exist as an out-of-tree driver loaded
> as a shared library at runtime. In fact, 2 years ago we released our driver
> source on github for this very reason.  We provide instructions and support
> for building the AVP PMD as a shared library. Some customers have adopted
> this method while many insist on an in-tree driver for several reasons.
>
> Most importantly, they want to eliminate the burden of needing to build and
> support an additional package into their product. An in-tree driver would
> eliminate the need for a separate build/packaging process. Also, they want
> an option that allows them to be able to develop directly on the bleeding
> edge of DPDK rather than waiting on us to update our out-of-tree driver
> based on stable releases of the DPDK. In this regard, an in-tree driver
> would allow our customers to work directly on the latest DPDK.

Make them move to virtio, then this pain will disappear.

> An in-tree driver provides obvious benefits to our customers, but keep in
> mind that this also provides a benefit to the DPDK. If a customer must
> develop on a stable release because they must use an out-of-tree driver then
> they are less likely to contribute fixes/enhancements/testing upstream. I
> know this first hand because I work with software from different sources on
> a daily basis and it is a significant burden to have to reproduce/test fixes
> on master when you build/ship on an older stable release. Accepting this
> driver would increase the potential pool of developers available for
> contributions and reviews.

So, again, you are making a call for unfocusing from virtio.

> Again, we are committed to contributing to the DPDK community by supporting
> our driver and upstreaming other fixes/enhancements we develop along the
> way. We feel that if the DPDK is limited to only a single virtual driver of
> any type then choice and innovation is also limited. In the end if more
> variety and innovation increases DPDK adoption then this is a win for DPDK
> and everyone that is involved in the project.

Frankly, show/demonstrate that AVP can be faster and better in 
performance than virtio, then I would support you strongly, since, back 
in the 2013 days, I did strongly believe in innovation and agility with 
ivshmem. But right now, you have not provided any relevant data; you have 
just made the point that you are getting the community unfocused from 
virtio 4 years later. We are not in 2013 anymore.

> This patch series submits an initial version of the AVP PMD from Wind River
> Systems.  The series includes shared header files, driver implementation,
> and changes to documentation files in support of this new driver.  The AVP
> driver is a shared memory based device.  It is intended to be used as a PMD
> within a virtual machine running on a Wind River virtualization platform.
> See: http://www.windriver.com/products/titanium-cloud/
>
> It enables optimized packet throughput without requiring any packet
> processing in qemu. This allowed us to provide our customers with a
> significant performance increase for both DPDK and non-DPDK applications
> in the VM.   Since our AVP implementation supports VM live-migration it
> is viewed as a better alternative to PCI passthrough or PCI SRIOV since
> neither of those support VM live-migration without manual intervention
> or significant performance penalties.
>
> Since the initial implementation of AVP devices, vhost-user has become part
> of the qemu offering with a significant performance increase over the
> original virtio implementation.  However, vhost-user still does not achieve
> the level of performance that the AVP device can provide to our customers
> for DPDK based guests.
>
> A number of our customers have requested that we upstream the driver to
> dpdk.org.
>
> v2:
> * Fixed coding style violations that slipped in accidentally because of an
>   out of date checkpatch.pl from an older kernel.
>
> v3:
> * Updated 17.05 release notes to add a section for this new PMD
> * Added additional info to the AVP nic guide document to clarify the
>   benefit of using AVP over virtio.
> * Fixed spelling error in debug log missed by local checkpatch.pl version
> * Split the transmit patch to separate the stats functions as they
>   accidentally got squashed in the last patchset.
> * Fixed debug log strings so that they exceed 80 characters rather than
>   span multiple lines.
> * Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_* instead
> * Replaced usage of RTE_WRITE32 and RTE_READ32 with rte_write32_relaxed
>   and rte_read32_relaxed.
> * Declared rte_pci_id table as const
>
> v4:
> * Split our interrupt handlers to a separate patch and moved to the end
>   of the series.
> * Removed memset() from stats_get API
> * Removed usage of RTE_AVP_ALIGNMENT
> * Removed unnecessary parentheses in rte_avp_common.h
> * Removed unneeded "goto unlock" where there are no statements in between
>   the goto and the end of the function.
> * Re-tested with pktgen and found that rte_eth_tx_burst() is being called
>   with 0 packets even before starting traffic which resulted in
>   incrementing oerrors; fixed in transmit patch.
>
> Allain Legacy (17):
>   config: added attributes for the AVP PMD
>   net/avp: added public header files
>   maintainers: claim responsibility for AVP PMD
>   net/avp: added PMD version map file
>   net/avp: added log macros
>   drivers/net: added driver makefiles
>   net/avp: driver registration
>   net/avp: device initialization
>   net/avp: device configuration
>   net/avp: queue setup and release
>   net/avp: packet receive functions
>   net/avp: packet transmit functions
>   net/avp: device statistics operations
>   net/avp: device promiscuous functions
>   net/avp: device start and stop operations
>   net/avp: migration interrupt handling
>   doc: added information related to the AVP PMD
>
>  MAINTAINERS                             |    6 +
>  config/common_base                      |   10 +
>  config/common_linuxapp                  |    1 +
>  doc/guides/nics/avp.rst                 |   99 ++
>  doc/guides/nics/features/avp.ini        |   17 +
>  doc/guides/nics/index.rst               |    1 +
>  drivers/net/Makefile                    |    1 +
>  drivers/net/avp/Makefile                |   61 +
>  drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
>  drivers/net/avp/avp_logs.h              |   59 +
>  drivers/net/avp/rte_avp_common.h        |  427 ++++++
>  drivers/net/avp/rte_avp_fifo.h          |  157 ++
>  drivers/net/avp/rte_pmd_avp_version.map |    4 +
>  mk/rte.app.mk                           |    1 +
>  14 files changed, 3215 insertions(+)
>  create mode 100644 doc/guides/nics/avp.rst
>  create mode 100644 doc/guides/nics/features/avp.ini
>  create mode 100644 drivers/net/avp/Makefile
>  create mode 100644 drivers/net/avp/avp_ethdev.c
>  create mode 100644 drivers/net/avp/avp_logs.h
>  create mode 100644 drivers/net/avp/rte_avp_common.h
>  create mode 100644 drivers/net/avp/rte_avp_fifo.h
>  create mode 100644 drivers/net/avp/rte_pmd_avp_version.map
>

so, still an nack because:
   - no performance data of avp vs virtio,
   - 2013 is gone,
   - it unfocuses from virtio.

Best regards,
   Vincent


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-14 17:37       ` [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? Vincent JARDIN
@ 2017-03-15  4:10         ` O'Driscoll, Tim
  2017-03-15 10:55           ` Thomas Monjalon
                             ` (4 more replies)
  0 siblings, 5 replies; 172+ messages in thread
From: O'Driscoll, Tim @ 2017-03-15  4:10 UTC (permalink / raw)
  To: Vincent JARDIN, Legacy, Allain (Wind River)
  Cc: Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, Wiles, Keith, thomas.monjalon,
	jerin.jacob, stephen, 3chas3

I've included a couple of specific comments inline below, and a general comment here.

We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the maintainers file. They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves except for a request from John McNamara for an update to the Release Notes that was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.

I'd suggest that this is a good topic for the next Tech Board meeting.


> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vincent JARDIN
> 
> Allain,
> 
> see inline,
> + I did restore the thread from
> http://dpdk.org/ml/archives/dev/2017-March/060087.html into the same
> email.
> 
> To make it short, using ivshmem, you keep people unfocused from virtio.

I agree with the desire to have virtio as the preferred solution. I think the way to do that is by promoting the benefits of a standard solution and continually improving the performance, as we are doing. I don't think it's a reason to reject alternative solutions though.

> > Vincent,
> > Perhaps you can help me understand why the performance or functionality
> > of AVP vs. Virtio is relevant to the decision of accepting this driver.
> > There are many drivers in the DPDK, most of which provide the same
> > functionality at comparable performance rates. AVP is just another such
> > driver. The fact that it is virtual rather than physical, in my opinion,
> > should not influence the decision of accepting this driver.
> 
> No, it is a different logic: your driver is about Qemu's vNIC support.
> Other PMDs are here to support vendors and different hypervisors. For
> instance, in the case of vmxnet3, different versions, evolutions and
> optimisations can be managed inside vmxnet3. We need to avoid the
> proliferation of PMDs for the same hypervisor while there is already an
> efficient solution, virtio.
> 
> > On the other hand, code quality/complexity or lack of a maintainer are
> > reasonable reasons for rejecting. If our driver is accepted we are
> > committed to maintaining it and testing changes as required by any
> > driver framework changes which may impact all drivers.
> 
> But then it is unfocusing your capabilities. As a community, we need to
> be sure that we are all focused on improving existing solutions. Since
> virtio is the one, I would rather see more people working on improving
> the virtio community instead of getting everyone unfocused.
> 
> >
> > Along the same lines, I do not understand why upstreaming AVP into the
> > Linux kernel or qemu/kvm should be a prerequisite for inclusion in the
> > DPDK. Continuing my analogy from above, the AVP device is a commercial
> > offering tied to the Wind River Systems Titanium product line. It
> > enables virtualized DPDK applications and increases DPDK adoption.
> > Similarly to how a driver from company XYX is tied to a commercial NIC
> > that must be purchased by a customer, our AVP device is available to
> > operators that choose to leverage our Titanium product to implement
> > their Cloud solutions. It is not our intention to upstream the qemu/kvm
> > or host vswitch portion of the AVP device. Our qemu/kvm extensions are
> > GPL so they are available to our customers if they desire to rebuild
> > qemu/kvm with their own proprietary extensions.
> 
> It was a solution before 2013, but now we are in 2017, vhost-user is
> here, show them the new proper path. Let's stay focused, please.
> 
> > Our AVP device was implemented in 2013 in response to the challenge of
> > lower than required performance of qemu/virtio in both user space and
> > DPDK applications in the VM. Rather than making complex changes to
> > qemu/virtio and continuously having to forward-port those as we
> > upgraded to newer versions of qemu, we decided to decouple ourselves
> > from that code base. We developed the AVP device based on an evolution
> > of KNI+ivshmem by enhancing both with features that would meet the
> > needs of our customers: better performance, multi-queue support,
> > live-migration support, hot-plug support. As I said in my earlier
> > response, since 2013, qemu/virtio has seen improved performance with
> > the introduction of vhost-user. The performance of vhost-user still has
> > not yet achieved performance levels equal to our AVP PMD.
> 
> Frankly, I was in the same boat as you: I did strongly pitch the use of
> ivshmem for vNICs for many benefits. But Redhat folks did push back to
> ask everyone to be focused on virtio. So, back in 2013, I would have
> supported your AVP approach, but now we are in 2017 and we must stay
> focused on a single and proper qemu solution => virtio/vhost.
> 
> >
> > I acknowledge that the AVP driver could exist as an out-of-tree driver
> > loaded as a shared library at runtime. In fact, 2 years ago we released
> > our driver source on github for this very reason.  We provide
> > instructions and support for building the AVP PMD as a shared library.
> > Some customers have adopted this method while many insist on an in-tree
> > driver for several reasons.
> >
> > Most importantly, they want to eliminate the burden of needing to build
> > and support an additional package into their product. An in-tree driver
> > would eliminate the need for a separate build/packaging process. Also,
> > they want an option that allows them to be able to develop directly on
> > the bleeding edge of DPDK rather than waiting on us to update our
> > out-of-tree driver based on stable releases of the DPDK. In this
> > regard, an in-tree driver would allow our customers to work directly on
> > the latest DPDK.
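The shared-library deployment model described above relies on the EAL loading the driver at runtime rather than linking the PMD into the application. A minimal sketch, assuming a hypothetical install path and the standard EAL `-d` option (the real library name and path depend on the DPDK release and the vendor's packaging):

```shell
# Hypothetical install path for the out-of-tree PMD; the actual name
# and location depend on the DPDK release and the vendor's packaging.
PMD_LIB=/usr/local/lib/librte_pmd_avp.so

# The EAL "-d" option loads an external PMD shared object at startup,
# so the application binary needs no rebuild against the driver. Here
# the command line is only assembled and printed, not executed:
set -- testpmd -d "$PMD_LIB" -- --portmask=0x1
echo "would run: $*"
```

With this model the driver can track the vendor's release cadence while the application follows upstream DPDK, which is exactly the trade-off weighed above against an in-tree driver.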
> 
> Make them move to virtio, then this pain will disappear.
> 
> > An in-tree driver provides obvious benefits to our customers, but keep
> > in mind that this also provides a benefit to the DPDK. If a customer
> > must develop on a stable release because they must use an out-of-tree
> > driver then they are less likely to contribute
> > fixes/enhancements/testing upstream. I know this first hand because I
> > work with software from different sources on a daily basis and it is a
> > significant burden to have to reproduce/test fixes on master when you
> > build/ship on an older stable release. Accepting this driver would
> > increase the potential pool of developers available for contributions
> > and reviews.
> 
> So, again, you are making a call for unfocusing from virtio.
> 
> > Again, we are committed to contributing to the DPDK community by
> > supporting our driver and upstreaming other fixes/enhancements we
> > develop along the way. We feel that if the DPDK is limited to only a
> > single virtual driver of any type then choice and innovation are also
> > limited. In the end, if more variety and innovation increase DPDK
> > adoption then this is a win for DPDK and everyone that is involved in
> > the project.
> 
> Frankly, show/demonstrate that AVP can be faster and better in
> performance than virtio, then I would support you strongly, since, back
> in the 2013 days, I did strongly believe in innovation and agility with
> ivshmem. But right now, you did not provide any relevant data; you just
> made the point that you are getting the community unfocused from virtio
> 4 years later. We are not in 2013 anymore.
> 
> > This patch series submits an initial version of the AVP PMD from Wind
> > River Systems.  The series includes shared header files, driver
> > implementation, and changes to documentation files in support of this
> > new driver.  The AVP driver is a shared memory based device.  It is
> > intended to be used as a PMD within a virtual machine running on a
> > Wind River virtualization platform.
> > See: http://www.windriver.com/products/titanium-cloud/
> >
> > It enables optimized packet throughput without requiring any packet
> > processing in qemu. This allowed us to provide our customers with a
> > significant performance increase for both DPDK and non-DPDK
> > applications in the VM.   Since our AVP implementation supports VM
> > live-migration it is viewed as a better alternative to PCI passthrough
> > or PCI SRIOV since neither of those support VM live-migration without
> > manual intervention or significant performance penalties.
> >
> > Since the initial implementation of AVP devices, vhost-user has become
> > part of the qemu offering with a significant performance increase over
> > the original virtio implementation.  However, vhost-user still does
> > not achieve the level of performance that the AVP device can provide
> > to our customers for DPDK based guests.
> >
> > A number of our customers have requested that we upstream the driver
> > to dpdk.org.
> >
> > v2:
> > * Fixed coding style violations that slipped in accidentally because
> >   of an out of date checkpatch.pl from an older kernel.
> >
> > v3:
> > * Updated 17.05 release notes to add a section for this new PMD
> > * Added additional info to the AVP nic guide document to clarify the
> >   benefit of using AVP over virtio.
> > * Fixed spelling error in debug log missed by local checkpatch.pl
> >   version
> > * Split the transmit patch to separate the stats functions as they
> >   accidentally got squashed in the last patchset.
> > * Fixed debug log strings so that they exceed 80 characters rather
> >   than span multiple lines.
> > * Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_*
> >   instead
> > * Replaced usage of RTE_WRITE32 and RTE_READ32 with
> >   rte_write32_relaxed and rte_read32_relaxed.
> > * Declared rte_pci_id table as const
> >
> > v4:
> > * Split our interrupt handlers to a separate patch and moved to the
> >   end of the series.
> > * Removed memset() from stats_get API
> > * Removed usage of RTE_AVP_ALIGNMENT
> > * Removed unnecessary parentheses in rte_avp_common.h
> > * Removed unneeded "goto unlock" where there are no statements in
> >   between the goto and the end of the function.
> > * Re-tested with pktgen and found that rte_eth_tx_burst() is being
> >   called with 0 packets even before starting traffic which resulted in
> >   incrementing oerrors; fixed in transmit patch.
> >
> > Allain Legacy (17):
> >   config: added attributes for the AVP PMD
> >   net/avp: added public header files
> >   maintainers: claim responsibility for AVP PMD
> >   net/avp: added PMD version map file
> >   net/avp: added log macros
> >   drivers/net: added driver makefiles
> >   net/avp: driver registration
> >   net/avp: device initialization
> >   net/avp: device configuration
> >   net/avp: queue setup and release
> >   net/avp: packet receive functions
> >   net/avp: packet transmit functions
> >   net/avp: device statistics operations
> >   net/avp: device promiscuous functions
> >   net/avp: device start and stop operations
> >   net/avp: migration interrupt handling
> >   doc: added information related to the AVP PMD
> >
> >  MAINTAINERS                             |    6 +
> >  config/common_base                      |   10 +
> >  config/common_linuxapp                  |    1 +
> >  doc/guides/nics/avp.rst                 |   99 ++
> >  doc/guides/nics/features/avp.ini        |   17 +
> >  doc/guides/nics/index.rst               |    1 +
> >  drivers/net/Makefile                    |    1 +
> >  drivers/net/avp/Makefile                |   61 +
> >  drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
> >  drivers/net/avp/avp_logs.h              |   59 +
> >  drivers/net/avp/rte_avp_common.h        |  427 ++++++
> >  drivers/net/avp/rte_avp_fifo.h          |  157 ++
> >  drivers/net/avp/rte_pmd_avp_version.map |    4 +
> >  mk/rte.app.mk                           |    1 +
> >  14 files changed, 3215 insertions(+)
> >  create mode 100644 doc/guides/nics/avp.rst
> >  create mode 100644 doc/guides/nics/features/avp.ini
> >  create mode 100644 drivers/net/avp/Makefile
> >  create mode 100644 drivers/net/avp/avp_ethdev.c
> >  create mode 100644 drivers/net/avp/avp_logs.h
> >  create mode 100644 drivers/net/avp/rte_avp_common.h
> >  create mode 100644 drivers/net/avp/rte_avp_fifo.h
> >  create mode 100644 drivers/net/avp/rte_pmd_avp_version.map
> >
> 
> so, still a nack because:
>    - no performance data of avp vs virtio,

I don't think it should be a requirement for Allain to provide performance data in order to justify getting this accepted into DPDK. Keith pointed out in a previous comment on this patch set that even if performance is the same as virtio, there might still be other reasons why people would want to use it.

>    - 2013 is gone,
>    - it unfocuses from virtio.
> 
> Best regards,
>    Vincent

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15  4:10         ` O'Driscoll, Tim
@ 2017-03-15 10:55           ` Thomas Monjalon
  2017-03-15 14:02             ` Vincent JARDIN
  2017-03-15 11:29           ` Ferruh Yigit
                             ` (3 subsequent siblings)
  4 siblings, 1 reply; 172+ messages in thread
From: Thomas Monjalon @ 2017-03-15 10:55 UTC (permalink / raw)
  To: O'Driscoll, Tim
  Cc: Vincent JARDIN, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Wiles, Keith, techboard

2017-03-15 04:10, O'Driscoll, Tim:
> I've included a couple of specific comments inline below, and a general comment here.
> 
> We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the maintainers file. They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves except for a request from John McNamara for an update to the Release Notes that was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.
> 
> I'd suggest that this is a good topic for the next Tech Board meeting.

I agree Tim.
CC'ing techboard to add this item to the agenda of the next meeting.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15  4:10         ` O'Driscoll, Tim
  2017-03-15 10:55           ` Thomas Monjalon
@ 2017-03-15 11:29           ` Ferruh Yigit
  2017-03-15 14:08             ` Vincent JARDIN
  2017-03-15 14:02           ` Vincent JARDIN
                             ` (2 subsequent siblings)
  4 siblings, 1 reply; 172+ messages in thread
From: Ferruh Yigit @ 2017-03-15 11:29 UTC (permalink / raw)
  To: O'Driscoll, Tim, Vincent JARDIN, Legacy, Allain (Wind River)
  Cc: dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, Wiles, Keith, thomas.monjalon,
	jerin.jacob, stephen, 3chas3

On 3/15/2017 4:10 AM, O'Driscoll, Tim wrote:
> I've included a couple of specific comments inline below, and a general comment here.
> 
> We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries.
> They're willing to maintain the driver and have included a patch to
update the maintainers file.

+1

The scope of the patch is limited to PMD.
As long as it is maintained, it is good to have alternative approaches.


> They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves except for a request from John McNamara for an update to the Release Notes that was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.
> 
> I'd suggest that this is a good topic for the next Tech Board meeting.

<...>

>> To make it short, using ivshmem, you keep people unfocused from virtio.
> 
> I agree with the desire to have virtio as the preferred solution. I think the way to do that is by promoting the benefits of a standard solution and continually improving the performance, as we are doing. I don't think it's a reason to reject alternative solutions though.
> 

<...>

>> so, still a nack because:
>>    - no performance data of avp vs virtio,
> 
> I don't think it should be a requirement for Allain to provide performance data in order to justify getting this accepted into DPDK. Keith pointed out in a previous comment on this patch set that even if performance is the same as virtio, there might still be other reasons why people would want to use it.
> 
>>    - 2013 is gone,
>>    - it unfocuses from virtio.
>>
>> Best regards,
>>    Vincent

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15  4:10         ` O'Driscoll, Tim
  2017-03-15 10:55           ` Thomas Monjalon
  2017-03-15 11:29           ` Ferruh Yigit
@ 2017-03-15 14:02           ` Vincent JARDIN
  2017-03-15 14:02           ` Vincent JARDIN
  2017-03-16 23:17           ` Stephen Hemminger
  4 siblings, 0 replies; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-15 14:02 UTC (permalink / raw)
  To: O'Driscoll, Tim, Legacy, Allain (Wind River)
  Cc: Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, Wiles, Keith, thomas.monjalon,
	jerin.jacob, stephen, 3chas3

Le 15/03/2017 à 05:10, O'Driscoll, Tim a écrit :
>> so, still a nack because:
>>    - no performance data of avp vs virtio,
> I don't think it should be a requirement for Allain to provide performance data in order to justify getting this accepted into DPDK. Keith pointed out in a previous comment on this patch set that even if performance is the same as virtio, there might still be other reasons why people would want to use it.
>
>>    - 2013 is gone,
>>    - it unfocuses from virtio.

Tim,

you get it wrong, it IS the major point: only if AVP is better in
performance is an alternative to virtio needed.

Please, stop circling around the topic, and send facts/numbers that
demonstrate that there is value in having an alternative to virtio.
Currently, the only argument is that code developed in 2013 needs to be
upstreamed because vhost-user was not available in 2013.

Best regards,
   Vincent

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15  4:10         ` O'Driscoll, Tim
                             ` (2 preceding siblings ...)
  2017-03-15 14:02           ` Vincent JARDIN
@ 2017-03-15 14:02           ` Vincent JARDIN
  2017-03-15 20:19             ` Wiles, Keith
  2017-03-16 23:17           ` Stephen Hemminger
  4 siblings, 1 reply; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-15 14:02 UTC (permalink / raw)
  To: O'Driscoll, Tim, Legacy, Allain (Wind River)
  Cc: Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, Wiles, Keith, thomas.monjalon,
	jerin.jacob, stephen, 3chas3

Le 15/03/2017 à 05:10, O'Driscoll, Tim a écrit :
>> so, still a nack because:
>>    - no performance data of avp vs virtio,
> I don't think it should be a requirement for Allain to provide performance data in order to justify getting this accepted into DPDK. Keith pointed out in a previous comment on this patch set that even if performance is the same as virtio, there might still be other reasons why people would want to use it.
>
>>    - 2013 is gone,
>>    - it unfocuses from virtio.

Tim,

you get it wrong, it IS the major point: only if AVP is better in
performance is an alternative to virtio needed.

Please, stop circling around the topic, and send facts/numbers that
demonstrate that there is value in having an alternative to virtio.
Currently, the only argument is that code developed in 2013 needs to be
upstreamed because vhost-user was not available in 2013.

Best regards,
   Vincent

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15 10:55           ` Thomas Monjalon
@ 2017-03-15 14:02             ` Vincent JARDIN
  2017-03-16  3:18               ` O'Driscoll, Tim
  0 siblings, 1 reply; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-15 14:02 UTC (permalink / raw)
  To: Thomas Monjalon, O'Driscoll, Tim
  Cc: Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Wiles, Keith, techboard

Le 15/03/2017 à 11:55, Thomas Monjalon a écrit :
>> I'd suggest that this is a good topic for the next Tech Board meeting.
> I agree Tim.
> CC'ing techboard to add this item to the agenda of the next meeting.

Frankly, I disagree; it is still missing some discussion on the list.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15 11:29           ` Ferruh Yigit
@ 2017-03-15 14:08             ` Vincent JARDIN
  2017-03-15 18:18               ` Ferruh Yigit
  0 siblings, 1 reply; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-15 14:08 UTC (permalink / raw)
  To: Ferruh Yigit, O'Driscoll, Tim, thomas.monjalon
  Cc: Legacy, Allain (Wind River), dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, Wiles, Keith, jerin.jacob,
	stephen, 3chas3

Le 15/03/2017 à 12:29, Ferruh Yigit a écrit :
> The scope of the patch is limited to PMD.
> As long as it is maintained, it is good to have alternative approaches.

From your logic, then, how many PMDs can be accepted?

See my previous email: the techboard should not bypass discussion on the 
dev@ mailing list. I believe that the question for the techboard is 
about the number of PMDs that we can get into the DPDK: if any PMD can 
get in, then so can AVP, fail-safe, xyz, whatever models are proposed. 
We could even get 2 PMDs for the same (v)NIC, but with different 
implementation designs.

Best regards,
   Vincent

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15 14:08             ` Vincent JARDIN
@ 2017-03-15 18:18               ` Ferruh Yigit
  0 siblings, 0 replies; 172+ messages in thread
From: Ferruh Yigit @ 2017-03-15 18:18 UTC (permalink / raw)
  To: Vincent JARDIN, O'Driscoll, Tim, thomas.monjalon
  Cc: Legacy, Allain (Wind River), dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, Wiles, Keith, jerin.jacob,
	stephen, 3chas3

On 3/15/2017 2:08 PM, Vincent JARDIN wrote:
> Le 15/03/2017 à 12:29, Ferruh Yigit a écrit :
>> The scope of the patch is limited to PMD.
>> As long as it is maintained, it is good to have alternative approaches.
> 
>  From your logic, then, how many PMDs can be accepted?
> 
> See my previous email: techboard should not bypass discussion of the 
> dev@ mailing list. I believe that the question for the techboard is 
> about the number of PMDs that we can get into the DPDK: if any PMDs can 
> get in, so AVP, fail-safe, xyz, whatever the models that are proposed. 
> We could even get 2 PMDs of a same (v)NIC, but with different 
> implementation-designs.

There is a difference between code duplication and providing alternative
solutions. Agree that we should prevent duplication.

And what is wrong with the number of PMDs increasing, again as long as
they are owned/maintained?
If not maintained properly, they can be removed, as with the recent
net/mpipe example.

I believe PMDs are a little different from the rest of DPDK: because of
their scope, it is easy to exclude any of them without affecting the
rest, and it is good to have alternative solutions, so users can pick
the one that works best for them, and natural selection can do its work
here too.

> 
> Best regards,
>    Vincent
> 

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15 14:02           ` Vincent JARDIN
@ 2017-03-15 20:19             ` Wiles, Keith
  0 siblings, 0 replies; 172+ messages in thread
From: Wiles, Keith @ 2017-03-15 20:19 UTC (permalink / raw)
  To: Vincent JARDIN
  Cc: O'Driscoll, Tim, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, thomas.monjalon, jerin.jacob,
	stephen, 3chas3


> On Mar 15, 2017, at 10:02 PM, Vincent JARDIN <vincent.jardin@6wind.com> wrote:
> 
> Le 15/03/2017 à 05:10, O'Driscoll, Tim a écrit :
>>> so, still a nack because:
>>>   - no performance data of avp vs virtio,
>> I don't think it should be a requirement for Allain to provide performance data in order to justify getting this accepted into DPDK. Keith pointed out in a previous comment on this patch set that even if performance is the same as virtio, there might still be other reasons why people would want to use it.
>> 
>>>   - 2013 is gone,
>>>   - it unfocuses from virtio.
> 
> Tim,
> 
> you get it wrong, it IS the major point: if AVP is good in performance, then an alternative to virtio is needed.
> 
> Please, stop turning around the topic, and send facts/numbers that demonstrate that there is a value in having alternative to virtio. Currently, the only argument is a code developed in 2013 needs to  be upstreamed because vhost-user was not available in 2013.

I am sorry Vincent, but no one else provided performance data on any of the PMDs; I did not have to provide performance data on TAP compared to AF_PACKET. Then why would you require performance data on this PMD? That it uses something similar to what we have in some other PMDs is not a reason. The core code of DPDK, not the PMDs, is the very important part; PMDs are just drivers for data I/O, and DPDK provides the framework to run these drivers.

Tim is not dancing around the topic here; he is only pointing out the real questions and statements on this topic.
> 
> Best regards,
>  Vincent

Regards,
Keith


^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15 14:02             ` Vincent JARDIN
@ 2017-03-16  3:18               ` O'Driscoll, Tim
  2017-03-16  8:52                 ` Francois Ozog
  2017-03-16 10:32                 ` Chas Williams
  0 siblings, 2 replies; 172+ messages in thread
From: O'Driscoll, Tim @ 2017-03-16  3:18 UTC (permalink / raw)
  To: Vincent JARDIN, Thomas Monjalon
  Cc: Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Wiles, Keith, techboard

> From: Vincent JARDIN [mailto:vincent.jardin@6wind.com]
> 
> Le 15/03/2017 à 11:55, Thomas Monjalon a écrit :
> >> I'd suggest that this is a good topic for the next Tech Board
> meeting.
> > I agree Tim.
> > CC'ing techboard to add this item to the agenda of the next meeting.
> 
> Frankly, I disagree, it is missing some discussions on the list.

I think the discussion on the mailing list is at an impasse and it won't be resolved there. I think the Tech Board needs to consider several issues:
- What are the requirements for a new PMD to be accepted? For example, you're asking for performance data in this case, when this hasn't been a requirement for other PMDs.
- Should there be different requirements for PMDs for virtual devices versus physical devices?
- Based on these criteria, should the AVP PMD be accepted or not?

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-16  3:18               ` O'Driscoll, Tim
@ 2017-03-16  8:52                 ` Francois Ozog
  2017-03-16  9:51                   ` Wiles, Keith
  2017-03-16 10:32                 ` Chas Williams
  1 sibling, 1 reply; 172+ messages in thread
From: Francois Ozog @ 2017-03-16  8:52 UTC (permalink / raw)
  To: O'Driscoll, Tim
  Cc: Vincent JARDIN, Thomas Monjalon, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Wiles, Keith, techboard

Hi,

Virtio is special in many ways:
- it is a  multi-vendor supported specification
- it is a multi-vendor opensource implementation in guest OSes
(Windows, Linux, FreeBSD...)
- it is a multi-vendor, opensource implementation in hypervisors


So, the great benefit of virtio is that with a SINGLE device driver in
a VM, applications are guaranteed to work in all situations (all
hypervisors, all backends). The real issue I see with AVP is that it
would bring uncertainty into virtual environments, breaking the "peace
of mind" that virtio brings. Does the hypervisor support this vNIC?
Does the virtual switch support the vNIC?
Having a single multi-vendor supported specification and implementation
fosters creativity, so I wouldn't be surprised to see native virtio
support from Smart NICs in the very near future!

*** Bottom line, if there are good ideas in AVP (performance,
security...), I would rather push them to virtio. ***


Lastly, physical PMDs have been accepted based on the implicit existence
of upstream drivers (valid for virtio and vmxnet3). So as a bare
minimum requirement, I would ask for Qemu, OVS and Linux upstream AVP
support. Is that the case?

Cordially,

François-Frédéric


On 16 March 2017 at 04:18, O'Driscoll, Tim <tim.odriscoll@intel.com> wrote:
>> From: Vincent JARDIN [mailto:vincent.jardin@6wind.com]
>>
>> Le 15/03/2017 à 11:55, Thomas Monjalon a écrit :
>> >> I'd suggest that this is a good topic for the next Tech Board
>> meeting.
>> > I agree Tim.
>> > CC'ing techboard to add this item to the agenda of the next meeting.
>>
>> Frankly, I disagree, it is missing some discussions on the list.
>
> I think the discussion on the mailing list is at an impasse and it won't be resolved there. I think the Tech Board needs to consider several issues:
> - What are the requirements for a new PMD to be accepted? For example, you're asking for performance data in this case, when this hasn't been a requirement for other PMDs.
> - Should there be different requirements for PMDs for virtual devices versus physical devices?
> - Based on these criteria, should the AVP PMD be accepted or not?



-- 
François-Frédéric Ozog | Director Linaro Networking Group
T: +33.67221.6485
francois.ozog@linaro.org | Skype: ffozog

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-16  8:52                 ` Francois Ozog
@ 2017-03-16  9:51                   ` Wiles, Keith
  0 siblings, 0 replies; 172+ messages in thread
From: Wiles, Keith @ 2017-03-16  9:51 UTC (permalink / raw)
  To: Francois Ozog
  Cc: O'Driscoll, Tim, Vincent JARDIN, Thomas Monjalon, Legacy,
	Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	techboard



Sent from my iPhone

> On Mar 16, 2017, at 4:53 PM, Francois Ozog <francois.ozog@linaro.org> wrote:
> 
> Hi,
> 
> Virtio is special in many ways:
> - it is a  multi-vendor supported specification
> - it is a multi-vendor opensource implementation in guest OSes
> (Windows, Linux, FreeBSD...)
> - it is a multi-vendor, opensource implementation in hypervisors
> 
> 
> So, the great benefit of virtio is that with a SINGLE device driver in
> a VM, applications are guaranteed to work in all situations (all
> hypervisors, all backends). The real issue I see with AVP is that it
> would bring uncertainty in virtual environments, breaking the "peace"
> of mind that virtio brings. does the hypervisor supports this vnic?
> does the virtual switch support the vnic?
> Having a single multi-vendor supported specification and
> implementations foster creativity, so I wouldn't be surprised to see
> native virtio support from Smart NICs in a very near future!
> 
> *** Bottom line, if there are good ideas in AVP (performance,
> security...), I would rather push them to virtio. ***
> 
> 
> Lastly, physical PMDs have been accepted based on implicit existence
> of upstream drivers (valid for virtio and vmxnet3). So as a bare
> minimum requirement, I would ask for Qemu, OVS and Linux upstream AVP
> support. Is it the case?

You are missing the point: people will vote with their feet, and having competing solutions just fosters a healthy ecosystem.

Because the code is now open sourced, virtio can just take the improvement ideas and put them in its code.

I do not believe you have convinced me that having another solution hurts the ecosystem.

You could say we already have a driver for one NIC, so why would we want another one. If someone produces a driver for a NIC we already support, then I would accept it into DPDK.

Maybe it is slower or faster, has fewer or more features, or is easier to maintain, but if it fills a gap like AVP then we must accept that driver. At some point one will win, or some will prefer one over the other.

> 
> Cordially,
> 
> François-Frédéric
> 
> 
> On 16 March 2017 at 04:18, O'Driscoll, Tim <tim.odriscoll@intel.com> wrote:
>>> From: Vincent JARDIN [mailto:vincent.jardin@6wind.com]
>>> 
>>> Le 15/03/2017 à 11:55, Thomas Monjalon a écrit :
>>>>> I'd suggest that this is a good topic for the next Tech Board
>>> meeting.
>>>> I agree Tim.
>>>> CC'ing techboard to add this item to the agenda of the next meeting.
>>> 
>>> Frankly, I disagree, it is missing some discussions on the list.
>> 
>> I think the discussion on the mailing list is at an impasse and it won't be resolved there. I think the Tech Board needs to consider several issues:
>> - What are the requirements for a new PMD to be accepted? For example, you're asking for performance data in this case, when this hasn't been a requirement for other PMDs.
>> - Should there be different requirements for PMDs for virtual devices versus physical devices?
>> - Based on these criteria, should the AVP PMD be accepted or not?
> 
> 
> 
> -- 
> François-Frédéric Ozog | Director Linaro Networking Group
> T: +33.67221.6485
> francois.ozog@linaro.org | Skype: ffozog

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-16  3:18               ` O'Driscoll, Tim
  2017-03-16  8:52                 ` Francois Ozog
@ 2017-03-16 10:32                 ` Chas Williams
  2017-03-16 18:09                   ` Francois Ozog
  1 sibling, 1 reply; 172+ messages in thread
From: Chas Williams @ 2017-03-16 10:32 UTC (permalink / raw)
  To: O'Driscoll, Tim, Vincent JARDIN, Thomas Monjalon
  Cc: Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Wiles, Keith, techboard

On Thu, 2017-03-16 at 03:18 +0000, O'Driscoll, Tim wrote:
> > From: Vincent JARDIN [mailto:vincent.jardin@6wind.com]
> > 
> > Le 15/03/2017 à 11:55, Thomas Monjalon a écrit :
> > >> I'd suggest that this is a good topic for the next Tech Board
> > meeting.
> > > I agree Tim.
> > > CC'ing techboard to add this item to the agenda of the next meeting.
> > 
> > Frankly, I disagree, it is missing some discussions on the list.
> 
> I think the discussion on the mailing list is at an impasse and it won't be resolved there. I think the Tech Board needs to consider several issues:
> - What are the requirements for a new PMD to be accepted? For example, you're asking for performance data in this case, when this hasn't been a requirement for other PMDs.

It does seem like that would be the purpose of the tech board in the
first place.  The tech board doesn't need to decide individual matters
but must at least provide guidelines for the developers to follow.
Otherwise you are asking for a popular vote to decide matters.

As for performance data, the tech board could certainly make this a
requirement.  If your argument is that we can't require X because we
didn't require X in the past, then the tech board is basically
pointless -- it can't make any changes to the existing processes.

Should performance be a criterion?  Possibly.  What happens when X
is faster at B but slower at A and Y is faster at A but slower at B?
Now you don't have a clear case of what "performance" means since it
varies based on what the end user is doing.  So which is faster?

DPDK already has overlapping PMD's -- PCAP, AF_PACKET and now TAP.
So if your reasoning is that DPDK doesn't want overlapping support,
DPDK needs to start thinking about narrowing down the existing
overlapping PMD's.  Otherwise, it does look like hypocrisy.

> - Should there be different requirements for PMDs for virtual devices versus physical devices?

How "real" does a device need to be?  SRIOV blurs the line somewhat
between virtual and physical devices.  What is a VF, physical or virtual?
It looks like a physical device in DPDK, but it's really virtual.

Personally, I would prefer to see a minimum set of required capabilities.
Not every driver needs to support offload but it seems like there should
be some minimum set of functionality, like changing the MTU, supporting
tagged traffic, or changing the MAC address.  Stuff a driver might need
to be able to interoperate with other parts of DPDK (like bonding).

> - Based on these criteria, should the AVP PMD be accepted or not?


* Re: [PATCH v4 04/17] net/avp: add PMD version map file
  2017-03-13 19:16       ` [PATCH v4 04/17] net/avp: add PMD version map file Allain Legacy
@ 2017-03-16 14:52         ` Ferruh Yigit
  2017-03-16 15:33           ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Ferruh Yigit @ 2017-03-16 14:52 UTC (permalink / raw)
  To: Allain Legacy
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

On 3/13/2017 7:16 PM, Allain Legacy wrote:
> Adds a default ABI version file for the AVP PMD.
> 
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> Signed-off-by: Matt Peters <matt.peters@windriver.com>
> ---
>  drivers/net/avp/rte_pmd_avp_version.map | 4 ++++
>  1 file changed, 4 insertions(+)
>  create mode 100644 drivers/net/avp/rte_pmd_avp_version.map
> 
> diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
> new file mode 100644
> index 0000000..af8f3f4
> --- /dev/null
> +++ b/drivers/net/avp/rte_pmd_avp_version.map
> @@ -0,0 +1,4 @@
> +DPDK_17.05 {
> +
> +    local: *;
> +};
> 

Hi Allain,

Instead of adding files per patch, may I suggest a different ordering:
first add skeleton files in one patch, then add the functional pieces one
by one, like:

Merge patches 1/17, 3/17, this patch (4/17), and 6/17 (removing SYMLINK)
into a single patch and make it the first AVP patch. This will be the
skeleton patch.

The second patch can introduce the public headers (2/17) and update the
Makefile to include them.

Third, debug log patch (5/17)

Patch 7/17 and later can stay the same.

What do you think?


* Re: [PATCH v4 07/17] net/avp: driver registration
  2017-03-13 19:16       ` [PATCH v4 07/17] net/avp: driver registration Allain Legacy
@ 2017-03-16 14:53         ` Ferruh Yigit
  2017-03-16 15:37           ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Ferruh Yigit @ 2017-03-16 14:53 UTC (permalink / raw)
  To: Allain Legacy
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

On 3/13/2017 7:16 PM, Allain Legacy wrote:
> Adds the initial framework for registering the driver against the
> supported PCI device identifiers.
> 
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> Signed-off-by: Matt Peters <matt.peters@windriver.com>

<...>

> +static int eth_avp_dev_init(struct rte_eth_dev *eth_dev);
> +static int eth_avp_dev_uninit(struct rte_eth_dev *eth_dev);

I am for removing static function declarations by reordering functions,
and in this case I think even reordering is not required; you can simply remove them.

> +
> +
> +#define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
> +
> +
> +#define AVP_MAX_MAC_ADDRS 1
> +#define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
> +
> +
> +/*
> + * Defines the number of microseconds to wait before checking the response
> + * queue for completion.
> + */
> +#define AVP_REQUEST_DELAY_USECS (5000)
> +
> +/*
> + * Defines the number of times to check the response queue for completion
> + * before declaring a timeout.
> + */
> +#define AVP_MAX_REQUEST_RETRY (100)
> +
> +/* Defines the current PCI driver version number */
> +#define AVP_DPDK_DRIVER_VERSION RTE_AVP_CURRENT_GUEST_VERSION
> +

I would suggest creating an avp_ethdev.h and moving the above defines
there, but of course this is up to you.

> +/*
> + * The set of PCI devices this driver supports
> + */
> +static const struct rte_pci_id pci_id_avp_map[] = {
> +	{ .vendor_id = RTE_AVP_PCI_VENDOR_ID,
> +	  .device_id = RTE_AVP_PCI_DEVICE_ID,
> +	  .subsystem_vendor_id = RTE_AVP_PCI_SUB_VENDOR_ID,
> +	  .subsystem_device_id = RTE_AVP_PCI_SUB_DEVICE_ID,
> +	  .class_id = RTE_CLASS_ANY_ID,
> +	},
> +
> +	{ .vendor_id = 0, /* sentinel */
> +	},
> +};
> +
> +
> +/*
> + * Defines the AVP device attributes which are attached to an RTE ethernet
> + * device
> + */
> +struct avp_dev {
> +	uint32_t magic; /**< Memory validation marker */
> +	uint64_t device_id; /**< Unique system identifier */
> +	struct ether_addr ethaddr; /**< Host specified MAC address */
> +	struct rte_eth_dev_data *dev_data;
> +	/**< Back pointer to ethernet device data */
> +	volatile uint32_t flags; /**< Device operational flags */
> +	uint8_t port_id; /**< Ethernet port identifier */
> +	struct rte_mempool *pool; /**< pkt mbuf mempool */
> +	unsigned int guest_mbuf_size; /**< local pool mbuf size */
> +	unsigned int host_mbuf_size; /**< host mbuf size */
> +	unsigned int max_rx_pkt_len; /**< maximum receive unit */
> +	uint32_t host_features; /**< Supported feature bitmap */
> +	uint32_t features; /**< Enabled feature bitmap */
> +	unsigned int num_tx_queues; /**< Negotiated number of transmit queues */
> +	unsigned int max_tx_queues; /**< Maximum number of transmit queues */
> +	unsigned int num_rx_queues; /**< Negotiated number of receive queues */
> +	unsigned int max_rx_queues; /**< Maximum number of receive queues */
> +
> +	struct rte_avp_fifo *tx_q[RTE_AVP_MAX_QUEUES]; /**< TX queue */
> +	struct rte_avp_fifo *rx_q[RTE_AVP_MAX_QUEUES]; /**< RX queue */
> +	struct rte_avp_fifo *alloc_q[RTE_AVP_MAX_QUEUES];
> +	/**< Allocated mbufs queue */
> +	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
> +	/**< To be freed mbufs queue */
> +
> +	/* For request & response */
> +	struct rte_avp_fifo *req_q; /**< Request queue */
> +	struct rte_avp_fifo *resp_q; /**< Response queue */
> +	void *host_sync_addr; /**< (host) Req/Resp Mem address */
> +	void *sync_addr; /**< Req/Resp Mem address */
> +	void *host_mbuf_addr; /**< (host) MBUF pool start address */
> +	void *mbuf_addr; /**< MBUF pool start address */
> +} __rte_cache_aligned;
> +
> +/* RTE ethernet private data */
> +struct avp_adapter {
> +	struct avp_dev avp;
> +} __rte_cache_aligned;

If avp_ethdev.h is created, the structs can also go there.

> +
> +/* Macro to cast the ethernet device private data to a AVP object */
> +#define AVP_DEV_PRIVATE_TO_HW(adapter) \
> +	(&((struct avp_adapter *)adapter)->avp)
> +

<...>

> +	rte_eth_copy_pci_info(eth_dev, pci_dev);
> +
> +	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
> +
> +	/* Allocate memory for storing MAC addresses */
> +	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
> +	if (eth_dev->data->mac_addrs == NULL) {
> +		PMD_DRV_LOG(ERR, "Failed to allocate %d bytes needed to store MAC addresses\n",
> +			    ETHER_ADDR_LEN);
> +		return -ENOMEM;
> +	}
> +
> +	/* Get a mac from device config */
> +	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);

This copies the MAC address from avp->ethaddr to eth_dev.
But at this point avp->ethaddr is all zero; is this the intention?

> +
> +	return 0;
> +}
> +

<...>


* Re: [PATCH v4 17/17] doc: adds information related to the AVP PMD
  2017-03-13 19:16       ` [PATCH v4 17/17] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-03-16 14:53         ` Ferruh Yigit
  2017-03-16 15:37           ` Legacy, Allain
  0 siblings, 1 reply; 172+ messages in thread
From: Ferruh Yigit @ 2017-03-16 14:53 UTC (permalink / raw)
  To: Allain Legacy
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

On 3/13/2017 7:16 PM, Allain Legacy wrote:
> Updates the documentation and feature lists for the AVP PMD device.
> 
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> Signed-off-by: Matt Peters <matt.peters@windriver.com>
> Acked-by: John McNamara <john.mcnamara@intel.com>

<...>

> diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
> new file mode 100644
> index 0000000..64bf42e
> --- /dev/null
> +++ b/doc/guides/nics/features/avp.ini
> @@ -0,0 +1,17 @@
> +;
> +; Supported features of the 'AVP' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Link status          = Y
> +Queue start/stop     = Y
> +Jumbo frame          = Y
> +Scattered Rx         = Y
> +Promiscuous mode     = Y
> +Unicast MAC filter   = Y
> +VLAN offload         = Y
> +Basic stats          = Y
> +Stats per queue      = Y
> +Linux UIO            = Y
> +x86-64               = Y

Each feature can be added with the patch that actually implements it,
and an initial empty version can be part of the skeleton patch.


* Re: [PATCH v4 04/17] net/avp: add PMD version map file
  2017-03-16 14:52         ` Ferruh Yigit
@ 2017-03-16 15:33           ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-16 15:33 UTC (permalink / raw)
  To: YIGIT, FERRUH
  Cc: dev, Jolliffe, Ian, RICHARDSON, BRUCE, MCNAMARA, JOHN, WILES,
	ROGER, thomas.monjalon, vincent.jardin, jerin.jacob, stephen,
	3chas3

> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, March 16, 2017 10:53 AM

<...>
 
> Hi Allain,
> 
> Instead of adding files per patch, may I suggest different ordering:
> First add skeleton files in a patch, later add functional pieces one by one, like:
> 
> Merge patch 1/17, 3/17, this patch (4/17), 6/17 (removing SYMLINK), into
> single patch and make it AVP first patch. This will be skeleton patch.
> 
> Second patch can be introducing public headers (2/17) and updating Makefile
> to include them.
> 
> Third, debug log patch (5/17)
> 
> Patch 7/17 and later can stay same.
> 
> What do you think?

Sure, I can give that a go and see what it looks like.


* Re: [PATCH v4 07/17] net/avp: driver registration
  2017-03-16 14:53         ` Ferruh Yigit
@ 2017-03-16 15:37           ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-16 15:37 UTC (permalink / raw)
  To: YIGIT, FERRUH
  Cc: dev, Jolliffe, Ian, RICHARDSON, BRUCE, MCNAMARA, JOHN, WILES,
	ROGER, thomas.monjalon, vincent.jardin, jerin.jacob, stephen,
	3chas3

> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, March 16, 2017 10:53 AM

<...>

> I am for removing static function declarations by reordering functions, and
> for this case even reordering not required I think, you can remove them.
Ok.  Will do.


> > +	/* Get a mac from device config */
> > +	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
> 
> This copies MAC address from avp->ethaddr to eth_dev.
> But at this point avp->ethaddr is all zero, is this the intention?

This is because of the patch splitting.  The avp_dev_create() call sets this up.  I didn't notice that I had them in separate patches.  I will fix this up.


* Re: [PATCH v4 17/17] doc: adds information related to the AVP PMD
  2017-03-16 14:53         ` Ferruh Yigit
@ 2017-03-16 15:37           ` Legacy, Allain
  0 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-16 15:37 UTC (permalink / raw)
  To: YIGIT, FERRUH
  Cc: dev, Jolliffe, Ian, RICHARDSON, BRUCE, MCNAMARA, JOHN, WILES,
	ROGER, thomas.monjalon, vincent.jardin, jerin.jacob, stephen,
	3chas3

> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, March 16, 2017 10:53 AM

<...>

> +;
> > +[Features]
> > +Link status          = Y
> > +Queue start/stop     = Y
> > +Jumbo frame          = Y
> > +Scattered Rx         = Y
> > +Promiscuous mode     = Y
> > +Unicast MAC filter   = Y
> > +VLAN offload         = Y
> > +Basic stats          = Y
> > +Stats per queue      = Y
> > +Linux UIO            = Y
> > +x86-64               = Y
> 
> Each feature can be added with the patch feature actually implemented, and
> initial empty version can be part of skeleton patch.

Ok.  Will do.


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-16 10:32                 ` Chas Williams
@ 2017-03-16 18:09                   ` Francois Ozog
  0 siblings, 0 replies; 172+ messages in thread
From: Francois Ozog @ 2017-03-16 18:09 UTC (permalink / raw)
  To: Chas Williams
  Cc: O'Driscoll, Tim, Vincent JARDIN, Thomas Monjalon, Legacy,
	Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Wiles, Keith, techboard

On 16 March 2017 at 11:32, Chas Williams <3chas3@gmail.com> wrote:
>
> On Thu, 2017-03-16 at 03:18 +0000, O'Driscoll, Tim wrote:
> > > From: Vincent JARDIN [mailto:vincent.jardin@6wind.com]
> > >
> > > Le 15/03/2017 à 11:55, Thomas Monjalon a écrit :
> > > >> I'd suggest that this is a good topic for the next Tech Board
> > > meeting.
> > > > I agree Tim.
> > > > CC'ing techboard to add this item to the agenda of the next meeting.
> > >
> > > Frankly, I disagree, it is missing some discussions on the list.
> >
> > I think the discussion on the mailing list is at an impasse and it won't be resolved there. I think the Tech Board needs to consider several issues:
> > - What are the requirements for a new PMD to be accepted? For example, you're asking for performance data in this case, when this hasn't been a requirement for other PMDs.
>
> It does seem like that would be the purpose of the tech board in the
> first place.  The tech board doesn't need to decide individual matters
> but must at least provide guidelines for the developers to follow.
> Otherwise you are asking for a popular vote to decide matters.
>
> As for performance data, the tech board could certainly make this a
> requirement.  If your argument is that we can't require X because we
> didn't require X in the past means that the tech board is basically
> pointless -- it can't make any changes to the existing processes.
>
> Should performance be a criterion?  Possibly.  What happens when X
> is faster at B but slower at A and Y is faster at A but slower at B?
> Now you don't have a clear case of what "performance" means since it
> varies based on what the end user is doing.  So which is faster?
>
> DPDK already has overlapping PMD's -- PCAP, AF_PACKET and now TAP.
> So if your reasoning is that DPDK doesn't want overlapping support,
> DPDK needs to start thinking about narrowing down the existing
> overlapping PMD's.  Otherwise, it does look like hypocrisy.
>
> > - Should there be different requirements for PMDs for virtual devices versus physical devices?
>
> How "real" does a device need to be?  SRIOV blurs the line somewhat
> between virtual and physical devices.  What is a VF, physical or virtual?
> It looks like a physical device in DPDK, but it's really virtual.
>

SR-IOV is a way to partition hardware and is virtual only from the PCI bus
standpoint; it has nothing to do with Qemu virtualization.
So accessing a VF from either the host or the guest does not change the
nature of the device: it is a physical device with its own features,
packet queue format, and packet descriptors. You just change the access
method.
There is no such thing as a generic VF driver. There are XYZ model ABC PF
and VF PMDs. In other words, with SR-IOV you cannot use an Intel 82599 VF
driver on a Mellanox ConnectX-4 NIC.


> Personally, I would prefer to see a minimum set of required capabilities.
> Not every driver needs to support offload but it seems like there should
> be some minimum set of functionality, like changing the MTU, supporting
> tagged traffic, or changing the MAC address.  Stuff a driver might need
> to be able to interoperate with other parts of DPDK (like bonding).
>
> > - Based on these criteria, should the AVP PMD be accepted or not?




-- 
François-Frédéric Ozog | Director Linaro Networking Group
T: +33.67221.6485
francois.ozog@linaro.org | Skype: ffozog


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
  2017-03-15  4:10         ` O'Driscoll, Tim
                             ` (3 preceding siblings ...)
  2017-03-15 14:02           ` Vincent JARDIN
@ 2017-03-16 23:17           ` Stephen Hemminger
  2017-03-16 23:41             ` [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back Vincent JARDIN
  4 siblings, 1 reply; 172+ messages in thread
From: Stephen Hemminger @ 2017-03-16 23:17 UTC (permalink / raw)
  To: O'Driscoll, Tim
  Cc: Vincent JARDIN, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, Wiles, Keith, thomas.monjalon,
	jerin.jacob, 3chas3

On Wed, 15 Mar 2017 04:10:56 +0000
"O'Driscoll, Tim" <tim.odriscoll@intel.com> wrote:

> I've included a couple of specific comments inline below, and a general comment here.
> 
> We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the maintainers file. They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves except for a request from John McNamara for an update to the Release Notes that was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.
> 
> I'd suggest that this is a good topic for the next Tech Board meeting.

This is a virtualization driver for supporting DPDK on a platform that provides an alternative
virtual network driver. I see no reason it shouldn't be part of DPDK. Given the unstable
ABI for drivers, supporting out-of-tree DPDK drivers is difficult. DPDK should try
to be inclusive and support as many environments as possible.


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-16 23:17           ` Stephen Hemminger
@ 2017-03-16 23:41             ` Vincent JARDIN
  2017-03-17  0:08               ` Wiles, Keith
  2017-03-17  0:11               ` Wiles, Keith
  0 siblings, 2 replies; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-16 23:41 UTC (permalink / raw)
  To: Stephen Hemminger, O'Driscoll, Tim, Legacy, Allain (Wind River)
  Cc: Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, Wiles, Keith, thomas.monjalon,
	jerin.jacob, 3chas3, stefanha, Markus Armbruster

Let's be back to 2014 with Qemu's thoughts on it,
+Stefan
 
https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026767.html

and
+Markus
 
https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026713.html

> 6. Device models belong into QEMU
>
>    Say you build an actual interface on top of ivshmem.  Then ivshmem in
>    QEMU together with the supporting host code outside QEMU (see 3.) and
>    the lower layer of the code using it in guests (kernel + user space)
>    provide something that to me very much looks like a device model.
>
>    Device models belong into QEMU.  It's what QEMU does.


Le 17/03/2017 à 00:17, Stephen Hemminger a écrit :
> On Wed, 15 Mar 2017 04:10:56 +0000
> "O'Driscoll, Tim" <tim.odriscoll@intel.com> wrote:
>
>> I've included a couple of specific comments inline below, and a general comment here.
>>
>> We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the maintainers file. They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves except for a request from John McNamara for an update to the Release Notes that was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.
>>
>> I'd suggest that this is a good topic for the next Tech Board meeting.
>
> This is a virtualization driver for supporting DPDK on platform that provides an alternative
> virtual network driver. I see no reason it shouldn't be part of DPDK. Given the unstable
> ABI for drivers, supporting out of tree DPDK drivers is difficult. The DPDK should try
> to be inclusive and support as many environments as possible.
>

On the Qemu mailing list, back in 2014, I pushed to build device models
over ivshmem, like AVP, but folks did not want us to abuse it. The Qemu
community wants to stay focused, so by being too inclusive we would be
abusing Qemu's capabilities.

Likewise, allowing any case in the name of being "inclusive" is not the
proper way to make sure that virtio gets all the focus it deserves.


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-16 23:41             ` [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back Vincent JARDIN
@ 2017-03-17  0:08               ` Wiles, Keith
  2017-03-17  0:15                 ` O'Driscoll, Tim
  2017-03-17  0:11               ` Wiles, Keith
  1 sibling, 1 reply; 172+ messages in thread
From: Wiles, Keith @ 2017-03-17  0:08 UTC (permalink / raw)
  To: Vincent JARDIN
  Cc: Stephen Hemminger, O'Driscoll, Tim, Legacy,
	Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, thomas.monjalon, jerin.jacob,
	3chas3, stefanha, Markus Armbruster


> On Mar 17, 2017, at 7:41 AM, Vincent JARDIN <vincent.jardin@6wind.com> wrote:
> 
> Let's be back to 2014 with Qemu's thoughts on it,
> +Stefan
> https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026767.html
> 
> and
> +Markus
> https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026713.html
> 
>> 6. Device models belong into QEMU
>> 
>>   Say you build an actual interface on top of ivshmem.  Then ivshmem in
>>   QEMU together with the supporting host code outside QEMU (see 3.) and
>>   the lower layer of the code using it in guests (kernel + user space)
>>   provide something that to me very much looks like a device model.
>> 
>>   Device models belong into QEMU.  It's what QEMU does.
> 
> 
> Le 17/03/2017 à 00:17, Stephen Hemminger a écrit :
>> On Wed, 15 Mar 2017 04:10:56 +0000
>> "O'Driscoll, Tim" <tim.odriscoll@intel.com> wrote:
>> 
>>> I've included a couple of specific comments inline below, and a general comment here.
>>> 
>>> We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the maintainers file. They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves except for a request from John McNamara for an update to the Release Notes that was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.
>>> 
>>> I'd suggest that this is a good topic for the next Tech Board meeting.
>> 
>> This is a virtualization driver for supporting DPDK on platform that provides an alternative
>> virtual network driver. I see no reason it shouldn't be part of DPDK. Given the unstable
>> ABI for drivers, supporting out of tree DPDK drivers is difficult. The DPDK should try
>> to be inclusive and support as many environments as possible.


+2!! for Stephen’s comment.

>> 
> 
> On Qemu mailing list, back to 2014, I did push to build models of devices over ivshmem, like AVP, but folks did not want that we abuse of it. The Qemu community wants that we avoid unfocusing. So, by being too much inclusive, we abuse of the Qemu's capabilities.
> 
> So, because of being "inclusive", we should allow any cases, it is not a proper way to make sure that virtio gets all the focuses it deserves.
> 
> 

Regards,
Keith



* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-16 23:41             ` [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back Vincent JARDIN
  2017-03-17  0:08               ` Wiles, Keith
@ 2017-03-17  0:11               ` Wiles, Keith
  2017-03-17  0:14                 ` Stephen Hemminger
  2017-03-17  0:31                 ` Vincent JARDIN
  1 sibling, 2 replies; 172+ messages in thread
From: Wiles, Keith @ 2017-03-17  0:11 UTC (permalink / raw)
  To: Vincent JARDIN
  Cc: Stephen Hemminger, O'Driscoll, Tim, Legacy,
	Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, thomas.monjalon, jerin.jacob,
	3chas3, stefanha, Markus Armbruster


> On Mar 17, 2017, at 7:41 AM, Vincent JARDIN <vincent.jardin@6wind.com> wrote:
> 
> Let's be back to 2014 with Qemu's thoughts on it,
> +Stefan
> https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026767.html
> 
> and
> +Markus
> https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026713.html
> 
>> 6. Device models belong into QEMU
>> 
>>   Say you build an actual interface on top of ivshmem.  Then ivshmem in
>>   QEMU together with the supporting host code outside QEMU (see 3.) and
>>   the lower layer of the code using it in guests (kernel + user space)
>>   provide something that to me very much looks like a device model.
>> 
>>   Device models belong into QEMU.  It's what QEMU does.
> 
> 
> Le 17/03/2017 à 00:17, Stephen Hemminger a écrit :
>> On Wed, 15 Mar 2017 04:10:56 +0000
>> "O'Driscoll, Tim" <tim.odriscoll@intel.com> wrote:
>> 
>>> I've included a couple of specific comments inline below, and a general comment here.
>>> 
>>> We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the maintainers file. They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves except for a request from John McNamara for an update to the Release Notes that was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.
>>> 
>>> I'd suggest that this is a good topic for the next Tech Board meeting.
>> 
>> This is a virtualization driver for supporting DPDK on platform that provides an alternative
>> virtual network driver. I see no reason it shouldn't be part of DPDK. Given the unstable
>> ABI for drivers, supporting out of tree DPDK drivers is difficult. The DPDK should try
>> to be inclusive and support as many environments as possible.
>> 
> 
> On Qemu mailing list, back to 2014, I did push to build models of devices over ivshmem, like AVP, but folks did not want that we abuse of it. The Qemu community wants that we avoid unfocusing. So, by being too much inclusive, we abuse of the Qemu's capabilities.
> 
> So, because of being "inclusive", we should allow any cases, it is not a proper way to make sure that virtio gets all the focuses it deserves.
> 
> 

Why are we bringing QEMU into the picture? It does not make a lot of sense to me. Stephen stated it well above, and I hope my comments were stating the same conclusion. I do not see your real reasons for not allowing this driver into DPDK; it seems like some other hidden agenda is at play here, but I am a paranoid person :-)


Regards,
Keith



* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-17  0:11               ` Wiles, Keith
@ 2017-03-17  0:14                 ` Stephen Hemminger
  2017-03-17  0:31                 ` Vincent JARDIN
  1 sibling, 0 replies; 172+ messages in thread
From: Stephen Hemminger @ 2017-03-17  0:14 UTC (permalink / raw)
  To: Wiles, Keith
  Cc: Vincent JARDIN, O'Driscoll, Tim, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, thomas.monjalon, jerin.jacob,
	3chas3, stefanha, Markus Armbruster

On Fri, 17 Mar 2017 00:11:10 +0000
"Wiles, Keith" <keith.wiles@intel.com> wrote:

> > On Mar 17, 2017, at 7:41 AM, Vincent JARDIN <vincent.jardin@6wind.com> wrote:
> > 
> > Let's be back to 2014 with Qemu's thoughts on it,
> > +Stefan
> > https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026767.html
> > 
> > and
> > +Markus
> > https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026713.html
> >   
> >> 6. Device models belong into QEMU
> >> 
> >>   Say you build an actual interface on top of ivshmem.  Then ivshmem in
> >>   QEMU together with the supporting host code outside QEMU (see 3.) and
> >>   the lower layer of the code using it in guests (kernel + user space)
> >>   provide something that to me very much looks like a device model.
> >> 
> >>   Device models belong into QEMU.  It's what QEMU does.  
> > 
> > 
> > On 17/03/2017 at 00:17, Stephen Hemminger wrote:
> >> On Wed, 15 Mar 2017 04:10:56 +0000
> >> "O'Driscoll, Tim" <tim.odriscoll@intel.com> wrote:
> >>   
> >>> I've included a couple of specific comments inline below, and a general comment here.
> >>> 
> >>> We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the maintainers file. They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves except for a request from John McNamara for an update to the Release Notes that was addressed in a later version. I think we should be welcoming this into DPDK rather than questioning/rejecting it.
> >>> 
> >>> I'd suggest that this is a good topic for the next Tech Board meeting.  
> >> 
> >> This is a virtualization driver for supporting DPDK on a platform that provides an alternative
> >> virtual network driver. I see no reason it shouldn't be part of DPDK. Given the unstable
> >> ABI for drivers, supporting out-of-tree DPDK drivers is difficult. The DPDK should try
> >> to be inclusive and support as many environments as possible.
> >>   
> > 
> > On the Qemu mailing list, back in 2014, I pushed to build device models over ivshmem, like AVP, but folks did not want us to abuse it. The Qemu community wants to avoid losing focus. So, by being too inclusive, we would abuse Qemu's capabilities.
> > 
> > So allowing any use case in the name of being "inclusive" is not a proper way to make sure that virtio gets all the focus it deserves.
> > 
> >   
> 
> Why are we bringing QEMU into the picture? It does not make a lot of sense to me. Stephen stated it well above, and I hope my comments were stating the same conclusion. I do not see your real reasons for not allowing this driver into DPDK; it seems like some other hidden agenda is at play here, but I am a paranoid person :-)

I am thinking of people already using Wind River systems. One can argue that they
should be using QEMU/KVM/virtio or 6WIND Virtual Accelerator, but it is not the
role of DPDK to be used to influence customers' architecture decisions.


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-17  0:08               ` Wiles, Keith
@ 2017-03-17  0:15                 ` O'Driscoll, Tim
  0 siblings, 0 replies; 172+ messages in thread
From: O'Driscoll, Tim @ 2017-03-17  0:15 UTC (permalink / raw)
  To: Wiles, Keith, Vincent JARDIN
  Cc: Stephen Hemminger, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, thomas.monjalon, jerin.jacob,
	3chas3, stefanha, Markus Armbruster

> From: Wiles, Keith
> 
> 
> > On Mar 17, 2017, at 7:41 AM, Vincent JARDIN <vincent.jardin@6wind.com> wrote:
> >
> > Let's go back to 2014 and Qemu's thoughts on it,
> > +Stefan
> > https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026767.html
> >
> > and
> > +Markus
> > https://lists.linuxfoundation.org/pipermail/virtualization/2014-June/026713.html
> >
> >> 6. Device models belong into QEMU
> >>
> >>   Say you build an actual interface on top of ivshmem.  Then ivshmem in
> >>   QEMU together with the supporting host code outside QEMU (see 3.) and
> >>   the lower layer of the code using it in guests (kernel + user space)
> >>   provide something that to me very much looks like a device model.
> >>
> >>   Device models belong into QEMU.  It's what QEMU does.
> >
> >
> > On 17/03/2017 at 00:17, Stephen Hemminger wrote:
> >> On Wed, 15 Mar 2017 04:10:56 +0000
> >> "O'Driscoll, Tim" <tim.odriscoll@intel.com> wrote:
> >>
> >>> I've included a couple of specific comments inline below, and a general comment here.
> >>>
> >>> We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't
> affect any of the core libraries. They're willing to maintain the driver and have included
> a patch to update the maintainers file. They've also included the relevant documentation
> changes. I haven't seen any negative comment on the patches themselves except for a
> request from John McNamara for an update to the Release Notes that was addressed in a
> later version. I think we should be welcoming this into DPDK rather than
> questioning/rejecting it.
> >>>
> >>> I'd suggest that this is a good topic for the next Tech Board meeting.
> >>
> >> This is a virtualization driver for supporting DPDK on a platform that provides an
> alternative virtual network driver. I see no reason it shouldn't be part of DPDK.
> Given the unstable ABI for drivers, supporting out-of-tree DPDK drivers is difficult.
> The DPDK should try to be inclusive and support as many environments as possible.
> 
> 
> +2!! for Stephen’s comment.

+1 (I'm only half as important as Keith :-)

I don't think there's any doubt over the benefit of virtio and the fact that it should be our preferred solution. I'm sure everybody agrees on that. The issue is whether we should block alternative solutions. I don't think we should.

> 
> >>
> >
> > On the Qemu mailing list, back in 2014, I pushed to build device models
> > over ivshmem, like AVP, but folks did not want us to abuse it. The Qemu
> > community wants to avoid losing focus. So, by being too inclusive, we
> > would abuse Qemu's capabilities.
> >
> > So allowing any use case in the name of being "inclusive" is not a proper
> > way to make sure that virtio gets all the focus it deserves.
> >
> >
> 
> Regards,
> Keith



* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-17  0:11               ` Wiles, Keith
  2017-03-17  0:14                 ` Stephen Hemminger
@ 2017-03-17  0:31                 ` Vincent JARDIN
  2017-03-17  0:53                   ` Wiles, Keith
  1 sibling, 1 reply; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-17  0:31 UTC (permalink / raw)
  To: Wiles, Keith
  Cc: Stephen Hemminger, O'Driscoll, Tim, Legacy,
	Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, thomas.monjalon, jerin.jacob,
	3chas3, stefanha, Markus Armbruster

On 17/03/2017 at 01:11, Wiles, Keith wrote:
> it seems like some other hidden agenda is at play here, but I am a paranoid person :-)

Keith, please stop such invalid arguments! It is nonsense.

We need to understand the benefits of diverging from virtio, since what is
at stake here is creating new device models for Qemu while bypassing qemu
using ivshmem. I do not care whether it is memnic, AVP, or xyz. But I do
care about Qemu's and virtio's performance, along with avoiding too many
NIC models when one model can be used to have uniform I/O.


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-17  0:31                 ` Vincent JARDIN
@ 2017-03-17  0:53                   ` Wiles, Keith
  2017-03-17  8:48                     ` Thomas Monjalon
  0 siblings, 1 reply; 172+ messages in thread
From: Wiles, Keith @ 2017-03-17  0:53 UTC (permalink / raw)
  To: Vincent JARDIN
  Cc: Stephen Hemminger, O'Driscoll, Tim, Legacy,
	Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Richardson, Bruce, Mcnamara, John, thomas.monjalon, jerin.jacob,
	3chas3, stefanha, Markus Armbruster


> On Mar 17, 2017, at 8:31 AM, Vincent JARDIN <vincent.jardin@6wind.com> wrote:
> 
> On 17/03/2017 at 01:11, Wiles, Keith wrote:
>> it seems like some other hidden agenda is at play here, but I am a paranoid person :-)
> 
> Keith, please stop such invalid arguments! It is nonsense.
> 
> We need to understand the benefits of diverging from virtio, since what is at stake here is creating new device models for Qemu while bypassing qemu using ivshmem. I do not care whether it is memnic, AVP, or xyz. But I do care about Qemu's and virtio's performance, along with avoiding too many NIC models when one model can be used to have uniform I/O.

Vincent, the problem is that just because you state my argument is invalid does not make it so. By your logic we would have to remove a number of drivers from DPDK, as they provide the same functionality. Taken to the extreme, we would have to start removing drivers that did not perform well against another PMD or piece of hardware. You have even requested performance data, as though AVP must be faster to be considered for DPDK.

Now you are stating that virtio is the only way to provide this type of transport, which is not the case. Using virtio or some other method is up to the developers of the product, as they may be addressing a need that is not just speed or security or something else.

Restricting which PMDs go into DPDK is going to make it seem like we (DPDK) do not want anything but our own code or ideas of what is best. PMDs are many, and all of them handle Ethernet packets; should we start restricting PMDs to only one supported Ethernet device? No.

Regardless of whether the transport layer is ivshmem, virtio shared memory, Ethernet, a serial line at 9600 baud, or two squirrels transferring nuts, it just does not make sense to restrict a transport type like ivshmem (which AVP is using) over any other transport type for DPDK and the community.

> 
> 

Regards,
Keith



* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-17  0:53                   ` Wiles, Keith
@ 2017-03-17  8:48                     ` Thomas Monjalon
  2017-03-17 10:15                       ` Legacy, Allain
                                         ` (2 more replies)
  0 siblings, 3 replies; 172+ messages in thread
From: Thomas Monjalon @ 2017-03-17  8:48 UTC (permalink / raw)
  To: Wiles, Keith, Jason Wang, Michael S. Tsirkin
  Cc: Vincent JARDIN, Stephen Hemminger, O'Driscoll, Tim, Legacy,
	Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Markus Armbruster, Stefan Hajnoczi

2017-03-17 00:53, Wiles, Keith:
> Regardless of whether the transport layer is ivshmem, virtio shared memory, Ethernet, a serial line at 9600 baud, or two squirrels transferring nuts, it just does not make sense to restrict a transport type like ivshmem (which AVP is using) over any other transport type for DPDK and the community.

Thank you Keith for opening the range of DPDK possibilities :)
Are you currently working on a poll mode driver for squirrels?
How do you poll squirrels? Is it approved by WWF? ;)

And more seriously, thank you for allowing us to take a breath in
this discussion.

I think there is one interesting technological point in this thread.
We are discussing IVSHMEM, but its support status in Qemu is unclear.
This feature is not in the MAINTAINERS file of Qemu.
Qemu maintainers, please: what is the future of IVSHMEM?


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-17  8:48                     ` Thomas Monjalon
@ 2017-03-17 10:15                       ` Legacy, Allain
  2017-03-17 13:52                       ` Michael S. Tsirkin
       [not found]                       ` <20170317093320.GA11116@stefanha-x1.localdomain>
  2 siblings, 0 replies; 172+ messages in thread
From: Legacy, Allain @ 2017-03-17 10:15 UTC (permalink / raw)
  To: Thomas Monjalon, WILES, ROGER, Jason Wang, Michael S. Tsirkin
  Cc: Vincent JARDIN, Stephen Hemminger, O'Driscoll, Tim, YIGIT,
	FERRUH, dev, Jolliffe, Ian, Markus Armbruster, Stefan Hajnoczi

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Friday, March 17, 2017 4:49 AM

<...>

> I think there is one interesting technological point in this thread.
> We are discussing IVSHMEM, but its support status in Qemu is unclear.
> This feature is not in the MAINTAINERS file of Qemu.
> Qemu maintainers, please: what is the future of IVSHMEM?

I need to clarify a technical point.  AVP is not ivshmem; at least not in the strict sense of the term.  We initially started out using ivshmem as-is, but then we needed to fork from it in order to add functionality to better integrate with our solution.  AVP is now a separate Wind River specific extension, completely independent of any past ivshmem implementation.  I used the term ivshmem in the documentation to help the reader understand, by comparison, what type of shared memory model we are using.  I can remove that reference if it leads to confusion or a desire to make direct comparisons with ivshmem.
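For readers unfamiliar with this class of device: a shared-memory NIC typically exchanges packet buffers through single-producer/single-consumer FIFOs mapped into both the guest and the host. The sketch below is purely illustrative — the names and layout are hypothetical and are not the actual rte_avp_fifo.h implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical SPSC FIFO of buffer handles living in shared memory.
 * Each index is only ever advanced by one side (head by the producer,
 * tail by the consumer), so no locks are needed as long as the index
 * is published after the payload slot is written. */
#define FIFO_SIZE 8  /* must be a power of two */

struct shm_fifo {
	volatile uint32_t head;       /* written by producer */
	volatile uint32_t tail;       /* written by consumer */
	uint64_t buffers[FIFO_SIZE];  /* e.g. guest-physical buffer addresses */
};

static int fifo_put(struct shm_fifo *f, uint64_t buf)
{
	uint32_t head = f->head;

	if (head - f->tail == FIFO_SIZE)
		return -1;            /* full */
	f->buffers[head & (FIFO_SIZE - 1)] = buf;
	f->head = head + 1;           /* publish after the payload */
	return 0;
}

static int fifo_get(struct shm_fifo *f, uint64_t *buf)
{
	uint32_t tail = f->tail;

	if (tail == f->head)
		return -1;            /* empty */
	*buf = f->buffers[tail & (FIFO_SIZE - 1)];
	f->tail = tail + 1;
	return 0;
}
```

On real hardware the index updates would additionally need memory barriers appropriate to the platform; this model only shows the ownership split that makes the scheme lock-free.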


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-17  8:48                     ` Thomas Monjalon
  2017-03-17 10:15                       ` Legacy, Allain
@ 2017-03-17 13:52                       ` Michael S. Tsirkin
  2017-03-20 22:30                         ` Hobywan Kenoby
       [not found]                       ` <20170317093320.GA11116@stefanha-x1.localdomain>
  2 siblings, 1 reply; 172+ messages in thread
From: Michael S. Tsirkin @ 2017-03-17 13:52 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Wiles, Keith, Jason Wang, Vincent JARDIN, Stephen Hemminger,
	O'Driscoll, Tim, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Markus Armbruster, Stefan Hajnoczi

On Fri, Mar 17, 2017 at 09:48:38AM +0100, Thomas Monjalon wrote:
> I think there is one interesting technological point in this thread.
> We are discussing IVSHMEM, but its support status in Qemu is unclear.
> This feature is not in the MAINTAINERS file of Qemu.
> Qemu maintainers, please: what is the future of IVSHMEM?

You should try asking this question on the qemu mailing list. Looking at
archives, Jan Kiszka was the last one who expressed some interest in
this device.

-- 
MST


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-17 13:52                       ` Michael S. Tsirkin
@ 2017-03-20 22:30                         ` Hobywan Kenoby
  2017-03-21 11:06                           ` Thomas Monjalon
  0 siblings, 1 reply; 172+ messages in thread
From: Hobywan Kenoby @ 2017-03-20 22:30 UTC (permalink / raw)
  To: Michael S. Tsirkin, Thomas Monjalon
  Cc: Wiles, Keith, Jason Wang, Vincent JARDIN, Stephen Hemminger,
	O'Driscoll, Tim, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Markus Armbruster, Stefan Hajnoczi

If AVP were an upstream device in Qemu or the Linux kernel, it would be very natural to have a DPDK PMD (setting aside my comments on a preferred single virtual device).

As far as I know, this is not the case.

Because of that, one could see the AVP PMD as a way to leverage open source to promote proprietary technology. That is the heart of the problem with the proposal.

So I would recommend waiting for upstream qemu support before considering AVP for DPDK.

FF
________________________________
From: dev <dev-bounces@dpdk.org> on behalf of Michael S. Tsirkin <mst@redhat.com>
Sent: Friday, March 17, 2017 2:52:06 PM
To: Thomas Monjalon
Cc: Wiles, Keith; Jason Wang; Vincent JARDIN; Stephen Hemminger; O'Driscoll, Tim; Legacy, Allain (Wind River); Yigit, Ferruh; dev@dpdk.org; Jolliffe, Ian (Wind River); Markus Armbruster; Stefan Hajnoczi
Subject: Re: [dpdk-dev] [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back

On Fri, Mar 17, 2017 at 09:48:38AM +0100, Thomas Monjalon wrote:
> I think there is one interesting technological point in this thread.
> We are discussing IVSHMEM, but its support status in Qemu is unclear.
> This feature is not in the MAINTAINERS file of Qemu.
> Qemu maintainers, please: what is the future of IVSHMEM?

You should try asking this question on the qemu mailing list. Looking at
archives, Jan Kiszka was the last one who expressed some interest in
this device.

--
MST


* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
  2017-03-20 22:30                         ` Hobywan Kenoby
@ 2017-03-21 11:06                           ` Thomas Monjalon
  0 siblings, 0 replies; 172+ messages in thread
From: Thomas Monjalon @ 2017-03-21 11:06 UTC (permalink / raw)
  To: Hobywan Kenoby
  Cc: Michael S. Tsirkin, Wiles, Keith, Jason Wang, Vincent JARDIN,
	Stephen Hemminger, O'Driscoll, Tim, Legacy,
	Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Markus Armbruster, Stefan Hajnoczi

Yes, DPDK can be used to promote underlying commercial hardware or software,
as long as it is fair and not intrusive.
Please promote more products by becoming an Open Source contributor ;)


2017-03-20 22:30, Hobywan Kenoby:
> If AVP were an upstream device in Qemu or the Linux kernel, it would be very natural to have a DPDK PMD (setting aside my comments on a preferred single virtual device).
> 
> As far as I know, this is not the case.
> 
> Because of that, one could see the AVP PMD as a way to leverage open source to promote proprietary technology. That is the heart of the problem with the proposal.
> 
> So I would recommend waiting for upstream qemu support before considering AVP for DPDK.
> 
> FF
> ________________________________
> From: dev <dev-bounces@dpdk.org> on behalf of Michael S. Tsirkin <mst@redhat.com>
> Sent: Friday, March 17, 2017 2:52:06 PM
> To: Thomas Monjalon
> Cc: Wiles, Keith; Jason Wang; Vincent JARDIN; Stephen Hemminger; O'Driscoll, Tim; Legacy, Allain (Wind River); Yigit, Ferruh; dev@dpdk.org; Jolliffe, Ian (Wind River); Markus Armbruster; Stefan Hajnoczi
> Subject: Re: [dpdk-dev] [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
> 
> On Fri, Mar 17, 2017 at 09:48:38AM +0100, Thomas Monjalon wrote:
> > I think there is one interesting technological point in this thread.
> > We are discussing IVSHMEM, but its support status in Qemu is unclear.
> > This feature is not in the MAINTAINERS file of Qemu.
> > Qemu maintainers, please: what is the future of IVSHMEM?
> 
> You should try asking this question on the qemu mailing list. Looking at
> archives, Jan Kiszka was the last one who expressed some interest in
> this device.
> 
> --
> MST


* [PATCH v5 00/14] Wind River Systems AVP PMD
  2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
                         ` (17 preceding siblings ...)
  2017-03-14 17:37       ` [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? Vincent JARDIN
@ 2017-03-23 11:23       ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 01/14] drivers/net: adds AVP PMD base files Allain Legacy
                           ` (15 more replies)
  18 siblings, 16 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:23 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

This patch series submits an initial version of the AVP PMD from Wind River
Systems.  The series includes shared header files, driver implementation,
and changes to documentation files in support of this new driver.  The AVP
driver is a shared memory based device.  It is intended to be used as a PMD
within a virtual machine running on a Wind River virtualization platform.
See: http://www.windriver.com/products/titanium-cloud/

It enables optimized packet throughput without requiring any packet
processing in qemu. This allows us to provide our customers with a
significant performance increase for both DPDK and non-DPDK applications
in the VM.  Since our AVP implementation supports VM live-migration, it
is viewed as a better alternative to PCI passthrough or PCI SR-IOV,
neither of which supports VM live-migration without manual intervention
or significant performance penalties.

Since the initial implementation of AVP devices, vhost-user has become part
of the qemu offering with a significant performance increase over the
original virtio implementation.  However, vhost-user still does not achieve
the level of performance that the AVP device can provide to our customers
for DPDK based guests.

A number of our customers have requested that we upstream the driver to
dpdk.org.

v2:
* Fixed coding style violations that slipped in accidentally because of an
  out of date checkpatch.pl from an older kernel.

v3:
* Updated 17.05 release notes to add a section for this new PMD
* Added additional info to the AVP nic guide document to clarify the
  benefit of using AVP over virtio.
* Fixed spelling error in debug log missed by local checkpatch.pl version
* Split the transmit patch to separate the stats functions as they
  accidentally got squashed in the last patchset.
* Fixed debug log strings so that they exceed 80 characters rather than
  span multiple lines.
* Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_* instead
* Replaced usage of RTE_WRITE32 and RTE_READ32 with rte_write32_relaxed
  and rte_read32_relaxed.
* Declared rte_pci_id table as const
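(On the v3 note above about moving to the relaxed accessors: the relaxed variants perform the device-register access without the ordering barrier that the plain accessors issue, which is safe when the access does not need to be ordered against other memory operations. The snippet below is a rough, self-contained model of that distinction — it is not DPDK's actual rte_io.h code, which uses rte_io_wmb()/rte_io_rmb() style barriers:)

```c
#include <assert.h>
#include <stdint.h>

/* Model of relaxed vs ordered 32-bit register access: the relaxed
 * variant is a plain volatile access; the ordered variant adds a full
 * barrier so the store cannot be reordered with earlier memory ops. */
static inline void write32_relaxed(uint32_t value, volatile void *addr)
{
	*(volatile uint32_t *)addr = value;
}

static inline uint32_t read32_relaxed(const volatile void *addr)
{
	return *(const volatile uint32_t *)addr;
}

static inline void write32(uint32_t value, volatile void *addr)
{
	__sync_synchronize();           /* barrier before the store */
	write32_relaxed(value, addr);
}
```

A driver uses the relaxed form for back-to-back accesses to the same register block and the ordered form when, for example, a doorbell write must be seen only after descriptor memory has been filled in.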

v4:
* Split our interrupt handlers to a separate patch and moved to the end
  of the series.
* Removed memset() from stats_get API
* Removed usage of RTE_AVP_ALIGNMENT
* Removed unnecessary parentheses in rte_avp_common.h
* Removed unneeded "goto unlock" where there are no statements in between
  the goto and the end of the function.
* Re-tested with pktgen and found that rte_eth_tx_burst() is being called
  with 0 packets even before starting traffic which resulted in
  incrementing oerrors; fixed in transmit patch.

v5:
* Updated documentation to remove references to ivshmem as it led to
  confusion about whether AVP is exactly like ivshmem or simply based on
  how ivshmem exports memory to a VM via a PCI device.
* Restructured first set of patches to condense them down to a base patch
  with the files needed to apply subsequent patches.
* Removed static prototypes from init/uninit functions in avp_ethdev.c
* Moved MAC addresses init to the device initialization patch because it
  is setup by the avp_dev_create() function.
* Split the changes to the avp.ini features file so that features are
  marked as enabled in the patch that actually enables them.

Allain Legacy (14):
  drivers/net: adds AVP PMD base files
  net/avp: public header files
  net/avp: debug log macros
  net/avp: driver registration
  net/avp: device initialization
  net/avp: device configuration
  net/avp: queue setup and release
  net/avp: packet receive functions
  net/avp: packet transmit functions
  net/avp: device statistics operations
  net/avp: device promiscuous functions
  net/avp: device start and stop operations
  net/avp: migration interrupt handling
  doc: adds information related to the AVP PMD

 MAINTAINERS                                  |    6 +
 config/common_base                           |    9 +
 config/common_linuxapp                       |    1 +
 config/defconfig_i686-native-linuxapp-gcc    |    5 +
 config/defconfig_i686-native-linuxapp-icc    |    5 +
 config/defconfig_x86_x32-native-linuxapp-gcc |    5 +
 doc/guides/nics/avp.rst                      |  107 ++
 doc/guides/nics/features/avp.ini             |   16 +
 doc/guides/nics/index.rst                    |    1 +
 doc/guides/rel_notes/release_17_05.rst       |    5 +
 drivers/net/Makefile                         |    1 +
 drivers/net/avp/Makefile                     |   61 +
 drivers/net/avp/avp_ethdev.c                 | 2294 ++++++++++++++++++++++++++
 drivers/net/avp/avp_logs.h                   |   59 +
 drivers/net/avp/rte_avp_common.h             |  416 +++++
 drivers/net/avp/rte_avp_fifo.h               |  157 ++
 drivers/net/avp/rte_pmd_avp_version.map      |    4 +
 mk/rte.app.mk                                |    1 +
 18 files changed, 3153 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini
 create mode 100644 drivers/net/avp/Makefile
 create mode 100644 drivers/net/avp/avp_ethdev.c
 create mode 100644 drivers/net/avp/avp_logs.h
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

-- 
1.8.3.1


* [PATCH v5 01/14] drivers/net: adds AVP PMD base files
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 02/14] net/avp: public header files Allain Legacy
                           ` (14 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

This commit introduces the AVP PMD file structure without adding any actual
driver functionality.  Functional blocks will be added in later patches.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 MAINTAINERS                             |  5 ++++
 config/common_base                      |  5 ++++
 doc/guides/nics/features/avp.ini        |  6 +++++
 drivers/net/Makefile                    |  1 +
 drivers/net/avp/Makefile                | 47 +++++++++++++++++++++++++++++++++
 drivers/net/avp/rte_pmd_avp_version.map |  4 +++
 6 files changed, 68 insertions(+)
 create mode 100644 doc/guides/nics/features/avp.ini
 create mode 100644 drivers/net/avp/Makefile
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 39bc78e..4e9aa00 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -416,6 +416,11 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+Wind River AVP PMD
+M: Allain Legacy <allain.legacy@windriver.com>
+M: Matt Peters <matt.peters@windriver.com>
+F: drivers/net/avp
+
 
 Crypto Drivers
 --------------
diff --git a/config/common_base b/config/common_base
index 37aa1e1..a4ad577 100644
--- a/config/common_base
+++ b/config/common_base
@@ -353,6 +353,11 @@ CONFIG_RTE_LIBRTE_QEDE_FW=""
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 
 #
+# Compile WRS accelerated virtual port (AVP) guest PMD driver
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
+
+#
 # Compile the TAP PMD
 # It is enabled by default for Linux only.
 #
diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
new file mode 100644
index 0000000..4353929
--- /dev/null
+++ b/doc/guides/nics/features/avp.ini
@@ -0,0 +1,6 @@
+;
+; Supported features of the 'AVP' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index a16f25e..52b9297 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -32,6 +32,7 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
+DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DIRS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
new file mode 100644
index 0000000..c6e03d5
--- /dev/null
+++ b/drivers/net/avp/Makefile
@@ -0,0 +1,47 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2013-2017, Wind River Systems, Inc. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Wind River Systems nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avp.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+
+EXPORT_MAP := rte_pmd_avp_version.map
+
+LIBABIVER := 1
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
new file mode 100644
index 0000000..af8f3f4
--- /dev/null
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+    local: *;
+};
-- 
1.8.3.1


* [PATCH v5 02/14] net/avp: public header files
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 01/14] drivers/net: adds AVP PMD base files Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 03/14] net/avp: debug log macros Allain Legacy
                           ` (13 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds public/exported header files for the AVP PMD.  The AVP device is a
shared memory based device.  The structures and constants that define its
method of operation must be visible to both the PMD and the
host DPDK application.  They must not change without proper version
controls and updates to both the hypervisor DPDK application and the PMD.

The hypervisor DPDK application is a Wind River Systems proprietary
virtual switch.
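Since both sides interpret the same shared-memory layout, a common pattern for such shared headers is to embed a magic number and layout version that each side verifies at attach time. The names below are hypothetical, chosen for illustration — they are not the actual contents of rte_avp_common.h:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical descriptor at the start of the shared memory region;
 * both the hypervisor application and the guest PMD check magic and
 * version before touching any other field. */
#define AVP_EXAMPLE_MAGIC   0x41565000u  /* 'A' 'V' 'P' '\0' */
#define AVP_EXAMPLE_VERSION 1u

struct avp_example_desc {
	uint32_t magic;    /* identifies the region as an AVP device */
	uint32_t version;  /* layout version; bump on incompatible change */
};

/* Returns 0 when the layout is one this build understands,
 * -1 on a foreign region, -2 on a version mismatch. */
static int avp_example_check(const struct avp_example_desc *d)
{
	if (d->magic != AVP_EXAMPLE_MAGIC)
		return -1;
	if (d->version != AVP_EXAMPLE_VERSION)
		return -2;
	return 0;
}
```

Failing this check at probe time, rather than on first packet, keeps an out-of-sync hypervisor and guest from silently corrupting each other's queues.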

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/Makefile         |   5 +
 drivers/net/avp/rte_avp_common.h | 416 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avp/rte_avp_fifo.h   | 157 +++++++++++++++
 3 files changed, 578 insertions(+)
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h

diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index c6e03d5..68a0fa5 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -44,4 +44,9 @@ EXPORT_MAP := rte_pmd_avp_version.map
 
 LIBABIVER := 1
 
+# install public header files to enable compilation of the hypervisor-level
+# DPDK application
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/rte_avp_common.h b/drivers/net/avp/rte_avp_common.h
new file mode 100644
index 0000000..31d763e
--- /dev/null
+++ b/drivers/net/avp/rte_avp_common.h
@@ -0,0 +1,416 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2017 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2017 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_COMMON_H_
+#define _RTE_AVP_COMMON_H_
+
+#ifdef __KERNEL__
+#include <linux/if.h>
+#endif
+
+/**
+ * AVP name is part of network device name.
+ */
+#define RTE_AVP_NAMESIZE 32
+
+/**
+ * AVP alias is a user-defined value used for lookups from secondary
+ * processes.  Typically, this is a UUID.
+ */
+#define RTE_AVP_ALIASSIZE 128
+
+/*
+ * Request id.
+ */
+enum rte_avp_req_id {
+	RTE_AVP_REQ_UNKNOWN = 0,
+	RTE_AVP_REQ_CHANGE_MTU,
+	RTE_AVP_REQ_CFG_NETWORK_IF,
+	RTE_AVP_REQ_CFG_DEVICE,
+	RTE_AVP_REQ_SHUTDOWN_DEVICE,
+	RTE_AVP_REQ_MAX,
+};
+
+/**@{ AVP device driver types */
+#define RTE_AVP_DRIVER_TYPE_UNKNOWN 0
+#define RTE_AVP_DRIVER_TYPE_DPDK 1
+#define RTE_AVP_DRIVER_TYPE_KERNEL 2
+#define RTE_AVP_DRIVER_TYPE_QEMU 3
+/**@} */
+
+/**@{ AVP device operational modes */
+#define RTE_AVP_MODE_HOST 0 /**< AVP interface created in host */
+#define RTE_AVP_MODE_GUEST 1 /**< AVP interface created for export to guest */
+#define RTE_AVP_MODE_TRACE 2 /**< AVP interface created for packet tracing */
+/**@} */
+
+/*
+ * Structure for AVP queue configuration query request/result
+ */
+struct rte_avp_device_config {
+	uint64_t device_id;	/**< Unique system identifier */
+	uint32_t driver_type; /**< Device Driver type */
+	uint32_t driver_version; /**< Device Driver version */
+	uint32_t features; /**< Negotiated features */
+	uint16_t num_tx_queues;	/**< Number of active transmit queues */
+	uint16_t num_rx_queues;	/**< Number of active receive queues */
+	uint8_t if_up; /**< 1: interface up, 0: interface down */
+} __attribute__ ((__packed__));
+
+/*
+ * Structure for AVP request.
+ */
+struct rte_avp_request {
+	uint32_t req_id; /**< Request id */
+	union {
+		uint32_t new_mtu; /**< New MTU */
+		uint8_t if_up;	/**< 1: interface up, 0: interface down */
+		struct rte_avp_device_config config; /**< Queue configuration */
+	};
+	int32_t result;	/**< Result for processing request */
+} __attribute__ ((__packed__));
+
+/*
+ * FIFO struct mapped in a shared memory.  It describes a circular buffer FIFO.
+ * Write and read should wrap around.  The FIFO is empty when write == read.
+ * Writing should never overwrite the read position.
+ */
+struct rte_avp_fifo {
+	volatile unsigned int write; /**< Next position to be written */
+	volatile unsigned int read; /**< Next position to be read */
+	unsigned int len; /**< Circular buffer length */
+	unsigned int elem_size; /**< Pointer size - for 32/64 bit OS */
+	void *volatile buffer[0]; /**< The buffer contains mbuf pointers */
+};
+
+
+/*
+ * AVP packet buffer header used to define the exchange of packet data.
+ */
+struct rte_avp_desc {
+	uint64_t pad0;
+	void *pkt_mbuf; /**< Reference to packet mbuf */
+	uint8_t pad1[14];
+	uint16_t ol_flags; /**< Offload features. */
+	void *next;	/**< Reference to next buffer in chain */
+	void *data;	/**< Start address of data in segment buffer. */
+	uint16_t data_len; /**< Amount of data in segment buffer. */
+	uint8_t nb_segs; /**< Number of segments */
+	uint8_t pad2;
+	uint16_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+	uint32_t pad3;
+	uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order). */
+	uint32_t pad4;
+} __attribute__ ((__aligned__(RTE_CACHE_LINE_SIZE), __packed__));
+
+
+/**@{ AVP device features */
+#define RTE_AVP_FEATURE_VLAN_OFFLOAD (1 << 0) /**< Emulated HW VLAN offload */
+/**@} */
+
+
+/**@{ Offload feature flags */
+#define RTE_AVP_TX_VLAN_PKT 0x0001 /**< TX packet is a 802.1q VLAN packet. */
+#define RTE_AVP_RX_VLAN_PKT 0x0800 /**< RX packet is a 802.1q VLAN packet. */
+/**@} */
+
+
+/**@{ AVP PCI identifiers */
+#define RTE_AVP_PCI_VENDOR_ID   0x1af4
+#define RTE_AVP_PCI_DEVICE_ID   0x1110
+/**@} */
+
+/**@{ AVP PCI subsystem identifiers */
+#define RTE_AVP_PCI_SUB_VENDOR_ID RTE_AVP_PCI_VENDOR_ID
+#define RTE_AVP_PCI_SUB_DEVICE_ID 0x1104
+/**@} */
+
+/**@{ AVP PCI BAR definitions */
+#define RTE_AVP_PCI_MMIO_BAR   0
+#define RTE_AVP_PCI_MSIX_BAR   1
+#define RTE_AVP_PCI_MEMORY_BAR 2
+#define RTE_AVP_PCI_MEMMAP_BAR 4
+#define RTE_AVP_PCI_DEVICE_BAR 5
+#define RTE_AVP_PCI_MAX_BAR    6
+/**@} */
+
+/**@{ AVP PCI BAR name definitions */
+#define RTE_AVP_MMIO_BAR_NAME   "avp-mmio"
+#define RTE_AVP_MSIX_BAR_NAME   "avp-msix"
+#define RTE_AVP_MEMORY_BAR_NAME "avp-memory"
+#define RTE_AVP_MEMMAP_BAR_NAME "avp-memmap"
+#define RTE_AVP_DEVICE_BAR_NAME "avp-device"
+/**@} */
+
+/**@{ AVP PCI MSI-X vectors */
+#define RTE_AVP_MIGRATION_MSIX_VECTOR 0	/**< Migration interrupts */
+#define RTE_AVP_MAX_MSIX_VECTORS 1
+/**@} */
+
+/**@{ AVP Migration status/ack register values */
+#define RTE_AVP_MIGRATION_NONE      0 /**< Migration never executed */
+#define RTE_AVP_MIGRATION_DETACHED  1 /**< Device attached during migration */
+#define RTE_AVP_MIGRATION_ATTACHED  2 /**< Device reattached during migration */
+#define RTE_AVP_MIGRATION_ERROR     3 /**< Device failed to attach/detach */
+/**@} */
+
+/**@{ AVP MMIO Register Offsets */
+#define RTE_AVP_REGISTER_BASE 0
+#define RTE_AVP_INTERRUPT_MASK_OFFSET (RTE_AVP_REGISTER_BASE + 0)
+#define RTE_AVP_INTERRUPT_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 4)
+#define RTE_AVP_MIGRATION_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 8)
+#define RTE_AVP_MIGRATION_ACK_OFFSET (RTE_AVP_REGISTER_BASE + 12)
+/**@} */
+
+/**@{ AVP Interrupt Status Mask */
+#define RTE_AVP_MIGRATION_INTERRUPT_MASK (1 << 1)
+#define RTE_AVP_APP_INTERRUPTS_MASK      0xFFFFFFFF
+#define RTE_AVP_NO_INTERRUPTS_MASK       0
+/**@} */
+
+/*
+ * Maximum number of memory regions to export
+ */
+#define RTE_AVP_MAX_MAPS  2048
+
+/*
+ * Description of a single memory region
+ */
+struct rte_avp_memmap {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * AVP memory mapping validation marker
+ */
+#define RTE_AVP_MEMMAP_MAGIC 0x20131969
+
+/**@{  AVP memory map versions */
+#define RTE_AVP_MEMMAP_VERSION_1 1
+#define RTE_AVP_MEMMAP_VERSION RTE_AVP_MEMMAP_VERSION_1
+/**@} */
+
+/*
+ * Defines a list of memory regions exported from the host to the guest
+ */
+struct rte_avp_memmap_info {
+	uint32_t magic; /**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+	uint32_t nb_maps;
+	struct rte_avp_memmap maps[RTE_AVP_MAX_MAPS];
+};
+
+/*
+ * AVP device memory validation marker
+ */
+#define RTE_AVP_DEVICE_MAGIC 0x20131975
+
+/**@{  AVP device map versions
+ * WARNING:  do not change the format or names of these variables.  They are
+ * automatically parsed from the build system to generate the SDK package
+ * name.
+ **/
+#define RTE_AVP_RELEASE_VERSION_1 1
+#define RTE_AVP_RELEASE_VERSION RTE_AVP_RELEASE_VERSION_1
+#define RTE_AVP_MAJOR_VERSION_0 0
+#define RTE_AVP_MAJOR_VERSION_1 1
+#define RTE_AVP_MAJOR_VERSION_2 2
+#define RTE_AVP_MAJOR_VERSION RTE_AVP_MAJOR_VERSION_2
+#define RTE_AVP_MINOR_VERSION_0 0
+#define RTE_AVP_MINOR_VERSION_1 1
+#define RTE_AVP_MINOR_VERSION_13 13
+#define RTE_AVP_MINOR_VERSION RTE_AVP_MINOR_VERSION_13
+/**@} */
+
+
+/**
+ * Generates a 32-bit version number from the specified version number
+ * components
+ */
+#define RTE_AVP_MAKE_VERSION(_release, _major, _minor) \
+((((_release) & 0xffff) << 16) | (((_major) & 0xff) << 8) | ((_minor) & 0xff))
+
+
+/**
+ * Represents the current version of the AVP host driver
+ * WARNING:  in the current development branch the host and guest driver
+ * version should always be the same.  When patching guest features back to
+ * GA releases the host version number should not be updated unless there was
+ * an actual change made to the host driver.
+ */
+#define RTE_AVP_CURRENT_HOST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_0, \
+		     RTE_AVP_MINOR_VERSION_1)
+
+
+/**
+ * Represents the current version of the AVP guest drivers
+ */
+#define RTE_AVP_CURRENT_GUEST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_2, \
+		     RTE_AVP_MINOR_VERSION_13)
+
+/**
+ * Access AVP device version values
+ */
+#define RTE_AVP_GET_RELEASE_VERSION(_version) (((_version) >> 16) & 0xffff)
+#define RTE_AVP_GET_MAJOR_VERSION(_version) (((_version) >> 8) & 0xff)
+#define RTE_AVP_GET_MINOR_VERSION(_version) ((_version) & 0xff)
+/**@}*/
+
+
+/**
+ * Remove the minor version number so that only the release and major versions
+ * are used for comparisons.
+ */
+#define RTE_AVP_STRIP_MINOR_VERSION(_version) ((_version) >> 8)
+
+
+/**
+ * Defines the number of mbuf pools supported per device (1 per socket)
+ */
+#define RTE_AVP_MAX_MEMPOOLS 8
+
+/*
+ * Defines address translation parameters for each supported mbuf pool
+ */
+struct rte_avp_mempool_info {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * Struct used to create an AVP device. Passed to the kernel in an IOCTL call
+ * via inter-VM shared memory when used in a guest.
+ */
+struct rte_avp_device_info {
+	uint32_t magic;	/**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+
+	char ifname[RTE_AVP_NAMESIZE];	/**< Network device name for AVP */
+
+	phys_addr_t tx_phys;
+	phys_addr_t rx_phys;
+	phys_addr_t alloc_phys;
+	phys_addr_t free_phys;
+
+	uint32_t features; /**< Supported feature bitmap */
+	uint8_t min_rx_queues; /**< Minimum supported receive/free queues */
+	uint8_t num_rx_queues; /**< Recommended number of receive/free queues */
+	uint8_t max_rx_queues; /**< Maximum supported receive/free queues */
+	uint8_t min_tx_queues; /**< Minimum supported transmit/alloc queues */
+	uint8_t num_tx_queues;
+	/**< Recommended number of transmit/alloc queues */
+	uint8_t max_tx_queues; /**< Maximum supported transmit/alloc queues */
+
+	uint32_t tx_size; /**< Size of each transmit queue */
+	uint32_t rx_size; /**< Size of each receive queue */
+	uint32_t alloc_size; /**< Size of each alloc queue */
+	uint32_t free_size;	/**< Size of each free queue */
+
+	/* Used by Ethtool */
+	phys_addr_t req_phys;
+	phys_addr_t resp_phys;
+	phys_addr_t sync_phys;
+	void *sync_va;
+
+	/* mbuf mempool (used when a single memory area is supported) */
+	void *mbuf_va;
+	phys_addr_t mbuf_phys;
+
+	/* mbuf mempools */
+	struct rte_avp_mempool_info pool[RTE_AVP_MAX_MEMPOOLS];
+
+#ifdef __KERNEL__
+	/* Ethernet info */
+	char ethaddr[ETH_ALEN];
+#else
+	char ethaddr[ETHER_ADDR_LEN];
+#endif
+
+	uint8_t mode; /**< device mode, i.e. guest, host, trace */
+
+	/* mbuf size */
+	unsigned int mbuf_size;
+
+	/*
+	 * unique id to differentiate between two instantiations of the same
+	 * AVP device (i.e., the guest needs to know if the device has been
+	 * deleted and recreated).
+	 */
+	uint64_t device_id;
+
+	uint32_t max_rx_pkt_len; /**< Maximum receive unit size */
+};
+
+#define RTE_AVP_MAX_QUEUES 8 /**< Maximum number of queues per device */
+
+/** Maximum number of chained mbufs in a packet */
+#define RTE_AVP_MAX_MBUF_SEGMENTS 5
+
+#define RTE_AVP_DEVICE "avp"
+
+#define RTE_AVP_IOCTL_TEST    _IOWR(0, 1, int)
+#define RTE_AVP_IOCTL_CREATE  _IOWR(0, 2, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_RELEASE _IOWR(0, 3, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_QUERY   _IOWR(0, 4, struct rte_avp_device_config)
+
+#endif /* _RTE_AVP_COMMON_H_ */
diff --git a/drivers/net/avp/rte_avp_fifo.h b/drivers/net/avp/rte_avp_fifo.h
new file mode 100644
index 0000000..8262e4f
--- /dev/null
+++ b/drivers/net/avp/rte_avp_fifo.h
@@ -0,0 +1,157 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2013-2017 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_FIFO_H_
+#define _RTE_AVP_FIFO_H_
+
+#ifdef __KERNEL__
+/* Write memory barrier for kernel compiles */
+#define AVP_WMB() smp_wmb()
+/* Read memory barrier for kernel compiles */
+#define AVP_RMB() smp_rmb()
+#else
+/* Write memory barrier for userspace compiles */
+#define AVP_WMB() rte_wmb()
+/* Read memory barrier for userspace compiles */
+#define AVP_RMB() rte_rmb()
+#endif
+
+#ifndef __KERNEL__
+/**
+ * Initializes the avp fifo structure
+ */
+static inline void
+avp_fifo_init(struct rte_avp_fifo *fifo, unsigned int size)
+{
+	/* Ensure size is power of 2 */
+	if (size & (size - 1))
+		rte_panic("AVP fifo size must be power of 2\n");
+
+	fifo->write = 0;
+	fifo->read = 0;
+	fifo->len = size;
+	fifo->elem_size = sizeof(void *);
+}
+#endif
+
+/**
+ * Adds num elements into the fifo.  Returns the number actually written.
+ */
+static inline unsigned
+avp_fifo_put(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int fifo_write = fifo->write;
+	unsigned int fifo_read = fifo->read;
+	unsigned int new_write = fifo_write;
+
+	for (i = 0; i < num; i++) {
+		new_write = (new_write + 1) & (fifo->len - 1);
+
+		if (new_write == fifo_read)
+			break;
+		fifo->buffer[fifo_write] = data[i];
+		fifo_write = new_write;
+	}
+	AVP_WMB();
+	fifo->write = fifo_write;
+	return i;
+}
+
+/**
+ * Gets up to num elements from the fifo.  Returns the number actually read.
+ */
+static inline unsigned int
+avp_fifo_get(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int new_read = fifo->read;
+	unsigned int fifo_write = fifo->write;
+
+	if (new_read == fifo_write)
+		return 0; /* empty */
+
+	for (i = 0; i < num; i++) {
+		if (new_read == fifo_write)
+			break;
+
+		data[i] = fifo->buffer[new_read];
+		new_read = (new_read + 1) & (fifo->len - 1);
+	}
+	AVP_RMB();
+	fifo->read = new_read;
+	return i;
+}
+
+/**
+ * Gets the number of elements in the fifo
+ */
+static inline unsigned int
+avp_fifo_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->len + fifo->write - fifo->read) & (fifo->len - 1);
+}
+
+/**
+ * Gets the number of free (writable) slots available in the fifo
+ */
+static inline unsigned int
+avp_fifo_free_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->read - fifo->write - 1) & (fifo->len - 1);
+}
+
+#endif /* _RTE_AVP_FIFO_H_ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v5 03/14] net/avp: debug log macros
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 01/14] drivers/net: adds AVP PMD base files Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 02/14] net/avp: public header files Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 04/14] net/avp: driver registration Allain Legacy
                           ` (12 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds a header file with log macros for the AVP PMD.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base         |  4 ++++
 drivers/net/avp/avp_logs.h | 59 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)
 create mode 100644 drivers/net/avp/avp_logs.h

diff --git a/config/common_base b/config/common_base
index a4ad577..5beedc8 100644
--- a/config/common_base
+++ b/config/common_base
@@ -356,6 +356,10 @@ CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 # Compile WRS accelerated virtual port (AVP) guest PMD driver
 #
 CONFIG_RTE_LIBRTE_AVP_PMD=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_DRIVER=y
+CONFIG_RTE_LIBRTE_AVP_DEBUG_BUFFERS=n
 
 #
 # Compile the TAP PMD
diff --git a/drivers/net/avp/avp_logs.h b/drivers/net/avp/avp_logs.h
new file mode 100644
index 0000000..252cab7
--- /dev/null
+++ b/drivers/net/avp/avp_logs.h
@@ -0,0 +1,59 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (c) 2013-2015, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVP_LOGS_H_
+#define _AVP_LOGS_H_
+
+#include <rte_log.h>
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() rx: " fmt, __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() tx: " fmt, __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _AVP_LOGS_H_ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v5 04/14] net/avp: driver registration
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (2 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 03/14] net/avp: debug log macros Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 05/14] net/avp: device initialization Allain Legacy
                           ` (11 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds the initial framework for registering the driver against the supported
PCI device identifiers.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_linuxapp                       |   1 +
 config/defconfig_i686-native-linuxapp-gcc    |   5 +
 config/defconfig_i686-native-linuxapp-icc    |   5 +
 config/defconfig_x86_x32-native-linuxapp-gcc |   5 +
 drivers/net/avp/Makefile                     |   8 ++
 drivers/net/avp/avp_ethdev.c                 | 205 +++++++++++++++++++++++++++
 mk/rte.app.mk                                |   1 +
 7 files changed, 230 insertions(+)
 create mode 100644 drivers/net/avp/avp_ethdev.c

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 00ebaac..8690a00 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -43,6 +43,7 @@ CONFIG_RTE_LIBRTE_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
 CONFIG_RTE_LIBRTE_PMD_TAP=y
+CONFIG_RTE_LIBRTE_AVP_PMD=y
 CONFIG_RTE_LIBRTE_NFP_PMD=y
 CONFIG_RTE_LIBRTE_POWER=y
 CONFIG_RTE_VIRTIO_USER=y
diff --git a/config/defconfig_i686-native-linuxapp-gcc b/config/defconfig_i686-native-linuxapp-gcc
index 745c401..9847bdb 100644
--- a/config/defconfig_i686-native-linuxapp-gcc
+++ b/config/defconfig_i686-native-linuxapp-gcc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_i686-native-linuxapp-icc b/config/defconfig_i686-native-linuxapp-icc
index 50a3008..269e88e 100644
--- a/config/defconfig_i686-native-linuxapp-icc
+++ b/config/defconfig_i686-native-linuxapp-icc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_x86_x32-native-linuxapp-gcc b/config/defconfig_x86_x32-native-linuxapp-gcc
index 3e55c5c..19573cb 100644
--- a/config/defconfig_x86_x32-native-linuxapp-gcc
+++ b/config/defconfig_x86_x32-native-linuxapp-gcc
@@ -50,3 +50,8 @@ CONFIG_RTE_LIBRTE_KNI=n
 # Solarflare PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_SFC_EFX_PMD=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 68a0fa5..9cf0449 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -49,4 +49,12 @@ LIBABIVER := 1
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
 
+#
+# all source files are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
new file mode 100644
index 0000000..f5fa453
--- /dev/null
+++ b/drivers/net/avp/avp_ethdev.c
@@ -0,0 +1,205 @@
+/*
+ *   BSD LICENSE
+ *
+ * Copyright (c) 2013-2017, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/io.h>
+
+#include <rte_ethdev.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_byteorder.h>
+#include <rte_dev.h>
+#include <rte_memory.h>
+#include <rte_eal.h>
+
+#include "rte_avp_common.h"
+#include "rte_avp_fifo.h"
+
+#include "avp_logs.h"
+
+
+
+#define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
+
+
+#define AVP_MAX_MAC_ADDRS 1
+#define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
+
+
+/*
+ * Defines the number of microseconds to wait before checking the response
+ * queue for completion.
+ */
+#define AVP_REQUEST_DELAY_USECS (5000)
+
+/*
+ * Defines the number of times to check the response queue for completion
+ * declaring a timeout.
+ */
+#define AVP_MAX_REQUEST_RETRY (100)
+
+/* Defines the current PCI driver version number */
+#define AVP_DPDK_DRIVER_VERSION RTE_AVP_CURRENT_GUEST_VERSION
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_avp_map[] = {
+	{ .vendor_id = RTE_AVP_PCI_VENDOR_ID,
+	  .device_id = RTE_AVP_PCI_DEVICE_ID,
+	  .subsystem_vendor_id = RTE_AVP_PCI_SUB_VENDOR_ID,
+	  .subsystem_device_id = RTE_AVP_PCI_SUB_DEVICE_ID,
+	  .class_id = RTE_CLASS_ANY_ID,
+	},
+
+	{ .vendor_id = 0, /* sentinel */
+	},
+};
+
+
+/*
+ * Defines the AVP device attributes which are attached to an RTE ethernet
+ * device
+ */
+struct avp_dev {
+	uint32_t magic; /**< Memory validation marker */
+	uint64_t device_id; /**< Unique system identifier */
+	struct ether_addr ethaddr; /**< Host specified MAC address */
+	struct rte_eth_dev_data *dev_data;
+	/**< Back pointer to ethernet device data */
+	volatile uint32_t flags; /**< Device operational flags */
+	uint8_t port_id; /**< Ethernet port identifier */
+	struct rte_mempool *pool; /**< pkt mbuf mempool */
+	unsigned int guest_mbuf_size; /**< local pool mbuf size */
+	unsigned int host_mbuf_size; /**< host mbuf size */
+	unsigned int max_rx_pkt_len; /**< maximum receive unit */
+	uint32_t host_features; /**< Supported feature bitmap */
+	uint32_t features; /**< Enabled feature bitmap */
+	unsigned int num_tx_queues; /**< Negotiated number of transmit queues */
+	unsigned int max_tx_queues; /**< Maximum number of transmit queues */
+	unsigned int num_rx_queues; /**< Negotiated number of receive queues */
+	unsigned int max_rx_queues; /**< Maximum number of receive queues */
+
+	struct rte_avp_fifo *tx_q[RTE_AVP_MAX_QUEUES]; /**< TX queue */
+	struct rte_avp_fifo *rx_q[RTE_AVP_MAX_QUEUES]; /**< RX queue */
+	struct rte_avp_fifo *alloc_q[RTE_AVP_MAX_QUEUES];
+	/**< Allocated mbufs queue */
+	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
+	/**< To be freed mbufs queue */
+
+	/* For request & response */
+	struct rte_avp_fifo *req_q; /**< Request queue */
+	struct rte_avp_fifo *resp_q; /**< Response queue */
+	void *host_sync_addr; /**< (host) Req/Resp Mem address */
+	void *sync_addr; /**< Req/Resp Mem address */
+	void *host_mbuf_addr; /**< (host) MBUF pool start address */
+	void *mbuf_addr; /**< MBUF pool start address */
+} __rte_cache_aligned;
+
+/* RTE ethernet private data */
+struct avp_adapter {
+	struct avp_dev avp;
+} __rte_cache_aligned;
+
+/* Macro to cast the ethernet device private data to an AVP object */
+#define AVP_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avp_adapter *)adapter)->avp)
+
+/*
+ * This function is based on probe() function in avp_pci.c
+ * It returns 0 on success.
+ */
+static int
+eth_avp_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev;
+
+	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/*
+		 * no setup required on secondary processes.  All data is saved
+		 * in dev_private by the primary process.  All resources should
+		 * be mapped to the same virtual address so all pointers should
+		 * be valid.
+		 */
+		return 0;
+	}
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
+
+	return 0;
+}
+
+static int
+eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (eth_dev->data == NULL)
+		return 0;
+
+	return 0;
+}
+
+
+static struct eth_driver rte_avp_pmd = {
+	{
+		.id_table = pci_id_avp_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_eth_dev_pci_probe,
+		.remove = rte_eth_dev_pci_remove,
+	},
+	.eth_dev_init = eth_avp_dev_init,
+	.eth_dev_uninit = eth_avp_dev_uninit,
+	.dev_private_size = sizeof(struct avp_adapter),
+};
+
+
+
+RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 0e0b600..6a3669f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -104,6 +104,7 @@ ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v5 05/14] net/avp: device initialization
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (3 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 04/14] net/avp: driver registration Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 06/14] net/avp: device configuration Allain Legacy
                           ` (10 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for initializing newly probed AVP PCI devices.  Initial
queue translations are set up in preparation for device configuration.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 315 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 315 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index f5fa453..e937fb52 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -52,6 +52,7 @@
 #include <rte_dev.h>
 #include <rte_memory.h>
 #include <rte_eal.h>
+#include <rte_io.h>
 
 #include "rte_avp_common.h"
 #include "rte_avp_fifo.h"
@@ -98,6 +99,15 @@
 };
 
 
+/**@{ AVP device flags */
+#define AVP_F_PROMISC (1 << 1)
+#define AVP_F_CONFIGURED (1 << 2)
+#define AVP_F_LINKUP (1 << 3)
+/**@} */
+
+/* Ethernet device validation marker */
+#define AVP_ETHDEV_MAGIC 0x92972862
+
 /*
  * Defines the AVP device attributes which are attached to an RTE ethernet
  * device
@@ -142,18 +152,292 @@ struct avp_adapter {
 	struct avp_dev avp;
 } __rte_cache_aligned;
 
+
+/* 32-bit MMIO register write */
+#define AVP_WRITE32(_value, _addr) rte_write32_relaxed((_value), (_addr))
+
+/* 32-bit MMIO register read */
+#define AVP_READ32(_addr) rte_read32_relaxed((_addr))
+
 /* Macro to cast the ethernet device private data to an AVP object */
 #define AVP_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct avp_adapter *)adapter)->avp)
 
 /*
+ * Defines the structure of a AVP device queue for the purpose of handling the
+ * receive and transmit burst callback functions
+ */
+struct avp_queue {
+	struct rte_eth_dev_data *dev_data;
+	/**< Backpointer to ethernet device data */
+	struct avp_dev *avp; /**< Backpointer to AVP device */
+	uint16_t queue_id;
+	/**< Queue identifier used for indexing current queue */
+	uint16_t queue_base;
+	/**< Base queue identifier for queue servicing */
+	uint16_t queue_limit;
+	/**< Maximum queue identifier for queue servicing */
+
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
+/* translate from host physical address to guest virtual address */
+static void *
+avp_dev_translate_address(struct rte_eth_dev *eth_dev,
+			  phys_addr_t host_phys_addr)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_mem_resource *resource;
+	struct rte_avp_memmap_info *info;
+	struct rte_avp_memmap *map;
+	off_t offset;
+	void *addr;
+	unsigned int i;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_MEMORY_BAR].addr;
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_MEMMAP_BAR];
+	info = (struct rte_avp_memmap_info *)resource->addr;
+
+	offset = 0;
+	for (i = 0; i < info->nb_maps; i++) {
+		/* search all segments looking for a matching address */
+		map = &info->maps[i];
+
+		if ((host_phys_addr >= map->phys_addr) &&
+			(host_phys_addr < (map->phys_addr + map->length))) {
+			/* address is within this segment */
+			offset += (host_phys_addr - map->phys_addr);
+			addr = RTE_PTR_ADD(addr, offset);
+
+			PMD_DRV_LOG(DEBUG, "Translating host physical 0x%" PRIx64 " to guest virtual 0x%p\n",
+				    host_phys_addr, addr);
+
+			return addr;
+		}
+		offset += map->length;
+	}
+
+	return NULL;
+}
+
+/* verify that the incoming device version is compatible with our version */
+static int
+avp_dev_version_check(uint32_t version)
+{
+	uint32_t driver = RTE_AVP_STRIP_MINOR_VERSION(AVP_DPDK_DRIVER_VERSION);
+	uint32_t device = RTE_AVP_STRIP_MINOR_VERSION(version);
+
+	if (device <= driver) {
+		/* the host driver version is less than or equal to ours */
+		return 0;
+	}
+
+	return 1;
+}
+
+/* verify that memory regions have expected version and validation markers */
+static int
+avp_dev_check_regions(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_avp_memmap_info *memmap;
+	struct rte_avp_device_info *info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	/* Dump resource info for debug */
+	for (i = 0; i < PCI_MAX_RESOURCE; i++) {
+		resource = &pci_dev->mem_resource[i];
+		if ((resource->phys_addr == 0) || (resource->len == 0))
+			continue;
+
+		PMD_DRV_LOG(DEBUG, "resource[%u]: phys=0x%" PRIx64 " len=%" PRIu64 " addr=%p\n",
+			    i, resource->phys_addr,
+			    resource->len, resource->addr);
+
+		switch (i) {
+		case RTE_AVP_PCI_MEMMAP_BAR:
+			memmap = (struct rte_avp_memmap_info *)resource->addr;
+			if ((memmap->magic != RTE_AVP_MEMMAP_MAGIC) ||
+			    (memmap->version != RTE_AVP_MEMMAP_VERSION)) {
+				PMD_DRV_LOG(ERR, "Invalid memmap magic 0x%08x and version %u\n",
+					    memmap->magic, memmap->version);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_DEVICE_BAR:
+			info = (struct rte_avp_device_info *)resource->addr;
+			if ((info->magic != RTE_AVP_DEVICE_MAGIC) ||
+			    avp_dev_version_check(info->version)) {
+				PMD_DRV_LOG(ERR, "Invalid device info magic 0x%08x or version 0x%08x > 0x%08x\n",
+					    info->magic, info->version,
+					    AVP_DPDK_DRIVER_VERSION);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MEMORY_BAR:
+		case RTE_AVP_PCI_MMIO_BAR:
+			if (resource->addr == NULL) {
+				PMD_DRV_LOG(ERR, "Missing address space for BAR%u\n",
+					    i);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MSIX_BAR:
+		default:
+			/* no validation required */
+			break;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * create an AVP device using the supplied device info by first translating it
+ * to guest address space(s).
+ */
+static int
+avp_dev_create(struct rte_pci_device *pci_dev,
+	       struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR];
+	if (resource->addr == NULL) {
+		PMD_DRV_LOG(ERR, "BAR%u is not mapped\n",
+			    RTE_AVP_PCI_DEVICE_BAR);
+		return -EFAULT;
+	}
+	host_info = (struct rte_avp_device_info *)resource->addr;
+
+	if ((host_info->magic != RTE_AVP_DEVICE_MAGIC) ||
+		avp_dev_version_check(host_info->version)) {
+		PMD_DRV_LOG(ERR, "Invalid AVP PCI device, magic 0x%08x version 0x%08x > 0x%08x\n",
+			    host_info->magic, host_info->version,
+			    AVP_DPDK_DRIVER_VERSION);
+		return -EINVAL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host device is v%u.%u.%u\n",
+		    RTE_AVP_GET_RELEASE_VERSION(host_info->version),
+		    RTE_AVP_GET_MAJOR_VERSION(host_info->version),
+		    RTE_AVP_GET_MINOR_VERSION(host_info->version));
+
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u TX queue(s)\n",
+		    host_info->min_tx_queues, host_info->max_tx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u RX queue(s)\n",
+		    host_info->min_rx_queues, host_info->max_rx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports features 0x%08x\n",
+		    host_info->features);
+
+	if (avp->magic != AVP_ETHDEV_MAGIC) {
+		/*
+		 * First time initialization (i.e., not during a VM
+		 * migration)
+		 */
+		memset(avp, 0, sizeof(*avp));
+		avp->magic = AVP_ETHDEV_MAGIC;
+		avp->dev_data = eth_dev->data;
+		avp->port_id = eth_dev->data->port_id;
+		avp->host_mbuf_size = host_info->mbuf_size;
+		avp->host_features = host_info->features;
+		memcpy(&avp->ethaddr.addr_bytes[0],
+		       host_info->ethaddr, ETHER_ADDR_LEN);
+		/* adjust max values to not exceed our max */
+		avp->max_tx_queues =
+			RTE_MIN(host_info->max_tx_queues, RTE_AVP_MAX_QUEUES);
+		avp->max_rx_queues =
+			RTE_MIN(host_info->max_rx_queues, RTE_AVP_MAX_QUEUES);
+	} else {
+		/* Re-attaching during migration */
+
+		/* TODO... requires validation of host values */
+		if ((host_info->features & avp->features) != avp->features) {
+			PMD_DRV_LOG(ERR, "AVP host features mismatched; 0x%08x, host=0x%08x\n",
+				    avp->features, host_info->features);
+			/* this should not be possible; continue for now */
+		}
+	}
+
+	/* the device id is allowed to change over migrations */
+	avp->device_id = host_info->device_id;
+
+	/* translate incoming host addresses to guest address space */
+	PMD_DRV_LOG(DEBUG, "AVP first host tx queue at 0x%" PRIx64 "\n",
+		    host_info->tx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host alloc queue at 0x%" PRIx64 "\n",
+		    host_info->alloc_phys);
+	for (i = 0; i < avp->max_tx_queues; i++) {
+		avp->tx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->tx_phys + (i * host_info->tx_size));
+
+		avp->alloc_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->alloc_phys + (i * host_info->alloc_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP first host rx queue at 0x%" PRIx64 "\n",
+		    host_info->rx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host free queue at 0x%" PRIx64 "\n",
+		    host_info->free_phys);
+	for (i = 0; i < avp->max_rx_queues; i++) {
+		avp->rx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->rx_phys + (i * host_info->rx_size));
+		avp->free_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->free_phys + (i * host_info->free_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host request queue at 0x%" PRIx64 "\n",
+		    host_info->req_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host response queue at 0x%" PRIx64 "\n",
+		    host_info->resp_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host sync address at 0x%" PRIx64 "\n",
+		    host_info->sync_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host mbuf address at 0x%" PRIx64 "\n",
+		    host_info->mbuf_phys);
+	avp->req_q = avp_dev_translate_address(eth_dev, host_info->req_phys);
+	avp->resp_q = avp_dev_translate_address(eth_dev, host_info->resp_phys);
+	avp->sync_addr =
+		avp_dev_translate_address(eth_dev, host_info->sync_phys);
+	avp->mbuf_addr =
+		avp_dev_translate_address(eth_dev, host_info->mbuf_phys);
+
+	/*
+	 * store the host mbuf virtual address so that we can calculate
+	 * relative offsets for each mbuf as they are processed
+	 */
+	avp->host_mbuf_addr = host_info->mbuf_va;
+	avp->host_sync_addr = host_info->sync_va;
+
+	/*
+	 * store the maximum packet length that is supported by the host.
+	 */
+	avp->max_rx_pkt_len = host_info->max_rx_pkt_len;
+	PMD_DRV_LOG(DEBUG, "AVP host max receive packet length is %u\n",
+				host_info->max_rx_pkt_len);
+
+	return 0;
+}
+
+/*
  * This function is based on probe() function in avp_pci.c
  * It returns 0 on success.
  */
 static int
 eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 {
+	struct avp_dev *avp =
+		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_pci_device *pci_dev;
+	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 
@@ -171,6 +455,32 @@ struct avp_adapter {
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check BAR resources */
+	ret = avp_dev_check_regions(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to validate BAR resources, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* Handle each subtype */
+	ret = avp_dev_create(pci_dev, eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to create device, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate %d bytes needed to store MAC addresses\n",
+			    ETHER_ADDR_LEN);
+		return -ENOMEM;
+	}
+
+	/* Get a mac from device config */
+	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
+
 	return 0;
 }
 
@@ -183,6 +493,11 @@ struct avp_adapter {
 	if (eth_dev->data == NULL)
 		return 0;
 
+	if (eth_dev->data->mac_addrs != NULL) {
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
+	}
+
 	return 0;
 }
 
-- 
1.8.3.1


* [PATCH v5 06/14] net/avp: device configuration
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (4 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 05/14] net/avp: device initialization Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 07/14] net/avp: queue setup and release Allain Legacy
                           ` (9 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for "dev_configure" operations to allow an application to
configure the device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 doc/guides/nics/features/avp.ini |   4 +
 drivers/net/avp/avp_ethdev.c     | 241 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 245 insertions(+)

diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
index 4353929..45a2185 100644
--- a/doc/guides/nics/features/avp.ini
+++ b/doc/guides/nics/features/avp.ini
@@ -4,3 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Link status          = Y
+VLAN offload         = Y
+Linux UIO            = Y
+x86-64               = Y
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index e937fb52..d9ad3f1 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -61,6 +61,13 @@
 
 
 
+static int avp_dev_configure(struct rte_eth_dev *dev);
+static void avp_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
+static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avp_dev_link_update(struct rte_eth_dev *dev,
+			       __rte_unused int wait_to_complete);
+
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -98,6 +105,15 @@
 	},
 };
 
+/*
+ * dev_ops for avp, bare necessities for basic operation
+ */
+static const struct eth_dev_ops avp_eth_dev_ops = {
+	.dev_configure       = avp_dev_configure,
+	.dev_infos_get       = avp_dev_info_get,
+	.vlan_offload_set    = avp_vlan_offload_set,
+	.link_update         = avp_dev_link_update,
+};
 
 /**@{ AVP device flags */
 #define AVP_F_PROMISC (1 << 1)
@@ -183,6 +199,91 @@ struct avp_queue {
 	uint64_t errors;
 };
 
+/* send a request and wait for a response
+ *
+ * @warning must be called while holding the avp->lock spinlock.
+ */
+static int
+avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
+{
+	unsigned int retry = AVP_MAX_REQUEST_RETRY;
+	void *resp_addr = NULL;
+	unsigned int count;
+	int ret;
+
+	PMD_DRV_LOG(DEBUG, "Sending request %u to host\n", request->req_id);
+
+	request->result = -ENOTSUP;
+
+	/* Discard any stale responses before starting a new request */
+	while (avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1))
+		PMD_DRV_LOG(DEBUG, "Discarding stale response\n");
+
+	rte_memcpy(avp->sync_addr, request, sizeof(*request));
+	count = avp_fifo_put(avp->req_q, &avp->host_sync_addr, 1);
+	if (count < 1) {
+		PMD_DRV_LOG(ERR, "Cannot send request %u to host\n",
+			    request->req_id);
+		ret = -EBUSY;
+		goto done;
+	}
+
+	while (retry--) {
+		/* wait for a response */
+		usleep(AVP_REQUEST_DELAY_USECS);
+
+		count = avp_fifo_count(avp->resp_q);
+		if (count >= 1) {
+			/* response received */
+			break;
+		}
+
+		if ((count < 1) && (retry == 0)) {
+			PMD_DRV_LOG(ERR, "Timeout while waiting for a response for %u\n",
+				    request->req_id);
+			ret = -ETIME;
+			goto done;
+		}
+	}
+
+	/* retrieve the response */
+	count = avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1);
+	if ((count != 1) || (resp_addr != avp->host_sync_addr)) {
+		PMD_DRV_LOG(ERR, "Invalid response from host, count=%u resp=%p host_sync_addr=%p\n",
+			    count, resp_addr, avp->host_sync_addr);
+		ret = -ENODATA;
+		goto done;
+	}
+
+	/* copy to user buffer */
+	rte_memcpy(request, avp->sync_addr, sizeof(*request));
+	ret = 0;
+
+	PMD_DRV_LOG(DEBUG, "Result %d received for request %u\n",
+		    request->result, request->req_id);
+
+done:
+	return ret;
+}
+
+static int
+avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
+			struct rte_avp_device_config *config)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a configure request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_DEVICE;
+	memcpy(&request.config, config, sizeof(request.config));
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
 /* translate from host physical address to guest virtual address */
 static void *
 avp_dev_translate_address(struct rte_eth_dev *eth_dev,
@@ -298,6 +399,38 @@ struct avp_queue {
 	return 0;
 }
 
+static void
+_avp_set_queue_counts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	void *addr;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/*
+	 * the transmit direction is not negotiated beyond respecting the max
+	 * number of queues because the host can handle arbitrary guest tx
+	 * queues (host rx queues).
+	 */
+	avp->num_tx_queues = eth_dev->data->nb_tx_queues;
+
+	/*
+	 * the receive direction is more restrictive.  The host requires a
+	 * minimum number of guest rx queues (host tx queues); therefore,
+	 * negotiate a value that is at least as large as the host minimum
+	 * requirement.  If the host and guest values are not identical then a
+	 * mapping will be established in the receive_queue_setup function.
+	 */
+	avp->num_rx_queues = RTE_MAX(host_info->min_rx_queues,
+				     eth_dev->data->nb_rx_queues);
+
+	PMD_DRV_LOG(DEBUG, "Requesting %u Tx and %u Rx queues from host\n",
+		    avp->num_tx_queues, avp->num_rx_queues);
+}
+
 /*
 * create an AVP device using the supplied device info by first translating it
  * to guest address space(s).
@@ -440,6 +573,7 @@ struct avp_queue {
 	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	eth_dev->dev_ops = &avp_eth_dev_ops;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -515,6 +649,113 @@ struct avp_queue {
 };
 
 
+static int
+avp_dev_configure(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_avp_device_config config;
+	int mask = 0;
+	void *addr;
+	int ret;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/* Setup required number of queues */
+	_avp_set_queue_counts(eth_dev);
+
+	mask = (ETH_VLAN_STRIP_MASK |
+		ETH_VLAN_FILTER_MASK |
+		ETH_VLAN_EXTEND_MASK);
+	avp_vlan_offload_set(eth_dev, mask);
+
+	/* update device config */
+	memset(&config, 0, sizeof(config));
+	config.device_id = host_info->device_id;
+	config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+	config.driver_version = AVP_DPDK_DRIVER_VERSION;
+	config.features = avp->features;
+	config.num_tx_queues = avp->num_tx_queues;
+	config.num_rx_queues = avp->num_rx_queues;
+
+	ret = avp_dev_ctrl_set_config(eth_dev, &config);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Config request failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	avp->flags |= AVP_F_CONFIGURED;
+	ret = 0;
+
+unlock:
+	return ret;
+}
+
+
+static int
+avp_dev_link_update(struct rte_eth_dev *eth_dev,
+					__rte_unused int wait_to_complete)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_eth_link *link = &eth_dev->data->dev_link;
+
+	link->link_speed = ETH_SPEED_NUM_10G;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = !!(avp->flags & AVP_F_LINKUP);
+
+	return -1;
+}
+
+
+static void
+avp_dev_info_get(struct rte_eth_dev *eth_dev,
+		 struct rte_eth_dev_info *dev_info)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	dev_info->driver_name = "rte_avp_pmd";
+	dev_info->pci_dev = RTE_DEV_TO_PCI(eth_dev->device);
+	dev_info->max_rx_queues = avp->max_rx_queues;
+	dev_info->max_tx_queues = avp->max_tx_queues;
+	dev_info->min_rx_bufsize = AVP_MIN_RX_BUFSIZE;
+	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
+	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
+	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+	}
+}
+
+static void
+avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+			if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
+			else
+				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
+		} else {
+			PMD_DRV_LOG(ERR, "VLAN strip offload not supported\n");
+		}
+	}
+
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_filter)
+			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_extend)
+			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
+	}
+}
+
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1


* [PATCH v5 07/14] net/avp: queue setup and release
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (5 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 06/14] net/avp: device configuration Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 08/14] net/avp: packet receive functions Allain Legacy
                           ` (8 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds queue management operations so that an application can set up and
release the transmit and receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 180 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 179 insertions(+), 1 deletion(-)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index d9ad3f1..ecc581f 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -67,7 +67,21 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
-
+static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t rx_queue_id,
+				  uint16_t nb_rx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_rxconf *rx_conf,
+				  struct rte_mempool *pool);
+
+static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t tx_queue_id,
+				  uint16_t nb_tx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_txconf *tx_conf);
+
+static void avp_dev_rx_queue_release(void *rxq);
+static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -113,6 +127,10 @@ static int avp_dev_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.link_update         = avp_dev_link_update,
+	.rx_queue_setup      = avp_dev_rx_queue_setup,
+	.rx_queue_release    = avp_dev_rx_queue_release,
+	.tx_queue_setup      = avp_dev_tx_queue_setup,
+	.tx_queue_release    = avp_dev_tx_queue_release,
 };
 
 /**@{ AVP device flags */
@@ -400,6 +418,42 @@ struct avp_queue {
 }
 
 static void
+_avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
+{
+	struct avp_dev *avp =
+		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *rxq;
+	uint16_t queue_count;
+	uint16_t remainder;
+
+	rxq = (struct avp_queue *)eth_dev->data->rx_queues[rx_queue_id];
+
+	/*
+	 * Must map all AVP fifos as evenly as possible between the configured
+	 * device queues.  Each device queue will service a subset of the AVP
+	 * fifos.  If the fifos do not divide evenly among the device queues,
+	 * the first device queues will each service one extra AVP fifo.
+	 */
+	queue_count = avp->num_rx_queues / eth_dev->data->nb_rx_queues;
+	remainder = avp->num_rx_queues % eth_dev->data->nb_rx_queues;
+	if (rx_queue_id < remainder) {
+		/* these queues must service one extra FIFO */
+		rxq->queue_base = rx_queue_id * (queue_count + 1);
+		rxq->queue_limit = rxq->queue_base + (queue_count + 1) - 1;
+	} else {
+		/* these queues service the regular number of FIFOs */
+		rxq->queue_base = ((remainder * (queue_count + 1)) +
+				   ((rx_queue_id - remainder) * queue_count));
+		rxq->queue_limit = rxq->queue_base + queue_count - 1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "rxq %u at %p base %u limit %u\n",
+		    rx_queue_id, rxq, rxq->queue_base, rxq->queue_limit);
+
+	rxq->queue_id = rxq->queue_base;
+}
+
+static void
 _avp_set_queue_counts(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
@@ -650,6 +704,130 @@ struct avp_queue {
 
 
 static int
+avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t rx_queue_id,
+		       uint16_t nb_rx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *pool)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct avp_queue *rxq;
+
+	if (rx_queue_id >= eth_dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue id is out of range: rx_queue_id=%u, nb_rx_queues=%u\n",
+			    rx_queue_id, eth_dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	/* Save mbuf pool pointer */
+	avp->pool = pool;
+
+	/* Save the local mbuf size */
+	mbp_priv = rte_mempool_get_priv(pool);
+	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
+	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
+
+	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
+		    avp->max_rx_pkt_len,
+		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+		    avp->host_mbuf_size,
+		    avp->guest_mbuf_size);
+
+	/* allocate a queue object */
+	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Rx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* save back pointers to AVP and Ethernet devices */
+	rxq->avp = avp;
+	rxq->dev_data = eth_dev->data;
+	eth_dev->data->rx_queues[rx_queue_id] = (void *)rxq;
+
+	/* setup the queue receive mapping for the current queue. */
+	_avp_set_rx_queue_mappings(eth_dev, rx_queue_id);
+
+	PMD_DRV_LOG(DEBUG, "Rx queue %u setup at %p\n", rx_queue_id, rxq);
+
+	(void)nb_rx_desc;
+	(void)rx_conf;
+	return 0;
+}
+
+static int
+avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t tx_queue_id,
+		       uint16_t nb_tx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *txq;
+
+	if (tx_queue_id >= eth_dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue id is out of range: tx_queue_id=%u, nb_tx_queues=%u\n",
+			    tx_queue_id, eth_dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	/* allocate a queue object */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Tx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* only the configured set of transmit queues are used */
+	txq->queue_id = tx_queue_id;
+	txq->queue_base = tx_queue_id;
+	txq->queue_limit = tx_queue_id;
+
+	/* save back pointers to AVP and Ethernet devices */
+	txq->avp = avp;
+	txq->dev_data = eth_dev->data;
+	eth_dev->data->tx_queues[tx_queue_id] = (void *)txq;
+
+	PMD_DRV_LOG(DEBUG, "Tx queue %u setup at %p\n", tx_queue_id, txq);
+
+	(void)nb_tx_desc;
+	(void)tx_conf;
+	return 0;
+}
+
+static void
+avp_dev_rx_queue_release(void *rx_queue)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct avp_dev *avp = rxq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		if (data->rx_queues[i] == rxq)
+			data->rx_queues[i] = NULL;
+	}
+}
+
+static void
+avp_dev_tx_queue_release(void *tx_queue)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct avp_dev *avp = txq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		if (data->tx_queues[i] == txq)
+			data->tx_queues[i] = NULL;
+	}
+}
+
+static int
 avp_dev_configure(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v5 08/14] net/avp: packet receive functions
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (6 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 07/14] net/avp: queue setup and release Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 09/14] net/avp: packet transmit functions Allain Legacy
                           ` (7 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds the functions required for receiving packets from the host application
via AVP device queues.  Both the simple and scattered receive functions are
supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 doc/guides/nics/features/avp.ini |   3 +
 drivers/net/avp/Makefile         |   1 +
 drivers/net/avp/avp_ethdev.c     | 451 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 455 insertions(+)

diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
index 45a2185..e748ea8 100644
--- a/doc/guides/nics/features/avp.ini
+++ b/doc/guides/nics/features/avp.ini
@@ -5,6 +5,9 @@
 ;
 [Features]
 Link status          = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+Unicast MAC filter   = Y
 VLAN offload         = Y
 Linux UIO            = Y
 x86-64               = Y
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 9cf0449..3013cd1 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -56,5 +56,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += lib/librte_mempool lib/librte_mbuf
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index ecc581f..7524e1a 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -80,11 +80,19 @@ static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
 				  unsigned int socket_id,
 				  const struct rte_eth_txconf *tx_conf);
 
+static uint16_t avp_recv_scattered_pkts(void *rx_queue,
+					struct rte_mbuf **rx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_recv_pkts(void *rx_queue,
+			      struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts);
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
+#define AVP_MAX_RX_BURST 64
 #define AVP_MAX_MAC_ADDRS 1
 #define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -302,6 +310,15 @@ struct avp_queue {
 	return ret == 0 ? request.result : ret;
 }
 
+/* translate from host mbuf virtual address to guest virtual address */
+static inline void *
+avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
+{
+	return RTE_PTR_ADD(RTE_PTR_SUB(host_mbuf_address,
+				       (uintptr_t)avp->host_mbuf_addr),
+			   (uintptr_t)avp->mbuf_addr);
+}
+
 /* translate from host physical address to guest virtual address */
 static void *
 avp_dev_translate_address(struct rte_eth_dev *eth_dev,
@@ -628,6 +645,7 @@ struct avp_queue {
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avp_recv_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -636,6 +654,10 @@ struct avp_queue {
 		 * be mapped to the same virtual address so all pointers should
 		 * be valid.
 		 */
+		if (eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
 		return 0;
 	}
 
@@ -704,6 +726,38 @@ struct avp_queue {
 
 
 static int
+avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
+			 struct avp_dev *avp)
+{
+	unsigned int max_rx_pkt_len;
+
+	max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+
+	if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the guest MTU is greater than either the host or guest
+		 * buffers then chained mbufs have to be enabled in the TX
+		 * direction.  It is assumed that the application will not need
+		 * to send packets larger than its max_rx_pkt_len (MRU).
+		 */
+		return 1;
+	}
+
+	if ((avp->max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (avp->max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the host MRU is greater than its own mbuf size or the
+		 * guest mbuf size then chained mbufs have to be enabled in the
+		 * RX direction.
+		 */
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 		       uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc,
@@ -729,6 +783,14 @@ struct avp_queue {
 	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
 	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
 
+	if (avp_dev_enable_scattered(eth_dev, avp)) {
+		if (!eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
+			eth_dev->data->scattered_rx = 1;
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
+	}
+
 	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
 		    avp->max_rx_pkt_len,
 		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
@@ -799,6 +861,395 @@ struct avp_queue {
 	return 0;
 }
 
+static inline int
+_avp_cmp_ether_addr(struct ether_addr *a, struct ether_addr *b)
+{
+	uint16_t *_a = (uint16_t *)&a->addr_bytes[0];
+	uint16_t *_b = (uint16_t *)&b->addr_bytes[0];
+	return (_a[0] ^ _b[0]) | (_a[1] ^ _b[1]) | (_a[2] ^ _b[2]);
+}
+
+static inline int
+_avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
+{
+	struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+		/* allow all packets destined to our address */
+		return 0;
+	}
+
+	if (likely(is_broadcast_ether_addr(&eth->d_addr))) {
+		/* allow all broadcast packets */
+		return 0;
+	}
+
+	if (likely(is_multicast_ether_addr(&eth->d_addr))) {
+		/* allow all multicast packets */
+		return 0;
+	}
+
+	if (avp->flags & AVP_F_PROMISC) {
+		/* allow all packets when in promiscuous mode */
+		return 0;
+	}
+
+	return -1;
+}
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+static inline void
+__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+{
+	struct rte_avp_desc *first_buf;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int pkt_len;
+	unsigned int nb_segs;
+	void *pkt_data;
+	unsigned int i;
+
+	first_buf = avp_dev_translate_buffer(avp, buf);
+
+	i = 0;
+	pkt_len = 0;
+	nb_segs = first_buf->nb_segs;
+	do {
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		if (pkt_buf == NULL)
+			rte_panic("bad buffer: segment %u has an invalid address %p\n",
+				  i, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		if (pkt_data == NULL)
+			rte_panic("bad buffer: segment %u has a NULL data pointer\n",
+				  i);
+		if (pkt_buf->data_len == 0)
+			rte_panic("bad buffer: segment %u has 0 data length\n",
+				  i);
+		pkt_len += pkt_buf->data_len;
+		nb_segs--;
+		i++;
+
+	} while (nb_segs && (buf = pkt_buf->next) != NULL);
+
+	if (nb_segs != 0)
+		rte_panic("bad buffer: expected %u segments found %u\n",
+			  first_buf->nb_segs, (first_buf->nb_segs - nb_segs));
+	if (pkt_len != first_buf->pkt_len)
+		rte_panic("bad buffer: expected length %u found %u\n",
+			  first_buf->pkt_len, pkt_len);
+}
+
+#define avp_dev_buffer_sanity_check(a, b) \
+	__avp_dev_buffer_sanity_check((a), (b))
+
+#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
+
+#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+
+#endif
+
+/*
+ * Copy a host buffer chain to a set of mbufs.  This function assumes that
+ * exactly the required number of mbufs are available to copy all source bytes.
+ */
+static inline struct rte_mbuf *
+avp_dev_copy_from_buffers(struct avp_dev *avp,
+			  struct rte_avp_desc *buf,
+			  struct rte_mbuf **mbufs,
+			  unsigned int count)
+{
+	struct rte_mbuf *m_previous = NULL;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int total_length = 0;
+	unsigned int copy_length;
+	unsigned int src_offset;
+	struct rte_mbuf *m;
+	uint16_t ol_flags;
+	uint16_t vlan_tci;
+	void *pkt_data;
+	unsigned int i;
+
+	avp_dev_buffer_sanity_check(avp, buf);
+
+	/* setup the first source buffer */
+	pkt_buf = avp_dev_translate_buffer(avp, buf);
+	pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+	total_length = pkt_buf->pkt_len;
+	src_offset = 0;
+
+	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+		ol_flags = PKT_RX_VLAN_PKT;
+		vlan_tci = pkt_buf->vlan_tci;
+	} else {
+		ol_flags = 0;
+		vlan_tci = 0;
+	}
+
+	for (i = 0; (i < count) && (buf != NULL); i++) {
+		/* fill each destination buffer */
+		m = mbufs[i];
+
+		if (m_previous != NULL)
+			m_previous->next = m;
+
+		m_previous = m;
+
+		do {
+			/*
+			 * Copy as many source buffers as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->guest_mbuf_size -
+					       rte_pktmbuf_data_len(m)),
+					      (pkt_buf->data_len -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       rte_pktmbuf_data_len(m)),
+				   RTE_PTR_ADD(pkt_data, src_offset),
+				   copy_length);
+			rte_pktmbuf_data_len(m) += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == pkt_buf->data_len)) {
+				/* need a new source buffer */
+				buf = pkt_buf->next;
+				if (buf != NULL) {
+					pkt_buf = avp_dev_translate_buffer(
+						avp, buf);
+					pkt_data = avp_dev_translate_buffer(
+						avp, pkt_buf->data);
+					src_offset = 0;
+				}
+			}
+
+			if (unlikely(rte_pktmbuf_data_len(m) ==
+				     avp->guest_mbuf_size)) {
+				/* need a new destination mbuf */
+				break;
+			}
+
+		} while (buf != NULL);
+	}
+
+	m = mbufs[0];
+	m->ol_flags = ol_flags;
+	m->nb_segs = count;
+	rte_pktmbuf_pkt_len(m) = total_length;
+	m->vlan_tci = vlan_tci;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	return m;
+}
+
+static uint16_t
+avp_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_RX_BURST];
+	struct rte_mbuf *mbufs[RTE_AVP_MAX_MBUF_SEGMENTS];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	struct rte_avp_desc *buf;
+	unsigned int count, avail, n;
+	unsigned int guest_mbuf_size;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int buf_len;
+	unsigned int port_id;
+	unsigned int i;
+
+	guest_mbuf_size = avp->guest_mbuf_size;
+	port_id = avp->port_id;
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i + 1 < n) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+		buf = avp_bufs[i];
+
+		/* Peek into the first buffer to determine the total length */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		buf_len = pkt_buf->pkt_len;
+
+		/* Allocate enough mbufs to receive the entire packet */
+		required = (buf_len + guest_mbuf_size - 1) / guest_mbuf_size;
+		if (rte_pktmbuf_alloc_bulk(avp->pool, mbufs, required)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* Copy the data from the buffers to our mbufs */
+		m = avp_dev_copy_from_buffers(avp, buf, mbufs, required);
+
+		/* finalize mbuf */
+		m->port = port_id;
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += buf_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
+
+static uint16_t
+avp_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_RX_BURST];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	unsigned int count, avail, n;
+	unsigned int pkt_len;
+	struct rte_mbuf *m;
+	char *pkt_data;
+	unsigned int i;
+
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i < n - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust host pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = pkt_buf->pkt_len;
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+			     (pkt_buf->nb_segs > 1))) {
+			/*
+			 * application should be using the scattered receive
+			 * function
+			 */
+			rxq->errors++;
+			continue;
+		}
+
+		/* allocate an mbuf to hold the received packet */
+		m = rte_pktmbuf_alloc(avp->pool);
+		if (unlikely(m == NULL)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* copy data out of the host buffer to our buffer */
+		m->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_memcpy(rte_pktmbuf_mtod(m, void *), pkt_data, pkt_len);
+
+		/* initialize the local mbuf */
+		rte_pktmbuf_data_len(m) = pkt_len;
+		rte_pktmbuf_pkt_len(m) = pkt_len;
+		m->port = avp->port_id;
+
+		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+			m->ol_flags = PKT_RX_VLAN_PKT;
+			m->vlan_tci = pkt_buf->vlan_tci;
+		}
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += pkt_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v5 09/14] net/avp: packet transmit functions
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (7 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 08/14] net/avp: packet receive functions Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 10/14] net/avp: device statistics operations Allain Legacy
                           ` (6 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for packet transmit functions so that an application can send
packets to the host application via an AVP device queue.  Both the simple
and scattered functions are supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 335 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 335 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 7524e1a..07efd42 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -87,12 +87,24 @@ static uint16_t avp_recv_scattered_pkts(void *rx_queue,
 static uint16_t avp_recv_pkts(void *rx_queue,
 			      struct rte_mbuf **rx_pkts,
 			      uint16_t nb_pkts);
+
+static uint16_t avp_xmit_scattered_pkts(void *tx_queue,
+					struct rte_mbuf **tx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_xmit_pkts(void *tx_queue,
+			      struct rte_mbuf **tx_pkts,
+			      uint16_t nb_pkts);
+
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
+
+
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
 #define AVP_MAX_RX_BURST 64
+#define AVP_MAX_TX_BURST 64
 #define AVP_MAX_MAC_ADDRS 1
 #define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -646,6 +658,7 @@ struct avp_queue {
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &avp_recv_pkts;
+	eth_dev->tx_pkt_burst = &avp_xmit_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -657,6 +670,7 @@ struct avp_queue {
 		if (eth_dev->data->scattered_rx) {
 			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 		return 0;
 	}
@@ -788,6 +802,7 @@ struct avp_queue {
 			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
 			eth_dev->data->scattered_rx = 1;
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 	}
 
@@ -1250,6 +1265,326 @@ struct avp_queue {
 	return count;
 }
 
+/*
+ * Copy a chained mbuf to a set of host buffers.  This function assumes that
+ * there are sufficient destination buffers to contain the entire source
+ * packet.
+ */
+static inline uint16_t
+avp_dev_copy_to_buffers(struct avp_dev *avp,
+			struct rte_mbuf *mbuf,
+			struct rte_avp_desc **buffers,
+			unsigned int count)
+{
+	struct rte_avp_desc *previous_buf = NULL;
+	struct rte_avp_desc *first_buf = NULL;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_desc *buf;
+	size_t total_length;
+	struct rte_mbuf *m;
+	size_t copy_length;
+	size_t src_offset;
+	char *pkt_data;
+	unsigned int i;
+
+	__rte_mbuf_sanity_check(mbuf, 1);
+
+	m = mbuf;
+	src_offset = 0;
+	total_length = rte_pktmbuf_pkt_len(m);
+	for (i = 0; (i < count) && (m != NULL); i++) {
+		/* fill each destination buffer */
+		buf = buffers[i];
+
+		if (i < count - 1) {
+			/* prefetch next entry while processing this one */
+			pkt_buf = avp_dev_translate_buffer(avp, buffers[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+
+		/* setup the buffer chain */
+		if (previous_buf != NULL)
+			previous_buf->next = buf;
+		else
+			first_buf = pkt_buf;
+
+		previous_buf = pkt_buf;
+
+		do {
+			/*
+			 * copy as many source mbuf segments as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->host_mbuf_size -
+					       pkt_buf->data_len),
+					      (rte_pktmbuf_data_len(m) -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(pkt_data, pkt_buf->data_len),
+				   RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       src_offset),
+				   copy_length);
+			pkt_buf->data_len += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == rte_pktmbuf_data_len(m))) {
+				/* need a new source buffer */
+				m = m->next;
+				src_offset = 0;
+			}
+
+			if (unlikely(pkt_buf->data_len ==
+				     avp->host_mbuf_size)) {
+				/* need a new destination buffer */
+				break;
+			}
+
+		} while (m != NULL);
+	}
+
+	first_buf->nb_segs = count;
+	first_buf->pkt_len = total_length;
+
+	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+		first_buf->vlan_tci = mbuf->vlan_tci;
+	}
+
+	avp_dev_buffer_sanity_check(avp, buffers[0]);
+
+	return total_length;
+}
+
+
+static uint16_t
+avp_xmit_scattered_pkts(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_avp_desc *avp_bufs[(AVP_MAX_TX_BURST *
+				       RTE_AVP_MAX_MBUF_SEGMENTS)];
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *tx_bufs[AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	unsigned int orig_nb_pkts;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int segments;
+	unsigned int tx_bytes;
+	unsigned int i;
+
+	orig_nb_pkts = nb_pkts;
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > AVP_MAX_TX_BURST))
+		nb_pkts = AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+	if (unlikely(avail > (AVP_MAX_TX_BURST *
+			      RTE_AVP_MAX_MBUF_SEGMENTS)))
+		avail = AVP_MAX_TX_BURST * RTE_AVP_MAX_MBUF_SEGMENTS;
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	nb_pkts = RTE_MIN(count, nb_pkts);
+
+	/* determine how many packets will fit in the available buffers */
+	count = 0;
+	segments = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		if (likely(i < (unsigned int)nb_pkts - 1)) {
+			/* prefetch next entry while processing this one */
+			rte_prefetch0(tx_pkts[i + 1]);
+		}
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		if (unlikely((required == 0) ||
+			     (required > RTE_AVP_MAX_MBUF_SEGMENTS)))
+			break;
+		else if (unlikely(required + segments > avail))
+			break;
+		segments += required;
+		count++;
+	}
+	nb_pkts = count;
+
+	if (unlikely(nb_pkts == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   nb_pkts, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, segments);
+	if (unlikely(n != segments)) {
+		PMD_TX_LOG(DEBUG, "Failed to allocate buffers "
+			   "n=%u, segments=%u, orig=%u\n",
+			   n, segments, orig_nb_pkts);
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	count = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* determine how many buffers are required for this packet */
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		tx_bytes += avp_dev_copy_to_buffers(avp, m,
+						    &avp_bufs[count], required);
+		tx_bufs[i] = avp_bufs[count];
+		count += required;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += nb_pkts;
+	txq->bytes += tx_bytes;
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+	for (i = 0; i < nb_pkts; i++)
+		avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+#endif
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&tx_bufs[0], nb_pkts);
+	if (unlikely(n != orig_nb_pkts))
+		txq->errors += (orig_nb_pkts - n);
+
+	return n;
+}
+
+
+static uint16_t
+avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	struct rte_mbuf *m;
+	unsigned int pkt_len;
+	unsigned int tx_bytes;
+	char *pkt_data;
+	unsigned int i;
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > AVP_MAX_TX_BURST))
+		nb_pkts = AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+
+	if (unlikely(count == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors += nb_pkts;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   count, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, count);
+	if (unlikely(n != count)) {
+		txq->errors++;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	for (i = 0; i < count; i++) {
+		/* prefetch next entry while processing the current one */
+		if (i < count - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = rte_pktmbuf_pkt_len(m);
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+					 (pkt_len > avp->host_mbuf_size))) {
+			/*
+			 * application should be using the scattered transmit
+			 * function; send it truncated to avoid the performance
+			 * hit of having to manage returning the already
+			 * allocated buffer to the free list.  This should not
+			 * happen since the application should have set the
+			 * max_rx_pkt_len based on its MTU and it should be
+			 * policing its own packet sizes.
+			 */
+			txq->errors++;
+			pkt_len = RTE_MIN(avp->guest_mbuf_size,
+					  avp->host_mbuf_size);
+		}
+
+		/* copy data out of our mbuf and into the AVP buffer */
+		rte_memcpy(pkt_data, rte_pktmbuf_mtod(m, void *), pkt_len);
+		pkt_buf->pkt_len = pkt_len;
+		pkt_buf->data_len = pkt_len;
+		pkt_buf->nb_segs = 1;
+		pkt_buf->next = NULL;
+
+		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+			pkt_buf->vlan_tci = m->vlan_tci;
+		}
+
+		tx_bytes += pkt_len;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += count;
+	txq->bytes += tx_bytes;
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&avp_bufs[0], count);
+
+	return n;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v5 10/14] net/avp: device statistics operations
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (8 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 09/14] net/avp: packet transmit functions Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 11/14] net/avp: device promiscuous functions Allain Legacy
                           ` (5 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds device functions to query and reset statistics.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
---
 doc/guides/nics/features/avp.ini |  2 ++
 drivers/net/avp/avp_ethdev.c     | 67 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
index e748ea8..0de761c 100644
--- a/doc/guides/nics/features/avp.ini
+++ b/doc/guides/nics/features/avp.ini
@@ -9,5 +9,7 @@ Jumbo frame          = Y
 Scattered Rx         = Y
 Unicast MAC filter   = Y
 VLAN offload         = Y
+Basic stats          = Y
+Stats per queue      = Y
 Linux UIO            = Y
 x86-64               = Y
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 07efd42..f24c6a8 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -99,6 +99,10 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 
+static void avp_dev_stats_get(struct rte_eth_dev *dev,
+			      struct rte_eth_stats *stats);
+static void avp_dev_stats_reset(struct rte_eth_dev *dev);
+
 
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
@@ -146,6 +150,8 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 	.dev_configure       = avp_dev_configure,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
+	.stats_get           = avp_dev_stats_get,
+	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
@@ -1720,6 +1726,67 @@ struct avp_queue {
 	}
 }
 
+static void
+avp_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			stats->ipackets += rxq->packets;
+			stats->ibytes += rxq->bytes;
+			stats->ierrors += rxq->errors;
+
+			stats->q_ipackets[i] += rxq->packets;
+			stats->q_ibytes[i] += rxq->bytes;
+			stats->q_errors[i] += rxq->errors;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			stats->opackets += txq->packets;
+			stats->obytes += txq->bytes;
+			stats->oerrors += txq->errors;
+
+			stats->q_opackets[i] += txq->packets;
+			stats->q_obytes[i] += txq->bytes;
+			stats->q_errors[i] += txq->errors;
+		}
+	}
+}
+
+static void
+avp_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			rxq->bytes = 0;
+			rxq->packets = 0;
+			rxq->errors = 0;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			txq->bytes = 0;
+			txq->packets = 0;
+			txq->errors = 0;
+		}
+	}
+}
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v5 11/14] net/avp: device promiscuous functions
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (9 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 10/14] net/avp: device statistics operations Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 12/14] net/avp: device start and stop operations Allain Legacy
                           ` (4 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for setting and clearing promiscuous mode on an AVP device.
When enabled, the _mac_filter function allows packets destined to any
MAC address to be processed by the receive functions.
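
Since AVP has no hardware filter to program, promiscuous mode is purely a
software flag consulted by the receive path.  A minimal sketch of the flag
handling (the struct and the int return values are illustrative; the real
eth_dev_ops callbacks return void):

```c
#include <assert.h>
#include <stdint.h>

#define AVP_F_PROMISC (1 << 1)	/* same bit position as the driver's flag */

/* stand-in for the driver's private device state */
struct avp_state {
	uint32_t flags;
};

/* set the flag only if not already set; returns 1 if the state changed */
static int
promisc_enable(struct avp_state *avp)
{
	if ((avp->flags & AVP_F_PROMISC) == 0) {
		avp->flags |= AVP_F_PROMISC;
		return 1;
	}
	return 0;
}

/* clear the flag only if currently set; returns 1 if the state changed */
static int
promisc_disable(struct avp_state *avp)
{
	if ((avp->flags & AVP_F_PROMISC) != 0) {
		avp->flags &= ~AVP_F_PROMISC;
		return 1;
	}
	return 0;
}
```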

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 doc/guides/nics/features/avp.ini |  1 +
 drivers/net/avp/avp_ethdev.c     | 28 ++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
index 0de761c..ceb6993 100644
--- a/doc/guides/nics/features/avp.ini
+++ b/doc/guides/nics/features/avp.ini
@@ -7,6 +7,7 @@
 Link status          = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
+Promiscuous mode     = Y
 Unicast MAC filter   = Y
 VLAN offload         = Y
 Basic stats          = Y
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index f24c6a8..d008e36 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -67,6 +67,9 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
+static void avp_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avp_dev_promiscuous_disable(struct rte_eth_dev *dev);
+
 static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
 				  uint16_t rx_queue_id,
 				  uint16_t nb_rx_desc,
@@ -153,6 +156,8 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
 	.stats_get           = avp_dev_stats_get,
 	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
+	.promiscuous_enable  = avp_dev_promiscuous_enable,
+	.promiscuous_disable = avp_dev_promiscuous_disable,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
 	.tx_queue_setup      = avp_dev_tx_queue_setup,
@@ -1679,6 +1684,29 @@ struct avp_queue {
 	return -1;
 }
 
+static void
+avp_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if ((avp->flags & AVP_F_PROMISC) == 0) {
+		avp->flags |= AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+}
+
+static void
+avp_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if ((avp->flags & AVP_F_PROMISC) != 0) {
+		avp->flags &= ~AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+}
 
 static void
 avp_dev_info_get(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1


* [PATCH v5 12/14] net/avp: device start and stop operations
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (10 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 11/14] net/avp: device promiscuous functions Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 13/14] net/avp: migration interrupt handling Allain Legacy
                           ` (3 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Adds support for device start and stop functions.  This allows an
application to control the administrative state of an AVP device.  Stopping
the device will notify the host application to stop sending packets on that
device's receive queues.
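
Both operations reuse the driver's control-request convention: build a
request, hand it to the host over the request/response FIFOs, and return the
transport error if the exchange failed, otherwise the host's result.  A
self-contained sketch of that return-value convention (the request struct,
the req_id value, and send_request() are mocks, not the real AVP transport):

```c
#include <assert.h>
#include <string.h>

/* stand-in for struct rte_avp_request */
struct ctrl_request {
	unsigned int req_id;
	int if_up;
	int result;	/* filled in by the "host" side */
};

/* mock transport: pretend the host accepted and processed the request */
static int
send_request(struct ctrl_request *req)
{
	req->result = 0;	/* host-side status */
	return 0;		/* transport status */
}

/* mirror of the patch's convention: a transport failure wins, otherwise
 * the host's per-request result is reported to the caller */
static int
ctrl_set_link_state(int state)
{
	struct ctrl_request request;
	int ret;

	memset(&request, 0, sizeof(request));
	request.req_id = 1;	/* stand-in for RTE_AVP_REQ_CFG_NETWORK_IF */
	request.if_up = state;

	ret = send_request(&request);

	return ret == 0 ? request.result : ret;
}
```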

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 102 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index d008e36..9824190 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -62,6 +62,9 @@
 
 
 static int avp_dev_configure(struct rte_eth_dev *dev);
+static int avp_dev_start(struct rte_eth_dev *dev);
+static void avp_dev_stop(struct rte_eth_dev *dev);
+static void avp_dev_close(struct rte_eth_dev *dev);
 static void avp_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
@@ -151,6 +154,9 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
  */
 static const struct eth_dev_ops avp_eth_dev_ops = {
 	.dev_configure       = avp_dev_configure,
+	.dev_start           = avp_dev_start,
+	.dev_stop            = avp_dev_stop,
+	.dev_close           = avp_dev_close,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.stats_get           = avp_dev_stats_get,
@@ -316,6 +322,23 @@ struct avp_queue {
 }
 
 static int
+avp_dev_ctrl_set_link_state(struct rte_eth_dev *eth_dev, unsigned int state)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a link state change request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_NETWORK_IF;
+	request.if_up = state;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
 avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
 			struct rte_avp_device_config *config)
 {
@@ -333,6 +356,22 @@ struct avp_queue {
 	return ret == 0 ? request.result : ret;
 }
 
+static int
+avp_dev_ctrl_shutdown(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a shutdown request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_SHUTDOWN_DEVICE;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
 /* translate from host mbuf virtual address to guest virtual address */
 static inline void *
 avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
@@ -1669,6 +1708,69 @@ struct avp_queue {
 	return ret;
 }
 
+static int
+avp_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	/* disable features that we do not support */
+	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_extend = 0;
+	eth_dev->data->dev_conf.rxmode.hw_strip_crc = 0;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 1);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags |= AVP_F_LINKUP;
+
+	ret = 0;
+
+unlock:
+	return ret;
+}
+
+static void
+avp_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	avp->flags &= ~AVP_F_LINKUP;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 0);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
+			    ret);
+	}
+}
+
+static void
+avp_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	/* remember current link state */
+	avp->flags &= ~AVP_F_LINKUP;
+	avp->flags &= ~AVP_F_CONFIGURED;
+
+	/* update device state */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Device shutdown failed by host, ret=%d\n",
+			    ret);
+		/* continue */
+	}
+}
 
 static int
 avp_dev_link_update(struct rte_eth_dev *eth_dev,
-- 
1.8.3.1


* [PATCH v5 13/14] net/avp: migration interrupt handling
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (11 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 12/14] net/avp: device start and stop operations Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 11:24         ` [PATCH v5 14/14] doc: adds information related to the AVP PMD Allain Legacy
                           ` (2 subsequent siblings)
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

This commit introduces changes required to support VM live-migration.  This
is done by registering and responding to interrupts coming from the host to
signal that the memory is about to be made invalid and replaced with a new
memory zone on the destination compute node.

Enabling and disabling of the interrupts is handled outside of the
start/stop functions because interrupts must remain enabled for the
lifetime of the device.  This ensures that host interrupts are serviced
and acknowledged even when the application has stopped the device.
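
The interrupt handler reads a clear-on-read status register, dispatches on
the migration state, and acknowledges by writing its resulting state back to
an ack register.  A self-contained sketch of that read/dispatch/ack flow
against a fake register block (the struct, offsets and values are stand-ins;
note that a pending bit is tested with a bitwise AND against the mask):

```c
#include <assert.h>
#include <stdint.h>

#define MIGRATION_INTERRUPT_MASK (1 << 0)	/* illustrative bit layout */
#define MIGRATION_DETACHED 1
#define MIGRATION_ERROR 0xff

/* fake MMIO register block standing in for the AVP BAR */
struct fake_regs {
	uint32_t interrupt_status;	/* clear-on-read on real hardware */
	uint32_t migration_status;
	uint32_t migration_ack;
};

/* returns the value written to the ack register, or 0 when no migration
 * interrupt was pending */
static uint32_t
handle_interrupt(struct fake_regs *regs)
{
	uint32_t status = regs->interrupt_status;

	regs->interrupt_status = 0;	/* emulate clear-on-read */

	if (status & MIGRATION_INTERRUPT_MASK) {
		uint32_t value = regs->migration_status;

		/* ack with the observed state on success, ERROR otherwise */
		regs->migration_ack =
			(value == MIGRATION_DETACHED) ? value : MIGRATION_ERROR;
		return regs->migration_ack;
	}
	return 0;
}
```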

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 372 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 372 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 9824190..e166867 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -48,6 +48,7 @@
 #include <rte_ether.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
+#include <rte_spinlock.h>
 #include <rte_byteorder.h>
 #include <rte_dev.h>
 #include <rte_memory.h>
@@ -60,6 +61,8 @@
 #include "avp_logs.h"
 
 
+static int avp_dev_create(struct rte_pci_device *pci_dev,
+			  struct rte_eth_dev *eth_dev);
 
 static int avp_dev_configure(struct rte_eth_dev *dev);
 static int avp_dev_start(struct rte_eth_dev *dev);
@@ -174,6 +177,7 @@ static void avp_dev_stats_get(struct rte_eth_dev *dev,
 #define AVP_F_PROMISC (1 << 1)
 #define AVP_F_CONFIGURED (1 << 2)
 #define AVP_F_LINKUP (1 << 3)
+#define AVP_F_DETACHED (1 << 4)
 /**@} */
 
 /* Ethernet device validation marker */
@@ -209,6 +213,9 @@ struct avp_dev {
 	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
 	/**< To be freed mbufs queue */
 
+	/* mutual exclusion over the 'flag' and 'resp_q/req_q' fields */
+	rte_spinlock_t lock;
+
 	/* For request & response */
 	struct rte_avp_fifo *req_q; /**< Request queue */
 	struct rte_avp_fifo *resp_q; /**< Response queue */
@@ -496,6 +503,46 @@ struct avp_queue {
 	return 0;
 }
 
+static int
+avp_dev_detach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Detaching port %u from AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(NOTICE, "port %u already detached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* shutdown the device first so the host stops sending us packets. */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to send/recv shutdown to host, ret=%d\n",
+			    ret);
+		avp->flags &= ~AVP_F_DETACHED;
+		goto unlock;
+	}
+
+	avp->flags |= AVP_F_DETACHED;
+	rte_wmb();
+
+	/* wait for queues to acknowledge the presence of the detach flag */
+	rte_delay_ms(1);
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
 static void
 _avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 {
@@ -564,6 +611,240 @@ struct avp_queue {
 		    avp->num_tx_queues, avp->num_rx_queues);
 }
 
+static int
+avp_dev_attach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_config config;
+	unsigned int i;
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Attaching port %u to AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (!(avp->flags & AVP_F_DETACHED)) {
+		PMD_DRV_LOG(NOTICE, "port %u already attached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/*
+	 * make sure that the detached flag is set prior to reconfiguring the
+	 * queues.
+	 */
+	avp->flags |= AVP_F_DETACHED;
+	rte_wmb();
+
+	/*
+	 * re-run the device create utility which will parse the new host info
+	 * and setup the AVP device queue pointers.
+	 */
+	ret = avp_dev_create(AVP_DEV_TO_PCI(eth_dev), eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to re-create AVP device, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	if (avp->flags & AVP_F_CONFIGURED) {
+		/*
+		 * Update the receive queue mapping to handle cases where the
+		 * source and destination hosts have different queue
+		 * requirements.  As long as the DETACHED flag is asserted the
+		 * queue table should not be referenced so it should be safe to
+		 * update it.
+		 */
+		_avp_set_queue_counts(eth_dev);
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+			_avp_set_rx_queue_mappings(eth_dev, i);
+
+		/*
+		 * Update the host with our config details so that it knows the
+		 * device is active.
+		 */
+		memset(&config, 0, sizeof(config));
+		config.device_id = avp->device_id;
+		config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+		config.driver_version = AVP_DPDK_DRIVER_VERSION;
+		config.features = avp->features;
+		config.num_tx_queues = avp->num_tx_queues;
+		config.num_rx_queues = avp->num_rx_queues;
+		config.if_up = !!(avp->flags & AVP_F_LINKUP);
+
+		ret = avp_dev_ctrl_set_config(eth_dev, &config);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Config request failed by host, ret=%d\n",
+				    ret);
+			goto unlock;
+		}
+	}
+
+	rte_wmb();
+	avp->flags &= ~AVP_F_DETACHED;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_interrupt_handler(struct rte_intr_handle *intr_handle,
+						  void *data)
+{
+	struct rte_eth_dev *eth_dev = data;
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t status, value;
+	int ret;
+
+	if (registers == NULL)
+		rte_panic("no mapped MMIO register space\n");
+
+	/* read the interrupt status register
+	 * note: this register clears on read so all raised interrupts must be
+	 *    handled or remembered for later processing
+	 */
+	status = AVP_READ32(
+		RTE_PTR_ADD(registers,
+			    RTE_AVP_INTERRUPT_STATUS_OFFSET));
+
+	if (status & RTE_AVP_MIGRATION_INTERRUPT_MASK) {
+		/* handle interrupt based on current status */
+		value = AVP_READ32(
+			RTE_PTR_ADD(registers,
+				    RTE_AVP_MIGRATION_STATUS_OFFSET));
+		switch (value) {
+		case RTE_AVP_MIGRATION_DETACHED:
+			ret = avp_dev_detach(eth_dev);
+			break;
+		case RTE_AVP_MIGRATION_ATTACHED:
+			ret = avp_dev_attach(eth_dev);
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "unexpected migration status, status=%u\n",
+				    value);
+			ret = -EINVAL;
+		}
+
+		/* acknowledge the request by writing out our current status */
+		value = (ret == 0 ? value : RTE_AVP_MIGRATION_ERROR);
+		AVP_WRITE32(value,
+			    RTE_PTR_ADD(registers,
+					RTE_AVP_MIGRATION_ACK_OFFSET));
+
+		PMD_DRV_LOG(NOTICE, "AVP migration interrupt handled\n");
+	}
+
+	if (status & ~RTE_AVP_MIGRATION_INTERRUPT_MASK)
+		PMD_DRV_LOG(WARNING, "AVP unexpected interrupt, status=0x%08x\n",
+			    status);
+
+	/* re-enable UIO interrupt handling */
+	ret = rte_intr_enable(intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
+			    ret);
+		/* continue */
+	}
+}
+
+static int
+avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return -EINVAL;
+
+	/* enable UIO interrupt handling */
+	ret = rte_intr_enable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* inform the device that all interrupts are enabled */
+	AVP_WRITE32(RTE_AVP_APP_INTERRUPTS_MASK,
+		    RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	return 0;
+}
+
+static int
+avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return 0;
+
+	/* inform the device that all interrupts are disabled */
+	AVP_WRITE32(RTE_AVP_NO_INTERRUPTS_MASK,
+		    RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	/* disable UIO interrupt handling */
+	ret = rte_intr_disable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	int ret;
+
+	/* register a callback handler with UIO for interrupt notifications */
+	ret = rte_intr_callback_register(&pci_dev->intr_handle,
+					 avp_dev_interrupt_handler,
+					 (void *)eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to register UIO interrupt callback, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* enable interrupt processing */
+	return avp_dev_enable_interrupts(eth_dev);
+}
+
+static int
+avp_dev_migration_pending(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t value;
+
+	if (registers == NULL)
+		return 0;
+
+	value = AVP_READ32(RTE_PTR_ADD(registers,
+				       RTE_AVP_MIGRATION_STATUS_OFFSET));
+	if (value == RTE_AVP_MIGRATION_DETACHED) {
+		/* migration is in progress; ack it if we have not already */
+		AVP_WRITE32(value,
+			    RTE_PTR_ADD(registers,
+					RTE_AVP_MIGRATION_ACK_OFFSET));
+		return 1;
+	}
+	return 0;
+}
+
 /*
  * create a AVP device using the supplied device info by first translating it
  * to guest address space(s).
@@ -616,6 +897,7 @@ struct avp_queue {
 		avp->port_id = eth_dev->data->port_id;
 		avp->host_mbuf_size = host_info->mbuf_size;
 		avp->host_features = host_info->features;
+		rte_spinlock_init(&avp->lock);
 		memcpy(&avp->ethaddr.addr_bytes[0],
 		       host_info->ethaddr, ETHER_ADDR_LEN);
 		/* adjust max values to not exceed our max */
@@ -729,6 +1011,12 @@ struct avp_queue {
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check current migration status */
+	if (avp_dev_migration_pending(eth_dev)) {
+		PMD_DRV_LOG(ERR, "VM live migration operation in progress\n");
+		return -EBUSY;
+	}
+
 	/* Check BAR resources */
 	ret = avp_dev_check_regions(eth_dev);
 	if (ret < 0) {
@@ -737,6 +1025,13 @@ struct avp_queue {
 		return ret;
 	}
 
+	/* Enable interrupts */
+	ret = avp_dev_setup_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
 	/* Handle each subtype */
 	ret = avp_dev_create(pci_dev, eth_dev);
 	if (ret < 0) {
@@ -761,12 +1056,20 @@ struct avp_queue {
 static int
 eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
 {
+	int ret;
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
 
 	if (eth_dev->data == NULL)
 		return 0;
 
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
 	if (eth_dev->data->mac_addrs != NULL) {
 		rte_free(eth_dev->data->mac_addrs);
 		eth_dev->data->mac_addrs = NULL;
@@ -1129,6 +1432,11 @@ struct avp_queue {
 	unsigned int port_id;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
 	guest_mbuf_size = avp->guest_mbuf_size;
 	port_id = avp->port_id;
 	rx_q = avp->rx_q[rxq->queue_id];
@@ -1223,6 +1531,11 @@ struct avp_queue {
 	char *pkt_data;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
 	rx_q = avp->rx_q[rxq->queue_id];
 	free_q = avp->free_q[rxq->queue_id];
 
@@ -1430,6 +1743,13 @@ struct avp_queue {
 	unsigned int i;
 
 	orig_nb_pkts = nb_pkts;
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop? */
+		txq->errors += nb_pkts;
+		return 0;
+	}
+
 	tx_q = avp->tx_q[txq->queue_id];
 	alloc_q = avp->alloc_q[txq->queue_id];
 
@@ -1542,6 +1862,13 @@ struct avp_queue {
 	char *pkt_data;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop?! */
+		txq->errors++;
+		return 0;
+	}
+
 	tx_q = avp->tx_q[txq->queue_id];
 	alloc_q = avp->alloc_q[txq->queue_id];
 
@@ -1674,6 +2001,13 @@ struct avp_queue {
 	void *addr;
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
 	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
 	host_info = (struct rte_avp_device_info *)addr;
 
@@ -1705,6 +2039,7 @@ struct avp_queue {
 	ret = 0;
 
 unlock:
+	rte_spinlock_unlock(&avp->lock);
 	return ret;
 }
 
@@ -1714,6 +2049,13 @@ struct avp_queue {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
 	/* disable features that we do not support */
 	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
 	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
@@ -1734,6 +2076,7 @@ struct avp_queue {
 	ret = 0;
 
 unlock:
+	rte_spinlock_unlock(&avp->lock);
 	return ret;
 }
 
@@ -1743,6 +2086,13 @@ struct avp_queue {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
 	avp->flags &= ~AVP_F_LINKUP;
 
 	/* update link state */
@@ -1751,6 +2101,9 @@ struct avp_queue {
 		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
 			    ret);
 	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
@@ -1759,10 +2112,22 @@ struct avp_queue {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		goto unlock;
+	}
+
 	/* remember current link state */
 	avp->flags &= ~AVP_F_LINKUP;
 	avp->flags &= ~AVP_F_CONFIGURED;
 
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts\n");
+		/* continue */
+	}
+
 	/* update device state */
 	ret = avp_dev_ctrl_shutdown(eth_dev);
 	if (ret < 0) {
@@ -1770,6 +2135,9 @@ struct avp_queue {
 			    ret);
 		/* continue */
 	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static int
@@ -1791,11 +2159,13 @@ struct avp_queue {
 {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 
+	rte_spinlock_lock(&avp->lock);
 	if ((avp->flags & AVP_F_PROMISC) == 0) {
 		avp->flags |= AVP_F_PROMISC;
 		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
 			    eth_dev->data->port_id);
 	}
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
@@ -1803,11 +2173,13 @@ struct avp_queue {
 {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 
+	rte_spinlock_lock(&avp->lock);
 	if ((avp->flags & AVP_F_PROMISC) != 0) {
 		avp->flags &= ~AVP_F_PROMISC;
 		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
 			    eth_dev->data->port_id);
 	}
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
-- 
1.8.3.1


* [PATCH v5 14/14] doc: adds information related to the AVP PMD
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (12 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 13/14] net/avp: migration interrupt handling Allain Legacy
@ 2017-03-23 11:24         ` Allain Legacy
  2017-03-23 14:18         ` [PATCH v5 00/14] Wind River Systems " Ferruh Yigit
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
  15 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-23 11:24 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3

Updates the documentation and feature lists for the AVP PMD device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 MAINTAINERS                            |   1 +
 doc/guides/nics/avp.rst                | 107 +++++++++++++++++++++++++++++++++
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_17_05.rst |   5 ++
 4 files changed, 114 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 4e9aa00..ec27449 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -420,6 +420,7 @@ Wind River AVP PMD
 M: Allain Legacy <allain.legacy@windriver.com>
 M: Matt Peters <matt.peters@windriver.com>
 F: drivers/net/avp
+F: doc/guides/nics/avp.rst
 
 
 Crypto Drivers
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
new file mode 100644
index 0000000..c301f65
--- /dev/null
+++ b/doc/guides/nics/avp.rst
@@ -0,0 +1,107 @@
+..  BSD LICENSE
+    Copyright(c) 2017 Wind River Systems, Inc.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Wind River Systems nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AVP Poll Mode Driver
+====================
+
+The Accelerated Virtual Port (AVP) device is a shared memory based device
+available on the `virtualization platforms <http://www.windriver.com/products/titanium-cloud/>`_
+from Wind River Systems.
+
+It enables optimized packet throughput without requiring any packet processing
+in qemu. This provides our customers with a significant performance increase
+for DPDK applications in the VM.  Since our AVP implementation supports VM
+live-migration it is viewed as a better alternative to PCI passthrough or PCI
+SRIOV since neither of those support VM live-migration without manual
+intervention or significant performance penalties.
+
+The driver binds to PCI devices that are exported by the hypervisor DPDK
+application via a shared memory mechanism.  It supports a subset of the full
+Ethernet device API which enables the application to use the typical
+configuration and packet transfer functions.
+
+The definition of the device structure and configuration options are defined in
+rte_avp_common.h and rte_avp_fifo.h public header files.  These two header
+files are made available as part of the PMD implementation in order to share
+the device definitions between the guest implementation (i.e., the PMD) and the
+host implementation (i.e., the hypervisor DPDK vswitch application).
+
+
+Features and Limitations of the AVP PMD
+---------------------------------------
+
+The AVP PMD driver provides the following functionality.
+
+*   Receive and transmit of both simple and chained mbuf packets
+
+*   Chained mbufs may include up to 5 chained segments
+
+*   Up to 8 receive and transmit queues per device
+
+*   Only a single MAC address is supported
+
+*   The MAC address cannot be modified
+
+*   The maximum receive packet length is 9238 bytes
+
+*   VLAN header stripping and inserting
+
+*   Promiscuous mode
+
+*   VM live-migration
+
+*   PCI hotplug insertion and removal
+
+
+Prerequisites
+-------------
+
+The following prerequisites apply:
+
+*   A virtual machine running in a Wind River Systems virtualization
+    environment and configured with at least one neutron port defined with a
+    vif-model set to "avp".
+
+
+Launching a VM with an AVP type network attachment
+--------------------------------------------------
+
+The following example launches a VM with three network attachments.  The
+first attachment has the default vif-model of "virtio".  The next two
+network attachments have a vif-model of "avp" and may be used with a DPDK
+application built to include the AVP PMD.
+
+.. code-block:: console
+
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 87f9334..0ddcea5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -36,6 +36,7 @@ Network Interface Controller Drivers
     :numbered:
 
     overview
+    avp
     bnx2x
     bnxt
     cxgbe
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 918f483..5b0855c 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -49,6 +49,11 @@ New Features
 
   sPAPR IOMMU based pci probing enabled for vfio-pci devices.
 
+* **Added support for the Wind River Systems AVP PMD.**
+
+  Added a new networking driver for the AVP device type. These devices are
+  specific to the Wind River Systems virtualization platforms.
+
 Resolved Issues
 ---------------
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* Re: [PATCH v5 00/14] Wind River Systems AVP PMD
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (13 preceding siblings ...)
  2017-03-23 11:24         ` [PATCH v5 14/14] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-03-23 14:18         ` Ferruh Yigit
  2017-03-23 18:28           ` Legacy, Allain
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
  15 siblings, 1 reply; 172+ messages in thread
From: Ferruh Yigit @ 2017-03-23 14:18 UTC (permalink / raw)
  To: Allain Legacy
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	thomas.monjalon, vincent.jardin, jerin.jacob, stephen, 3chas3,
	Ananyev, Konstantin

On 3/23/2017 11:23 AM, Allain Legacy wrote:
> This patch series submits an initial version of the AVP PMD from Wind River
> Systems.  The series includes shared header files, driver implementation,
> and changes to documentation files in support of this new driver.  The AVP
> driver is a shared memory based device.  It is intended to be used as a PMD
> within a virtual machine running on a Wind River virtualization platform.
> See: http://www.windriver.com/products/titanium-cloud/
> 
> It enables optimized packet throughput without requiring any packet
> processing in qemu. This allowed us to provide our customers with a
> significant performance increase for both DPDK and non-DPDK applications
> in the VM.   Since our AVP implementation supports VM live-migration it
> is viewed as a better alternative to PCI passthrough or PCI SRIOV since
> neither of those support VM live-migration without manual intervention
> or significant performance penalties.
> 
> Since the initial implementation of AVP devices, vhost-user has become part
> of the qemu offering with a significant performance increase over the
> original virtio implementation.  However, vhost-user still does not achieve
> the level of performance that the AVP device can provide to our customers
> for DPDK based guests.
> 
> A number of our customers have requested that we upstream the driver to
> dpdk.org.
> 
> v2:
> * Fixed coding style violations that slipped in accidentally because of an
>   out of date checkpatch.pl from an older kernel.
> 
> v3:
> * Updated 17.05 release notes to add a section for this new PMD
> * Added additional info to the AVP nic guide document to clarify the
>   benefit of using AVP over virtio.
> * Fixed spelling error in debug log missed by local checkpatch.pl version
> * Split the transmit patch to separate the stats functions as they
>   accidentally got squashed in the last patchset.
> * Fixed debug log strings so that they exceed 80 characters rather than
>   span multiple lines.
> * Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_* instead
> * Replaced usage of RTE_WRITE32 and RTE_READ32 with rte_write32_relaxed
>   and rte_read32_relaxed.
> * Declared rte_pci_id table as const
> 
> v4:
> * Split our interrupt handlers to a separate patch and moved to the end
>   of the series.
> * Removed memset() from stats_get API
> * Removed usage of RTE_AVP_ALIGNMENT
> * Removed unnecessary parentheses in rte_avp_common.h
> * Removed unneeded "goto unlock" where there are no statements in between
>   the goto and the end of the function.
> * Re-tested with pktgen and found that rte_eth_tx_burst() is being called
>   with 0 packets even before starting traffic which resulted in
>   incrementing oerrors; fixed in transmit patch.
> 
> v5:
> * Updated documentation to remove references to ivshmem as it lead to
>   confusion about whether AVP is exactly like ivshmem or simply based on
>   how ivshmem exports memory to a VM via a PCI device.
> * Restructured first set of patches to condense them down to a base patch
>   with the files needed to apply subsequent patches.
> * Removed static prototypes from init/uninit functions in avp_ethdev.c
> * Moved MAC addresses init to the device initialization patch because it
>   is setup by the avp_dev_create() function.
> * Split the changes to the avp.ini features file so that features are
>   marked as enabled in the patch that actually enables them.
> 
> Allain Legacy (14):
>   drivers/net: adds AVP PMD base files
>   net/avp: public header files
>   net/avp: debug log macros
>   net/avp: driver registration
>   net/avp: device initialization
>   net/avp: device configuration
>   net/avp: queue setup and release
>   net/avp: packet receive functions
>   net/avp: packet transmit functions
>   net/avp: device statistics operations
>   net/avp: device promiscuous functions
>   net/avp: device start and stop operations
>   net/avp: migration interrupt handling
>   doc: adds information related to the AVP PMD

Hi Allain,

Overall PMD looks good to me.

Only can you please update documentation based on tech board decision [1].

Repeating related part here:
"
It should be clearly stated  that right now AVP PMD is limited to use
with WindRiver hypervisor and for most cases virtio is still a preferred
virtual device to use with QEMU.
"

Thanks,
ferruh


[1]
http://dpdk.org/ml/archives/dev/2017-March/061009.html

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v5 00/14] Wind River Systems AVP PMD
  2017-03-23 14:18         ` [PATCH v5 00/14] Wind River Systems " Ferruh Yigit
@ 2017-03-23 18:28           ` Legacy, Allain
  2017-03-23 20:35             ` Vincent Jardin
  0 siblings, 1 reply; 172+ messages in thread
From: Legacy, Allain @ 2017-03-23 18:28 UTC (permalink / raw)
  To: YIGIT, FERRUH
  Cc: dev, Jolliffe, Ian, RICHARDSON, BRUCE, MCNAMARA, JOHN, WILES,
	ROGER, thomas.monjalon, vincent.jardin, jerin.jacob, stephen,
	3chas3, ANANYEV, KONSTANTIN

> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, March 23, 2017 10:18 AM
<...> 
> Hi Allain,
> 
> Overall PMD looks good to me.
> 
> Only can you please update documentation based on tech board decision [1].
> 
> Repeating related part here:
> "
> It should be clearly stated  that right now AVP PMD is limited to use with
> WindRiver hypervisor and for most cases virtio is still a preferred virtual
> device to use with QEMU.
> "
> 
Thanks Ferruh,  I'll update the documentation NIC guide and try to send v6 in the next day or so.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v5 00/14] Wind River Systems AVP PMD
  2017-03-23 18:28           ` Legacy, Allain
@ 2017-03-23 20:35             ` Vincent Jardin
  0 siblings, 0 replies; 172+ messages in thread
From: Vincent Jardin @ 2017-03-23 20:35 UTC (permalink / raw)
  To: Legacy, Allain, YIGIT, FERRUH
  Cc: dev, Jolliffe, Ian, RICHARDSON, BRUCE, MCNAMARA, JOHN, WILES,
	ROGER, thomas.monjalon, jerin.jacob, stephen, 3chas3, ANANYEV,
	KONSTANTIN



Le 23 mars 2017 7:28:02 PM "Legacy, Allain" <Allain.Legacy@windriver.com> a 
écrit :

>> -----Original Message-----
>> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
>> Sent: Thursday, March 23, 2017 10:18 AM
> <...>
>> Hi Allain,
>>
>> Overall PMD looks good to me.
>>
>> Only can you please update documentation based on tech board decision [1].
>>
>> Repeating related part here:
>> "
>> It should be clearly stated  that right now AVP PMD is limited to use with
>> WindRiver hypervisor and for most cases virtio is still a preferred virtual
>> device to use with QEMU.
>> "
>>
> Thanks Ferruh,  I'll update the documentation NIC guide and try to send v6 
> in the next day or so.
>

Thanks. So, with this PMD accepted, it means that DPDK is open and
flexible to integrate many models of PMD.

All my best,
  Vincent

>
>
>

^ permalink raw reply	[flat|nested] 172+ messages in thread

* [PATCH v6 00/14] Wind River Systems AVP PMD
  2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
                           ` (14 preceding siblings ...)
  2017-03-23 14:18         ` [PATCH v5 00/14] Wind River Systems " Ferruh Yigit
@ 2017-03-28 11:53         ` Allain Legacy
  2017-03-28 11:53           ` [PATCH v6 01/14] drivers/net: adds AVP PMD base files Allain Legacy
                             ` (14 more replies)
  15 siblings, 15 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:53 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

This patch series submits an initial version of the AVP PMD from Wind River
Systems.  The series includes shared header files, driver implementation,
and changes to documentation files in support of this new driver.  The AVP
driver is a shared memory based device.  It is intended to be used as a PMD
within a virtual machine running on a Wind River virtualization platform.
See: http://www.windriver.com/products/titanium-cloud/
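
For reviewers unfamiliar with the AVP queue model: the shared-memory FIFOs
used by the device follow the usual single-producer/single-consumer
circular-buffer rules (empty when write == read; the writer never overwrites
the read position).  A minimal Python sketch of those semantics, assuming a
power-of-two length with one slot kept free (names are illustrative, not
taken from the driver):

```python
class AvpFifoSketch:
    """Illustrative circular FIFO: empty when write == read."""

    def __init__(self, length):
        # a power-of-two length lets real implementations mask instead of mod
        assert length & (length - 1) == 0, "length must be a power of two"
        self.len = length
        self.write = 0  # next position to be written
        self.read = 0   # next position to be read
        self.buf = [None] * length

    def free_count(self):
        # one slot is always left unused so full is distinguishable from empty
        return (self.read - self.write - 1) % self.len

    def count(self):
        return (self.write - self.read) % self.len

    def put(self, item):
        if self.free_count() == 0:
            return False  # full; never overwrite the read position
        self.buf[self.write] = item
        self.write = (self.write + 1) % self.len
        return True

    def get(self):
        if self.write == self.read:  # empty when write == read
            return None
        item = self.buf[self.read]
        self.read = (self.read + 1) % self.len
        return item
```

Note that with this scheme a FIFO of length N holds at most N - 1 entries,
which is the usual price of a lock-free single-producer/single-consumer ring.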

It enables optimized packet throughput without requiring any packet
processing in qemu. This allowed us to provide our customers with a
significant performance increase for both DPDK and non-DPDK applications
in the VM.  Since our AVP implementation supports VM live-migration it
is viewed as a better alternative to PCI passthrough or PCI SRIOV since
neither of those support VM live-migration without manual intervention
or significant performance penalties.

Since the initial implementation of AVP devices, vhost-user has become part
of the qemu offering with a significant performance increase over the
original virtio implementation.  However, vhost-user still does not achieve
the level of performance that the AVP device can provide to our customers
for DPDK based guests.

A number of our customers have requested that we upstream the driver to
dpdk.org.

v2:
* Fixed coding style violations that slipped in accidentally because of an
  out of date checkpatch.pl from an older kernel.

v3:
* Updated 17.05 release notes to add a section for this new PMD
* Added additional info to the AVP nic guide document to clarify the
  benefit of using AVP over virtio.
* Fixed spelling error in debug log missed by local checkpatch.pl version
* Split the transmit patch to separate the stats functions as they
  accidentally got squashed in the last patchset.
* Fixed debug log strings so that they exceed 80 characters rather than
  span multiple lines.
* Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_* instead
* Replaced usage of RTE_WRITE32 and RTE_READ32 with rte_write32_relaxed
  and rte_read32_relaxed.
* Declared rte_pci_id table as const

v4:
* Split our interrupt handlers to a separate patch and moved to the end
  of the series.
* Removed memset() from stats_get API
* Removed usage of RTE_AVP_ALIGNMENT
* Removed unnecessary parentheses in rte_avp_common.h
* Removed unneeded "goto unlock" where there are no statements in between
  the goto and the end of the function.
* Re-tested with pktgen and found that rte_eth_tx_burst() is being called
  with 0 packets even before starting traffic which resulted in
  incrementing oerrors; fixed in transmit patch.

v5:
* Updated documentation to remove references to ivshmem as it led to
  confusion about whether AVP is exactly like ivshmem or simply based on
  how ivshmem exports memory to a VM via a PCI device.
* Restructured first set of patches to condense them down to a base patch
  with the files needed to apply subsequent patches.
* Removed static prototypes from init/uninit functions in avp_ethdev.c
* Moved MAC addresses init to the device initialization patch because it
  is setup by the avp_dev_create() function.
* Split the changes to the avp.ini features file so that features are
  marked as enabled in the patch that actually enables them.

v6:
* Updated documentation to indicate that virtio is the default device type
  on Wind River Systems virtualization platforms and that AVP devices are
  specialized devices available for applications looking for increased
  performance
* Adjusted makefile DEPDIRS based on recent commits in master.

Allain Legacy (14):
  drivers/net: adds AVP PMD base files
  net/avp: public header files
  net/avp: debug log macros
  net/avp: driver registration
  net/avp: device initialization
  net/avp: device configuration
  net/avp: queue setup and release
  net/avp: packet receive functions
  net/avp: packet transmit functions
  net/avp: device statistics operations
  net/avp: device promiscuous functions
  net/avp: device start and stop operations
  net/avp: migration interrupt handling
  doc: adds information related to the AVP PMD

 MAINTAINERS                                  |    6 +
 config/common_base                           |    9 +
 config/common_linuxapp                       |    1 +
 config/defconfig_i686-native-linuxapp-gcc    |    5 +
 config/defconfig_i686-native-linuxapp-icc    |    5 +
 config/defconfig_x86_x32-native-linuxapp-gcc |    5 +
 doc/guides/nics/avp.rst                      |  111 ++
 doc/guides/nics/features/avp.ini             |   16 +
 doc/guides/nics/index.rst                    |    1 +
 doc/guides/rel_notes/release_17_05.rst       |    4 +
 drivers/net/Makefile                         |    2 +
 drivers/net/avp/Makefile                     |   57 +
 drivers/net/avp/avp_ethdev.c                 | 2294 ++++++++++++++++++++++++++
 drivers/net/avp/avp_logs.h                   |   59 +
 drivers/net/avp/rte_avp_common.h             |  416 +++++
 drivers/net/avp/rte_avp_fifo.h               |  157 ++
 drivers/net/avp/rte_pmd_avp_version.map      |    4 +
 mk/rte.app.mk                                |    1 +
 18 files changed, 3153 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini
 create mode 100644 drivers/net/avp/Makefile
 create mode 100644 drivers/net/avp/avp_ethdev.c
 create mode 100644 drivers/net/avp/avp_logs.h
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

-- 
2.12.1

^ permalink raw reply	[flat|nested] 172+ messages in thread

* [PATCH v6 01/14] drivers/net: adds AVP PMD base files
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
@ 2017-03-28 11:53           ` Allain Legacy
  2017-03-28 11:53           ` [PATCH v6 02/14] net/avp: public header files Allain Legacy
                             ` (13 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:53 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

This commit introduces the AVP PMD file structure without adding any actual
driver functionality.  Functional blocks will be added in later patches.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 MAINTAINERS                             |  5 ++++
 config/common_base                      |  5 ++++
 doc/guides/nics/features/avp.ini        |  6 +++++
 drivers/net/Makefile                    |  2 ++
 drivers/net/avp/Makefile                | 47 +++++++++++++++++++++++++++++++++
 drivers/net/avp/rte_pmd_avp_version.map |  4 +++
 6 files changed, 69 insertions(+)
 create mode 100644 doc/guides/nics/features/avp.ini
 create mode 100644 drivers/net/avp/Makefile
 create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 0b1524d3c..e3ac0e383 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -413,6 +413,11 @@ Null Networking PMD
 M: Tetsuya Mukawa <mtetsuyah@gmail.com>
 F: drivers/net/null/
 
+Wind River AVP PMD
+M: Allain Legacy <allain.legacy@windriver.com>
+M: Matt Peters <matt.peters@windriver.com>
+F: drivers/net/avp
+
 
 Crypto Drivers
 --------------
diff --git a/config/common_base b/config/common_base
index 37aa1e163..a4ad577f9 100644
--- a/config/common_base
+++ b/config/common_base
@@ -353,6 +353,11 @@ CONFIG_RTE_LIBRTE_QEDE_FW=""
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 
 #
+# Compile WRS accelerated virtual port (AVP) guest PMD driver
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
+
+#
 # Compile the TAP PMD
 # It is enabled by default for Linux only.
 #
diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
new file mode 100644
index 000000000..435392951
--- /dev/null
+++ b/doc/guides/nics/features/avp.ini
@@ -0,0 +1,6 @@
+;
+; Supported features of the 'AVP' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 5b0bc3412..87f0ca5c4 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -36,6 +36,8 @@ core-libs += librte_net librte_kvargs
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
 DEPDIRS-af_packet = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
+DEPDIRS-avp = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DEPDIRS-bnx2x = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
new file mode 100644
index 000000000..c6e03d5a9
--- /dev/null
+++ b/drivers/net/avp/Makefile
@@ -0,0 +1,47 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2013-2017, Wind River Systems, Inc. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Wind River Systems nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avp.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+
+EXPORT_MAP := rte_pmd_avp_version.map
+
+LIBABIVER := 1
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
new file mode 100644
index 000000000..af8f3f479
--- /dev/null
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+
+    local: *;
+};
-- 
2.12.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v6 02/14] net/avp: public header files
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
  2017-03-28 11:53           ` [PATCH v6 01/14] drivers/net: adds AVP PMD base files Allain Legacy
@ 2017-03-28 11:53           ` Allain Legacy
  2017-03-28 11:53           ` [PATCH v6 03/14] net/avp: debug log macros Allain Legacy
                             ` (12 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:53 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds public/exported header files for the AVP PMD.  The AVP device is a
shared memory based device.  The structures and constants that define the
method of operation of the device must be visible to both the PMD and the
host DPDK application.  They must not change without proper version
controls and updates to both the hypervisor DPDK application and the PMD.

The hypervisor DPDK application is a Wind River Systems proprietary
virtual switch.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/Makefile         |   5 +
 drivers/net/avp/rte_avp_common.h | 416 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avp/rte_avp_fifo.h   | 157 +++++++++++++++
 3 files changed, 578 insertions(+)
 create mode 100644 drivers/net/avp/rte_avp_common.h
 create mode 100644 drivers/net/avp/rte_avp_fifo.h

diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index c6e03d5a9..68a0fa513 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -44,4 +44,9 @@ EXPORT_MAP := rte_pmd_avp_version.map
 
 LIBABIVER := 1
 
+# install public header files to enable compilation of the hypervisor level
+# dpdk application
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/rte_avp_common.h b/drivers/net/avp/rte_avp_common.h
new file mode 100644
index 000000000..31d763ee5
--- /dev/null
+++ b/drivers/net/avp/rte_avp_common.h
@@ -0,0 +1,416 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2017 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014-2017 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_COMMON_H_
+#define _RTE_AVP_COMMON_H_
+
+#ifdef __KERNEL__
+#include <linux/if.h>
+#endif
+
+/**
+ * AVP name is part of network device name.
+ */
+#define RTE_AVP_NAMESIZE 32
+
+/**
+ * AVP alias is a user-defined value used for lookups from secondary
+ * processes.  Typically, this is a UUID.
+ */
+#define RTE_AVP_ALIASSIZE 128
+
+/*
+ * Request id.
+ */
+enum rte_avp_req_id {
+	RTE_AVP_REQ_UNKNOWN = 0,
+	RTE_AVP_REQ_CHANGE_MTU,
+	RTE_AVP_REQ_CFG_NETWORK_IF,
+	RTE_AVP_REQ_CFG_DEVICE,
+	RTE_AVP_REQ_SHUTDOWN_DEVICE,
+	RTE_AVP_REQ_MAX,
+};
+
+/**@{ AVP device driver types */
+#define RTE_AVP_DRIVER_TYPE_UNKNOWN 0
+#define RTE_AVP_DRIVER_TYPE_DPDK 1
+#define RTE_AVP_DRIVER_TYPE_KERNEL 2
+#define RTE_AVP_DRIVER_TYPE_QEMU 3
+/**@} */
+
+/**@{ AVP device operational modes */
+#define RTE_AVP_MODE_HOST 0 /**< AVP interface created in host */
+#define RTE_AVP_MODE_GUEST 1 /**< AVP interface created for export to guest */
+#define RTE_AVP_MODE_TRACE 2 /**< AVP interface created for packet tracing */
+/**@} */
+
+/*
+ * Structure for AVP queue configuration query request/result
+ */
+struct rte_avp_device_config {
+	uint64_t device_id;	/**< Unique system identifier */
+	uint32_t driver_type; /**< Device Driver type */
+	uint32_t driver_version; /**< Device Driver version */
+	uint32_t features; /**< Negotiated features */
+	uint16_t num_tx_queues;	/**< Number of active transmit queues */
+	uint16_t num_rx_queues;	/**< Number of active receive queues */
+	uint8_t if_up; /**< 1: interface up, 0: interface down */
+} __attribute__ ((__packed__));
+
+/*
+ * Structure for AVP request.
+ */
+struct rte_avp_request {
+	uint32_t req_id; /**< Request id */
+	union {
+		uint32_t new_mtu; /**< New MTU */
+		uint8_t if_up;	/**< 1: interface up, 0: interface down */
+		struct rte_avp_device_config config; /**< Queue configuration */
+	};
+	int32_t result;	/**< Result for processing request */
+} __attribute__ ((__packed__));
+
+/*
+ * FIFO struct mapped in a shared memory. It describes a circular buffer FIFO
+ * Write and read should wrap around. FIFO is empty when write == read
+ * Writing should never overwrite the read position
+ */
+struct rte_avp_fifo {
+	volatile unsigned int write; /**< Next position to be written */
+	volatile unsigned int read; /**< Next position to be read */
+	unsigned int len; /**< Circular buffer length */
+	unsigned int elem_size; /**< Pointer size - for 32/64 bit OS */
+	void *volatile buffer[0]; /**< The buffer contains mbuf pointers */
+};
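+
+/*
+ * Illustrative note (not part of the shared definitions): with the
+ * single-producer/single-consumer rules implied above, the occupied count
+ * is (write - read) modulo len, and at most (len - 1) entries are usable
+ * so that a full FIFO remains distinguishable from an empty one.
+ */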
+
+
+/*
+ * AVP packet buffer header used to define the exchange of packet data.
+ */
+struct rte_avp_desc {
+	uint64_t pad0;
+	void *pkt_mbuf; /**< Reference to packet mbuf */
+	uint8_t pad1[14];
+	uint16_t ol_flags; /**< Offload features. */
+	void *next;	/**< Reference to next buffer in chain */
+	void *data;	/**< Start address of data in segment buffer. */
+	uint16_t data_len; /**< Amount of data in segment buffer. */
+	uint8_t nb_segs; /**< Number of segments */
+	uint8_t pad2;
+	uint16_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+	uint32_t pad3;
+	uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order). */
+	uint32_t pad4;
+} __attribute__ ((__aligned__(RTE_CACHE_LINE_SIZE), __packed__));
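+
+/*
+ * Layout note (assuming 64-bit pointers): the packed fields above sum to
+ * 64 bytes (8 + 8 + 14 + 2 + 8 + 8 + 2 + 1 + 1 + 2 + 4 + 2 + 4), matching
+ * the cache-line alignment on platforms where RTE_CACHE_LINE_SIZE is 64.
+ */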
+
+
+/**@{ AVP device features */
+#define RTE_AVP_FEATURE_VLAN_OFFLOAD (1 << 0) /**< Emulated HW VLAN offload */
+/**@} */
+
+
+/**@{ Offload feature flags */
+#define RTE_AVP_TX_VLAN_PKT 0x0001 /**< TX packet is a 802.1q VLAN packet. */
+#define RTE_AVP_RX_VLAN_PKT 0x0800 /**< RX packet is a 802.1q VLAN packet. */
+/**@} */
+
+
+/**@{ AVP PCI identifiers */
+#define RTE_AVP_PCI_VENDOR_ID   0x1af4
+#define RTE_AVP_PCI_DEVICE_ID   0x1110
+/**@} */
+
+/**@{ AVP PCI subsystem identifiers */
+#define RTE_AVP_PCI_SUB_VENDOR_ID RTE_AVP_PCI_VENDOR_ID
+#define RTE_AVP_PCI_SUB_DEVICE_ID 0x1104
+/**@} */
+
+/**@{ AVP PCI BAR definitions */
+#define RTE_AVP_PCI_MMIO_BAR   0
+#define RTE_AVP_PCI_MSIX_BAR   1
+#define RTE_AVP_PCI_MEMORY_BAR 2
+#define RTE_AVP_PCI_MEMMAP_BAR 4
+#define RTE_AVP_PCI_DEVICE_BAR 5
+#define RTE_AVP_PCI_MAX_BAR    6
+/**@} */
+
+/**@{ AVP PCI BAR name definitions */
+#define RTE_AVP_MMIO_BAR_NAME   "avp-mmio"
+#define RTE_AVP_MSIX_BAR_NAME   "avp-msix"
+#define RTE_AVP_MEMORY_BAR_NAME "avp-memory"
+#define RTE_AVP_MEMMAP_BAR_NAME "avp-memmap"
+#define RTE_AVP_DEVICE_BAR_NAME "avp-device"
+/**@} */
+
+/**@{ AVP PCI MSI-X vectors */
+#define RTE_AVP_MIGRATION_MSIX_VECTOR 0	/**< Migration interrupts */
+#define RTE_AVP_MAX_MSIX_VECTORS 1
+/**@} */
+
+/**@{ AVP Migration status/ack register values */
+#define RTE_AVP_MIGRATION_NONE      0 /**< Migration never executed */
+#define RTE_AVP_MIGRATION_DETACHED  1 /**< Device attached during migration */
+#define RTE_AVP_MIGRATION_ATTACHED  2 /**< Device reattached during migration */
+#define RTE_AVP_MIGRATION_ERROR     3 /**< Device failed to attach/detach */
+/**@} */
+
+/**@{ AVP MMIO Register Offsets */
+#define RTE_AVP_REGISTER_BASE 0
+#define RTE_AVP_INTERRUPT_MASK_OFFSET (RTE_AVP_REGISTER_BASE + 0)
+#define RTE_AVP_INTERRUPT_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 4)
+#define RTE_AVP_MIGRATION_STATUS_OFFSET (RTE_AVP_REGISTER_BASE + 8)
+#define RTE_AVP_MIGRATION_ACK_OFFSET (RTE_AVP_REGISTER_BASE + 12)
+/**@} */
+
+/**@{ AVP Interrupt Status Mask */
+#define RTE_AVP_MIGRATION_INTERRUPT_MASK (1 << 1)
+#define RTE_AVP_APP_INTERRUPTS_MASK      0xFFFFFFFF
+#define RTE_AVP_NO_INTERRUPTS_MASK       0
+/**@} */
+
+/*
+ * Maximum number of memory regions to export
+ */
+#define RTE_AVP_MAX_MAPS  2048
+
+/*
+ * Description of a single memory region
+ */
+struct rte_avp_memmap {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * AVP memory mapping validation marker
+ */
+#define RTE_AVP_MEMMAP_MAGIC 0x20131969
+
+/**@{  AVP memory map versions */
+#define RTE_AVP_MEMMAP_VERSION_1 1
+#define RTE_AVP_MEMMAP_VERSION RTE_AVP_MEMMAP_VERSION_1
+/**@} */
+
+/*
+ * Defines a list of memory regions exported from the host to the guest
+ */
+struct rte_avp_memmap_info {
+	uint32_t magic; /**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+	uint32_t nb_maps;
+	struct rte_avp_memmap maps[RTE_AVP_MAX_MAPS];
+};
+
+/*
+ * AVP device memory validation marker
+ */
+#define RTE_AVP_DEVICE_MAGIC 0x20131975
+
+/**@{ AVP device map versions
+ * WARNING:  do not change the format or names of these variables.  They are
+ * automatically parsed by the build system to generate the SDK package
+ * name.
+ */
+#define RTE_AVP_RELEASE_VERSION_1 1
+#define RTE_AVP_RELEASE_VERSION RTE_AVP_RELEASE_VERSION_1
+#define RTE_AVP_MAJOR_VERSION_0 0
+#define RTE_AVP_MAJOR_VERSION_1 1
+#define RTE_AVP_MAJOR_VERSION_2 2
+#define RTE_AVP_MAJOR_VERSION RTE_AVP_MAJOR_VERSION_2
+#define RTE_AVP_MINOR_VERSION_0 0
+#define RTE_AVP_MINOR_VERSION_1 1
+#define RTE_AVP_MINOR_VERSION_13 13
+#define RTE_AVP_MINOR_VERSION RTE_AVP_MINOR_VERSION_13
+/**@} */
+
+
+/**
+ * Generates a 32-bit version number from the specified version number
+ * components
+ */
+#define RTE_AVP_MAKE_VERSION(_release, _major, _minor) \
+((((_release) & 0xffff) << 16) | (((_major) & 0xff) << 8) | ((_minor) & 0xff))
+
+
+/**
+ * Represents the current version of the AVP host driver
+ * WARNING:  in the current development branch the host and guest driver
+ * version should always be the same.  When patching guest features back to
+ * GA releases the host version number should not be updated unless there was
+ * an actual change made to the host driver.
+ */
+#define RTE_AVP_CURRENT_HOST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_0, \
+		     RTE_AVP_MINOR_VERSION_1)
+
+
+/**
+ * Represents the current version of the AVP guest drivers
+ */
+#define RTE_AVP_CURRENT_GUEST_VERSION \
+RTE_AVP_MAKE_VERSION(RTE_AVP_RELEASE_VERSION_1, \
+		     RTE_AVP_MAJOR_VERSION_2, \
+		     RTE_AVP_MINOR_VERSION_13)
+
+/**@{ Access AVP device version values */
+#define RTE_AVP_GET_RELEASE_VERSION(_version) (((_version) >> 16) & 0xffff)
+#define RTE_AVP_GET_MAJOR_VERSION(_version) (((_version) >> 8) & 0xff)
+#define RTE_AVP_GET_MINOR_VERSION(_version) ((_version) & 0xff)
+/**@} */
+
+
+/**
+ * Remove the minor version number so that only the release and major versions
+ * are used for comparisons.
+ */
+#define RTE_AVP_STRIP_MINOR_VERSION(_version) ((_version) >> 8)
+
+
+/**
+ * Defines the number of mbuf pools supported per device (1 per socket)
+ */
+#define RTE_AVP_MAX_MEMPOOLS 8
+
+/*
+ * Defines address translation parameters for each supported mbuf pool
+ */
+struct rte_avp_mempool_info {
+	void *addr;
+	phys_addr_t phys_addr;
+	uint64_t length;
+};
+
+/*
+ * Struct used to create an AVP device. Passed to the kernel in an IOCTL call
+ * or via inter-VM shared memory when used in a guest.
+ */
+struct rte_avp_device_info {
+	uint32_t magic;	/**< Memory validation marker */
+	uint32_t version; /**< Data format version */
+
+	char ifname[RTE_AVP_NAMESIZE];	/**< Network device name for AVP */
+
+	phys_addr_t tx_phys;
+	phys_addr_t rx_phys;
+	phys_addr_t alloc_phys;
+	phys_addr_t free_phys;
+
+	uint32_t features; /**< Supported feature bitmap */
+	uint8_t min_rx_queues; /**< Minimum supported receive/free queues */
+	uint8_t num_rx_queues; /**< Recommended number of receive/free queues */
+	uint8_t max_rx_queues; /**< Maximum supported receive/free queues */
+	uint8_t min_tx_queues; /**< Minimum supported transmit/alloc queues */
+	uint8_t num_tx_queues;
+	/**< Recommended number of transmit/alloc queues */
+	uint8_t max_tx_queues; /**< Maximum supported transmit/alloc queues */
+
+	uint32_t tx_size; /**< Size of each transmit queue */
+	uint32_t rx_size; /**< Size of each receive queue */
+	uint32_t alloc_size; /**< Size of each alloc queue */
+	uint32_t free_size;	/**< Size of each free queue */
+
+	/* Used by Ethtool */
+	phys_addr_t req_phys;
+	phys_addr_t resp_phys;
+	phys_addr_t sync_phys;
+	void *sync_va;
+
+	/* mbuf mempool (used when a single memory area is supported) */
+	void *mbuf_va;
+	phys_addr_t mbuf_phys;
+
+	/* mbuf mempools */
+	struct rte_avp_mempool_info pool[RTE_AVP_MAX_MEMPOOLS];
+
+#ifdef __KERNEL__
+	/* Ethernet info */
+	char ethaddr[ETH_ALEN];
+#else
+	char ethaddr[ETHER_ADDR_LEN];
+#endif
+
+	uint8_t mode; /**< Device mode, i.e., guest, host, or trace */
+
+	/* mbuf size */
+	unsigned int mbuf_size;
+
+	/*
+	 * unique id to differentiate between two instantiations of the same
+	 * AVP device (i.e., the guest needs to know if the device has been
+	 * deleted and recreated).
+	 */
+	uint64_t device_id;
+
+	uint32_t max_rx_pkt_len; /**< Maximum receive unit size */
+};
+
+#define RTE_AVP_MAX_QUEUES 8 /**< Maximum number of queues per device */
+
+/** Maximum number of chained mbufs in a packet */
+#define RTE_AVP_MAX_MBUF_SEGMENTS 5
+
+#define RTE_AVP_DEVICE "avp"
+
+#define RTE_AVP_IOCTL_TEST    _IOWR(0, 1, int)
+#define RTE_AVP_IOCTL_CREATE  _IOWR(0, 2, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_RELEASE _IOWR(0, 3, struct rte_avp_device_info)
+#define RTE_AVP_IOCTL_QUERY   _IOWR(0, 4, struct rte_avp_device_config)
+
+#endif /* _RTE_AVP_COMMON_H_ */
diff --git a/drivers/net/avp/rte_avp_fifo.h b/drivers/net/avp/rte_avp_fifo.h
new file mode 100644
index 000000000..8262e4f65
--- /dev/null
+++ b/drivers/net/avp/rte_avp_fifo.h
@@ -0,0 +1,157 @@
+/*-
+ *   This file is provided under a dual BSD/LGPLv2 license.  When using or
+ *   redistributing this file, you may do so under either license.
+ *
+ *   GNU LESSER GENERAL PUBLIC LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2014 Wind River Systems, Inc. All rights reserved.
+ *
+ *   This program is free software; you can redistribute it and/or modify
+ *   it under the terms of version 2.1 of the GNU Lesser General Public License
+ *   as published by the Free Software Foundation.
+ *
+ *   This program is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *   Lesser General Public License for more details.
+ *
+ *   Contact Information:
+ *   Wind River Systems, Inc.
+ *
+ *
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2013 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2013-2017 Wind River Systems, Inc. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ *    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _RTE_AVP_FIFO_H_
+#define _RTE_AVP_FIFO_H_
+
+#ifdef __KERNEL__
+/* Write memory barrier for kernel compiles */
+#define AVP_WMB() smp_wmb()
+/* Read memory barrier for kernel compiles */
+#define AVP_RMB() smp_rmb()
+#else
+/* Write memory barrier for userspace compiles */
+#define AVP_WMB() rte_wmb()
+/* Read memory barrier for userspace compiles */
+#define AVP_RMB() rte_rmb()
+#endif
+
+#ifndef __KERNEL__
+/**
+ * Initializes the AVP FIFO structure
+ */
+static inline void
+avp_fifo_init(struct rte_avp_fifo *fifo, unsigned int size)
+{
+	/* Ensure size is a power of 2 */
+	if (size & (size - 1))
+		rte_panic("AVP fifo size must be power of 2\n");
+
+	fifo->write = 0;
+	fifo->read = 0;
+	fifo->len = size;
+	fifo->elem_size = sizeof(void *);
+}
+#endif
+
+/**
+ * Adds num elements into the FIFO. Returns the number actually written.
+ */
+static inline unsigned int
+avp_fifo_put(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int fifo_write = fifo->write;
+	unsigned int fifo_read = fifo->read;
+	unsigned int new_write = fifo_write;
+
+	for (i = 0; i < num; i++) {
+		new_write = (new_write + 1) & (fifo->len - 1);
+
+		if (new_write == fifo_read)
+			break;
+		fifo->buffer[fifo_write] = data[i];
+		fifo_write = new_write;
+	}
+	AVP_WMB();
+	fifo->write = fifo_write;
+	return i;
+}
+
+/**
+ * Gets up to num elements from the FIFO. Returns the number actually read.
+ */
+static inline unsigned int
+avp_fifo_get(struct rte_avp_fifo *fifo, void **data, unsigned int num)
+{
+	unsigned int i = 0;
+	unsigned int new_read = fifo->read;
+	unsigned int fifo_write = fifo->write;
+
+	if (new_read == fifo_write)
+		return 0; /* empty */
+
+	for (i = 0; i < num; i++) {
+		if (new_read == fifo_write)
+			break;
+
+		data[i] = fifo->buffer[new_read];
+		new_read = (new_read + 1) & (fifo->len - 1);
+	}
+	AVP_RMB();
+	fifo->read = new_read;
+	return i;
+}
+
+/**
+ * Gets the number of elements currently in the FIFO
+ */
+static inline unsigned int
+avp_fifo_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->len + fifo->write - fifo->read) & (fifo->len - 1);
+}
+
+/**
+ * Gets the number of free slots available in the FIFO
+ */
+static inline unsigned int
+avp_fifo_free_count(struct rte_avp_fifo *fifo)
+{
+	return (fifo->read - fifo->write - 1) & (fifo->len - 1);
+}
+
+#endif /* _RTE_AVP_FIFO_H_ */
-- 
2.12.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v6 03/14] net/avp: debug log macros
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
  2017-03-28 11:53           ` [PATCH v6 01/14] drivers/net: adds AVP PMD base files Allain Legacy
  2017-03-28 11:53           ` [PATCH v6 02/14] net/avp: public header files Allain Legacy
@ 2017-03-28 11:53           ` Allain Legacy
  2017-03-28 11:53           ` [PATCH v6 04/14] net/avp: driver registration Allain Legacy
                             ` (11 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:53 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds a header file with log macros for the AVP PMD

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_base         |  4 ++++
 drivers/net/avp/avp_logs.h | 59 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)
 create mode 100644 drivers/net/avp/avp_logs.h

diff --git a/config/common_base b/config/common_base
index a4ad577f9..5beedc8ee 100644
--- a/config/common_base
+++ b/config/common_base
@@ -356,6 +356,10 @@ CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
 # Compile WRS accelerated virtual port (AVP) guest PMD driver
 #
 CONFIG_RTE_LIBRTE_AVP_PMD=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVP_DEBUG_DRIVER=y
+CONFIG_RTE_LIBRTE_AVP_DEBUG_BUFFERS=n
 
 #
 # Compile the TAP PMD
diff --git a/drivers/net/avp/avp_logs.h b/drivers/net/avp/avp_logs.h
new file mode 100644
index 000000000..252cab7da
--- /dev/null
+++ b/drivers/net/avp/avp_logs.h
@@ -0,0 +1,59 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (c) 2013-2015, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVP_LOGS_H_
+#define _AVP_LOGS_H_
+
+#include <rte_log.h>
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() rx: " fmt, __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() tx: " fmt, __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _AVP_LOGS_H_ */
-- 
2.12.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v6 04/14] net/avp: driver registration
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (2 preceding siblings ...)
  2017-03-28 11:53           ` [PATCH v6 03/14] net/avp: debug log macros Allain Legacy
@ 2017-03-28 11:53           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 05/14] net/avp: device initialization Allain Legacy
                             ` (10 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:53 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds the initial framework for registering the driver against the supported
PCI device identifiers.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 config/common_linuxapp                       |   1 +
 config/defconfig_i686-native-linuxapp-gcc    |   5 +
 config/defconfig_i686-native-linuxapp-icc    |   5 +
 config/defconfig_x86_x32-native-linuxapp-gcc |   5 +
 drivers/net/avp/Makefile                     |   5 +
 drivers/net/avp/avp_ethdev.c                 | 205 +++++++++++++++++++++++++++
 mk/rte.app.mk                                |   1 +
 7 files changed, 227 insertions(+)
 create mode 100644 drivers/net/avp/avp_ethdev.c

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 00ebaac3d..8690a0033 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -43,6 +43,7 @@ CONFIG_RTE_LIBRTE_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_VHOST=y
 CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
 CONFIG_RTE_LIBRTE_PMD_TAP=y
+CONFIG_RTE_LIBRTE_AVP_PMD=y
 CONFIG_RTE_LIBRTE_NFP_PMD=y
 CONFIG_RTE_LIBRTE_POWER=y
 CONFIG_RTE_VIRTIO_USER=y
diff --git a/config/defconfig_i686-native-linuxapp-gcc b/config/defconfig_i686-native-linuxapp-gcc
index 745c4011f..9847bdb6c 100644
--- a/config/defconfig_i686-native-linuxapp-gcc
+++ b/config/defconfig_i686-native-linuxapp-gcc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_i686-native-linuxapp-icc b/config/defconfig_i686-native-linuxapp-icc
index 50a3008fc..269e88e95 100644
--- a/config/defconfig_i686-native-linuxapp-icc
+++ b/config/defconfig_i686-native-linuxapp-icc
@@ -75,3 +75,8 @@ CONFIG_RTE_LIBRTE_PMD_KASUMI=n
 # ZUC PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_PMD_ZUC=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_x86_x32-native-linuxapp-gcc b/config/defconfig_x86_x32-native-linuxapp-gcc
index 3e55c5cac..19573cb50 100644
--- a/config/defconfig_x86_x32-native-linuxapp-gcc
+++ b/config/defconfig_x86_x32-native-linuxapp-gcc
@@ -50,3 +50,8 @@ CONFIG_RTE_LIBRTE_KNI=n
 # Solarflare PMD is not supported on 32-bit
 #
 CONFIG_RTE_LIBRTE_SFC_EFX_PMD=n
+
+#
+# AVP PMD is not supported on 32-bit
+#
+CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/drivers/net/avp/Makefile b/drivers/net/avp/Makefile
index 68a0fa513..cd465aac9 100644
--- a/drivers/net/avp/Makefile
+++ b/drivers/net/avp/Makefile
@@ -49,4 +49,9 @@ LIBABIVER := 1
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_common.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_AVP_PMD)-include += rte_avp_fifo.h
 
+#
+# all source files are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp_ethdev.c
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
new file mode 100644
index 000000000..f5fa45301
--- /dev/null
+++ b/drivers/net/avp/avp_ethdev.c
@@ -0,0 +1,205 @@
+/*
+ *   BSD LICENSE
+ *
+ * Copyright (c) 2013-2017, Wind River Systems, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1) Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2) Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * 3) Neither the name of Wind River Systems nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/io.h>
+
+#include <rte_ethdev.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_byteorder.h>
+#include <rte_dev.h>
+#include <rte_memory.h>
+#include <rte_eal.h>
+
+#include "rte_avp_common.h"
+#include "rte_avp_fifo.h"
+
+#include "avp_logs.h"
+
+
+
+#define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
+
+
+#define AVP_MAX_MAC_ADDRS 1
+#define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
+
+
+/*
+ * Defines the number of microseconds to wait before checking the response
+ * queue for completion.
+ */
+#define AVP_REQUEST_DELAY_USECS (5000)
+
+/*
+ * Defines the number of times to check the response queue for completion
+ * before declaring a timeout.
+ */
+#define AVP_MAX_REQUEST_RETRY (100)
+
+/* Defines the current PCI driver version number */
+#define AVP_DPDK_DRIVER_VERSION RTE_AVP_CURRENT_GUEST_VERSION
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_avp_map[] = {
+	{ .vendor_id = RTE_AVP_PCI_VENDOR_ID,
+	  .device_id = RTE_AVP_PCI_DEVICE_ID,
+	  .subsystem_vendor_id = RTE_AVP_PCI_SUB_VENDOR_ID,
+	  .subsystem_device_id = RTE_AVP_PCI_SUB_DEVICE_ID,
+	  .class_id = RTE_CLASS_ANY_ID,
+	},
+
+	{ .vendor_id = 0, /* sentinel */
+	},
+};
+
+
+/*
+ * Defines the AVP device attributes which are attached to an RTE ethernet
+ * device
+ */
+struct avp_dev {
+	uint32_t magic; /**< Memory validation marker */
+	uint64_t device_id; /**< Unique system identifier */
+	struct ether_addr ethaddr; /**< Host specified MAC address */
+	struct rte_eth_dev_data *dev_data;
+	/**< Back pointer to ethernet device data */
+	volatile uint32_t flags; /**< Device operational flags */
+	uint8_t port_id; /**< Ethernet port identifier */
+	struct rte_mempool *pool; /**< pkt mbuf mempool */
+	unsigned int guest_mbuf_size; /**< local pool mbuf size */
+	unsigned int host_mbuf_size; /**< host mbuf size */
+	unsigned int max_rx_pkt_len; /**< maximum receive unit */
+	uint32_t host_features; /**< Supported feature bitmap */
+	uint32_t features; /**< Enabled feature bitmap */
+	unsigned int num_tx_queues; /**< Negotiated number of transmit queues */
+	unsigned int max_tx_queues; /**< Maximum number of transmit queues */
+	unsigned int num_rx_queues; /**< Negotiated number of receive queues */
+	unsigned int max_rx_queues; /**< Maximum number of receive queues */
+
+	struct rte_avp_fifo *tx_q[RTE_AVP_MAX_QUEUES]; /**< TX queue */
+	struct rte_avp_fifo *rx_q[RTE_AVP_MAX_QUEUES]; /**< RX queue */
+	struct rte_avp_fifo *alloc_q[RTE_AVP_MAX_QUEUES];
+	/**< Allocated mbufs queue */
+	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
+	/**< To be freed mbufs queue */
+
+	/* For request & response */
+	struct rte_avp_fifo *req_q; /**< Request queue */
+	struct rte_avp_fifo *resp_q; /**< Response queue */
+	void *host_sync_addr; /**< (host) Req/Resp Mem address */
+	void *sync_addr; /**< Req/Resp Mem address */
+	void *host_mbuf_addr; /**< (host) MBUF pool start address */
+	void *mbuf_addr; /**< MBUF pool start address */
+} __rte_cache_aligned;
+
+/* RTE ethernet private data */
+struct avp_adapter {
+	struct avp_dev avp;
+} __rte_cache_aligned;
+
+/* Macro to cast the ethernet device private data to an AVP object */
+#define AVP_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avp_adapter *)adapter)->avp)
+
+/*
+ * This function is based on probe() function in avp_pci.c
+ * It returns 0 on success.
+ */
+static int
+eth_avp_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev;
+
+	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/*
+		 * No setup is required on secondary processes.  All data is
+		 * saved in dev_private by the primary process.  All resources
+		 * should be mapped to the same virtual address, so all
+		 * pointers should be valid.
+		 */
+		return 0;
+	}
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
+
+	return 0;
+}
+
+static int
+eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	if (eth_dev->data == NULL)
+		return 0;
+
+	return 0;
+}
+
+
+static struct eth_driver rte_avp_pmd = {
+	{
+		.id_table = pci_id_avp_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_eth_dev_pci_probe,
+		.remove = rte_eth_dev_pci_remove,
+	},
+	.eth_dev_init = eth_avp_dev_init,
+	.eth_dev_uninit = eth_avp_dev_uninit,
+	.dev_private_size = sizeof(struct avp_adapter),
+};
+
+
+
+RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 62a2a1a0b..74b286327 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -103,6 +103,7 @@ ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
-- 
2.12.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v6 05/14] net/avp: device initialization
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (3 preceding siblings ...)
  2017-03-28 11:53           ` [PATCH v6 04/14] net/avp: driver registration Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 06/14] net/avp: device configuration Allain Legacy
                             ` (9 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds support for initializing newly probed AVP PCI devices.  Initial
queue translations are set up in preparation for device configuration.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 315 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 315 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index f5fa45301..e937fb522 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -52,6 +52,7 @@
 #include <rte_dev.h>
 #include <rte_memory.h>
 #include <rte_eal.h>
+#include <rte_io.h>
 
 #include "rte_avp_common.h"
 #include "rte_avp_fifo.h"
@@ -98,6 +99,15 @@ static const struct rte_pci_id pci_id_avp_map[] = {
 };
 
 
+/**@{ AVP device flags */
+#define AVP_F_PROMISC (1 << 1)
+#define AVP_F_CONFIGURED (1 << 2)
+#define AVP_F_LINKUP (1 << 3)
+/**@} */
+
+/* Ethernet device validation marker */
+#define AVP_ETHDEV_MAGIC 0x92972862
+
 /*
  * Defines the AVP device attributes which are attached to an RTE ethernet
  * device
@@ -142,18 +152,292 @@ struct avp_adapter {
 	struct avp_dev avp;
 } __rte_cache_aligned;
 
+
+/* 32-bit MMIO register write */
+#define AVP_WRITE32(_value, _addr) rte_write32_relaxed((_value), (_addr))
+
+/* 32-bit MMIO register read */
+#define AVP_READ32(_addr) rte_read32_relaxed((_addr))
+
 /* Macro to cast the ethernet device private data to an AVP object */
 #define AVP_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct avp_adapter *)adapter)->avp)
 
 /*
+ * Defines the structure of an AVP device queue for the purpose of handling the
+ * receive and transmit burst callback functions
+ */
+struct avp_queue {
+	struct rte_eth_dev_data *dev_data;
+	/**< Backpointer to ethernet device data */
+	struct avp_dev *avp; /**< Backpointer to AVP device */
+	uint16_t queue_id;
+	/**< Queue identifier used for indexing current queue */
+	uint16_t queue_base;
+	/**< Base queue identifier for queue servicing */
+	uint16_t queue_limit;
+	/**< Maximum queue identifier for queue servicing */
+
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
+/* translate from host physical address to guest virtual address */
+static void *
+avp_dev_translate_address(struct rte_eth_dev *eth_dev,
+			  phys_addr_t host_phys_addr)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_mem_resource *resource;
+	struct rte_avp_memmap_info *info;
+	struct rte_avp_memmap *map;
+	off_t offset;
+	void *addr;
+	unsigned int i;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_MEMORY_BAR].addr;
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_MEMMAP_BAR];
+	info = (struct rte_avp_memmap_info *)resource->addr;
+
+	offset = 0;
+	for (i = 0; i < info->nb_maps; i++) {
+		/* search all segments looking for a matching address */
+		map = &info->maps[i];
+
+		if ((host_phys_addr >= map->phys_addr) &&
+			(host_phys_addr < (map->phys_addr + map->length))) {
+			/* address is within this segment */
+			offset += (host_phys_addr - map->phys_addr);
+			addr = RTE_PTR_ADD(addr, offset);
+
+			PMD_DRV_LOG(DEBUG, "Translating host physical 0x%" PRIx64 " to guest virtual 0x%p\n",
+				    host_phys_addr, addr);
+
+			return addr;
+		}
+		offset += map->length;
+	}
+
+	return NULL;
+}
+
+/* verify that the incoming device version is compatible with our version */
+static int
+avp_dev_version_check(uint32_t version)
+{
+	uint32_t driver = RTE_AVP_STRIP_MINOR_VERSION(AVP_DPDK_DRIVER_VERSION);
+	uint32_t device = RTE_AVP_STRIP_MINOR_VERSION(version);
+
+	if (device <= driver) {
+		/* the host driver version is less than or equal to ours */
+		return 0;
+	}
+
+	return 1;
+}
+
+/* verify that memory regions have expected version and validation markers */
+static int
+avp_dev_check_regions(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct rte_avp_memmap_info *memmap;
+	struct rte_avp_device_info *info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	/* Dump resource info for debug */
+	for (i = 0; i < PCI_MAX_RESOURCE; i++) {
+		resource = &pci_dev->mem_resource[i];
+		if ((resource->phys_addr == 0) || (resource->len == 0))
+			continue;
+
+		PMD_DRV_LOG(DEBUG, "resource[%u]: phys=0x%" PRIx64 " len=%" PRIu64 " addr=%p\n",
+			    i, resource->phys_addr,
+			    resource->len, resource->addr);
+
+		switch (i) {
+		case RTE_AVP_PCI_MEMMAP_BAR:
+			memmap = (struct rte_avp_memmap_info *)resource->addr;
+			if ((memmap->magic != RTE_AVP_MEMMAP_MAGIC) ||
+			    (memmap->version != RTE_AVP_MEMMAP_VERSION)) {
+				PMD_DRV_LOG(ERR, "Invalid memmap magic 0x%08x and version %u\n",
+					    memmap->magic, memmap->version);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_DEVICE_BAR:
+			info = (struct rte_avp_device_info *)resource->addr;
+			if ((info->magic != RTE_AVP_DEVICE_MAGIC) ||
+			    avp_dev_version_check(info->version)) {
+				PMD_DRV_LOG(ERR, "Invalid device info magic 0x%08x or version 0x%08x > 0x%08x\n",
+					    info->magic, info->version,
+					    AVP_DPDK_DRIVER_VERSION);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MEMORY_BAR:
+		case RTE_AVP_PCI_MMIO_BAR:
+			if (resource->addr == NULL) {
+				PMD_DRV_LOG(ERR, "Missing address space for BAR%u\n",
+					    i);
+				return -EINVAL;
+			}
+			break;
+
+		case RTE_AVP_PCI_MSIX_BAR:
+		default:
+			/* no validation required */
+			break;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * create an AVP device using the supplied device info by first translating it
+ * to guest address space(s).
+ */
+static int
+avp_dev_create(struct rte_pci_device *pci_dev,
+	       struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_mem_resource *resource;
+	unsigned int i;
+
+	resource = &pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR];
+	if (resource->addr == NULL) {
+		PMD_DRV_LOG(ERR, "BAR%u is not mapped\n",
+			    RTE_AVP_PCI_DEVICE_BAR);
+		return -EFAULT;
+	}
+	host_info = (struct rte_avp_device_info *)resource->addr;
+
+	if ((host_info->magic != RTE_AVP_DEVICE_MAGIC) ||
+		avp_dev_version_check(host_info->version)) {
+		PMD_DRV_LOG(ERR, "Invalid AVP PCI device, magic 0x%08x version 0x%08x > 0x%08x\n",
+			    host_info->magic, host_info->version,
+			    AVP_DPDK_DRIVER_VERSION);
+		return -EINVAL;
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host device is v%u.%u.%u\n",
+		    RTE_AVP_GET_RELEASE_VERSION(host_info->version),
+		    RTE_AVP_GET_MAJOR_VERSION(host_info->version),
+		    RTE_AVP_GET_MINOR_VERSION(host_info->version));
+
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u TX queue(s)\n",
+		    host_info->min_tx_queues, host_info->max_tx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports %u to %u RX queue(s)\n",
+		    host_info->min_rx_queues, host_info->max_rx_queues);
+	PMD_DRV_LOG(DEBUG, "AVP host supports features 0x%08x\n",
+		    host_info->features);
+
+	if (avp->magic != AVP_ETHDEV_MAGIC) {
+		/*
+		 * First time initialization (i.e., not during a VM
+		 * migration)
+		 */
+		memset(avp, 0, sizeof(*avp));
+		avp->magic = AVP_ETHDEV_MAGIC;
+		avp->dev_data = eth_dev->data;
+		avp->port_id = eth_dev->data->port_id;
+		avp->host_mbuf_size = host_info->mbuf_size;
+		avp->host_features = host_info->features;
+		memcpy(&avp->ethaddr.addr_bytes[0],
+		       host_info->ethaddr, ETHER_ADDR_LEN);
+		/* adjust max values to not exceed our max */
+		avp->max_tx_queues =
+			RTE_MIN(host_info->max_tx_queues, RTE_AVP_MAX_QUEUES);
+		avp->max_rx_queues =
+			RTE_MIN(host_info->max_rx_queues, RTE_AVP_MAX_QUEUES);
+	} else {
+		/* Re-attaching during migration */
+
+		/* TODO... requires validation of host values */
+		if ((host_info->features & avp->features) != avp->features) {
+			PMD_DRV_LOG(ERR, "AVP host features mismatched; 0x%08x, host=0x%08x\n",
+				    avp->features, host_info->features);
+			/* this should not be possible; continue for now */
+		}
+	}
+
+	/* the device id is allowed to change over migrations */
+	avp->device_id = host_info->device_id;
+
+	/* translate incoming host addresses to guest address space */
+	PMD_DRV_LOG(DEBUG, "AVP first host tx queue at 0x%" PRIx64 "\n",
+		    host_info->tx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host alloc queue at 0x%" PRIx64 "\n",
+		    host_info->alloc_phys);
+	for (i = 0; i < avp->max_tx_queues; i++) {
+		avp->tx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->tx_phys + (i * host_info->tx_size));
+
+		avp->alloc_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->alloc_phys + (i * host_info->alloc_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP first host rx queue at 0x%" PRIx64 "\n",
+		    host_info->rx_phys);
+	PMD_DRV_LOG(DEBUG, "AVP first host free queue at 0x%" PRIx64 "\n",
+		    host_info->free_phys);
+	for (i = 0; i < avp->max_rx_queues; i++) {
+		avp->rx_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->rx_phys + (i * host_info->rx_size));
+		avp->free_q[i] = avp_dev_translate_address(eth_dev,
+			host_info->free_phys + (i * host_info->free_size));
+	}
+
+	PMD_DRV_LOG(DEBUG, "AVP host request queue at 0x%" PRIx64 "\n",
+		    host_info->req_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host response queue at 0x%" PRIx64 "\n",
+		    host_info->resp_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host sync address at 0x%" PRIx64 "\n",
+		    host_info->sync_phys);
+	PMD_DRV_LOG(DEBUG, "AVP host mbuf address at 0x%" PRIx64 "\n",
+		    host_info->mbuf_phys);
+	avp->req_q = avp_dev_translate_address(eth_dev, host_info->req_phys);
+	avp->resp_q = avp_dev_translate_address(eth_dev, host_info->resp_phys);
+	avp->sync_addr =
+		avp_dev_translate_address(eth_dev, host_info->sync_phys);
+	avp->mbuf_addr =
+		avp_dev_translate_address(eth_dev, host_info->mbuf_phys);
+
+	/*
+	 * store the host mbuf virtual address so that we can calculate
+	 * relative offsets for each mbuf as they are processed
+	 */
+	avp->host_mbuf_addr = host_info->mbuf_va;
+	avp->host_sync_addr = host_info->sync_va;
+
+	/*
+	 * store the maximum packet length that is supported by the host.
+	 */
+	avp->max_rx_pkt_len = host_info->max_rx_pkt_len;
+	PMD_DRV_LOG(DEBUG, "AVP host max receive packet length is %u\n",
+				host_info->max_rx_pkt_len);
+
+	return 0;
+}
+
+/*
  * This function is based on probe() function in avp_pci.c
  * It returns 0 on success.
  */
 static int
 eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 {
+	struct avp_dev *avp =
+		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_pci_device *pci_dev;
+	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 
@@ -171,6 +455,32 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check BAR resources */
+	ret = avp_dev_check_regions(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to validate BAR resources, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* Handle each subtype */
+	ret = avp_dev_create(pci_dev, eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to create device, ret=%d\n", ret);
+		return ret;
+	}
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("avp_ethdev", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate %d bytes needed to store MAC addresses\n",
+			    ETHER_ADDR_LEN);
+		return -ENOMEM;
+	}
+
+	/* Get a mac from device config */
+	ether_addr_copy(&avp->ethaddr, &eth_dev->data->mac_addrs[0]);
+
 	return 0;
 }
 
@@ -183,6 +493,11 @@ eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
 	if (eth_dev->data == NULL)
 		return 0;
 
+	if (eth_dev->data->mac_addrs != NULL) {
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
+	}
+
 	return 0;
 }
 
-- 
2.12.1


* [PATCH v6 06/14] net/avp: device configuration
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (4 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 05/14] net/avp: device initialization Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-29 10:28             ` Ferruh Yigit
  2017-03-28 11:54           ` [PATCH v6 07/14] net/avp: queue setup and release Allain Legacy
                             ` (8 subsequent siblings)
  14 siblings, 1 reply; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds support for "dev_configure" operations to allow an application to
configure the device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 doc/guides/nics/features/avp.ini |   4 +
 drivers/net/avp/avp_ethdev.c     | 241 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 245 insertions(+)

diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
index 435392951..45a2185b6 100644
--- a/doc/guides/nics/features/avp.ini
+++ b/doc/guides/nics/features/avp.ini
@@ -4,3 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Link status          = Y
+VLAN offload         = Y
+Linux UIO            = Y
+x86-64               = Y
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index e937fb522..d9ad3f169 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -61,6 +61,13 @@
 
 
 
+static int avp_dev_configure(struct rte_eth_dev *dev);
+static void avp_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
+static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avp_dev_link_update(struct rte_eth_dev *dev,
+			       __rte_unused int wait_to_complete);
+
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -98,6 +105,15 @@ static const struct rte_pci_id pci_id_avp_map[] = {
 	},
 };
 
+/*
+ * dev_ops for avp, bare necessities for basic operation
+ */
+static const struct eth_dev_ops avp_eth_dev_ops = {
+	.dev_configure       = avp_dev_configure,
+	.dev_infos_get       = avp_dev_info_get,
+	.vlan_offload_set    = avp_vlan_offload_set,
+	.link_update         = avp_dev_link_update,
+};
 
 /**@{ AVP device flags */
 #define AVP_F_PROMISC (1 << 1)
@@ -183,6 +199,91 @@ struct avp_queue {
 	uint64_t errors;
 };
 
+/* send a request and wait for a response
+ *
+ * @warning must be called while holding the avp->lock spinlock.
+ */
+static int
+avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
+{
+	unsigned int retry = AVP_MAX_REQUEST_RETRY;
+	void *resp_addr = NULL;
+	unsigned int count;
+	int ret;
+
+	PMD_DRV_LOG(DEBUG, "Sending request %u to host\n", request->req_id);
+
+	request->result = -ENOTSUP;
+
+	/* Discard any stale responses before starting a new request */
+	while (avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1))
+		PMD_DRV_LOG(DEBUG, "Discarding stale response\n");
+
+	rte_memcpy(avp->sync_addr, request, sizeof(*request));
+	count = avp_fifo_put(avp->req_q, &avp->host_sync_addr, 1);
+	if (count < 1) {
+		PMD_DRV_LOG(ERR, "Cannot send request %u to host\n",
+			    request->req_id);
+		ret = -EBUSY;
+		goto done;
+	}
+
+	while (retry--) {
+		/* wait for a response */
+		usleep(AVP_REQUEST_DELAY_USECS);
+
+		count = avp_fifo_count(avp->resp_q);
+		if (count >= 1) {
+			/* response received */
+			break;
+		}
+
+		if ((count < 1) && (retry == 0)) {
+			PMD_DRV_LOG(ERR, "Timeout while waiting for a response for %u\n",
+				    request->req_id);
+			ret = -ETIME;
+			goto done;
+		}
+	}
+
+	/* retrieve the response */
+	count = avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1);
+	if ((count != 1) || (resp_addr != avp->host_sync_addr)) {
+		PMD_DRV_LOG(ERR, "Invalid response from host, count=%u resp=%p host_sync_addr=%p\n",
+			    count, resp_addr, avp->host_sync_addr);
+		ret = -ENODATA;
+		goto done;
+	}
+
+	/* copy to user buffer */
+	rte_memcpy(request, avp->sync_addr, sizeof(*request));
+	ret = 0;
+
+	PMD_DRV_LOG(DEBUG, "Result %d received for request %u\n",
+		    request->result, request->req_id);
+
+done:
+	return ret;
+}
+
+static int
+avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
+			struct rte_avp_device_config *config)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a configure request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_DEVICE;
+	memcpy(&request.config, config, sizeof(request.config));
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
 /* translate from host physical address to guest virtual address */
 static void *
 avp_dev_translate_address(struct rte_eth_dev *eth_dev,
@@ -298,6 +399,38 @@ avp_dev_check_regions(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static void
+_avp_set_queue_counts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	void *addr;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/*
+	 * the transmit direction is not negotiated beyond respecting the max
+	 * number of queues because the host can handle arbitrary guest tx
+	 * queues (host rx queues).
+	 */
+	avp->num_tx_queues = eth_dev->data->nb_tx_queues;
+
+	/*
+	 * the receive direction is more restrictive.  The host requires a
+	 * minimum number of guest rx queues (host tx queues) therefore
+	 * negotiate a value that is at least as large as the host minimum
+	 * requirement.  If the host and guest values are not identical then a
+	 * mapping will be established in the receive_queue_setup function.
+	 */
+	avp->num_rx_queues = RTE_MAX(host_info->min_rx_queues,
+				     eth_dev->data->nb_rx_queues);
+
+	PMD_DRV_LOG(DEBUG, "Requesting %u Tx and %u Rx queues from host\n",
+		    avp->num_tx_queues, avp->num_rx_queues);
+}
+
 /*
 * create an AVP device using the supplied device info by first translating it
  * to guest address space(s).
@@ -440,6 +573,7 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 	int ret;
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	eth_dev->dev_ops = &avp_eth_dev_ops;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -515,6 +649,113 @@ static struct eth_driver rte_avp_pmd = {
 };
 
 
+static int
+avp_dev_configure(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_info *host_info;
+	struct rte_avp_device_config config;
+	int mask = 0;
+	void *addr;
+	int ret;
+
+	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
+	host_info = (struct rte_avp_device_info *)addr;
+
+	/* Setup required number of queues */
+	_avp_set_queue_counts(eth_dev);
+
+	mask = (ETH_VLAN_STRIP_MASK |
+		ETH_VLAN_FILTER_MASK |
+		ETH_VLAN_EXTEND_MASK);
+	avp_vlan_offload_set(eth_dev, mask);
+
+	/* update device config */
+	memset(&config, 0, sizeof(config));
+	config.device_id = host_info->device_id;
+	config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+	config.driver_version = AVP_DPDK_DRIVER_VERSION;
+	config.features = avp->features;
+	config.num_tx_queues = avp->num_tx_queues;
+	config.num_rx_queues = avp->num_rx_queues;
+
+	ret = avp_dev_ctrl_set_config(eth_dev, &config);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Config request failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	avp->flags |= AVP_F_CONFIGURED;
+	ret = 0;
+
+unlock:
+	return ret;
+}
+
+
+static int
+avp_dev_link_update(struct rte_eth_dev *eth_dev,
+					__rte_unused int wait_to_complete)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_eth_link *link = &eth_dev->data->dev_link;
+
+	link->link_speed = ETH_SPEED_NUM_10G;
+	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = !!(avp->flags & AVP_F_LINKUP);
+
+	return -1;
+}
+
+
+static void
+avp_dev_info_get(struct rte_eth_dev *eth_dev,
+		 struct rte_eth_dev_info *dev_info)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	dev_info->driver_name = "rte_avp_pmd";
+	dev_info->pci_dev = RTE_DEV_TO_PCI(eth_dev->device);
+	dev_info->max_rx_queues = avp->max_rx_queues;
+	dev_info->max_tx_queues = avp->max_tx_queues;
+	dev_info->min_rx_bufsize = AVP_MIN_RX_BUFSIZE;
+	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
+	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
+	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+	}
+}
+
+static void
+avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
+			if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
+			else
+				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
+		} else {
+			PMD_DRV_LOG(ERR, "VLAN strip offload not supported\n");
+		}
+	}
+
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_filter)
+			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (eth_dev->data->dev_conf.rxmode.hw_vlan_extend)
+			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
+	}
+}
+
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
2.12.1


* [PATCH v6 07/14] net/avp: queue setup and release
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (5 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 06/14] net/avp: device configuration Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 08/14] net/avp: packet receive functions Allain Legacy
                             ` (7 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds queue management operations so that an application can set up and
release the transmit and receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 180 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 179 insertions(+), 1 deletion(-)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index d9ad3f169..ecc581fcf 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -67,7 +67,21 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
-
+static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t rx_queue_id,
+				  uint16_t nb_rx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_rxconf *rx_conf,
+				  struct rte_mempool *pool);
+
+static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
+				  uint16_t tx_queue_id,
+				  uint16_t nb_tx_desc,
+				  unsigned int socket_id,
+				  const struct rte_eth_txconf *tx_conf);
+
+static void avp_dev_rx_queue_release(void *rxq);
+static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
@@ -113,6 +127,10 @@ static const struct eth_dev_ops avp_eth_dev_ops = {
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.link_update         = avp_dev_link_update,
+	.rx_queue_setup      = avp_dev_rx_queue_setup,
+	.rx_queue_release    = avp_dev_rx_queue_release,
+	.tx_queue_setup      = avp_dev_tx_queue_setup,
+	.tx_queue_release    = avp_dev_tx_queue_release,
 };
 
 /**@{ AVP device flags */
@@ -400,6 +418,42 @@ avp_dev_check_regions(struct rte_eth_dev *eth_dev)
 }
 
 static void
+_avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
+{
+	struct avp_dev *avp =
+		AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *rxq;
+	uint16_t queue_count;
+	uint16_t remainder;
+
+	rxq = (struct avp_queue *)eth_dev->data->rx_queues[rx_queue_id];
+
+	/*
+	 * Must map all AVP fifos as evenly as possible between the configured
+	 * device queues.  Each device queue will service a subset of the AVP
+	 * fifos. If there is an odd number of device queues the first set of
+	 * device queues will get the extra AVP fifos.
+	 */
+	queue_count = avp->num_rx_queues / eth_dev->data->nb_rx_queues;
+	remainder = avp->num_rx_queues % eth_dev->data->nb_rx_queues;
+	if (rx_queue_id < remainder) {
+		/* these queues must service one extra FIFO */
+		rxq->queue_base = rx_queue_id * (queue_count + 1);
+		rxq->queue_limit = rxq->queue_base + (queue_count + 1) - 1;
+	} else {
+		/* these queues service the regular number of FIFOs */
+		rxq->queue_base = ((remainder * (queue_count + 1)) +
+				   ((rx_queue_id - remainder) * queue_count));
+		rxq->queue_limit = rxq->queue_base + queue_count - 1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "rxq %u at %p base %u limit %u\n",
+		    rx_queue_id, rxq, rxq->queue_base, rxq->queue_limit);
+
+	rxq->queue_id = rxq->queue_base;
+}
+
+static void
 _avp_set_queue_counts(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
@@ -650,6 +704,130 @@ static struct eth_driver rte_avp_pmd = {
 
 
 static int
+avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t rx_queue_id,
+		       uint16_t nb_rx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *pool)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct avp_queue *rxq;
+
+	if (rx_queue_id >= eth_dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue id is out of range: rx_queue_id=%u, nb_rx_queues=%u\n",
+			    rx_queue_id, eth_dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	/* Save mbuf pool pointer */
+	avp->pool = pool;
+
+	/* Save the local mbuf size */
+	mbp_priv = rte_mempool_get_priv(pool);
+	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
+	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
+
+	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
+		    avp->max_rx_pkt_len,
+		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+		    avp->host_mbuf_size,
+		    avp->guest_mbuf_size);
+
+	/* allocate a queue object */
+	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Rx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* save back pointers to AVP and Ethernet devices */
+	rxq->avp = avp;
+	rxq->dev_data = eth_dev->data;
+	eth_dev->data->rx_queues[rx_queue_id] = (void *)rxq;
+
+	/* setup the queue receive mapping for the current queue. */
+	_avp_set_rx_queue_mappings(eth_dev, rx_queue_id);
+
+	PMD_DRV_LOG(DEBUG, "Rx queue %u setup at %p\n", rx_queue_id, rxq);
+
+	(void)nb_rx_desc;
+	(void)rx_conf;
+	return 0;
+}
+
+static int
+avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
+		       uint16_t tx_queue_id,
+		       uint16_t nb_tx_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct avp_queue *txq;
+
+	if (tx_queue_id >= eth_dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue id is out of range: tx_queue_id=%u, nb_tx_queues=%u\n",
+			    tx_queue_id, eth_dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	/* allocate a queue object */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct avp_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate new Tx queue object\n");
+		return -ENOMEM;
+	}
+
+	/* only the configured set of transmit queues are used */
+	txq->queue_id = tx_queue_id;
+	txq->queue_base = tx_queue_id;
+	txq->queue_limit = tx_queue_id;
+
+	/* save back pointers to AVP and Ethernet devices */
+	txq->avp = avp;
+	txq->dev_data = eth_dev->data;
+	eth_dev->data->tx_queues[tx_queue_id] = (void *)txq;
+
+	PMD_DRV_LOG(DEBUG, "Tx queue %u setup at %p\n", tx_queue_id, txq);
+
+	(void)nb_tx_desc;
+	(void)tx_conf;
+	return 0;
+}
+
+static void
+avp_dev_rx_queue_release(void *rx_queue)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct avp_dev *avp = rxq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		if (data->rx_queues[i] == rxq)
+			data->rx_queues[i] = NULL;
+	}
+}
+
+static void
+avp_dev_tx_queue_release(void *tx_queue)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct avp_dev *avp = txq->avp;
+	struct rte_eth_dev_data *data = avp->dev_data;
+	unsigned int i;
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		if (data->tx_queues[i] == txq)
+			data->tx_queues[i] = NULL;
+	}
+}
+
+static int
 avp_dev_configure(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
-- 
2.12.1


* [PATCH v6 08/14] net/avp: packet receive functions
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (6 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 07/14] net/avp: queue setup and release Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 09/14] net/avp: packet transmit functions Allain Legacy
                             ` (6 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds the functions required for receiving packets from the host application
via AVP device queues.  Both the simple and scattered receive functions are
supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 doc/guides/nics/features/avp.ini |   3 +
 drivers/net/avp/avp_ethdev.c     | 451 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 454 insertions(+)

diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
index 45a2185b6..e748ea80d 100644
--- a/doc/guides/nics/features/avp.ini
+++ b/doc/guides/nics/features/avp.ini
@@ -5,6 +5,9 @@
 ;
 [Features]
 Link status          = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+Unicast MAC filter   = Y
 VLAN offload         = Y
 Linux UIO            = Y
 x86-64               = Y
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index ecc581fcf..7524e1ae1 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -80,11 +80,19 @@ static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
 				  unsigned int socket_id,
 				  const struct rte_eth_txconf *tx_conf);
 
+static uint16_t avp_recv_scattered_pkts(void *rx_queue,
+					struct rte_mbuf **rx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_recv_pkts(void *rx_queue,
+			      struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts);
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
+#define AVP_MAX_RX_BURST 64
 #define AVP_MAX_MAC_ADDRS 1
 #define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -302,6 +310,15 @@ avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
 	return ret == 0 ? request.result : ret;
 }
 
+/* translate from host mbuf virtual address to guest virtual address */
+static inline void *
+avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
+{
+	return RTE_PTR_ADD(RTE_PTR_SUB(host_mbuf_address,
+				       (uintptr_t)avp->host_mbuf_addr),
+			   (uintptr_t)avp->mbuf_addr);
+}
+
 /* translate from host physical address to guest virtual address */
 static void *
 avp_dev_translate_address(struct rte_eth_dev *eth_dev,
@@ -628,6 +645,7 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avp_recv_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -636,6 +654,10 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 		 * be mapped to the same virtual address so all pointers should
 		 * be valid.
 		 */
+		if (eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
 		return 0;
 	}
 
@@ -704,6 +726,38 @@ static struct eth_driver rte_avp_pmd = {
 
 
 static int
+avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
+			 struct avp_dev *avp)
+{
+	unsigned int max_rx_pkt_len;
+
+	max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+
+	if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the guest MTU is greater than either the host or guest
+		 * buffers then chained mbufs have to be enabled in the TX
+		 * direction.  It is assumed that the application will not need
+		 * to send packets larger than their max_rx_pkt_len (MRU).
+		 */
+		return 1;
+	}
+
+	if ((avp->max_rx_pkt_len > avp->guest_mbuf_size) ||
+	    (avp->max_rx_pkt_len > avp->host_mbuf_size)) {
+		/*
+		 * If the host MRU is greater than its own mbuf size or the
+		 * guest mbuf size then chained mbufs have to be enabled in the
+		 * RX direction.
+		 */
+		return 1;
+	}
+
+	return 0;
+}
+
+static int
 avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 		       uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc,
@@ -729,6 +783,14 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 	avp->guest_mbuf_size = (uint16_t)(mbp_priv->mbuf_data_room_size);
 	avp->guest_mbuf_size -= RTE_PKTMBUF_HEADROOM;
 
+	if (avp_dev_enable_scattered(eth_dev, avp)) {
+		if (!eth_dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
+			eth_dev->data->scattered_rx = 1;
+			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+		}
+	}
+
 	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
 		    avp->max_rx_pkt_len,
 		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
@@ -799,6 +861,395 @@ avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
 	return 0;
 }
 
+static inline int
+_avp_cmp_ether_addr(struct ether_addr *a, struct ether_addr *b)
+{
+	uint16_t *_a = (uint16_t *)&a->addr_bytes[0];
+	uint16_t *_b = (uint16_t *)&b->addr_bytes[0];
+	return (_a[0] ^ _b[0]) | (_a[1] ^ _b[1]) | (_a[2] ^ _b[2]);
+}
+
+static inline int
+_avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
+{
+	struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+		/* allow all packets destined to our address */
+		return 0;
+	}
+
+	if (likely(is_broadcast_ether_addr(&eth->d_addr))) {
+		/* allow all broadcast packets */
+		return 0;
+	}
+
+	if (likely(is_multicast_ether_addr(&eth->d_addr))) {
+		/* allow all multicast packets */
+		return 0;
+	}
+
+	if (avp->flags & AVP_F_PROMISC) {
+		/* allow all packets when in promiscuous mode */
+		return 0;
+	}
+
+	return -1;
+}
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+static inline void
+__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+{
+	struct rte_avp_desc *first_buf;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int pkt_len;
+	unsigned int nb_segs;
+	void *pkt_data;
+	unsigned int i;
+
+	first_buf = avp_dev_translate_buffer(avp, buf);
+
+	i = 0;
+	pkt_len = 0;
+	nb_segs = first_buf->nb_segs;
+	do {
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		if (pkt_buf == NULL)
+			rte_panic("bad buffer: segment %u has an invalid address %p\n",
+				  i, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		if (pkt_data == NULL)
+			rte_panic("bad buffer: segment %u has a NULL data pointer\n",
+				  i);
+		if (pkt_buf->data_len == 0)
+			rte_panic("bad buffer: segment %u has 0 data length\n",
+				  i);
+		pkt_len += pkt_buf->data_len;
+		nb_segs--;
+		i++;
+
+	} while (nb_segs && (buf = pkt_buf->next) != NULL);
+
+	if (nb_segs != 0)
+		rte_panic("bad buffer: expected %u segments found %u\n",
+			  first_buf->nb_segs, (first_buf->nb_segs - nb_segs));
+	if (pkt_len != first_buf->pkt_len)
+		rte_panic("bad buffer: expected length %u found %u\n",
+			  first_buf->pkt_len, pkt_len);
+}
+
+#define avp_dev_buffer_sanity_check(a, b) \
+	__avp_dev_buffer_sanity_check((a), (b))
+
+#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
+
+#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+
+#endif
+
+/*
+ * Copy a host buffer chain to a set of mbufs.  This function assumes that
+ * there are exactly enough mbufs available to copy all source bytes.
+ */
+static inline struct rte_mbuf *
+avp_dev_copy_from_buffers(struct avp_dev *avp,
+			  struct rte_avp_desc *buf,
+			  struct rte_mbuf **mbufs,
+			  unsigned int count)
+{
+	struct rte_mbuf *m_previous = NULL;
+	struct rte_avp_desc *pkt_buf;
+	unsigned int total_length = 0;
+	unsigned int copy_length;
+	unsigned int src_offset;
+	struct rte_mbuf *m;
+	uint16_t ol_flags;
+	uint16_t vlan_tci;
+	void *pkt_data;
+	unsigned int i;
+
+	avp_dev_buffer_sanity_check(avp, buf);
+
+	/* setup the first source buffer */
+	pkt_buf = avp_dev_translate_buffer(avp, buf);
+	pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+	total_length = pkt_buf->pkt_len;
+	src_offset = 0;
+
+	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+		ol_flags = PKT_RX_VLAN_PKT;
+		vlan_tci = pkt_buf->vlan_tci;
+	} else {
+		ol_flags = 0;
+		vlan_tci = 0;
+	}
+
+	for (i = 0; (i < count) && (buf != NULL); i++) {
+		/* fill each destination buffer */
+		m = mbufs[i];
+
+		if (m_previous != NULL)
+			m_previous->next = m;
+
+		m_previous = m;
+
+		do {
+			/*
+			 * Copy as many source buffers as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->guest_mbuf_size -
+					       rte_pktmbuf_data_len(m)),
+					      (pkt_buf->data_len -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       rte_pktmbuf_data_len(m)),
+				   RTE_PTR_ADD(pkt_data, src_offset),
+				   copy_length);
+			rte_pktmbuf_data_len(m) += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == pkt_buf->data_len)) {
+				/* need a new source buffer */
+				buf = pkt_buf->next;
+				if (buf != NULL) {
+					pkt_buf = avp_dev_translate_buffer(
+						avp, buf);
+					pkt_data = avp_dev_translate_buffer(
+						avp, pkt_buf->data);
+					src_offset = 0;
+				}
+			}
+
+			if (unlikely(rte_pktmbuf_data_len(m) ==
+				     avp->guest_mbuf_size)) {
+				/* need a new destination mbuf */
+				break;
+			}
+
+		} while (buf != NULL);
+	}
+
+	m = mbufs[0];
+	m->ol_flags = ol_flags;
+	m->nb_segs = count;
+	rte_pktmbuf_pkt_len(m) = total_length;
+	m->vlan_tci = vlan_tci;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	return m;
+}
+
+static uint16_t
+avp_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_RX_BURST];
+	struct rte_mbuf *mbufs[RTE_AVP_MAX_MBUF_SEGMENTS];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	struct rte_avp_desc *buf;
+	unsigned int count, avail, n;
+	unsigned int guest_mbuf_size;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int buf_len;
+	unsigned int port_id;
+	unsigned int i;
+
+	guest_mbuf_size = avp->guest_mbuf_size;
+	port_id = avp->port_id;
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i + 1 < n) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+		buf = avp_bufs[i];
+
+		/* Peek into the first buffer to determine the total length */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		buf_len = pkt_buf->pkt_len;
+
+		/* Allocate enough mbufs to receive the entire packet */
+		required = (buf_len + guest_mbuf_size - 1) / guest_mbuf_size;
+		if (rte_pktmbuf_alloc_bulk(avp->pool, mbufs, required)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* Copy the data from the buffers to our mbufs */
+		m = avp_dev_copy_from_buffers(avp, buf, mbufs, required);
+
+		/* finalize mbuf */
+		m->port = port_id;
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += buf_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
+
+static uint16_t
+avp_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct avp_queue *rxq = (struct avp_queue *)rx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_RX_BURST];
+	struct avp_dev *avp = rxq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *free_q;
+	struct rte_avp_fifo *rx_q;
+	unsigned int count, avail, n;
+	unsigned int pkt_len;
+	struct rte_mbuf *m;
+	char *pkt_data;
+	unsigned int i;
+
+	rx_q = avp->rx_q[rxq->queue_id];
+	free_q = avp->free_q[rxq->queue_id];
+
+	/* setup next queue to service */
+	rxq->queue_id = (rxq->queue_id < rxq->queue_limit) ?
+		(rxq->queue_id + 1) : rxq->queue_base;
+
+	/* determine how many slots are available in the free queue */
+	count = avp_fifo_free_count(free_q);
+
+	/* determine how many packets are available in the rx queue */
+	avail = avp_fifo_count(rx_q);
+
+	/* determine how many packets can be received */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+	count = RTE_MIN(count, (unsigned int)AVP_MAX_RX_BURST);
+
+	if (unlikely(count == 0)) {
+		/* no free buffers, or no buffers on the rx queue */
+		return 0;
+	}
+
+	/* retrieve pending packets */
+	n = avp_fifo_get(rx_q, (void **)&avp_bufs, count);
+	PMD_RX_LOG(DEBUG, "Receiving %u packets from Rx queue at %p\n",
+		   count, rx_q);
+
+	count = 0;
+	for (i = 0; i < n; i++) {
+		/* prefetch next entry while processing current one */
+		if (i < n - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust host pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = pkt_buf->pkt_len;
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+			     (pkt_buf->nb_segs > 1))) {
+			/*
+			 * application should be using the scattered receive
+			 * function
+			 */
+			rxq->errors++;
+			continue;
+		}
+
+		/* allocate an mbuf to hold the received data */
+		m = rte_pktmbuf_alloc(avp->pool);
+		if (unlikely(m == NULL)) {
+			rxq->dev_data->rx_mbuf_alloc_failed++;
+			continue;
+		}
+
+		/* copy data out of the host buffer to our buffer */
+		m->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_memcpy(rte_pktmbuf_mtod(m, void *), pkt_data, pkt_len);
+
+		/* initialize the local mbuf */
+		rte_pktmbuf_data_len(m) = pkt_len;
+		rte_pktmbuf_pkt_len(m) = pkt_len;
+		m->port = avp->port_id;
+
+		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
+			m->ol_flags = PKT_RX_VLAN_PKT;
+			m->vlan_tci = pkt_buf->vlan_tci;
+		}
+
+		if (_avp_mac_filter(avp, m) != 0) {
+			/* silently discard packets not destined to our MAC */
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		/* return new mbuf to caller */
+		rx_pkts[count++] = m;
+		rxq->bytes += pkt_len;
+	}
+
+	rxq->packets += count;
+
+	/* return the buffers to the free queue */
+	avp_fifo_put(free_q, (void **)&avp_bufs[0], n);
+
+	return count;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
2.12.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v6 09/14] net/avp: packet transmit functions
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (7 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 08/14] net/avp: packet receive functions Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 10/14] net/avp: device statistics operations Allain Legacy
                             ` (5 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds support for packet transmit functions so that an application can send
packets to the host application via an AVP device queue.  Both the simple
and scattered functions are supported.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 335 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 335 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 7524e1ae1..07efd4282 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -87,12 +87,24 @@ static uint16_t avp_recv_scattered_pkts(void *rx_queue,
 static uint16_t avp_recv_pkts(void *rx_queue,
 			      struct rte_mbuf **rx_pkts,
 			      uint16_t nb_pkts);
+
+static uint16_t avp_xmit_scattered_pkts(void *tx_queue,
+					struct rte_mbuf **tx_pkts,
+					uint16_t nb_pkts);
+
+static uint16_t avp_xmit_pkts(void *tx_queue,
+			      struct rte_mbuf **tx_pkts,
+			      uint16_t nb_pkts);
+
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
+
+
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
 
 #define AVP_MAX_RX_BURST 64
+#define AVP_MAX_TX_BURST 64
 #define AVP_MAX_MAC_ADDRS 1
 #define AVP_MIN_RX_BUFSIZE ETHER_MIN_LEN
 
@@ -646,6 +658,7 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 	pci_dev = AVP_DEV_TO_PCI(eth_dev);
 	eth_dev->dev_ops = &avp_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &avp_recv_pkts;
+	eth_dev->tx_pkt_burst = &avp_xmit_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/*
@@ -657,6 +670,7 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 		if (eth_dev->data->scattered_rx) {
 			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 		return 0;
 	}
@@ -788,6 +802,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 			PMD_DRV_LOG(NOTICE, "AVP device configured for chained mbufs\n");
 			eth_dev->data->scattered_rx = 1;
 			eth_dev->rx_pkt_burst = avp_recv_scattered_pkts;
+			eth_dev->tx_pkt_burst = avp_xmit_scattered_pkts;
 		}
 	}
 
@@ -1250,6 +1265,326 @@ avp_recv_pkts(void *rx_queue,
 	return count;
 }
 
+/*
+ * Copy a chained mbuf to a set of host buffers.  This function assumes that
+ * there are sufficient destination buffers to contain the entire source
+ * packet.
+ */
+static inline uint16_t
+avp_dev_copy_to_buffers(struct avp_dev *avp,
+			struct rte_mbuf *mbuf,
+			struct rte_avp_desc **buffers,
+			unsigned int count)
+{
+	struct rte_avp_desc *previous_buf = NULL;
+	struct rte_avp_desc *first_buf = NULL;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_desc *buf;
+	size_t total_length;
+	struct rte_mbuf *m;
+	size_t copy_length;
+	size_t src_offset;
+	char *pkt_data;
+	unsigned int i;
+
+	__rte_mbuf_sanity_check(mbuf, 1);
+
+	m = mbuf;
+	src_offset = 0;
+	total_length = rte_pktmbuf_pkt_len(m);
+	for (i = 0; (i < count) && (m != NULL); i++) {
+		/* fill each destination buffer */
+		buf = buffers[i];
+
+		if (i < count - 1) {
+			/* prefetch next entry while processing this one */
+			pkt_buf = avp_dev_translate_buffer(avp, buffers[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, buf);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+
+		/* setup the buffer chain */
+		if (previous_buf != NULL)
+			previous_buf->next = buf;
+		else
+			first_buf = pkt_buf;
+
+		previous_buf = pkt_buf;
+
+		do {
+			/*
+			 * copy as many source mbuf segments as will fit in the
+			 * destination buffer.
+			 */
+			copy_length = RTE_MIN((avp->host_mbuf_size -
+					       pkt_buf->data_len),
+					      (rte_pktmbuf_data_len(m) -
+					       src_offset));
+			rte_memcpy(RTE_PTR_ADD(pkt_data, pkt_buf->data_len),
+				   RTE_PTR_ADD(rte_pktmbuf_mtod(m, void *),
+					       src_offset),
+				   copy_length);
+			pkt_buf->data_len += copy_length;
+			src_offset += copy_length;
+
+			if (likely(src_offset == rte_pktmbuf_data_len(m))) {
+				/* need a new source buffer */
+				m = m->next;
+				src_offset = 0;
+			}
+
+			if (unlikely(pkt_buf->data_len ==
+				     avp->host_mbuf_size)) {
+				/* need a new destination buffer */
+				break;
+			}
+
+		} while (m != NULL);
+	}
+
+	first_buf->nb_segs = count;
+	first_buf->pkt_len = total_length;
+
+	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+		first_buf->vlan_tci = mbuf->vlan_tci;
+	}
+
+	avp_dev_buffer_sanity_check(avp, buffers[0]);
+
+	return total_length;
+}
+
+
+static uint16_t
+avp_xmit_scattered_pkts(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
+{
+	struct rte_avp_desc *avp_bufs[(AVP_MAX_TX_BURST *
+				       RTE_AVP_MAX_MBUF_SEGMENTS)];
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *tx_bufs[AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	unsigned int orig_nb_pkts;
+	struct rte_mbuf *m;
+	unsigned int required;
+	unsigned int segments;
+	unsigned int tx_bytes;
+	unsigned int i;
+
+	orig_nb_pkts = nb_pkts;
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > AVP_MAX_TX_BURST))
+		nb_pkts = AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+	if (unlikely(avail > (AVP_MAX_TX_BURST *
+			      RTE_AVP_MAX_MBUF_SEGMENTS)))
+		avail = AVP_MAX_TX_BURST * RTE_AVP_MAX_MBUF_SEGMENTS;
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	nb_pkts = RTE_MIN(count, nb_pkts);
+
+	/* determine how many packets will fit in the available buffers */
+	count = 0;
+	segments = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		if (likely(i < (unsigned int)nb_pkts - 1)) {
+			/* prefetch next entry while processing this one */
+			rte_prefetch0(tx_pkts[i + 1]);
+		}
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		if (unlikely((required == 0) ||
+			     (required > RTE_AVP_MAX_MBUF_SEGMENTS)))
+			break;
+		else if (unlikely(required + segments > avail))
+			break;
+		segments += required;
+		count++;
+	}
+	nb_pkts = count;
+
+	if (unlikely(nb_pkts == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   nb_pkts, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, segments);
+	if (unlikely(n != segments)) {
+		PMD_TX_LOG(DEBUG, "Failed to allocate buffers "
+			   "n=%u, segments=%u, orig=%u\n",
+			   n, segments, orig_nb_pkts);
+		txq->errors += orig_nb_pkts;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	count = 0;
+	for (i = 0; i < nb_pkts; i++) {
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* determine how many buffers are required for this packet */
+		required = (rte_pktmbuf_pkt_len(m) + avp->host_mbuf_size - 1) /
+			avp->host_mbuf_size;
+
+		tx_bytes += avp_dev_copy_to_buffers(avp, m,
+						    &avp_bufs[count], required);
+		tx_bufs[i] = avp_bufs[count];
+		count += required;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += nb_pkts;
+	txq->bytes += tx_bytes;
+
+#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
+	for (i = 0; i < nb_pkts; i++)
+		avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+#endif
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&tx_bufs[0], nb_pkts);
+	if (unlikely(n != orig_nb_pkts))
+		txq->errors += (orig_nb_pkts - n);
+
+	return n;
+}
+
+
+static uint16_t
+avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct avp_queue *txq = (struct avp_queue *)tx_queue;
+	struct rte_avp_desc *avp_bufs[AVP_MAX_TX_BURST];
+	struct avp_dev *avp = txq->avp;
+	struct rte_avp_desc *pkt_buf;
+	struct rte_avp_fifo *alloc_q;
+	struct rte_avp_fifo *tx_q;
+	unsigned int count, avail, n;
+	struct rte_mbuf *m;
+	unsigned int pkt_len;
+	unsigned int tx_bytes;
+	char *pkt_data;
+	unsigned int i;
+
+	tx_q = avp->tx_q[txq->queue_id];
+	alloc_q = avp->alloc_q[txq->queue_id];
+
+	/* limit the number of transmitted packets to the max burst size */
+	if (unlikely(nb_pkts > AVP_MAX_TX_BURST))
+		nb_pkts = AVP_MAX_TX_BURST;
+
+	/* determine how many buffers are available to copy into */
+	avail = avp_fifo_count(alloc_q);
+
+	/* determine how many slots are available in the transmit queue */
+	count = avp_fifo_free_count(tx_q);
+
+	/* determine how many packets can be sent */
+	count = RTE_MIN(count, avail);
+	count = RTE_MIN(count, nb_pkts);
+
+	if (unlikely(count == 0)) {
+		/* no available buffers, or no space on the tx queue */
+		txq->errors += nb_pkts;
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Sending %u packets on Tx queue at %p\n",
+		   count, tx_q);
+
+	/* retrieve sufficient send buffers */
+	n = avp_fifo_get(alloc_q, (void **)&avp_bufs, count);
+	if (unlikely(n != count)) {
+		txq->errors++;
+		return 0;
+	}
+
+	tx_bytes = 0;
+	for (i = 0; i < count; i++) {
+		/* prefetch next entry while processing the current one */
+		if (i < count - 1) {
+			pkt_buf = avp_dev_translate_buffer(avp,
+							   avp_bufs[i + 1]);
+			rte_prefetch0(pkt_buf);
+		}
+
+		/* process each packet to be transmitted */
+		m = tx_pkts[i];
+
+		/* Adjust pointers for guest addressing */
+		pkt_buf = avp_dev_translate_buffer(avp, avp_bufs[i]);
+		pkt_data = avp_dev_translate_buffer(avp, pkt_buf->data);
+		pkt_len = rte_pktmbuf_pkt_len(m);
+
+		if (unlikely((pkt_len > avp->guest_mbuf_size) ||
+					 (pkt_len > avp->host_mbuf_size))) {
+			/*
+			 * application should be using the scattered transmit
+			 * function; send it truncated to avoid the performance
+			 * hit of having to manage returning the already
+			 * allocated buffer to the free list.  This should not
+			 * happen since the application should have set the
+			 * max_rx_pkt_len based on its MTU and it should be
+			 * policing its own packet sizes.
+			 */
+			txq->errors++;
+			pkt_len = RTE_MIN(avp->guest_mbuf_size,
+					  avp->host_mbuf_size);
+		}
+
+		/* copy data out of our mbuf and into the AVP buffer */
+		rte_memcpy(pkt_data, rte_pktmbuf_mtod(m, void *), pkt_len);
+		pkt_buf->pkt_len = pkt_len;
+		pkt_buf->data_len = pkt_len;
+		pkt_buf->nb_segs = 1;
+		pkt_buf->next = NULL;
+
+		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
+			pkt_buf->vlan_tci = m->vlan_tci;
+		}
+
+		tx_bytes += pkt_len;
+
+		/* free the original mbuf */
+		rte_pktmbuf_free(m);
+	}
+
+	txq->packets += count;
+	txq->bytes += tx_bytes;
+
+	/* send the packets */
+	n = avp_fifo_put(tx_q, (void **)&avp_bufs[0], count);
+
+	return n;
+}
+
 static void
 avp_dev_rx_queue_release(void *rx_queue)
 {
-- 
2.12.1


* [PATCH v6 10/14] net/avp: device statistics operations
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (8 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 09/14] net/avp: packet transmit functions Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 11/14] net/avp: device promiscuous functions Allain Legacy
                             ` (4 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds device functions to query and reset statistics.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
---
 doc/guides/nics/features/avp.ini |  2 ++
 drivers/net/avp/avp_ethdev.c     | 67 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
index e748ea80d..0de761ce8 100644
--- a/doc/guides/nics/features/avp.ini
+++ b/doc/guides/nics/features/avp.ini
@@ -9,5 +9,7 @@ Jumbo frame          = Y
 Scattered Rx         = Y
 Unicast MAC filter   = Y
 VLAN offload         = Y
+Basic stats          = Y
+Stats per queue      = Y
 Linux UIO            = Y
 x86-64               = Y
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 07efd4282..f24c6a8fc 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -99,6 +99,10 @@ static uint16_t avp_xmit_pkts(void *tx_queue,
 static void avp_dev_rx_queue_release(void *rxq);
 static void avp_dev_tx_queue_release(void *txq);
 
+static void avp_dev_stats_get(struct rte_eth_dev *dev,
+			      struct rte_eth_stats *stats);
+static void avp_dev_stats_reset(struct rte_eth_dev *dev);
+
 
 #define AVP_DEV_TO_PCI(eth_dev) RTE_DEV_TO_PCI((eth_dev)->device)
 
@@ -146,6 +150,8 @@ static const struct eth_dev_ops avp_eth_dev_ops = {
 	.dev_configure       = avp_dev_configure,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
+	.stats_get           = avp_dev_stats_get,
+	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
@@ -1720,6 +1726,67 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	}
 }
 
+static void
+avp_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			stats->ipackets += rxq->packets;
+			stats->ibytes += rxq->bytes;
+			stats->ierrors += rxq->errors;
+
+			stats->q_ipackets[i] += rxq->packets;
+			stats->q_ibytes[i] += rxq->bytes;
+			stats->q_errors[i] += rxq->errors;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			stats->opackets += txq->packets;
+			stats->obytes += txq->bytes;
+			stats->oerrors += txq->errors;
+
+			stats->q_opackets[i] += txq->packets;
+			stats->q_obytes[i] += txq->bytes;
+			stats->q_errors[i] += txq->errors;
+		}
+	}
+}
+
+static void
+avp_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	unsigned int i;
+
+	for (i = 0; i < avp->num_rx_queues; i++) {
+		struct avp_queue *rxq = avp->dev_data->rx_queues[i];
+
+		if (rxq) {
+			rxq->bytes = 0;
+			rxq->packets = 0;
+			rxq->errors = 0;
+		}
+	}
+
+	for (i = 0; i < avp->num_tx_queues; i++) {
+		struct avp_queue *txq = avp->dev_data->tx_queues[i];
+
+		if (txq) {
+			txq->bytes = 0;
+			txq->packets = 0;
+			txq->errors = 0;
+		}
+	}
+}
 
 RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
 RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);
-- 
2.12.1


* [PATCH v6 11/14] net/avp: device promiscuous functions
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (9 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 10/14] net/avp: device statistics operations Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 12/14] net/avp: device start and stop operations Allain Legacy
                             ` (3 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds support for setting and clearing promiscuous mode on an AVP device.
When enabled, the _avp_mac_filter function will allow packets destined to
any MAC address to be processed by the receive functions.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 doc/guides/nics/features/avp.ini |  1 +
 drivers/net/avp/avp_ethdev.c     | 28 ++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
index 0de761ce8..ceb69939b 100644
--- a/doc/guides/nics/features/avp.ini
+++ b/doc/guides/nics/features/avp.ini
@@ -7,6 +7,7 @@
 Link status          = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
+Promiscuous mode     = Y
 Unicast MAC filter   = Y
 VLAN offload         = Y
 Basic stats          = Y
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index f24c6a8fc..d008e36b7 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -67,6 +67,9 @@ static void avp_dev_info_get(struct rte_eth_dev *dev,
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 static int avp_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
+static void avp_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avp_dev_promiscuous_disable(struct rte_eth_dev *dev);
+
 static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
 				  uint16_t rx_queue_id,
 				  uint16_t nb_rx_desc,
@@ -153,6 +156,8 @@ static const struct eth_dev_ops avp_eth_dev_ops = {
 	.stats_get           = avp_dev_stats_get,
 	.stats_reset         = avp_dev_stats_reset,
 	.link_update         = avp_dev_link_update,
+	.promiscuous_enable  = avp_dev_promiscuous_enable,
+	.promiscuous_disable = avp_dev_promiscuous_disable,
 	.rx_queue_setup      = avp_dev_rx_queue_setup,
 	.rx_queue_release    = avp_dev_rx_queue_release,
 	.tx_queue_setup      = avp_dev_tx_queue_setup,
@@ -1679,6 +1684,29 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
 	return -1;
 }
 
+static void
+avp_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if ((avp->flags & AVP_F_PROMISC) == 0) {
+		avp->flags |= AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+}
+
+static void
+avp_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+
+	if ((avp->flags & AVP_F_PROMISC) != 0) {
+		avp->flags &= ~AVP_F_PROMISC;
+		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
+			    eth_dev->data->port_id);
+	}
+}
 
 static void
 avp_dev_info_get(struct rte_eth_dev *eth_dev,
-- 
2.12.1


* [PATCH v6 12/14] net/avp: device start and stop operations
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (10 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 11/14] net/avp: device promiscuous functions Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 13/14] net/avp: migration interrupt handling Allain Legacy
                             ` (2 subsequent siblings)
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Adds support for device start and stop functions.  This allows an
application to control the administrative state of an AVP device.  Stopping
the device will notify the host application to stop sending packets on that
device's receive queues.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 102 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index d008e36b7..9824190a0 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -62,6 +62,9 @@
 
 
 static int avp_dev_configure(struct rte_eth_dev *dev);
+static int avp_dev_start(struct rte_eth_dev *dev);
+static void avp_dev_stop(struct rte_eth_dev *dev);
+static void avp_dev_close(struct rte_eth_dev *dev);
 static void avp_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static void avp_vlan_offload_set(struct rte_eth_dev *dev, int mask);
@@ -151,6 +154,9 @@ static const struct rte_pci_id pci_id_avp_map[] = {
  */
 static const struct eth_dev_ops avp_eth_dev_ops = {
 	.dev_configure       = avp_dev_configure,
+	.dev_start           = avp_dev_start,
+	.dev_stop            = avp_dev_stop,
+	.dev_close           = avp_dev_close,
 	.dev_infos_get       = avp_dev_info_get,
 	.vlan_offload_set    = avp_vlan_offload_set,
 	.stats_get           = avp_dev_stats_get,
@@ -316,6 +322,23 @@ avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
 }
 
 static int
+avp_dev_ctrl_set_link_state(struct rte_eth_dev *eth_dev, unsigned int state)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a link state change request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_CFG_NETWORK_IF;
+	request.if_up = state;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
+static int
 avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
 			struct rte_avp_device_config *config)
 {
@@ -333,6 +356,22 @@ avp_dev_ctrl_set_config(struct rte_eth_dev *eth_dev,
 	return ret == 0 ? request.result : ret;
 }
 
+static int
+avp_dev_ctrl_shutdown(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_request request;
+	int ret;
+
+	/* setup a shutdown request */
+	memset(&request, 0, sizeof(request));
+	request.req_id = RTE_AVP_REQ_SHUTDOWN_DEVICE;
+
+	ret = avp_dev_process_request(avp, &request);
+
+	return ret == 0 ? request.result : ret;
+}
+
 /* translate from host mbuf virtual address to guest virtual address */
 static inline void *
 avp_dev_translate_buffer(struct avp_dev *avp, void *host_mbuf_address)
@@ -1669,6 +1708,69 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
+static int
+avp_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	/* disable features that we do not support */
+	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
+	eth_dev->data->dev_conf.rxmode.hw_vlan_extend = 0;
+	eth_dev->data->dev_conf.rxmode.hw_strip_crc = 0;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 1);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	/* remember current link state */
+	avp->flags |= AVP_F_LINKUP;
+
+	ret = 0;
+
+unlock:
+	return ret;
+}
+
+static void
+avp_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	avp->flags &= ~AVP_F_LINKUP;
+
+	/* update link state */
+	ret = avp_dev_ctrl_set_link_state(eth_dev, 0);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
+			    ret);
+	}
+}
+
+static void
+avp_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	/* remember current link state */
+	avp->flags &= ~AVP_F_LINKUP;
+	avp->flags &= ~AVP_F_CONFIGURED;
+
+	/* update device state */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Device shutdown failed by host, ret=%d\n",
+			    ret);
+		/* continue */
+	}
+}
 
 static int
 avp_dev_link_update(struct rte_eth_dev *eth_dev,
-- 
2.12.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v6 13/14] net/avp: migration interrupt handling
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (11 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 12/14] net/avp: device start and stop operations Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-28 11:54           ` [PATCH v6 14/14] doc: adds information related to the AVP PMD Allain Legacy
  2017-03-29 10:44           ` [PATCH v6 00/14] Wind River Systems " Vincent JARDIN
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

This commit introduces changes required to support VM live-migration.  This
is done by registering and responding to interrupts coming from the host to
signal that the memory is about to be made invalid and replaced with a new
memory zone on the destination compute node.

Interrupt enabling and disabling are handled outside of the start/stop
functions because the interrupts must remain enabled for the lifetime of the
device.  This ensures that host interrupts are serviced and acknowledged even
when the application has stopped the device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
---
 drivers/net/avp/avp_ethdev.c | 372 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 372 insertions(+)

diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 9824190a0..e166867aa 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -48,6 +48,7 @@
 #include <rte_ether.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
+#include <rte_spinlock.h>
 #include <rte_byteorder.h>
 #include <rte_dev.h>
 #include <rte_memory.h>
@@ -60,6 +61,8 @@
 #include "avp_logs.h"
 
 
+static int avp_dev_create(struct rte_pci_device *pci_dev,
+			  struct rte_eth_dev *eth_dev);
 
 static int avp_dev_configure(struct rte_eth_dev *dev);
 static int avp_dev_start(struct rte_eth_dev *dev);
@@ -174,6 +177,7 @@ static const struct eth_dev_ops avp_eth_dev_ops = {
 #define AVP_F_PROMISC (1 << 1)
 #define AVP_F_CONFIGURED (1 << 2)
 #define AVP_F_LINKUP (1 << 3)
+#define AVP_F_DETACHED (1 << 4)
 /**@} */
 
 /* Ethernet device validation marker */
@@ -209,6 +213,9 @@ struct avp_dev {
 	struct rte_avp_fifo *free_q[RTE_AVP_MAX_QUEUES];
 	/**< To be freed mbufs queue */
 
+	/* mutual exclusion over the 'flags' and 'resp_q/req_q' fields */
+	rte_spinlock_t lock;
+
 	/* For request & response */
 	struct rte_avp_fifo *req_q; /**< Request queue */
 	struct rte_avp_fifo *resp_q; /**< Response queue */
@@ -496,6 +503,46 @@ avp_dev_check_regions(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int
+avp_dev_detach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Detaching port %u from AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(NOTICE, "port %u already detached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* shutdown the device first so the host stops sending us packets. */
+	ret = avp_dev_ctrl_shutdown(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to send/recv shutdown to host, ret=%d\n",
+			    ret);
+		avp->flags &= ~AVP_F_DETACHED;
+		goto unlock;
+	}
+
+	avp->flags |= AVP_F_DETACHED;
+	rte_wmb();
+
+	/* wait for queues to acknowledge the presence of the detach flag */
+	rte_delay_ms(1);
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
 static void
 _avp_set_rx_queue_mappings(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 {
@@ -564,6 +611,240 @@ _avp_set_queue_counts(struct rte_eth_dev *eth_dev)
 		    avp->num_tx_queues, avp->num_rx_queues);
 }
 
+static int
+avp_dev_attach(struct rte_eth_dev *eth_dev)
+{
+	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct rte_avp_device_config config;
+	unsigned int i;
+	int ret;
+
+	PMD_DRV_LOG(NOTICE, "Attaching port %u to AVP device 0x%" PRIx64 "\n",
+		    eth_dev->data->port_id, avp->device_id);
+
+	rte_spinlock_lock(&avp->lock);
+
+	if (!(avp->flags & AVP_F_DETACHED)) {
+		PMD_DRV_LOG(NOTICE, "port %u already attached\n",
+			    eth_dev->data->port_id);
+		ret = 0;
+		goto unlock;
+	}
+
+	/*
+	 * make sure that the detached flag is set prior to reconfiguring the
+	 * queues.
+	 */
+	avp->flags |= AVP_F_DETACHED;
+	rte_wmb();
+
+	/*
+	 * re-run the device create utility which will parse the new host info
+	 * and setup the AVP device queue pointers.
+	 */
+	ret = avp_dev_create(AVP_DEV_TO_PCI(eth_dev), eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to re-create AVP device, ret=%d\n",
+			    ret);
+		goto unlock;
+	}
+
+	if (avp->flags & AVP_F_CONFIGURED) {
+		/*
+		 * Update the receive queue mapping to handle cases where the
+		 * source and destination hosts have different queue
+		 * requirements.  As long as the DETACHED flag is asserted the
+		 * queue table should not be referenced so it should be safe to
+		 * update it.
+		 */
+		_avp_set_queue_counts(eth_dev);
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+			_avp_set_rx_queue_mappings(eth_dev, i);
+
+		/*
+		 * Update the host with our config details so that it knows the
+		 * device is active.
+		 */
+		memset(&config, 0, sizeof(config));
+		config.device_id = avp->device_id;
+		config.driver_type = RTE_AVP_DRIVER_TYPE_DPDK;
+		config.driver_version = AVP_DPDK_DRIVER_VERSION;
+		config.features = avp->features;
+		config.num_tx_queues = avp->num_tx_queues;
+		config.num_rx_queues = avp->num_rx_queues;
+		config.if_up = !!(avp->flags & AVP_F_LINKUP);
+
+		ret = avp_dev_ctrl_set_config(eth_dev, &config);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Config request failed by host, ret=%d\n",
+				    ret);
+			goto unlock;
+		}
+	}
+
+	rte_wmb();
+	avp->flags &= ~AVP_F_DETACHED;
+
+	ret = 0;
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
+	return ret;
+}
+
+static void
+avp_dev_interrupt_handler(struct rte_intr_handle *intr_handle,
+						  void *data)
+{
+	struct rte_eth_dev *eth_dev = data;
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t status, value;
+	int ret;
+
+	if (registers == NULL)
+		rte_panic("no mapped MMIO register space\n");
+
+	/* read the interrupt status register
+	 * note: this register clears on read so all raised interrupts must be
+	 *    handled or remembered for later processing
+	 */
+	status = AVP_READ32(
+		RTE_PTR_ADD(registers,
+			    RTE_AVP_INTERRUPT_STATUS_OFFSET));
+
+	if (status & RTE_AVP_MIGRATION_INTERRUPT_MASK) {
+		/* handle interrupt based on current status */
+		value = AVP_READ32(
+			RTE_PTR_ADD(registers,
+				    RTE_AVP_MIGRATION_STATUS_OFFSET));
+		switch (value) {
+		case RTE_AVP_MIGRATION_DETACHED:
+			ret = avp_dev_detach(eth_dev);
+			break;
+		case RTE_AVP_MIGRATION_ATTACHED:
+			ret = avp_dev_attach(eth_dev);
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "unexpected migration status, status=%u\n",
+				    value);
+			ret = -EINVAL;
+		}
+
+		/* acknowledge the request by writing out our current status */
+		value = (ret == 0 ? value : RTE_AVP_MIGRATION_ERROR);
+		AVP_WRITE32(value,
+			    RTE_PTR_ADD(registers,
+					RTE_AVP_MIGRATION_ACK_OFFSET));
+
+		PMD_DRV_LOG(NOTICE, "AVP migration interrupt handled\n");
+	}
+
+	if (status & ~RTE_AVP_MIGRATION_INTERRUPT_MASK)
+		PMD_DRV_LOG(WARNING, "AVP unexpected interrupt, status=0x%08x\n",
+			    status);
+
+	/* re-enable UIO interrupt handling */
+	ret = rte_intr_enable(intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
+			    ret);
+		/* continue */
+	}
+}
+
+static int
+avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return -EINVAL;
+
+	/* enable UIO interrupt handling */
+	ret = rte_intr_enable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* inform the device that all interrupts are enabled */
+	AVP_WRITE32(RTE_AVP_APP_INTERRUPTS_MASK,
+		    RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	return 0;
+}
+
+static int
+avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	int ret;
+
+	if (registers == NULL)
+		return 0;
+
+	/* inform the device that all interrupts are disabled */
+	AVP_WRITE32(RTE_AVP_NO_INTERRUPTS_MASK,
+		    RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
+
+	/* disable UIO interrupt handling */
+	ret = rte_intr_disable(&pci_dev->intr_handle);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	int ret;
+
+	/* register a callback handler with UIO for interrupt notifications */
+	ret = rte_intr_callback_register(&pci_dev->intr_handle,
+					 avp_dev_interrupt_handler,
+					 (void *)eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to register UIO interrupt callback, ret=%d\n",
+			    ret);
+		return ret;
+	}
+
+	/* enable interrupt processing */
+	return avp_dev_enable_interrupts(eth_dev);
+}
+
+static int
+avp_dev_migration_pending(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = AVP_DEV_TO_PCI(eth_dev);
+	void *registers = pci_dev->mem_resource[RTE_AVP_PCI_MMIO_BAR].addr;
+	uint32_t value;
+
+	if (registers == NULL)
+		return 0;
+
+	value = AVP_READ32(RTE_PTR_ADD(registers,
+				       RTE_AVP_MIGRATION_STATUS_OFFSET));
+	if (value == RTE_AVP_MIGRATION_DETACHED) {
+		/* migration is in progress; ack it if we have not already */
+		AVP_WRITE32(value,
+			    RTE_PTR_ADD(registers,
+					RTE_AVP_MIGRATION_ACK_OFFSET));
+		return 1;
+	}
+	return 0;
+}
+
 /*
  * create a AVP device using the supplied device info by first translating it
  * to guest address space(s).
@@ -616,6 +897,7 @@ avp_dev_create(struct rte_pci_device *pci_dev,
 		avp->port_id = eth_dev->data->port_id;
 		avp->host_mbuf_size = host_info->mbuf_size;
 		avp->host_features = host_info->features;
+		rte_spinlock_init(&avp->lock);
 		memcpy(&avp->ethaddr.addr_bytes[0],
 		       host_info->ethaddr, ETHER_ADDR_LEN);
 		/* adjust max values to not exceed our max */
@@ -729,6 +1011,12 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
 
+	/* Check current migration status */
+	if (avp_dev_migration_pending(eth_dev)) {
+		PMD_DRV_LOG(ERR, "VM live migration operation in progress\n");
+		return -EBUSY;
+	}
+
 	/* Check BAR resources */
 	ret = avp_dev_check_regions(eth_dev);
 	if (ret < 0) {
@@ -737,6 +1025,13 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 		return ret;
 	}
 
+	/* Enable interrupts */
+	ret = avp_dev_setup_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
 	/* Handle each subtype */
 	ret = avp_dev_create(pci_dev, eth_dev);
 	if (ret < 0) {
@@ -761,12 +1056,20 @@ eth_avp_dev_init(struct rte_eth_dev *eth_dev)
 static int
 eth_avp_dev_uninit(struct rte_eth_dev *eth_dev)
 {
+	int ret;
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
 
 	if (eth_dev->data == NULL)
 		return 0;
 
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts, ret=%d\n", ret);
+		return ret;
+	}
+
 	if (eth_dev->data->mac_addrs != NULL) {
 		rte_free(eth_dev->data->mac_addrs);
 		eth_dev->data->mac_addrs = NULL;
@@ -1129,6 +1432,11 @@ avp_recv_scattered_pkts(void *rx_queue,
 	unsigned int port_id;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
 	guest_mbuf_size = avp->guest_mbuf_size;
 	port_id = avp->port_id;
 	rx_q = avp->rx_q[rxq->queue_id];
@@ -1223,6 +1531,11 @@ avp_recv_pkts(void *rx_queue,
 	char *pkt_data;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		return 0;
+	}
+
 	rx_q = avp->rx_q[rxq->queue_id];
 	free_q = avp->free_q[rxq->queue_id];
 
@@ -1430,6 +1743,13 @@ avp_xmit_scattered_pkts(void *tx_queue,
 	unsigned int i;
 
 	orig_nb_pkts = nb_pkts;
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop? */
+		txq->errors += nb_pkts;
+		return 0;
+	}
+
 	tx_q = avp->tx_q[txq->queue_id];
 	alloc_q = avp->alloc_q[txq->queue_id];
 
@@ -1542,6 +1862,13 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	char *pkt_data;
 	unsigned int i;
 
+	if (unlikely(avp->flags & AVP_F_DETACHED)) {
+		/* VM live migration in progress */
+		/* TODO ... buffer for X packets then drop?! */
+		txq->errors++;
+		return 0;
+	}
+
 	tx_q = avp->tx_q[txq->queue_id];
 	alloc_q = avp->alloc_q[txq->queue_id];
 
@@ -1674,6 +2001,13 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	void *addr;
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
 	addr = pci_dev->mem_resource[RTE_AVP_PCI_DEVICE_BAR].addr;
 	host_info = (struct rte_avp_device_info *)addr;
 
@@ -1705,6 +2039,7 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	ret = 0;
 
 unlock:
+	rte_spinlock_unlock(&avp->lock);
 	return ret;
 }
 
@@ -1714,6 +2049,13 @@ avp_dev_start(struct rte_eth_dev *eth_dev)
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		ret = -ENOTSUP;
+		goto unlock;
+	}
+
 	/* disable features that we do not support */
 	eth_dev->data->dev_conf.rxmode.hw_ip_checksum = 0;
 	eth_dev->data->dev_conf.rxmode.hw_vlan_filter = 0;
@@ -1734,6 +2076,7 @@ avp_dev_start(struct rte_eth_dev *eth_dev)
 	ret = 0;
 
 unlock:
+	rte_spinlock_unlock(&avp->lock);
 	return ret;
 }
 
@@ -1743,6 +2086,13 @@ avp_dev_stop(struct rte_eth_dev *eth_dev)
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		goto unlock;
+	}
+
+	/* remember current link state */
 	avp->flags &= ~AVP_F_LINKUP;
 
 	/* update link state */
@@ -1751,6 +2101,9 @@ avp_dev_stop(struct rte_eth_dev *eth_dev)
 		PMD_DRV_LOG(ERR, "Link state change failed by host, ret=%d\n",
 			    ret);
 	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
@@ -1759,10 +2112,22 @@ avp_dev_close(struct rte_eth_dev *eth_dev)
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	int ret;
 
+	rte_spinlock_lock(&avp->lock);
+	if (avp->flags & AVP_F_DETACHED) {
+		PMD_DRV_LOG(ERR, "Operation not supported during VM live migration\n");
+		goto unlock;
+	}
+
 	/* remember current link state */
 	avp->flags &= ~AVP_F_LINKUP;
 	avp->flags &= ~AVP_F_CONFIGURED;
 
+	ret = avp_dev_disable_interrupts(eth_dev);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to disable interrupts\n");
+		/* continue */
+	}
+
 	/* update device state */
 	ret = avp_dev_ctrl_shutdown(eth_dev);
 	if (ret < 0) {
@@ -1770,6 +2135,9 @@ avp_dev_close(struct rte_eth_dev *eth_dev)
 			    ret);
 		/* continue */
 	}
+
+unlock:
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static int
@@ -1791,11 +2159,13 @@ avp_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 
+	rte_spinlock_lock(&avp->lock);
 	if ((avp->flags & AVP_F_PROMISC) == 0) {
 		avp->flags |= AVP_F_PROMISC;
 		PMD_DRV_LOG(DEBUG, "Promiscuous mode enabled on %u\n",
 			    eth_dev->data->port_id);
 	}
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
@@ -1803,11 +2173,13 @@ avp_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
 {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 
+	rte_spinlock_lock(&avp->lock);
 	if ((avp->flags & AVP_F_PROMISC) != 0) {
 		avp->flags &= ~AVP_F_PROMISC;
 		PMD_DRV_LOG(DEBUG, "Promiscuous mode disabled on %u\n",
 			    eth_dev->data->port_id);
 	}
+	rte_spinlock_unlock(&avp->lock);
 }
 
 static void
-- 
2.12.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* [PATCH v6 14/14] doc: adds information related to the AVP PMD
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (12 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 13/14] net/avp: migration interrupt handling Allain Legacy
@ 2017-03-28 11:54           ` Allain Legacy
  2017-03-29 10:44           ` [PATCH v6 00/14] Wind River Systems " Vincent JARDIN
  14 siblings, 0 replies; 172+ messages in thread
From: Allain Legacy @ 2017-03-28 11:54 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

Updates the documentation and feature lists for the AVP PMD device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 MAINTAINERS                            |   1 +
 doc/guides/nics/avp.rst                | 111 +++++++++++++++++++++++++++++++++
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_17_05.rst |   4 ++
 4 files changed, 117 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index e3ac0e383..522f4194f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -417,6 +417,7 @@ Wind River AVP PMD
 M: Allain Legacy <allain.legacy@windriver.com>
 M: Matt Peters <matt.peters@windriver.com>
 F: drivers/net/avp
+F: doc/guides/nics/avp.rst
 
 
 Crypto Drivers
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
new file mode 100644
index 000000000..1fcba66ce
--- /dev/null
+++ b/doc/guides/nics/avp.rst
@@ -0,0 +1,111 @@
+..  BSD LICENSE
+    Copyright(c) 2017 Wind River Systems, Inc.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AVP Poll Mode Driver
+=================================================================
+
+The Accelerated Virtual Port (AVP) device is a shared memory based device
+only available on `virtualization platforms <http://www.windriver.com/products/titanium-cloud/>`_
+from Wind River Systems.  The Wind River Systems virtualization platform
+currently uses QEMU/KVM as its hypervisor and as such provides support for all
+of the QEMU supported virtual and/or emulated devices (e.g., virtio, e1000,
+etc.).  The platform offers the virtio device type as the default device when
+launching a virtual machine or creating a virtual machine port.  The AVP device
+is a specialized device available to customers that require increased
+throughput and decreased latency to meet the demands of their performance
+focused applications.
+
+The AVP driver binds to any AVP PCI devices that have been exported by the Wind
+River Systems QEMU/KVM hypervisor.  As a user of the DPDK driver API it
+supports a subset of the full Ethernet device API to enable the application to
+use the standard device configuration functions and packet receive/transmit
+functions.
+
+These devices enable optimized packet throughput by bypassing QEMU and
+delivering packets directly to the virtual switch via a shared memory
+mechanism.  This provides DPDK applications running in virtual machines with
+significantly improved throughput and latency over other device types.
+
+The AVP device implementation is integrated with the QEMU/KVM live-migration
+mechanism to allow applications to seamlessly migrate from one hypervisor node
+to another with minimal packet loss.
+
+
+Features and Limitations of the AVP PMD
+---------------------------------------
+
+The AVP PMD provides the following functionality:
+
+*   Receive and transmit of both simple and chained mbuf packets
+
+*   Chained mbufs may include up to 5 chained segments
+
+*   Up to 8 receive and transmit queues per device
+
+*   Only a single MAC address is supported
+
+*   The MAC address cannot be modified
+
+*   The maximum receive packet length is 9238 bytes
+
+*   VLAN header stripping and insertion
+
+*   Promiscuous mode
+
+*   VM live-migration
+
+*   PCI hotplug insertion and removal
+
+
+Prerequisites
+-------------
+
+The following prerequisites apply:
+
+*   A virtual machine running in a Wind River Systems virtualization
+    environment and configured with at least one neutron port defined with a
+    vif-model set to "avp".
+
+
+Launching a VM with an AVP type network attachment
+--------------------------------------------------
+
+The following example will launch a VM with three network attachments.  The
+first attachment will have a default vif-model of "virtio".  The next two
+network attachments will have a vif-model of "avp" and may be used with a DPDK
+application which is built to include the AVP PMD driver.
+
+.. code-block:: console
+
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 87f933495..0ddcea512 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -36,6 +36,7 @@ Network Interface Controller Drivers
     :numbered:
 
     overview
+    avp
     bnx2x
     bnxt
     cxgbe
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 2a045b3e2..34d0dac0b 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -55,6 +55,10 @@ New Features
 
   sPAPR IOMMU based pci probing enabled for vfio-pci devices.
 
+* **Added support for the Wind River Systems AVP PMD.**
+
+  Added a new networking driver for the AVP device type. These devices are
+  specific to the Wind River Systems virtualization platforms.
 
 Resolved Issues
 ---------------
-- 
2.12.1

^ permalink raw reply related	[flat|nested] 172+ messages in thread

* Re: [PATCH v6 06/14] net/avp: device configuration
  2017-03-28 11:54           ` [PATCH v6 06/14] net/avp: device configuration Allain Legacy
@ 2017-03-29 10:28             ` Ferruh Yigit
  0 siblings, 0 replies; 172+ messages in thread
From: Ferruh Yigit @ 2017-03-29 10:28 UTC (permalink / raw)
  To: Allain Legacy
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, vincent.jardin, jerin.jacob,
	stephen, 3chas3

On 3/28/2017 12:54 PM, Allain Legacy wrote:
> Adds support for "dev_configure" operations to allow an application to
> configure the device.
> 
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> Signed-off-by: Matt Peters <matt.peters@windriver.com>

<...>

>  RTE_PMD_REGISTER_PCI(rte_avp, rte_avp_pmd.pci_drv);
>  RTE_PMD_REGISTER_PCI_TABLE(rte_avp, pci_id_avp_map);

Defined net device naming is "net_xxx", for this case it should be
"net_avp", instead of rte_avp.

Since there is no other outstanding issue for the PMD, I will fix this
while applying.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v6 00/14] Wind River Systems AVP PMD
  2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
                             ` (13 preceding siblings ...)
  2017-03-28 11:54           ` [PATCH v6 14/14] doc: adds information related to the AVP PMD Allain Legacy
@ 2017-03-29 10:44           ` Vincent JARDIN
  2017-03-29 11:05             ` Ferruh Yigit
  14 siblings, 1 reply; 172+ messages in thread
From: Vincent JARDIN @ 2017-03-29 10:44 UTC (permalink / raw)
  To: Allain Legacy, ferruh.yigit
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, jerin.jacob, stephen, 3chas3

On 28/03/2017 at 13:53, Allain Legacy wrote:
> This patch series submits an initial version of the AVP PMD from Wind River
> Systems.  The series includes shared header files, driver implementation,
> and changes to documentation files in support of this new driver.  The AVP
> driver is a shared memory based device.  It is intended to be used as a PMD
> within a virtual machine running on a Wind River virtualization platform.
> See: http://www.windriver.com/products/titanium-cloud/
>
> It enables optimized packet throughput without requiring any packet
> processing in qemu. This allowed us to provide our customers with a
> significant performance increase for both DPDK and non-DPDK applications
> in the VM.   Since our AVP implementation supports VM live-migration it
> is viewed as a better alternative to PCI passthrough or PCI SRIOV since
> neither of those support VM live-migration without manual intervention
> or significant performance penalties.
>
> Since the initial implementation of AVP devices, vhost-user has become part
> of the qemu offering with a significant performance increase over the
> original virtio implementation.  However, vhost-user still does not achieve
> the level of performance that the AVP device can provide to our customers
> for DPDK based guests.
>
> A number of our customers have requested that we upstream the driver to
> dpdk.org.

Acked-by: vincent.jardin@6wind.com

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v6 00/14] Wind River Systems AVP PMD
  2017-03-29 10:44           ` [PATCH v6 00/14] Wind River Systems " Vincent JARDIN
@ 2017-03-29 11:05             ` Ferruh Yigit
  0 siblings, 0 replies; 172+ messages in thread
From: Ferruh Yigit @ 2017-03-29 11:05 UTC (permalink / raw)
  To: Vincent JARDIN, Allain Legacy
  Cc: dev, ian.jolliffe, bruce.richardson, john.mcnamara, keith.wiles,
	tim.odriscoll, thomas.monjalon, jerin.jacob, stephen, 3chas3

On 3/29/2017 11:44 AM, Vincent JARDIN wrote:
> Le 28/03/2017 à 13:53, Allain Legacy a écrit :
>> This patch series submits an initial version of the AVP PMD from Wind River
>> Systems.  The series includes shared header files, driver implementation,
>> and changes to documentation files in support of this new driver.  The AVP
>> driver is a shared memory based device.  It is intended to be used as a PMD
>> within a virtual machine running on a Wind River virtualization platform.
>> See: http://www.windriver.com/products/titanium-cloud/
>>
>> It enables optimized packet throughput without requiring any packet
>> processing in qemu. This allowed us to provide our customers with a
>> significant performance increase for both DPDK and non-DPDK applications
>> in the VM.   Since our AVP implementation supports VM live-migration it
>> is viewed as a better alternative to PCI passthrough or PCI SRIOV since
>> neither of those support VM live-migration without manual intervention
>> or significant performance penalties.
>>
>> Since the initial implementation of AVP devices, vhost-user has become part
>> of the qemu offering with a significant performance increase over the
>> original virtio implementation.  However, vhost-user still does not achieve
>> the level of performance that the AVP device can provide to our customers
>> for DPDK based guests.
>>
>> A number of our customers have requested that we upstream the driver to
>> dpdk.org.
> 
> Acked-by: vincent.jardin@6wind.com

Series applied to dpdk-next-net/master, thanks.

^ permalink raw reply	[flat|nested] 172+ messages in thread

* Re: [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back
       [not found]                       ` <20170317093320.GA11116@stefanha-x1.localdomain>
@ 2017-03-30  8:55                         ` Markus Armbruster
  0 siblings, 0 replies; 172+ messages in thread
From: Markus Armbruster @ 2017-03-30  8:55 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Marc-André Lureau, Wiles, Keith, Jason Wang,
	Michael S. Tsirkin, Vincent JARDIN, Stephen Hemminger,
	O'Driscoll, Tim, Legacy, Allain (Wind River),
	Yigit, Ferruh, dev, Jolliffe, Ian (Wind River),
	Thomas Monjalon

Stefan Hajnoczi <stefanha@redhat.com> writes:

> On Fri, Mar 17, 2017 at 09:48:38AM +0100, Thomas Monjalon wrote:
>> We are discussing IVSHMEM, but its support status in QEMU is unclear.
>> This feature is not listed in the MAINTAINERS file of QEMU.
>> Please, QEMU maintainers: what is the future of IVSHMEM?

Red-headed stepchild?

> git-log(1) shows that Marc-André Lureau worked on ivshmem in 2016 but
> it's not under very active development at the moment.  I have CCed him.

Marc-André and I did substantial work to fix ivshmem's worst technical
issues, including memory-corrupting race conditions (unlikely ones, but
those are arguably the worst).  More issues remain.  Have a look at
docs/specs/ivshmem-spec.txt in the QEMU source tree[1].  It's depressing
reading, I'm afraid.

In my opinion, merging ivshmem in 2010 was a mistake.  Some users have
since found it useful (good for them), so we fixed it up for their
benefit.  Nevertheless, I still can't recommend it.

> The vhost-user interface seems to be getting more attention.  It's a way
> to run virtio devices in separate host processes (e.g. userspace network
> switch).
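
[Editor's note: for concreteness, wiring a virtio device to such an
external backend process via vhost-user typically looks like the QEMU
invocation below. Socket path, object IDs, and sizes are illustrative;
the file-backed, shared memory backend is what lets the backend process
map guest RAM directly.]

```shell
# Illustrative vhost-user setup: guest memory is file-backed and shared
# so the external backend (e.g. a DPDK-based vswitch) can map it and
# process virtio rings without per-packet work in QEMU.
qemu-system-x86_64 -m 1024 \
  -object memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=chr0,path=/tmp/vhost-user0.sock \
  -netdev type=vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0
```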
>
> There was a brief discussion about "ivshmem 2.0" recently but I think
> that fizzled out because the use case was narrow: a new and cut-down
> device model for real-time use cases.

Yes.  If you're interested in ivshmem, go read it[2].  At least two good
ideas came up: provide for an ID of the next higher protocol level, and
generic state signalling.

Why do I like these ideas?  My main objection to ivshmem isn't its
technical flaws, but that it's a bad building block (see [3], item 5).
These ideas could make it a less bad building block.

Turning ideas into reality is work.  Jan Kiszka (who started the
discussion) chose not to pursue it, because his requirements don't align
with upstream QEMU's very well, so he'd have to do more extra work than
he can justify.

Bottom line: if you want a better future for ivshmem than the status
quo, you need to put in the work and solve some hard problems.


[1] Link to my public repository, for convenience:
http://repo.or.cz/qemu/armbru.git/blob_plain/HEAD:/docs/specs/ivshmem-spec.txt
[2] https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg02860.html
[3] https://lists.gnu.org/archive/html/qemu-devel/2014-06/msg02968.html


end of thread, other threads:[~2017-03-30  8:55 UTC | newest]

Thread overview: 172+ messages
2017-02-25  1:22 [PATCH 00/16] Wind River Systems AVP PMD Allain Legacy
2017-02-25  1:23 ` [PATCH 01/16] config: adds attributes for the " Allain Legacy
2017-02-25  1:23 ` [PATCH 02/16] net/avp: public header files Allain Legacy
2017-02-25  1:23 ` [PATCH 03/16] maintainers: claim responsibility for AVP PMD Allain Legacy
2017-02-25  1:23 ` [PATCH 04/16] net/avp: add PMD version map file Allain Legacy
2017-02-25  1:23 ` [PATCH 05/16] net/avp: debug log macros Allain Legacy
2017-02-25  1:23 ` [PATCH 06/16] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
2017-02-25  1:23 ` [PATCH 07/16] net/avp: driver registration Allain Legacy
2017-02-25  1:23 ` [PATCH 08/16] net/avp: device initialization Allain Legacy
2017-02-25  1:23 ` [PATCH 09/16] net/avp: device configuration Allain Legacy
2017-02-25  1:23 ` [PATCH 10/16] net/avp: queue setup and release Allain Legacy
2017-02-25  1:23 ` [PATCH 11/16] net/avp: packet receive functions Allain Legacy
2017-02-25  1:23 ` [PATCH 12/16] net/avp: packet transmit functions Allain Legacy
2017-02-25  1:23 ` [PATCH 13/16] net/avp: device statistics operations Allain Legacy
2017-02-25  1:23 ` [PATCH 14/16] net/avp: device promiscuous functions Allain Legacy
2017-02-25  1:23 ` [PATCH 15/16] net/avp: device start and stop operations Allain Legacy
2017-02-25  1:23 ` [PATCH 16/16] doc: adds information related to the AVP PMD Allain Legacy
2017-02-26 19:08 ` [PATCH v2 00/16] Wind River Systems " Allain Legacy
2017-02-26 19:08   ` [PATCH v2 01/15] config: adds attributes for the " Allain Legacy
2017-02-26 19:08   ` [PATCH v2 02/15] net/avp: public header files Allain Legacy
2017-02-28 11:49     ` Jerin Jacob
2017-03-01 13:25       ` Legacy, Allain
2017-02-26 19:08   ` [PATCH v2 03/15] maintainers: claim responsibility for AVP PMD Allain Legacy
2017-02-26 19:08   ` [PATCH v2 04/15] net/avp: add PMD version map file Allain Legacy
2017-02-26 19:08   ` [PATCH v2 05/15] net/avp: debug log macros Allain Legacy
2017-02-26 19:08   ` [PATCH v2 06/15] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
2017-02-26 19:08   ` [PATCH v2 07/15] net/avp: driver registration Allain Legacy
2017-02-27 16:47     ` Stephen Hemminger
2017-02-27 17:10       ` Legacy, Allain
2017-02-27 16:53     ` Stephen Hemminger
2017-02-27 17:09       ` Legacy, Allain
2017-02-26 19:08   ` [PATCH v2 08/15] net/avp: device initialization Allain Legacy
2017-02-28 11:57     ` Jerin Jacob
2017-03-01 13:29       ` Legacy, Allain
2017-02-26 19:08   ` [PATCH v2 09/15] net/avp: device configuration Allain Legacy
2017-02-26 19:08   ` [PATCH v2 10/15] net/avp: queue setup and release Allain Legacy
2017-02-26 19:08   ` [PATCH v2 11/15] net/avp: packet receive functions Allain Legacy
2017-02-27 16:46     ` Stephen Hemminger
2017-02-27 17:06       ` Legacy, Allain
2017-02-28 10:27         ` Bruce Richardson
2017-03-01 13:23           ` Legacy, Allain
2017-03-01 14:14             ` Thomas Monjalon
2017-03-01 14:54               ` Legacy, Allain
2017-03-01 15:10               ` Stephen Hemminger
2017-03-01 15:40                 ` Legacy, Allain
2017-02-26 19:09   ` [PATCH v2 12/15] net/avp: packet transmit functions Allain Legacy
2017-02-26 22:18     ` Legacy, Allain
2017-02-26 19:09   ` [PATCH v2 13/15] net/avp: device promiscuous functions Allain Legacy
2017-02-26 19:09   ` [PATCH v2 14/15] net/avp: device start and stop operations Allain Legacy
2017-02-26 19:09   ` [PATCH v2 15/15] doc: adds information related to the AVP PMD Allain Legacy
2017-02-27 17:04     ` Mcnamara, John
2017-02-27 17:07       ` Legacy, Allain
2017-02-27  8:54   ` [PATCH v2 00/16] Wind River Systems " Vincent JARDIN
2017-02-27 12:15     ` Legacy, Allain
2017-02-27 15:17       ` Wiles, Keith
2017-03-02  0:19   ` [PATCH v3 " Allain Legacy
2017-03-02  0:19     ` [PATCH v3 01/16] config: adds attributes for the " Allain Legacy
2017-03-02  0:19     ` [PATCH v3 02/16] net/avp: public header files Allain Legacy
2017-03-03 14:37       ` Chas Williams
2017-03-03 15:35         ` Legacy, Allain
2017-03-02  0:19     ` [PATCH v3 03/16] maintainers: claim responsibility for AVP PMD Allain Legacy
2017-03-02  0:19     ` [PATCH v3 04/16] net/avp: add PMD version map file Allain Legacy
2017-03-02  0:19     ` [PATCH v3 05/16] net/avp: debug log macros Allain Legacy
2017-03-02  0:19     ` [PATCH v3 06/16] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
2017-03-02  0:19     ` [PATCH v3 07/16] net/avp: driver registration Allain Legacy
2017-03-02  0:20     ` [PATCH v3 08/16] net/avp: device initialization Allain Legacy
2017-03-03 15:04       ` Chas Williams
2017-03-09 14:03         ` Legacy, Allain
2017-03-09 14:48         ` Legacy, Allain
2017-03-02  0:20     ` [PATCH v3 09/16] net/avp: device configuration Allain Legacy
2017-03-02  0:20     ` [PATCH v3 10/16] net/avp: queue setup and release Allain Legacy
2017-03-02  0:20     ` [PATCH v3 11/16] net/avp: packet receive functions Allain Legacy
2017-03-02  0:20     ` [PATCH v3 12/16] net/avp: packet transmit functions Allain Legacy
2017-03-02  0:20     ` [PATCH v3 13/16] net/avp: device statistics operations Allain Legacy
2017-03-02  0:35       ` Stephen Hemminger
2017-03-09 13:48         ` Legacy, Allain
2017-03-02  0:20     ` [PATCH v3 14/16] net/avp: device promiscuous functions Allain Legacy
2017-03-02  0:20     ` [PATCH v3 15/16] net/avp: device start and stop operations Allain Legacy
2017-03-02  0:37       ` Stephen Hemminger
2017-03-09 13:49         ` Legacy, Allain
2017-03-02  0:20     ` [PATCH v3 16/16] doc: adds information related to the AVP PMD Allain Legacy
2017-03-03 16:21       ` Vincent JARDIN
2017-03-13 19:17         ` Legacy, Allain
2017-03-13 19:16     ` [PATCH v4 00/17] Wind River Systems " Allain Legacy
2017-03-13 19:16       ` [PATCH v4 01/17] config: adds attributes for the " Allain Legacy
2017-03-13 19:16       ` [PATCH v4 02/17] net/avp: public header files Allain Legacy
2017-03-13 19:16       ` [PATCH v4 03/17] maintainers: claim responsibility for AVP PMD Allain Legacy
2017-03-13 19:16       ` [PATCH v4 04/17] net/avp: add PMD version map file Allain Legacy
2017-03-16 14:52         ` Ferruh Yigit
2017-03-16 15:33           ` Legacy, Allain
2017-03-13 19:16       ` [PATCH v4 05/17] net/avp: debug log macros Allain Legacy
2017-03-13 19:16       ` [PATCH v4 06/17] drivers/net: adds driver makefiles for AVP PMD Allain Legacy
2017-03-13 19:16       ` [PATCH v4 07/17] net/avp: driver registration Allain Legacy
2017-03-16 14:53         ` Ferruh Yigit
2017-03-16 15:37           ` Legacy, Allain
2017-03-13 19:16       ` [PATCH v4 08/17] net/avp: device initialization Allain Legacy
2017-03-13 19:16       ` [PATCH v4 09/17] net/avp: device configuration Allain Legacy
2017-03-13 19:16       ` [PATCH v4 10/17] net/avp: queue setup and release Allain Legacy
2017-03-13 19:16       ` [PATCH v4 11/17] net/avp: packet receive functions Allain Legacy
2017-03-13 19:16       ` [PATCH v4 12/17] net/avp: packet transmit functions Allain Legacy
2017-03-13 19:16       ` [PATCH v4 13/17] net/avp: device statistics operations Allain Legacy
2017-03-13 19:16       ` [PATCH v4 14/17] net/avp: device promiscuous functions Allain Legacy
2017-03-13 19:16       ` [PATCH v4 15/17] net/avp: device start and stop operations Allain Legacy
2017-03-13 19:16       ` [PATCH v4 16/17] net/avp: migration interrupt handling Allain Legacy
2017-03-13 19:16       ` [PATCH v4 17/17] doc: adds information related to the AVP PMD Allain Legacy
2017-03-16 14:53         ` Ferruh Yigit
2017-03-16 15:37           ` Legacy, Allain
2017-03-14 17:37       ` [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? Vincent JARDIN
2017-03-15  4:10         ` O'Driscoll, Tim
2017-03-15 10:55           ` Thomas Monjalon
2017-03-15 14:02             ` Vincent JARDIN
2017-03-16  3:18               ` O'Driscoll, Tim
2017-03-16  8:52                 ` Francois Ozog
2017-03-16  9:51                   ` Wiles, Keith
2017-03-16 10:32                 ` Chas Williams
2017-03-16 18:09                   ` Francois Ozog
2017-03-15 11:29           ` Ferruh Yigit
2017-03-15 14:08             ` Vincent JARDIN
2017-03-15 18:18               ` Ferruh Yigit
2017-03-15 14:02           ` Vincent JARDIN
2017-03-15 14:02           ` Vincent JARDIN
2017-03-15 20:19             ` Wiles, Keith
2017-03-16 23:17           ` Stephen Hemminger
2017-03-16 23:41             ` [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio? - ivshmem is back Vincent JARDIN
2017-03-17  0:08               ` Wiles, Keith
2017-03-17  0:15                 ` O'Driscoll, Tim
2017-03-17  0:11               ` Wiles, Keith
2017-03-17  0:14                 ` Stephen Hemminger
2017-03-17  0:31                 ` Vincent JARDIN
2017-03-17  0:53                   ` Wiles, Keith
2017-03-17  8:48                     ` Thomas Monjalon
2017-03-17 10:15                       ` Legacy, Allain
2017-03-17 13:52                       ` Michael S. Tsirkin
2017-03-20 22:30                         ` Hobywan Kenoby
2017-03-21 11:06                           ` Thomas Monjalon
     [not found]                       ` <20170317093320.GA11116@stefanha-x1.localdomain>
2017-03-30  8:55                         ` Markus Armbruster
2017-03-23 11:23       ` [PATCH v5 00/14] Wind River Systems AVP PMD Allain Legacy
2017-03-23 11:24         ` [PATCH v5 01/14] drivers/net: adds AVP PMD base files Allain Legacy
2017-03-23 11:24         ` [PATCH v5 02/14] net/avp: public header files Allain Legacy
2017-03-23 11:24         ` [PATCH v5 03/14] net/avp: debug log macros Allain Legacy
2017-03-23 11:24         ` [PATCH v5 04/14] net/avp: driver registration Allain Legacy
2017-03-23 11:24         ` [PATCH v5 05/14] net/avp: device initialization Allain Legacy
2017-03-23 11:24         ` [PATCH v5 06/14] net/avp: device configuration Allain Legacy
2017-03-23 11:24         ` [PATCH v5 07/14] net/avp: queue setup and release Allain Legacy
2017-03-23 11:24         ` [PATCH v5 08/14] net/avp: packet receive functions Allain Legacy
2017-03-23 11:24         ` [PATCH v5 09/14] net/avp: packet transmit functions Allain Legacy
2017-03-23 11:24         ` [PATCH v5 10/14] net/avp: device statistics operations Allain Legacy
2017-03-23 11:24         ` [PATCH v5 11/14] net/avp: device promiscuous functions Allain Legacy
2017-03-23 11:24         ` [PATCH v5 12/14] net/avp: device start and stop operations Allain Legacy
2017-03-23 11:24         ` [PATCH v5 13/14] net/avp: migration interrupt handling Allain Legacy
2017-03-23 11:24         ` [PATCH v5 14/14] doc: adds information related to the AVP PMD Allain Legacy
2017-03-23 14:18         ` [PATCH v5 00/14] Wind River Systems " Ferruh Yigit
2017-03-23 18:28           ` Legacy, Allain
2017-03-23 20:35             ` Vincent Jardin
2017-03-28 11:53         ` [PATCH v6 " Allain Legacy
2017-03-28 11:53           ` [PATCH v6 01/14] drivers/net: adds AVP PMD base files Allain Legacy
2017-03-28 11:53           ` [PATCH v6 02/14] net/avp: public header files Allain Legacy
2017-03-28 11:53           ` [PATCH v6 03/14] net/avp: debug log macros Allain Legacy
2017-03-28 11:53           ` [PATCH v6 04/14] net/avp: driver registration Allain Legacy
2017-03-28 11:54           ` [PATCH v6 05/14] net/avp: device initialization Allain Legacy
2017-03-28 11:54           ` [PATCH v6 06/14] net/avp: device configuration Allain Legacy
2017-03-29 10:28             ` Ferruh Yigit
2017-03-28 11:54           ` [PATCH v6 07/14] net/avp: queue setup and release Allain Legacy
2017-03-28 11:54           ` [PATCH v6 08/14] net/avp: packet receive functions Allain Legacy
2017-03-28 11:54           ` [PATCH v6 09/14] net/avp: packet transmit functions Allain Legacy
2017-03-28 11:54           ` [PATCH v6 10/14] net/avp: device statistics operations Allain Legacy
2017-03-28 11:54           ` [PATCH v6 11/14] net/avp: device promiscuous functions Allain Legacy
2017-03-28 11:54           ` [PATCH v6 12/14] net/avp: device start and stop operations Allain Legacy
2017-03-28 11:54           ` [PATCH v6 13/14] net/avp: migration interrupt handling Allain Legacy
2017-03-28 11:54           ` [PATCH v6 14/14] doc: adds information related to the AVP PMD Allain Legacy
2017-03-29 10:44           ` [PATCH v6 00/14] Wind River Systems " Vincent JARDIN
2017-03-29 11:05             ` Ferruh Yigit
