* [RFC 00/29] latest virtio1.1 prototype
@ 2017-06-21  2:57 Tiwei Bie
  2017-06-21  2:57 ` [RFC 01/29] net/virtio: vring init for 1.1 Tiwei Bie
                   ` (28 more replies)
  0 siblings, 29 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

This patchset rebases Yuanhan's virtio1.1 prototype [1] onto the
current master branch of the dpdk-next-virtio tree. It also contains
Jens' fixes as well as my own fixes and optimizations.

After sending each RFC patchset to the mailing list, I'll also collect
the patches in my GitHub repo [2], so that everyone has a single place
to find the latest working code of the virtio1.1 prototype.

[1] http://dpdk.org/browse/next/dpdk-next-virtio/log/?h=for-testing
[2] https://github.com/btw616/dpdk-virtio1.1

Best regards,
Tiwei Bie

Jens Freimann (1):
  vhost: descriptor length should include vhost header

Tiwei Bie (14):
  net/virtio: avoid touching packet data
  net/virtio: fix virtio1.1 feature negotiation
  net/virtio: the Rx support for virtio1.1 has been added now
  vhost: VIRTIO_NET_F_MRG_RXBUF is not supported for now
  vhost: fix vring addr setup
  net/virtio: free mbuf when need to use
  vhost: don't copy descs during Rx
  vhost: fix mbuf leak
  net/virtio: cleanup txd when free count below threshold
  net/virtio: refill descs for vhost in batch
  vhost: remove dead code
  vhost: various optimizations for Tx
  vhost: make the code more readable
  vhost: update and return descs in batch

Yuanhan Liu (14):
  net/virtio: vring init for 1.1
  net/virtio: implement 1.1 guest Tx
  net/virtio-user: add option to enable 1.1
  vhost: enable 1.1 for testing
  vhost: set desc addr for 1.1
  vhost: implement virtio 1.1 dequeue path
  vhost: mark desc being used
  xxx: batch the desc_hw update?
  xxx: virtio: remove overheads
  vhost: prefetch desc
  add virtio 1.1 test guide
  testpmd: add s-txonly
  net/virtio: implement the Rx code path
  vhost: a rough implementation on enqueue code path

 README-virtio-1.1                                |  50 ++++
 app/test-pmd/Makefile                            |   1 +
 app/test-pmd/s-txonly.c                          | 134 ++++++++++
 app/test-pmd/testpmd.c                           |   1 +
 app/test-pmd/testpmd.h                           |   1 +
 drivers/net/virtio/Makefile                      |   1 +
 drivers/net/virtio/virtio-1.1.h                  |  19 ++
 drivers/net/virtio/virtio_ethdev.c               |  45 ++--
 drivers/net/virtio/virtio_ethdev.h               |   3 +
 drivers/net/virtio/virtio_pci.h                  |   7 +
 drivers/net/virtio/virtio_ring.h                 |  15 +-
 drivers/net/virtio/virtio_rxtx.c                 | 320 +++++++++++------------
 drivers/net/virtio/virtio_rxtx_1.1.c             | 161 ++++++++++++
 drivers/net/virtio/virtio_user/virtio_user_dev.c |  12 +-
 drivers/net/virtio/virtio_user/virtio_user_dev.h |   3 +-
 drivers/net/virtio/virtio_user_ethdev.c          |  14 +-
 drivers/net/virtio/virtqueue.h                   |  11 +
 lib/librte_vhost/vhost.c                         |   4 +
 lib/librte_vhost/vhost.h                         |   7 +-
 lib/librte_vhost/vhost_user.c                    |  16 +-
 lib/librte_vhost/virtio-1.1.h                    |  23 ++
 lib/librte_vhost/virtio_net.c                    | 310 +++++++++++++++++++++-
 22 files changed, 958 insertions(+), 200 deletions(-)
 create mode 100644 README-virtio-1.1
 create mode 100644 app/test-pmd/s-txonly.c
 create mode 100644 drivers/net/virtio/virtio-1.1.h
 create mode 100644 drivers/net/virtio/virtio_rxtx_1.1.c
 create mode 100644 lib/librte_vhost/virtio-1.1.h

-- 
2.7.4

* [RFC 01/29] net/virtio: vring init for 1.1
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 02/29] net/virtio: implement 1.1 guest Tx Tiwei Bie
                   ` (27 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu, Jens Freimann

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Add and initialize descriptor data structures.
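
For illustration, a minimal, self-contained sketch of the ring layout
and the ownership model the rest of the series builds on (field names
mirror the struct vring_desc_1_1 added below; desc_1_1_ring_init and
the explicit flags reset are illustrative only):

#include <stdint.h>

#define DESC_HW 0x0080	/* set: descriptor handed over to the device */

/* Same field layout as struct vring_desc_1_1 below. */
struct desc_1_1 {
	uint64_t addr;	/* guest-physical buffer address */
	uint32_t len;	/* buffer length in bytes */
	uint16_t index;	/* slot index, pre-filled at init time */
	uint16_t flags;	/* DESC_HW plus VRING_DESC_F_* bits */
};

/* There is no avail/used ring: init only records each slot's own index
 * and leaves every descriptor owned by the driver (DESC_HW cleared). */
static void
desc_1_1_ring_init(struct desc_1_1 *ring, uint16_t n)
{
	uint16_t i;

	for (i = 0; i < n; i++) {
		ring[i].index = i;
		ring[i].flags = 0;
	}
}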

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
[rename desc_1_1 to vring_desc_1_1, refactor desc init code]
Signed-off-by: Jens Freimann <jfreiman@redhat.com>
---
 drivers/net/virtio/virtio-1.1.h    | 19 +++++++++++++++++++
 drivers/net/virtio/virtio_ethdev.c | 22 ++++++++++++----------
 drivers/net/virtio/virtio_pci.h    |  7 +++++++
 drivers/net/virtio/virtio_ring.h   | 15 +++++++++++++--
 drivers/net/virtio/virtqueue.h     | 10 ++++++++++
 5 files changed, 61 insertions(+), 12 deletions(-)
 create mode 100644 drivers/net/virtio/virtio-1.1.h

diff --git a/drivers/net/virtio/virtio-1.1.h b/drivers/net/virtio/virtio-1.1.h
new file mode 100644
index 0000000..48cbb18
--- /dev/null
+++ b/drivers/net/virtio/virtio-1.1.h
@@ -0,0 +1,19 @@
+#ifndef __VIRTIO_1_1_H
+#define __VIRTIO_1_1_H
+
+#define VRING_DESC_F_NEXT	1
+#define VRING_DESC_F_WRITE	2
+#define VRING_DESC_F_INDIRECT	4
+
+#define BATCH_NOT_FIRST 0x0010
+#define BATCH_NOT_LAST  0x0020
+#define DESC_HW		0x0080
+
+struct vring_desc_1_1 {
+        uint64_t addr;
+        uint32_t len;
+        uint16_t index;
+        uint16_t flags;
+};
+
+#endif /* __VIRTIO_1_1_H */
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 66c28ac..7c4799a 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -320,19 +320,21 @@ virtio_init_vring(struct virtqueue *vq)
 
 	PMD_INIT_FUNC_TRACE();
 
-	/*
-	 * Reinitialise since virtio port might have been stopped and restarted
-	 */
 	memset(ring_mem, 0, vq->vq_ring_size);
-	vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
-	vq->vq_used_cons_idx = 0;
-	vq->vq_desc_head_idx = 0;
-	vq->vq_avail_idx = 0;
-	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+	vring_init(vq->hw, vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+
 	vq->vq_free_cnt = vq->vq_nentries;
 	memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+	vq->vq_used_cons_idx = 0;
+	vq->vq_avail_idx     = 0;
+	if (vtpci_version_1_1(vq->hw)) {
+		vring_desc_init_1_1(vr, size);
+	} else {
+		vq->vq_desc_head_idx = 0;
+		vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
 
-	vring_desc_init(vr->desc, size);
+		vring_desc_init(vr->desc, size);
+	}
 
 	/*
 	 * Disable device(host) interrupting guest
@@ -407,7 +409,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 	/*
 	 * Reserve a memzone for vring elements
 	 */
-	size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
+	size = vring_size(hw, vq_size, VIRTIO_PCI_VRING_ALIGN);
 	vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
 	PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
 		     size, vq->vq_ring_size);
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 18caebd..ec74009 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -140,6 +140,7 @@ struct virtnet_ctl;
 
 #define VIRTIO_F_VERSION_1		32
 #define VIRTIO_F_IOMMU_PLATFORM	33
+#define VIRTIO_F_VERSION_1_1		34
 
 /*
  * Some VirtIO feature bits (currently bits 28 through 31) are
@@ -318,6 +319,12 @@ vtpci_with_feature(struct virtio_hw *hw, uint64_t bit)
 	return (hw->guest_features & (1ULL << bit)) != 0;
 }
 
+static inline int
+vtpci_version_1_1(struct virtio_hw *hw)
+{
+	return vtpci_with_feature(hw, VIRTIO_F_VERSION_1_1);
+}
+
 /*
  * Function declaration from virtio_pci.c
  */
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index fcecc16..a991092 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -38,6 +38,8 @@
 
 #include <rte_common.h>
 
+#include "virtio-1.1.h"
+
 /* This marks a buffer as continuing via the next field. */
 #define VRING_DESC_F_NEXT       1
 /* This marks a buffer as write-only (otherwise read-only). */
@@ -88,6 +90,7 @@ struct vring {
 	struct vring_desc  *desc;
 	struct vring_avail *avail;
 	struct vring_used  *used;
+	struct vring_desc_1_1 *desc_1_1;
 };
 
 /* The standard layout for the ring is a continuous chunk of memory which
@@ -124,10 +127,13 @@ struct vring {
 #define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
 
 static inline size_t
-vring_size(unsigned int num, unsigned long align)
+vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
 {
 	size_t size;
 
+	if (vtpci_version_1_1(hw))
+		return num * sizeof(struct vring_desc_1_1);
+
 	size = num * sizeof(struct vring_desc);
 	size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
 	size = RTE_ALIGN_CEIL(size, align);
@@ -137,10 +143,15 @@ vring_size(unsigned int num, unsigned long align)
 }
 
 static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
+vring_init(struct virtio_hw *hw, struct vring *vr, unsigned int num, uint8_t *p,
 	unsigned long align)
 {
 	vr->num = num;
+	if (vtpci_version_1_1(hw)) {
+		vr->desc_1_1 = (struct vring_desc_1_1 *)p;
+		return;
+	}
+
 	vr->desc = (struct vring_desc *) p;
 	vr->avail = (struct vring_avail *) (p +
 		num * sizeof(struct vring_desc));
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 2e12086..91d2db7 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -266,6 +266,16 @@ struct virtio_tx_region {
 			   __attribute__((__aligned__(16)));
 };
 
+static inline void
+vring_desc_init_1_1(struct vring *vr, int n)
+{
+	int i;
+	for (i = 0; i < n; i++) {
+		struct vring_desc_1_1 *desc = &vr->desc_1_1[i];
+		desc->index = i;
+	}
+}
+
 /* Chain all the descriptors in the ring with an END */
 static inline void
 vring_desc_init(struct vring_desc *dp, uint16_t n)
-- 
2.7.4

* [RFC 02/29] net/virtio: implement 1.1 guest Tx
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
  2017-06-21  2:57 ` [RFC 01/29] net/virtio: vring init for 1.1 Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 03/29] net/virtio-user: add option to enable 1.1 Tiwei Bie
                   ` (26 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Build-tested only so far.
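
For reference, the core of the Tx scheme implemented below, condensed
into a hedged, self-contained sketch (single data segment only; names
such as xmit_one are illustrative, and the real code also handles mbuf
chains, free-count accounting and the reserved header region):

#include <stdint.h>

#define DESC_HW			0x0080
#define VRING_DESC_F_NEXT	0x0001

struct desc_1_1 {
	uint64_t addr;
	uint32_t len;
	uint16_t index;
	uint16_t flags;
};

/* One header descriptor plus one data descriptor.  DESC_HW is written
 * on the head last, behind a write barrier, so the host never observes
 * a partially built chain. */
static void
xmit_one(struct desc_1_1 *ring, uint16_t *avail_idx, uint16_t ring_size,
	 uint64_t hdr_addr, uint32_t hdr_len,
	 uint64_t data_addr, uint32_t data_len)
{
	uint16_t head = (*avail_idx)++ & (ring_size - 1);
	uint16_t idx  = (*avail_idx)++ & (ring_size - 1);

	ring[head].addr  = hdr_addr;
	ring[head].len   = hdr_len;
	ring[head].flags = VRING_DESC_F_NEXT;		/* DESC_HW not set yet */

	ring[idx].addr  = data_addr;
	ring[idx].len   = data_len;
	ring[idx].flags = DESC_HW;			/* last segment: no NEXT */

	__atomic_thread_fence(__ATOMIC_RELEASE);	/* like rte_smp_wmb() */
	ring[head].flags |= DESC_HW;			/* publish the whole chain */
}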

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/Makefile          |   1 +
 drivers/net/virtio/virtio_ethdev.c   |  24 ++++--
 drivers/net/virtio/virtio_ethdev.h   |   3 +
 drivers/net/virtio/virtio_rxtx.c     |   3 +
 drivers/net/virtio/virtio_rxtx_1.1.c | 159 +++++++++++++++++++++++++++++++++++
 5 files changed, 183 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/virtio/virtio_rxtx_1.1.c

diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
index b21b878..4c4ff42 100644
--- a/drivers/net/virtio/Makefile
+++ b/drivers/net/virtio/Makefile
@@ -49,6 +49,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtqueue.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_pci.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_1.1.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple.c
 
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 7c4799a..35ce07d 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -334,12 +334,12 @@ virtio_init_vring(struct virtqueue *vq)
 		vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
 
 		vring_desc_init(vr->desc, size);
-	}
 
-	/*
-	 * Disable device(host) interrupting guest
-	 */
-	virtqueue_disable_intr(vq);
+		/*
+		 * Disable device(host) interrupting guest
+		 */
+		virtqueue_disable_intr(vq);
+	}
 }
 
 static int
@@ -625,7 +625,8 @@ virtio_dev_close(struct rte_eth_dev *dev)
 	}
 
 	vtpci_reset(hw);
-	virtio_dev_free_mbufs(dev);
+	if (!vtpci_version_1_1(hw))
+		virtio_dev_free_mbufs(dev);
 	virtio_free_queues(hw);
 }
 
@@ -1535,7 +1536,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(RTE_PKTMBUF_HEADROOM < sizeof(struct virtio_net_hdr_mrg_rxbuf));
 
 	eth_dev->dev_ops = &virtio_eth_dev_ops;
-	eth_dev->tx_pkt_burst = &virtio_xmit_pkts;
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		if (!hw->virtio_user_dev) {
@@ -1579,6 +1579,12 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 	if (ret < 0)
 		return ret;
 
+	/* FIXME: as second process? */
+	if (vtpci_version_1_1(hw))
+		eth_dev->tx_pkt_burst = &virtio_xmit_pkts_1_1;
+	else
+		eth_dev->tx_pkt_burst = &virtio_xmit_pkts;
+
 	/* Setup interrupt callback  */
 	if (eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
 		rte_intr_callback_register(eth_dev->intr_handle,
@@ -1750,6 +1756,10 @@ virtio_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	/*no rx support for virtio 1.1 yet*/
+	if (vtpci_version_1_1(hw))
+		return 0;
+
 	/*Notify the backend
 	 *Otherwise the tap backend might already stop its queue due to fullness.
 	 *vhost backend will have no chance to be waked up
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index c3413c6..deed34e 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -69,6 +69,7 @@
 	 1u << VIRTIO_NET_F_MTU	| \
 	 1u << VIRTIO_RING_F_INDIRECT_DESC |    \
 	 1ULL << VIRTIO_F_VERSION_1       |	\
+	 1ULL << VIRTIO_F_VERSION_1_1     |	\
 	 1ULL << VIRTIO_F_IOMMU_PLATFORM)
 
 #define VIRTIO_PMD_SUPPORTED_GUEST_FEATURES	\
@@ -104,6 +105,8 @@ uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
+uint16_t virtio_xmit_pkts_1_1(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
 
 uint16_t virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index fbc96df..e697192 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -427,6 +427,9 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
+	if (vtpci_version_1_1(hw))
+		return 0;
+
 	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
 		nb_desc = vq->vq_nentries;
 	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
diff --git a/drivers/net/virtio/virtio_rxtx_1.1.c b/drivers/net/virtio/virtio_rxtx_1.1.c
new file mode 100644
index 0000000..e47a346
--- /dev/null
+++ b/drivers/net/virtio/virtio_rxtx_1.1.c
@@ -0,0 +1,159 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include <rte_cycles.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_branch_prediction.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+#include <rte_errno.h>
+#include <rte_byteorder.h>
+#include <rte_cpuflags.h>
+#include <rte_net.h>
+#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
+
+#include "virtio_logs.h"
+#include "virtio_ethdev.h"
+#include "virtio_pci.h"
+#include "virtqueue.h"
+#include "virtio_rxtx.h"
+
+/* Cleanup from completed transmits. */
+static void
+virtio_xmit_cleanup(struct virtqueue *vq)
+{
+	uint16_t idx;
+	uint16_t size = vq->vq_nentries;
+	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
+
+	idx = vq->vq_used_cons_idx & (size - 1);
+	while ((desc[idx].flags & DESC_HW) == 0) {
+		struct vq_desc_extra *dxp;
+
+		dxp = &vq->vq_descx[idx];
+		if (dxp->cookie != NULL) {
+			rte_pktmbuf_free(dxp->cookie);
+			dxp->cookie = NULL;
+		}
+
+		idx = (++vq->vq_used_cons_idx) & (size - 1);
+		vq->vq_free_cnt++;
+
+		if (vq->vq_free_cnt >= size)
+			break;
+	}
+}
+
+static inline void
+virtio_xmit(struct virtnet_tx *txvq, struct rte_mbuf *mbuf)
+{
+	struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
+	struct virtqueue *vq = txvq->vq;
+	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
+	uint16_t idx;
+	uint16_t head_idx = (vq->vq_avail_idx++) & (vq->vq_nentries - 1);
+
+	idx = head_idx;
+	vq->vq_free_cnt -= mbuf->nb_segs + 1;
+	vq->vq_descx[idx].cookie = mbuf;
+
+	desc[idx].addr  = txvq->virtio_net_hdr_mem +
+			  RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+	desc[idx].len   = vq->hw->vtnet_hdr_size;
+	desc[idx].flags = VRING_DESC_F_NEXT;
+
+	do {
+		idx = (vq->vq_avail_idx++) & (vq->vq_nentries - 1);
+		desc[idx].addr  = VIRTIO_MBUF_DATA_DMA_ADDR(mbuf, vq);
+		desc[idx].len   = mbuf->data_len;
+		desc[idx].flags = DESC_HW | VRING_DESC_F_NEXT;
+	} while ((mbuf = mbuf->next) != NULL);
+
+	desc[idx].flags &= ~VRING_DESC_F_NEXT;
+
+	/*
+	 * update the head last, so that when the host sees this flag
+	 * set, it knows all the others in the same chain are also set
+	 */
+	rte_smp_wmb();
+	desc[head_idx].flags |= DESC_HW;
+}
+
+uint16_t
+virtio_xmit_pkts_1_1(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct virtnet_tx *txvq = tx_queue;
+	struct virtqueue *vq = txvq->vq;
+	uint16_t i;
+
+	if (unlikely(nb_pkts < 1))
+		return nb_pkts;
+
+	PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
+
+	for (i = 0; i < nb_pkts; i++) {
+		struct rte_mbuf *txm = tx_pkts[i];
+
+		if (unlikely(txm->nb_segs + 1 > vq->vq_free_cnt)) {
+			virtio_xmit_cleanup(vq);
+
+			if (unlikely(txm->nb_segs + 1 > vq->vq_free_cnt)) {
+				PMD_TX_LOG(ERR,
+					   "No free tx descriptors to transmit");
+				break;
+			}
+		}
+
+		virtio_xmit(txvq, txm);
+		txvq->stats.bytes += txm->pkt_len;
+	}
+
+	txvq->stats.packets += i;
+	txvq->stats.errors  += nb_pkts - i;
+
+	return i;
+}
-- 
2.7.4

* [RFC 03/29] net/virtio-user: add option to enable 1.1
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
  2017-06-21  2:57 ` [RFC 01/29] net/virtio: vring init for 1.1 Tiwei Bie
  2017-06-21  2:57 ` [RFC 02/29] net/virtio: implement 1.1 guest Tx Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 04/29] vhost: enable 1.1 for testing Tiwei Bie
                   ` (25 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
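
Usage example (hedged; the devargs name matches the option added below,
and the full command line is taken from the test guide added later in
this series):

  --vdev=net_virtio_user0,mac=52:54:00:00:00:15,path=/tmp/vhost-net,version_1_1=1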

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_user/virtio_user_dev.c |  9 ++++++++-
 drivers/net/virtio/virtio_user/virtio_user_dev.h |  3 ++-
 drivers/net/virtio/virtio_user_ethdev.c          | 14 +++++++++++++-
 3 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 450404b..3ff6a05 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -337,7 +337,8 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
 
 int
 virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
-		     int cq, int queue_size, const char *mac, char **ifname)
+		     int cq, int queue_size, const char *mac, char **ifname,
+		     int version_1_1)
 {
 	snprintf(dev->path, PATH_MAX, "%s", path);
 	dev->max_queue_pairs = queues;
@@ -365,6 +366,12 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 		PMD_INIT_LOG(ERR, "get_features failed: %s", strerror(errno));
 		return -1;
 	}
+
+	if (version_1_1)
+		dev->features |= (1ull << VIRTIO_F_VERSION_1_1);
+	else
+		dev->features &= ~(1ull << VIRTIO_F_VERSION_1_1);
+
 	if (dev->mac_specified)
 		dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
 
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index 8361b6b..76fa17f 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -71,7 +71,8 @@ int is_vhost_user_by_type(const char *path);
 int virtio_user_start_device(struct virtio_user_dev *dev);
 int virtio_user_stop_device(struct virtio_user_dev *dev);
 int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
-			 int cq, int queue_size, const char *mac, char **ifname);
+			 int cq, int queue_size, const char *mac, char **ifname,
+			 int version_1_1);
 void virtio_user_dev_uninit(struct virtio_user_dev *dev);
 void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
 #endif
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 280406c..74c7c60 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -300,6 +300,8 @@ static const char *valid_args[] = {
 	VIRTIO_USER_ARG_QUEUE_SIZE,
 #define VIRTIO_USER_ARG_INTERFACE_NAME "iface"
 	VIRTIO_USER_ARG_INTERFACE_NAME,
+#define VIRTIO_USER_ARG_VERSION_1_1     "version_1_1"
+	VIRTIO_USER_ARG_VERSION_1_1,
 	NULL
 };
 
@@ -404,6 +406,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 	char *ifname = NULL;
 	char *mac_addr = NULL;
 	int ret = -1;
+	uint64_t version_1_1 = 0;
 
 	kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), valid_args);
 	if (!kvlist) {
@@ -478,6 +481,15 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 		cq = 1;
 	}
 
+	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_VERSION_1_1) == 1) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_VERSION_1_1,
+				       &get_integer_arg, &version_1_1) < 0) {
+			PMD_INIT_LOG(ERR, "error to parse %s",
+				     VIRTIO_USER_ARG_VERSION_1_1);
+			goto end;
+		}
+	}
+
 	if (queues > 1 && cq == 0) {
 		PMD_INIT_LOG(ERR, "multi-q requires ctrl-q");
 		goto end;
@@ -499,7 +511,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 
 		hw = eth_dev->data->dev_private;
 		if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
-				 queue_size, mac_addr, &ifname) < 0) {
+				 queue_size, mac_addr, &ifname, version_1_1) < 0) {
 			PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
 			virtio_user_eth_dev_free(eth_dev);
 			goto end;
-- 
2.7.4

* [RFC 04/29] vhost: enable 1.1 for testing
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (2 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 03/29] net/virtio-user: add option to enable 1.1 Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 05/29] vhost: set desc addr for 1.1 Tiwei Bie
                   ` (24 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Just turn the feature bit on; no actual work has been done yet. This
only makes sure the virtio PMD can have this feature enabled, for
testing purposes.
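
For reference, once both sides advertise the bit, detecting it after
negotiation is just a bit test (a hedged sketch; the value 34 matches
the VIRTIO_F_VERSION_1_1 definition added in this patch):

#include <stdint.h>
#include <stdbool.h>

#define VIRTIO_F_VERSION_1_1	34

static bool
has_version_1_1(uint64_t negotiated_features)
{
	return (negotiated_features & (1ULL << VIRTIO_F_VERSION_1_1)) != 0;
}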

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 lib/librte_vhost/vhost.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index ddd8a9c..208b2eb 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -138,6 +138,9 @@ struct vhost_virtqueue {
 #ifndef VIRTIO_F_VERSION_1
  #define VIRTIO_F_VERSION_1 32
 #endif
+#ifndef VIRTIO_F_VERSION_1_1
+ #define VIRTIO_F_VERSION_1_1 34
+#endif
 
 #define VHOST_USER_F_PROTOCOL_FEATURES	30
 
@@ -148,6 +151,7 @@ struct vhost_virtqueue {
 				(1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE) | \
 				(1ULL << VIRTIO_NET_F_MQ)      | \
 				(1ULL << VIRTIO_F_VERSION_1)   | \
+				(1ULL << VIRTIO_F_VERSION_1_1) | \
 				(1ULL << VHOST_F_LOG_ALL)      | \
 				(1ULL << VHOST_USER_F_PROTOCOL_FEATURES) | \
 				(1ULL << VIRTIO_NET_F_HOST_TSO4) | \
-- 
2.7.4

* [RFC 05/29] vhost: set desc addr for 1.1
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (3 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 04/29] vhost: enable 1.1 for testing Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 06/29] vhost: implement virtio 1.1 dequeue path Tiwei Bie
                   ` (23 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 lib/librte_vhost/vhost.h      | 1 +
 lib/librte_vhost/vhost_user.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 208b2eb..f3b7ad5 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -86,6 +86,7 @@ TAILQ_HEAD(zcopy_mbuf_list, zcopy_mbuf);
  */
 struct vhost_virtqueue {
 	struct vring_desc	*desc;
+	struct vring_desc_1_1   *desc_1_1;
 	struct vring_avail	*avail;
 	struct vring_used	*used;
 	uint32_t		size;
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index e90b44c..3a2de79 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -351,6 +351,7 @@ vhost_user_set_vring_addr(struct virtio_net *dev, VhostUserMsg *msg)
 			dev->vid);
 		return -1;
 	}
+	vq->desc_1_1 = (struct vring_desc_1_1 *)vq->desc;
 
 	dev = numa_realloc(dev, msg->payload.addr.index);
 	vq = dev->virtqueue[msg->payload.addr.index];
-- 
2.7.4

* [RFC 06/29] vhost: implement virtio 1.1 dequeue path
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (4 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 05/29] vhost: set desc addr for 1.1 Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 07/29] vhost: mark desc being used Tiwei Bie
                   ` (22 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu, Jens Freimann

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Build-tested only; not functionally tested yet.
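
In outline, the new dequeue path works like this hedged sketch
(single-descriptor entries only; process_desc stands in for the
copy-to-mbuf logic, and the real code below also handles descriptor
chains, the virtio-net header and zero copy):

#include <stdint.h>

#define DESC_HW 0x0080

struct desc_1_1 {
	uint64_t addr;
	uint32_t len;
	uint16_t index;
	uint16_t flags;
};

/* Consume descriptors the driver handed over (DESC_HW set) and return
 * ownership by clearing the flags.  Returns the number consumed. */
static uint16_t
dequeue_ring(struct desc_1_1 *ring, uint16_t ring_size,
	     uint16_t *last_used_idx,
	     void (*process_desc)(const struct desc_1_1 *))
{
	uint16_t n = 0;

	for (;;) {
		uint16_t idx = *last_used_idx & (ring_size - 1);

		if (!(ring[idx].flags & DESC_HW))
			break;			/* nothing more from the driver */

		process_desc(&ring[idx]);

		/* finish reading the buffer before handing the slot back */
		__atomic_thread_fence(__ATOMIC_RELEASE);
		ring[idx].flags = 0;
		(*last_used_idx)++;
		n++;
	}

	return n;
}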

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Jens Freimann <jfreiman@redhat.com>
---
 lib/librte_vhost/virtio-1.1.h |  23 ++++++
 lib/librte_vhost/virtio_net.c | 181 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 204 insertions(+)
 create mode 100644 lib/librte_vhost/virtio-1.1.h

diff --git a/lib/librte_vhost/virtio-1.1.h b/lib/librte_vhost/virtio-1.1.h
new file mode 100644
index 0000000..4241d0a
--- /dev/null
+++ b/lib/librte_vhost/virtio-1.1.h
@@ -0,0 +1,23 @@
+#ifndef __VIRTIO_1_1_H
+#define __VIRTIO_1_1_H
+
+#define __le64	uint64_t
+#define __le32	uint32_t
+#define __le16	uint16_t
+
+#define VRING_DESC_F_NEXT	1
+#define VRING_DESC_F_WRITE	2
+#define VRING_DESC_F_INDIRECT	4
+
+#define BATCH_NOT_FIRST 0x0010
+#define BATCH_NOT_LAST  0x0020
+#define DESC_HW		0x0080
+
+struct vring_desc_1_1 {
+        __le64 addr;
+        __le32 len;
+        __le16 index;
+        __le16 flags;
+};
+
+#endif /* __VIRTIO_1_1_H */
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 48219e0..fd6f200 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -46,6 +46,7 @@
 #include <rte_arp.h>
 
 #include "vhost.h"
+#include "virtio-1.1.h"
 
 #define MAX_PKT_BURST 32
 
@@ -973,6 +974,183 @@ mbuf_is_consumed(struct rte_mbuf *m)
 	return true;
 }
 
+static inline uint16_t
+dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
+	     struct rte_mempool *mbuf_pool, struct rte_mbuf *m,
+	     struct vring_desc_1_1 *descs)
+{
+	struct vring_desc_1_1 *desc;
+	uint64_t desc_addr;
+	uint32_t desc_avail, desc_offset;
+	uint32_t mbuf_avail, mbuf_offset;
+	uint32_t cpy_len;
+	struct rte_mbuf *cur = m, *prev = m;
+	struct virtio_net_hdr *hdr = NULL;
+	uint16_t head_idx = vq->last_used_idx;
+
+	desc = &descs[(head_idx++) & (vq->size - 1)];
+	if (unlikely((desc->len < dev->vhost_hlen)) ||
+			(desc->flags & VRING_DESC_F_INDIRECT))
+		return -1;
+
+	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	if (unlikely(!desc_addr))
+		return -1;
+
+	if (virtio_net_with_host_offload(dev)) {
+		hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
+		rte_prefetch0(hdr);
+	}
+
+	/*
+	 * A virtio driver normally uses at least 2 desc buffers
+	 * for Tx: the first for storing the header, and others
+	 * for storing the data.
+	 */
+	if (likely((desc->len == dev->vhost_hlen) &&
+		   (desc->flags & VRING_DESC_F_NEXT) != 0)) {
+		desc->flags = 0;
+
+		desc = &descs[(head_idx++) & (vq->size - 1)];
+		if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
+			return -1;
+
+		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		if (unlikely(!desc_addr))
+			return -1;
+
+		desc_offset = 0;
+		desc_avail  = desc->len;
+	} else {
+		desc_avail  = desc->len - dev->vhost_hlen;
+		desc_offset = dev->vhost_hlen;
+	}
+
+	rte_prefetch0((void *)(uintptr_t)(desc_addr + desc_offset));
+
+	PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset), desc_avail, 0);
+
+	mbuf_offset = 0;
+	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
+	while (1) {
+		uint64_t hpa;
+
+		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+
+		/*
+		 * A desc buf might cross two host physical pages that are
+		 * not contiguous. In such a case (gpa_to_hpa returns 0), data
+		 * will be copied even though zero copy is enabled.
+		 */
+		if (unlikely(dev->dequeue_zero_copy && (hpa = gpa_to_hpa(dev,
+					desc->addr + desc_offset, cpy_len)))) {
+			cur->data_len = cpy_len;
+			cur->data_off = 0;
+			cur->buf_addr = (void *)(uintptr_t)desc_addr;
+			cur->buf_physaddr = hpa;
+
+			/*
+			 * In zero copy mode, one mbuf can only reference data
+			 * for one or partial of one desc buff.
+			 */
+			mbuf_avail = cpy_len;
+		} else {
+			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
+							   mbuf_offset),
+				(void *)((uintptr_t)(desc_addr + desc_offset)),
+				cpy_len);
+		}
+
+		mbuf_avail  -= cpy_len;
+		mbuf_offset += cpy_len;
+		desc_avail  -= cpy_len;
+		desc_offset += cpy_len;
+
+		/* This desc reaches to its end, get the next one */
+		if (desc_avail == 0) {
+			desc->flags = 0;
+
+			if ((desc->flags & VRING_DESC_F_NEXT) == 0)
+				break;
+
+			desc = &descs[(head_idx++) & (vq->size - 1)];
+			if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
+				return -1;
+
+			desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+			if (unlikely(!desc_addr))
+				return -1;
+
+			rte_prefetch0((void *)(uintptr_t)desc_addr);
+
+			desc_offset = 0;
+			desc_avail  = desc->len;
+
+			PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
+		}
+
+		/*
+		 * This mbuf reaches to its end, get a new one
+		 * to hold more data.
+		 */
+		if (mbuf_avail == 0) {
+			cur = rte_pktmbuf_alloc(mbuf_pool);
+			if (unlikely(cur == NULL)) {
+				RTE_LOG(ERR, VHOST_DATA, "Failed to "
+					"allocate memory for mbuf.\n");
+				return -1;
+			}
+
+			prev->next = cur;
+			prev->data_len = mbuf_offset;
+			m->nb_segs += 1;
+			m->pkt_len += mbuf_offset;
+			prev = cur;
+
+			mbuf_offset = 0;
+			mbuf_avail  = cur->buf_len - RTE_PKTMBUF_HEADROOM;
+		}
+	}
+	desc->flags = 0;
+
+	prev->data_len = mbuf_offset;
+	m->pkt_len    += mbuf_offset;
+
+	if (hdr)
+		vhost_dequeue_offload(hdr, m);
+
+	vq->last_used_idx = head_idx;
+
+	return 0;
+}
+
+static inline uint16_t
+vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
+			struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
+			uint16_t count)
+{
+	uint16_t i;
+	uint16_t idx;
+	struct vring_desc_1_1 *desc = vq->desc_1_1;
+
+	for (i = 0; i < count; i++) {
+		idx = vq->last_used_idx & (vq->size - 1);
+		if (!(desc[idx].flags & DESC_HW))
+			break;
+
+		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
+		if (unlikely(pkts[i] == NULL)) {
+			RTE_LOG(ERR, VHOST_DATA,
+				"Failed to allocate memory for mbuf.\n");
+			break;
+		}
+
+		dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc);
+	}
+
+	return i;
+}
+
 uint16_t
 rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
@@ -1000,6 +1178,9 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	if (unlikely(vq->enabled == 0))
 		return 0;
 
+	if (dev->features & (1ULL << VIRTIO_F_VERSION_1_1))
+		return vhost_dequeue_burst_1_1(dev, vq, mbuf_pool, pkts, count);
+
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
 		int nr_updated = 0;
-- 
2.7.4

* [RFC 07/29] vhost: mark desc being used
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (5 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 06/29] vhost: implement virtio 1.1 dequeue path Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 08/29] xxx: batch the desc_hw update? Tiwei Bie
                   ` (21 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 lib/librte_vhost/virtio_net.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index fd6f200..df88e31 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1009,6 +1009,7 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	 */
 	if (likely((desc->len == dev->vhost_hlen) &&
 		   (desc->flags & VRING_DESC_F_NEXT) != 0)) {
+		rte_smp_wmb();
 		desc->flags = 0;
 
 		desc = &descs[(head_idx++) & (vq->size - 1)];
@@ -1068,11 +1069,11 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 		/* This desc reaches to its end, get the next one */
 		if (desc_avail == 0) {
+			if ((desc->flags & VRING_DESC_F_NEXT) == 0)
+				break;
+
+			rte_smp_wmb();
 			desc->flags = 0;
-
-			if ((desc->flags & VRING_DESC_F_NEXT) == 0)
-				break;
-
 			desc = &descs[(head_idx++) & (vq->size - 1)];
 			if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
 				return -1;
@@ -1111,6 +1112,7 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			mbuf_avail  = cur->buf_len - RTE_PKTMBUF_HEADROOM;
 		}
 	}
+	rte_smp_wmb();
 	desc->flags = 0;
 
 	prev->data_len = mbuf_offset;
-- 
2.7.4

* [RFC 08/29] xxx: batch the desc_hw update?
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (6 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 07/29] vhost: mark desc being used Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 09/29] xxx: virtio: remove overheads Tiwei Bie
                   ` (20 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_rxtx_1.1.c | 18 ++++++++++--------
 lib/librte_vhost/virtio_net.c        | 17 ++++++++++-------
 2 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx_1.1.c b/drivers/net/virtio/virtio_rxtx_1.1.c
index e47a346..05f9dc7 100644
--- a/drivers/net/virtio/virtio_rxtx_1.1.c
+++ b/drivers/net/virtio/virtio_rxtx_1.1.c
@@ -89,7 +89,7 @@ virtio_xmit_cleanup(struct virtqueue *vq)
 }
 
 static inline void
-virtio_xmit(struct virtnet_tx *txvq, struct rte_mbuf *mbuf)
+virtio_xmit(struct virtnet_tx *txvq, struct rte_mbuf *mbuf, int first_mbuf)
 {
 	struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
 	struct virtqueue *vq = txvq->vq;
@@ -105,6 +105,8 @@ virtio_xmit(struct virtnet_tx *txvq, struct rte_mbuf *mbuf)
 			  RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
 	desc[idx].len   = vq->hw->vtnet_hdr_size;
 	desc[idx].flags = VRING_DESC_F_NEXT;
+	if (!first_mbuf)
+		desc[idx].flags |= DESC_HW;
 
 	do {
 		idx = (vq->vq_avail_idx++) & (vq->vq_nentries - 1);
@@ -115,12 +117,6 @@ virtio_xmit(struct virtnet_tx *txvq, struct rte_mbuf *mbuf)
 
 	desc[idx].flags &= ~VRING_DESC_F_NEXT;
 
-	/*
-	 * update the head last, so that when the host sees this flag
-	 * set, it knows all the others in the same chain are also set
-	 */
-	rte_smp_wmb();
-	desc[head_idx].flags |= DESC_HW;
 }
 
 uint16_t
@@ -129,6 +125,7 @@ virtio_xmit_pkts_1_1(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
 	struct virtnet_tx *txvq = tx_queue;
 	struct virtqueue *vq = txvq->vq;
 	uint16_t i;
+	uint16_t head_idx = vq->vq_avail_idx;
 
 	if (unlikely(nb_pkts < 1))
 		return nb_pkts;
@@ -148,10 +145,15 @@ virtio_xmit_pkts_1_1(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
 			}
 		}
 
-		virtio_xmit(txvq, txm);
+		virtio_xmit(txvq, txm, i == 0);
 		txvq->stats.bytes += txm->pkt_len;
 	}
 
+	if (likely(i)) {
+		rte_smp_wmb();
+		vq->vq_ring.desc_1_1[head_idx & (vq->vq_nentries - 1)].flags |= DESC_HW;
+	}
+
 	txvq->stats.packets += i;
 	txvq->stats.errors  += nb_pkts - i;
 
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index df88e31..c9e466f 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1009,9 +1009,6 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	 */
 	if (likely((desc->len == dev->vhost_hlen) &&
 		   (desc->flags & VRING_DESC_F_NEXT) != 0)) {
-		rte_smp_wmb();
-		desc->flags = 0;
-
 		desc = &descs[(head_idx++) & (vq->size - 1)];
 		if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
 			return -1;
@@ -1072,8 +1069,6 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			if ((desc->flags & VRING_DESC_F_NEXT) == 0)
 				break;
 
-			rte_smp_wmb();
-			desc->flags = 0;
 			desc = &descs[(head_idx++) & (vq->size - 1)];
 			if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
 				return -1;
@@ -1112,8 +1107,6 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			mbuf_avail  = cur->buf_len - RTE_PKTMBUF_HEADROOM;
 		}
 	}
-	rte_smp_wmb();
-	desc->flags = 0;
 
 	prev->data_len = mbuf_offset;
 	m->pkt_len    += mbuf_offset;
@@ -1134,7 +1127,9 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint16_t i;
 	uint16_t idx;
 	struct vring_desc_1_1 *desc = vq->desc_1_1;
+	uint16_t head_idx = vq->last_used_idx;
 
+	count = RTE_MIN(MAX_PKT_BURST, count);
 	for (i = 0; i < count; i++) {
 		idx = vq->last_used_idx & (vq->size - 1);
 		if (!(desc[idx].flags & DESC_HW))
@@ -1150,6 +1145,14 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc);
 	}
 
+	if (likely(i)) {
+		for (idx = 1; idx < (uint16_t)(vq->last_used_idx - head_idx); idx++) {
+			desc[(idx + head_idx) & (vq->size - 1)].flags = 0;
+		}
+		rte_smp_wmb();
+		desc[head_idx & (vq->size - 1)].flags = 0;
+	}
+
 	return i;
 }
 
-- 
2.7.4

* [RFC 09/29] xxx: virtio: remove overheads
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (7 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 08/29] xxx: batch the desc_hw update? Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 10/29] vhost: prefetch desc Tiwei Bie
                   ` (19 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Remove these overheads for a better performance comparison.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_rxtx.c | 190 +++------------------------------------
 1 file changed, 13 insertions(+), 177 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index e697192..c49ac0d 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -218,76 +218,16 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
 	return 0;
 }
 
-/* When doing TSO, the IP length is not included in the pseudo header
- * checksum of the packet given to the PMD, but for virtio it is
- * expected.
- */
-static void
-virtio_tso_fix_cksum(struct rte_mbuf *m)
-{
-	/* common case: header is not fragmented */
-	if (likely(rte_pktmbuf_data_len(m) >= m->l2_len + m->l3_len +
-			m->l4_len)) {
-		struct ipv4_hdr *iph;
-		struct ipv6_hdr *ip6h;
-		struct tcp_hdr *th;
-		uint16_t prev_cksum, new_cksum, ip_len, ip_paylen;
-		uint32_t tmp;
-
-		iph = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *, m->l2_len);
-		th = RTE_PTR_ADD(iph, m->l3_len);
-		if ((iph->version_ihl >> 4) == 4) {
-			iph->hdr_checksum = 0;
-			iph->hdr_checksum = rte_ipv4_cksum(iph);
-			ip_len = iph->total_length;
-			ip_paylen = rte_cpu_to_be_16(rte_be_to_cpu_16(ip_len) -
-				m->l3_len);
-		} else {
-			ip6h = (struct ipv6_hdr *)iph;
-			ip_paylen = ip6h->payload_len;
-		}
-
-		/* calculate the new phdr checksum not including ip_paylen */
-		prev_cksum = th->cksum;
-		tmp = prev_cksum;
-		tmp += ip_paylen;
-		tmp = (tmp & 0xffff) + (tmp >> 16);
-		new_cksum = tmp;
-
-		/* replace it in the packet */
-		th->cksum = new_cksum;
-	}
-}
-
-static inline int
-tx_offload_enabled(struct virtio_hw *hw)
-{
-	return vtpci_with_feature(hw, VIRTIO_NET_F_CSUM) ||
-		vtpci_with_feature(hw, VIRTIO_NET_F_HOST_TSO4) ||
-		vtpci_with_feature(hw, VIRTIO_NET_F_HOST_TSO6);
-}
-
-/* avoid write operation when necessary, to lessen cache issues */
-#define ASSIGN_UNLESS_EQUAL(var, val) do {	\
-	if ((var) != (val))			\
-		(var) = (val);			\
-} while (0)
-
 static inline void
 virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
-		       uint16_t needed, int use_indirect, int can_push)
+		       uint16_t needed)
 {
 	struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
 	struct vq_desc_extra *dxp;
 	struct virtqueue *vq = txvq->vq;
 	struct vring_desc *start_dp;
-	uint16_t seg_num = cookie->nb_segs;
 	uint16_t head_idx, idx;
-	uint16_t head_size = vq->hw->vtnet_hdr_size;
-	struct virtio_net_hdr *hdr;
-	int offload;
 
-	offload = tx_offload_enabled(vq->hw);
 	head_idx = vq->vq_desc_head_idx;
 	idx = head_idx;
 	dxp = &vq->vq_descx[idx];
@@ -296,91 +236,15 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 
 	start_dp = vq->vq_ring.desc;
 
-	if (can_push) {
-		/* prepend cannot fail, checked by caller */
-		hdr = (struct virtio_net_hdr *)
-			rte_pktmbuf_prepend(cookie, head_size);
-		/* if offload disabled, it is not zeroed below, do it now */
-		if (offload == 0) {
-			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
-		}
-	} else if (use_indirect) {
-		/* setup tx ring slot to point to indirect
-		 * descriptor list stored in reserved region.
-		 *
-		 * the first slot in indirect ring is already preset
-		 * to point to the header in reserved region
-		 */
-		start_dp[idx].addr  = txvq->virtio_net_hdr_mem +
-			RTE_PTR_DIFF(&txr[idx].tx_indir, txr);
-		start_dp[idx].len   = (seg_num + 1) * sizeof(struct vring_desc);
-		start_dp[idx].flags = VRING_DESC_F_INDIRECT;
-		hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
-
-		/* loop below will fill in rest of the indirect elements */
-		start_dp = txr[idx].tx_indir;
-		idx = 1;
-	} else {
-		/* setup first tx ring slot to point to header
-		 * stored in reserved region.
-		 */
-		start_dp[idx].addr  = txvq->virtio_net_hdr_mem +
-			RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
-		start_dp[idx].len   = vq->hw->vtnet_hdr_size;
-		start_dp[idx].flags = VRING_DESC_F_NEXT;
-		hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
-
-		idx = start_dp[idx].next;
-	}
-
-	/* Checksum Offload / TSO */
-	if (offload) {
-		if (cookie->ol_flags & PKT_TX_TCP_SEG)
-			cookie->ol_flags |= PKT_TX_TCP_CKSUM;
-
-		switch (cookie->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct udp_hdr,
-				dgram_cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		case PKT_TX_TCP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		default:
-			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
-			break;
-		}
-
-		/* TCP Segmentation Offload */
-		if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-			virtio_tso_fix_cksum(cookie);
-			hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
-				VIRTIO_NET_HDR_GSO_TCPV6 :
-				VIRTIO_NET_HDR_GSO_TCPV4;
-			hdr->gso_size = cookie->tso_segsz;
-			hdr->hdr_len =
-				cookie->l2_len +
-				cookie->l3_len +
-				cookie->l4_len;
-		} else {
-			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
-		}
-	}
+       /* setup first tx ring slot to point to header
+        * stored in reserved region.
+        */
+       start_dp[idx].addr  = txvq->virtio_net_hdr_mem +
+       	RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+       start_dp[idx].len   = vq->hw->vtnet_hdr_size;
+       start_dp[idx].flags = VRING_DESC_F_NEXT;
+
+       idx = start_dp[idx].next;
 
 	do {
 		start_dp[idx].addr  = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
@@ -389,9 +253,6 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		idx = start_dp[idx].next;
 	} while ((cookie = cookie->next) != NULL);
 
-	if (use_indirect)
-		idx = vq->vq_ring.desc[head_idx].next;
-
 	vq->vq_desc_head_idx = idx;
 	if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
 		vq->vq_desc_tail_idx = idx;
@@ -1011,9 +872,7 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct virtnet_tx *txvq = tx_queue;
 	struct virtqueue *vq = txvq->vq;
 	struct virtio_hw *hw = vq->hw;
-	uint16_t hdr_size = hw->vtnet_hdr_size;
 	uint16_t nb_used, nb_tx = 0;
-	int error;
 
 	if (unlikely(hw->started == 0))
 		return nb_tx;
@@ -1030,37 +889,14 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
-		int can_push = 0, use_indirect = 0, slots, need;
-
-		/* Do VLAN tag insertion */
-		if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
-			error = rte_vlan_insert(&txm);
-			if (unlikely(error)) {
-				rte_pktmbuf_free(txm);
-				continue;
-			}
-		}
-
-		/* optimize ring usage */
-		if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
-		      vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
-		    rte_mbuf_refcnt_read(txm) == 1 &&
-		    RTE_MBUF_DIRECT(txm) &&
-		    txm->nb_segs == 1 &&
-		    rte_pktmbuf_headroom(txm) >= hdr_size &&
-		    rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
-				   __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
-			can_push = 1;
-		else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
-			 txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
-			use_indirect = 1;
+		int slots, need;
 
 		/* How many main ring entries are needed to this Tx?
 		 * any_layout => number of segments
 		 * indirect   => 1
 		 * default    => number of segments + 1
 		 */
-		slots = use_indirect ? 1 : (txm->nb_segs + !can_push);
+		slots =txm->nb_segs + 1;
 		need = slots - vq->vq_free_cnt;
 
 		/* Positive value indicates it need free vring descriptors */
@@ -1079,7 +915,7 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Enqueue Packet buffers */
-		virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect, can_push);
+		virtqueue_enqueue_xmit(txvq, txm, slots);
 
 		txvq->stats.bytes += txm->pkt_len;
 		virtio_update_packet_stats(&txvq->stats, txm);
-- 
2.7.4

* [RFC 10/29] vhost: prefetch desc
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (8 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 09/29] xxx: virtio: remove overheads Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 11/29] add virtio 1.1 test guide Tiwei Bie
                   ` (18 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 lib/librte_vhost/virtio_net.c | 36 +++++++++++++++++++++++++++++-------
 1 file changed, 29 insertions(+), 7 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index c9e466f..b4d9031 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -974,10 +974,10 @@ mbuf_is_consumed(struct rte_mbuf *m)
 	return true;
 }
 
-static inline uint16_t
+static inline uint16_t __attribute__((always_inline))
 dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	     struct rte_mempool *mbuf_pool, struct rte_mbuf *m,
-	     struct vring_desc_1_1 *descs)
+	     struct vring_desc_1_1 *descs, uint16_t *desc_idx)
 {
 	struct vring_desc_1_1 *desc;
 	uint64_t desc_addr;
@@ -986,7 +986,7 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint32_t cpy_len;
 	struct rte_mbuf *cur = m, *prev = m;
 	struct virtio_net_hdr *hdr = NULL;
-	uint16_t head_idx = vq->last_used_idx;
+	uint16_t head_idx = *desc_idx;
 
 	desc = &descs[(head_idx++) & (vq->size - 1)];
 	if (unlikely((desc->len < dev->vhost_hlen)) ||
@@ -1114,7 +1114,7 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	if (hdr)
 		vhost_dequeue_offload(hdr, m);
 
-	vq->last_used_idx = head_idx;
+	*desc_idx = head_idx;
 
 	return 0;
 }
@@ -1128,11 +1128,32 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint16_t idx;
 	struct vring_desc_1_1 *desc = vq->desc_1_1;
 	uint16_t head_idx = vq->last_used_idx;
+	struct vring_desc_1_1 desc_cached[64];
+	uint16_t desc_idx = 0;
+
+	idx = vq->last_used_idx & (vq->size - 1);
+	if (!(desc[idx].flags & DESC_HW))
+		return 0;
 
 	count = RTE_MIN(MAX_PKT_BURST, count);
+
+	{
+		uint16_t size = vq->size - idx;
+		if (size >= 64)
+			rte_memcpy(&desc_cached[0],    &desc[idx], 64 * sizeof(struct vring_desc_1_1));
+		else {
+			rte_memcpy(&desc_cached[0],    &desc[idx], size * sizeof(struct vring_desc_1_1));
+			rte_memcpy(&desc_cached[size], &desc[0],   (64 - size) * sizeof(struct vring_desc_1_1));
+		}
+	}
+
+	//for (i = 0; i < 64; i++) {
+	//	idx = (vq->last_used_idx + i) & (vq->size - 1);
+	//	desc_cached[i] = desc[idx];
+	//}
+
 	for (i = 0; i < count; i++) {
-		idx = vq->last_used_idx & (vq->size - 1);
-		if (!(desc[idx].flags & DESC_HW))
+		if (!(desc_cached[desc_idx].flags & DESC_HW))
 			break;
 
 		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
@@ -1142,9 +1163,10 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 		}
 
-		dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc);
+		dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc_cached, &desc_idx);
 	}
 
+	vq->last_used_idx += desc_idx;
 	if (likely(i)) {
 		for (idx = 1; idx < (uint16_t)(vq->last_used_idx - head_idx); idx++) {
 			desc[(idx + head_idx) & (vq->size - 1)].flags = 0;
-- 
2.7.4

* [RFC 11/29] add virtio 1.1 test guide
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (9 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 10/29] vhost: prefetch desc Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 12/29] testpmd: add s-txonly Tiwei Bie
                   ` (17 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 README-virtio-1.1 | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)
 create mode 100644 README-virtio-1.1

diff --git a/README-virtio-1.1 b/README-virtio-1.1
new file mode 100644
index 0000000..8af3eb3
--- /dev/null
+++ b/README-virtio-1.1
@@ -0,0 +1,50 @@
+This branch implements a very rough virtio 1.1 prototype: only the Tx path has
+been implemented. Below are the test scripts and guidelines for testing:
+
+- build DPDK
+
+  $ export RTE_SDK=/path/to/dpdk/src
+  $ export RTE_TARGET=x86_64-native-linuxapp-gcc
+  $ make install T=$RTE_TARGET
+
+- run host.sh
+
+- run virtio-user.sh
+  execute 'start' inside testpmd
+  execute 'show port stats all' (2 or more times) to see the throughput.
+
+Note: both scripts should run on the same machine.
+
+You could also set "version_1_1=0" in virtio-user.sh to test
+the difference between virtio 1.1 and virtio 0.95/1.0.
+
+---
+[yliu@yliu-dev ~]$ cat /tmp/host.sh
+#!/bin/bash
+
+[ "$gdb" ] && gdb="gdb --args"
+
+rm -f vhost-net
+
+sudo $gdb $RTE_SDK/x86_64-native-linuxapp-gcc/app/testpmd \
+        -c 0x5 -n 4 --socket-mem 2048,0 \
+        --no-pci --file-prefix=vhost            \
+        --vdev 'net_vhost0,iface=/tmp/vhost-net' \
+        -- \
+        --forward-mode=rxonly \
+        #-i
+
+
+[yliu@yliu-dev ~]$ cat /tmp/virtio-user.sh
+#!/bin/bash
+
+[ "$gdb" ] && gdb="gdb --args"
+
+sudo $gdb $RTE_SDK/x86_64-native-linuxapp-gcc/app/testpmd       \
+        -c 0x9 -n 4 --socket-mem 2048,0 \
+        --no-pci --file-prefix=virtio           \
+        --vdev=net_virtio_user0,mac=52:54:00:00:00:15,path=/tmp/vhost-net,version_1_1=1 \
+        -- \
+        --forward-mode=txonly \
+        --disable-hw-vlan --no-flush-rx \
+        -i
-- 
2.7.4

* [RFC 12/29] testpmd: add s-txonly
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (10 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 11/29] add virtio 1.1 test guide Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 13/29] net/virtio: implement the Rx code path Tiwei Bie
                   ` (16 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 app/test-pmd/Makefile   |   1 +
 app/test-pmd/s-txonly.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c  |   1 +
 app/test-pmd/testpmd.h  |   1 +
 4 files changed, 137 insertions(+)
 create mode 100644 app/test-pmd/s-txonly.c

diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile
index 35ecee9..d87cae6 100644
--- a/app/test-pmd/Makefile
+++ b/app/test-pmd/Makefile
@@ -55,6 +55,7 @@ SRCS-y += macswap.c
 SRCS-y += flowgen.c
 SRCS-y += rxonly.c
 SRCS-y += txonly.c
+SRCS-y += s-txonly.c
 SRCS-y += csumonly.c
 SRCS-y += icmpecho.c
 SRCS-$(CONFIG_RTE_LIBRTE_IEEE1588) += ieee1588fwd.c
diff --git a/app/test-pmd/s-txonly.c b/app/test-pmd/s-txonly.c
new file mode 100644
index 0000000..6275136
--- /dev/null
+++ b/app/test-pmd/s-txonly.c
@@ -0,0 +1,134 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdarg.h>
+#include <string.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <assert.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_cycles.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_memcpy.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+#include <rte_string_fns.h>
+#include <rte_flow.h>
+
+#include "testpmd.h"
+
+#undef MAX_PKT_BURST
+#define MAX_PKT_BURST	32
+
+/*
+ * Transmit a burst of multi-segments packets.
+ */
+static void
+pkt_burst_transmit(struct fwd_stream *fs)
+{
+	static struct rte_mbuf *pkts[MAX_PKT_BURST];
+	uint16_t nb_tx;
+	int  i;
+#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
+	uint64_t start_tsc;
+	uint64_t end_tsc;
+	uint64_t core_cycles;
+#endif
+
+#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
+	start_tsc = rte_rdtsc();
+#endif
+
+	for (i = 0; i < MAX_PKT_BURST; i++) {
+		if (unlikely(!pkts[i])) {
+			pkts[i] = rte_pktmbuf_alloc(current_fwd_lcore()->mbp);
+			assert(pkts[i]);
+
+			pkts[i]->data_len = tx_pkt_seg_lengths[0];
+			pkts[i]->pkt_len  = tx_pkt_seg_lengths[0];
+			pkts[i]->nb_segs  = 1;
+		}
+
+		rte_pktmbuf_refcnt_update(pkts[i], 1);
+	}
+
+	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts, MAX_PKT_BURST);
+
+	//uint32_t retry;
+	//if (unlikely(nb_tx < MAX_PKT_BURST) && fs->retry_enabled) {
+	//	retry = 0;
+	//	while (nb_tx < MAX_PKT_BURST && retry++ < burst_tx_retry_num) {
+	//		rte_delay_us(burst_tx_delay_time);
+	//		nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue,
+	//				&pkts[nb_tx], MAX_PKT_BURST - nb_tx);
+	//	}
+	//}
+
+	fs->tx_packets += nb_tx;
+
+#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
+	end_tsc = rte_rdtsc();
+	core_cycles = (end_tsc - start_tsc);
+	fs->core_cycles = (uint64_t) (fs->core_cycles + core_cycles);
+#endif
+}
+
+struct fwd_engine s_tx_only_engine = {
+	.fwd_mode_name  = "s-txonly",
+	.port_fwd_begin = NULL,
+	.port_fwd_end   = NULL,
+	.packet_fwd     = pkt_burst_transmit,
+};
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d1041af..f224357 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -160,6 +160,7 @@ struct fwd_engine * fwd_engines[] = {
 	&flow_gen_engine,
 	&rx_only_engine,
 	&tx_only_engine,
+	&s_tx_only_engine,
 	&csum_fwd_engine,
 	&icmp_echo_engine,
 #ifdef RTE_LIBRTE_IEEE1588
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index e6c43ba..b53ce0a 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -248,6 +248,7 @@ extern struct fwd_engine mac_swap_engine;
 extern struct fwd_engine flow_gen_engine;
 extern struct fwd_engine rx_only_engine;
 extern struct fwd_engine tx_only_engine;
+extern struct fwd_engine s_tx_only_engine;
 extern struct fwd_engine csum_fwd_engine;
 extern struct fwd_engine icmp_echo_engine;
 #ifdef RTE_LIBRTE_IEEE1588
-- 
2.7.4


* [RFC 13/29] net/virtio: implement the Rx code path
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (11 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 12/29] testpmd: add s-txonly Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 14/29] vhost: a rough implementation on enqueue " Tiwei Bie
                   ` (15 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Just make it stick to the non-mergeable code path for now, though it would
likely be easy to add such support.
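
For readers following the hunks below and in the rest of the series: the 1.1
code paths poll a flat array of descriptors instead of walking the split
avail/used rings. Here is a minimal stand-in sketch of the descriptor layout
and ownership convention the diffs rely on. The field names follow their usage
in the code; the real definitions live in the virtio-1.1.h headers added
earlier in the series, and the DESC_HW bit value below is only an assumption
for illustration.

#include <stdint.h>

#define VRING_DESC_F_NEXT	1	/* buffer chains into the following slot */
#define VRING_DESC_F_WRITE	2	/* buffer is device write-only (an Rx buffer) */
#define DESC_HW			0x0080	/* assumed value: slot is owned by the device */

struct vring_desc_1_1 {
	uint64_t addr;	/* guest-physical buffer address */
	uint32_t len;	/* buffer length; the device writes the used length back */
	uint16_t index;	/* slot index, written once at ring init */
	uint16_t flags;	/* DESC_HW plus VRING_DESC_F_* bits */
};

/*
 * Ownership handshake assumed throughout the series:
 *  - the driver fills addr/len/flags and then sets DESC_HW to hand the
 *    slot to the device (vhost);
 *  - the device consumes the buffer and clears DESC_HW (writing the used
 *    length back for Rx) to return the slot to the driver;
 *  - both sides walk the ring in order with a free-running index masked
 *    by (ring size - 1), so there is no separate avail or used index.
 */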

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c |   5 +-
 drivers/net/virtio/virtio_rxtx.c   | 121 ++++++++++++++++++++++++++++++++++---
 drivers/net/virtio/virtqueue.h     |   1 +
 3 files changed, 116 insertions(+), 11 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 35ce07d..8b754ac 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1241,7 +1241,7 @@ static void
 rx_func_get(struct rte_eth_dev *eth_dev)
 {
 	struct virtio_hw *hw = eth_dev->data->dev_private;
-	if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF))
+	if (0 && vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF))
 		eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts;
 	else
 		eth_dev->rx_pkt_burst = &virtio_recv_pkts;
@@ -1373,7 +1373,8 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 
 	/* Setting up rx_header size for the device */
 	if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) ||
-	    vtpci_with_feature(hw, VIRTIO_F_VERSION_1))
+	    vtpci_with_feature(hw, VIRTIO_F_VERSION_1) ||
+	    vtpci_with_feature(hw, VIRTIO_F_VERSION_1_1))
 		hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr_mrg_rxbuf);
 	else
 		hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index c49ac0d..3be64da 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -115,8 +115,8 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
 	dp->next = VQ_RING_DESC_CHAIN_END;
 }
 
-static uint16_t
-virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
+static inline uint16_t
+virtqueue_dequeue_burst_rx_1_0(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
 			   uint32_t *len, uint16_t num)
 {
 	struct vring_used_elem *uep;
@@ -149,6 +149,51 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
 	return i;
 }
 
+static inline uint16_t
+virtqueue_dequeue_burst_rx_1_1(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
+			   uint32_t *len, uint16_t num)
+{
+	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
+	struct rte_mbuf *cookie;
+	uint16_t used_idx;
+	uint16_t i;
+
+	for (i = 0; i < num ; i++) {
+		used_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
+		if ((desc[used_idx].flags & DESC_HW))
+			break;
+
+		len[i] = desc[used_idx].len;
+		cookie = vq->vq_descx[used_idx].cookie;
+
+		if (unlikely(cookie == NULL)) {
+			PMD_DRV_LOG(ERR, "vring descriptor with no mbuf cookie at %u\n",
+				vq->vq_used_cons_idx);
+			break;
+		}
+		vq->vq_descx[used_idx].cookie = NULL;
+
+		rte_prefetch0(cookie);
+		rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
+		rx_pkts[i]  = cookie;
+
+		vq->vq_used_cons_idx++;
+		vq->vq_free_cnt++;
+	}
+
+	return i;
+}
+
+static inline uint16_t
+virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
+			   uint32_t *len, uint16_t num)
+{
+	if (vtpci_version_1_1(vq->hw))
+		return virtqueue_dequeue_burst_rx_1_1(vq, rx_pkts, len, num);
+	else
+		return virtqueue_dequeue_burst_rx_1_0(vq, rx_pkts, len, num);
+}
+
 #ifndef DEFAULT_TX_FREE_THRESH
 #define DEFAULT_TX_FREE_THRESH 32
 #endif
@@ -179,7 +224,7 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
 
 
 static inline int
-virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
+virtqueue_enqueue_recv_refill_1_0(struct virtqueue *vq, struct rte_mbuf *cookie)
 {
 	struct vq_desc_extra *dxp;
 	struct virtio_hw *hw = vq->hw;
@@ -218,6 +263,53 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
 	return 0;
 }
 
+static inline int
+virtqueue_enqueue_recv_refill_1_1(struct virtqueue *vq, struct rte_mbuf *cookie)
+{
+	struct vq_desc_extra *dxp;
+	struct virtio_hw *hw = vq->hw;
+	uint16_t needed = 1;
+	uint16_t idx;
+	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
+
+	if (unlikely(vq->vq_free_cnt == 0))
+		return -ENOSPC;
+	if (unlikely(vq->vq_free_cnt < needed))
+		return -EMSGSIZE;
+
+	idx = vq->vq_desc_head_idx & (vq->vq_nentries - 1);
+	if (unlikely(desc[idx].flags & DESC_HW))
+		return -EFAULT;
+
+	dxp = &vq->vq_descx[idx];
+	dxp->cookie = cookie;
+	dxp->ndescs = needed;
+
+	desc[idx].addr =
+		VIRTIO_MBUF_ADDR(cookie, vq) +
+		RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+	desc[idx].len =
+		cookie->buf_len - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
+	desc[idx].flags =  VRING_DESC_F_WRITE;
+	vq->vq_desc_head_idx++;
+
+	vq->vq_free_cnt -= needed;
+
+	rte_smp_wmb();
+	desc[idx].flags |= DESC_HW;
+
+	return 0;
+}
+
+static inline int
+virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
+{
+	if (vtpci_version_1_1(vq->hw))
+		return virtqueue_enqueue_recv_refill_1_1(vq, cookie);
+	else
+		return virtqueue_enqueue_recv_refill_1_0(vq, cookie);
+}
+
 static inline void
 virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		       uint16_t needed)
@@ -288,9 +380,6 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (vtpci_version_1_1(hw))
-		return 0;
-
 	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
 		nb_desc = vq->vq_nentries;
 	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
@@ -343,7 +432,8 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		nbufs++;
 	}
 
-	vq_update_avail_idx(vq);
+	if (!vtpci_version_1_1(hw))
+		vq_update_avail_idx(vq);
 
 	PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
 
@@ -368,6 +458,10 @@ virtio_update_rxtx_handler(struct rte_eth_dev *dev,
 	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON))
 		use_simple_rxtx = 1;
 #endif
+
+	if (vtpci_version_1_1(hw))
+		use_simple_rxtx = 0;
+
 	/* Use simple rx/tx func if single segment and no offloads */
 	if (use_simple_rxtx &&
 	    (tx_conf->txq_flags & VIRTIO_SIMPLE_FLAGS) == VIRTIO_SIMPLE_FLAGS &&
@@ -604,7 +698,16 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	if (unlikely(hw->started == 0))
 		return nb_rx;
 
-	nb_used = VIRTQUEUE_NUSED(vq);
+	/*
+	 * we have no idea to know how many used entries without scanning
+	 * the desc for virtio 1.1. Thus, let's simply set nb_used to nb_pkts
+	 * and let virtqueue_dequeue_burst_rx() to figure out the real
+	 * number.
+	 */
+	if (vtpci_version_1_1(hw))
+		nb_used = nb_pkts;
+	else
+		nb_used = VIRTQUEUE_NUSED(vq);
 
 	virtio_rmb();
 
@@ -681,7 +784,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		nb_enqueued++;
 	}
 
-	if (likely(nb_enqueued)) {
+	if (likely(nb_enqueued) && !vtpci_version_1_1(hw)) {
 		vq_update_avail_idx(vq);
 
 		if (unlikely(virtqueue_kick_prepare(vq))) {
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 91d2db7..45f49d7 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -272,6 +272,7 @@ vring_desc_init_1_1(struct vring *vr, int n)
 	int i;
 	for (i = 0; i < n; i++) {
 		struct vring_desc_1_1 *desc = &vr->desc_1_1[i];
+		desc->flags = 0;
 		desc->index = i;
 	}
 }
-- 
2.7.4


* [RFC 14/29] vhost: a rough implementation on enqueue code path
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (12 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 13/29] net/virtio: implement the Rx code path Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 15/29] vhost: descriptor length should include vhost header Tiwei Bie
                   ` (14 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Yuanhan Liu

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 lib/librte_vhost/virtio_net.c | 124 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 123 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index b4d9031..f7dd4eb 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -583,6 +583,126 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 	return pkt_idx;
 }
 
+static inline int __attribute__((always_inline))
+enqueue_pkt(struct virtio_net *dev, struct vring_desc_1_1 *descs,
+	    uint16_t desc_idx, struct rte_mbuf *m)
+{
+	uint32_t desc_avail, desc_offset;
+	uint32_t mbuf_avail, mbuf_offset;
+	uint32_t cpy_len;
+	struct vring_desc_1_1 *desc;
+	uint64_t desc_addr;
+	struct virtio_net_hdr_mrg_rxbuf *hdr;
+
+	desc = &descs[desc_idx];
+	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	/*
+	 * Checking of 'desc_addr' placed outside of 'unlikely' macro to avoid
+	 * performance issue with some versions of gcc (4.8.4 and 5.3.0) which
+	 * otherwise stores offset on the stack instead of in a register.
+	 */
+	if (unlikely(desc->len < dev->vhost_hlen) || !desc_addr)
+		return -1;
+
+	rte_prefetch0((void *)(uintptr_t)desc_addr);
+
+	hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc_addr;
+	virtio_enqueue_offload(m, &hdr->hdr);
+	vhost_log_write(dev, desc->addr, dev->vhost_hlen);
+	PRINT_PACKET(dev, (uintptr_t)desc_addr, dev->vhost_hlen, 0);
+
+	desc_offset = dev->vhost_hlen;
+	desc_avail  = desc->len - dev->vhost_hlen;
+
+	mbuf_avail  = rte_pktmbuf_data_len(m);
+	mbuf_offset = 0;
+	while (mbuf_avail != 0 || m->next != NULL) {
+		/* done with current mbuf, fetch next */
+		if (mbuf_avail == 0) {
+			m = m->next;
+
+			mbuf_offset = 0;
+			mbuf_avail  = rte_pktmbuf_data_len(m);
+		}
+
+		/* done with current desc buf, fetch next */
+		if (desc_avail == 0) {
+			if ((desc->flags & VRING_DESC_F_NEXT) == 0) {
+				/* Room in vring buffer is not enough */
+				return -1;
+			}
+
+			/** NOTE: we should not come here with current
+			    virtio-user implementation **/
+			desc_idx = (desc_idx + 1); // & (vq->size - 1);
+			desc = &descs[desc_idx];
+			if (unlikely(!(desc->flags & DESC_HW)))
+				return -1;
+
+			desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+			if (unlikely(!desc_addr))
+				return -1;
+
+			desc_offset = 0;
+			desc_avail  = desc->len;
+		}
+
+		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+		rte_memcpy((void *)((uintptr_t)(desc_addr + desc_offset)),
+			rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
+			cpy_len);
+		vhost_log_write(dev, desc->addr + desc_offset, cpy_len);
+		PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset),
+			     cpy_len, 0);
+
+		mbuf_avail  -= cpy_len;
+		mbuf_offset += cpy_len;
+		desc_avail  -= cpy_len;
+		desc_offset += cpy_len;
+	}
+
+	return 0;
+}
+
+static inline uint32_t __attribute__((always_inline))
+vhost_enqueue_burst_1_1(struct virtio_net *dev, uint16_t queue_id,
+	      struct rte_mbuf **pkts, uint32_t count)
+{
+	struct vhost_virtqueue *vq;
+	uint16_t i;
+	uint16_t idx;
+	struct vring_desc_1_1 *desc;
+	uint16_t head_idx;
+
+	vq = dev->virtqueue[queue_id];
+	if (unlikely(vq->enabled == 0))
+		return 0;
+
+	head_idx = vq->last_used_idx;
+	desc = vq->desc_1_1;
+	for (i = 0; i < count; i++) {
+		/* XXX: there is an assumption that no desc will be chained */
+		idx = vq->last_used_idx & (vq->size - 1);
+		if (!(desc[idx].flags & DESC_HW))
+			break;
+
+		if (enqueue_pkt(dev, desc, idx, pkts[i]) < 0)
+			break;
+
+		vq->last_used_idx++;
+	}
+	count = i;
+
+	rte_smp_wmb();
+	for (i = 0; i < count; i++) {
+		idx = (head_idx + i) & (vq->size - 1);
+		desc[idx].flags &= ~DESC_HW;
+		desc[idx].len    = pkts[i]->pkt_len;
+	}
+
+	return count;
+}
+
 uint16_t
 rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint16_t count)
@@ -592,7 +712,9 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
 	if (!dev)
 		return 0;
 
-	if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF))
+	if (dev->features & (1ULL << VIRTIO_F_VERSION_1_1))
+		return vhost_enqueue_burst_1_1(dev, queue_id, pkts, count);
+	else if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF))
 		return virtio_dev_merge_rx(dev, queue_id, pkts, count);
 	else
 		return virtio_dev_rx(dev, queue_id, pkts, count);
-- 
2.7.4


* [RFC 15/29] vhost: descriptor length should include vhost header
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (13 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 14/29] vhost: a rough implementation on enqueue " Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 16/29] net/virtio: avoid touching packet data Tiwei Bie
                   ` (13 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev; +Cc: Jens Freimann

From: Jens Freimann <jfreiman@redhat.com>

Signed-off-by: Jens Freimann <jfreiman@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index f7dd4eb..7a978b9 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -697,7 +697,7 @@ vhost_enqueue_burst_1_1(struct virtio_net *dev, uint16_t queue_id,
 	for (i = 0; i < count; i++) {
 		idx = (head_idx + i) & (vq->size - 1);
 		desc[idx].flags &= ~DESC_HW;
-		desc[idx].len    = pkts[i]->pkt_len;
+		desc[idx].len    = pkts[i]->pkt_len + dev->vhost_hlen;
 	}
 
 	return count;
-- 
2.7.4


* [RFC 16/29] net/virtio: avoid touching packet data
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (14 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 15/29] vhost: descriptor length should include vhost header Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 17/29] net/virtio: fix virtio1.1 feature negotiation Tiwei Bie
                   ` (12 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

For performance testing purposes, avoid touching the packet data
when receiving packets.

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_rxtx.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 3be64da..93d564f 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -139,7 +139,9 @@ virtqueue_dequeue_burst_rx_1_0(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
 		}
 
 		rte_prefetch0(cookie);
+#if 0
 		rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
+#endif
 		rx_pkts[i]  = cookie;
 		vq->vq_used_cons_idx++;
 		vq_ring_free_chain(vq, desc_idx);
@@ -174,7 +176,9 @@ virtqueue_dequeue_burst_rx_1_1(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
 		vq->vq_descx[used_idx].cookie = NULL;
 
 		rte_prefetch0(cookie);
+#if 0
 		rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
+#endif
 		rx_pkts[i]  = cookie;
 
 		vq->vq_used_cons_idx++;
@@ -568,7 +572,9 @@ static void
 virtio_update_packet_stats(struct virtnet_stats *stats, struct rte_mbuf *mbuf)
 {
 	uint32_t s = mbuf->pkt_len;
+#if 0
 	struct ether_addr *ea;
+#endif
 
 	if (s == 64) {
 		stats->size_bins[1]++;
@@ -587,6 +593,7 @@ virtio_update_packet_stats(struct virtnet_stats *stats, struct rte_mbuf *mbuf)
 			stats->size_bins[7]++;
 	}
 
+#if 0
 	ea = rte_pktmbuf_mtod(mbuf, struct ether_addr *);
 	if (is_multicast_ether_addr(ea)) {
 		if (is_broadcast_ether_addr(ea))
@@ -594,6 +601,7 @@ virtio_update_packet_stats(struct virtnet_stats *stats, struct rte_mbuf *mbuf)
 		else
 			stats->multicast++;
 	}
+#endif
 }
 
 /* Optionally fill offload information in structure */
-- 
2.7.4


* [RFC 17/29] net/virtio: fix virtio1.1 feature negotiation
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (15 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 16/29] net/virtio: avoid touching packet data Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 18/29] net/virtio: the Rx support for virtio1.1 has been added now Tiwei Bie
                   ` (11 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 3ff6a05..e3471d1 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -333,7 +333,8 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
 	 1ULL << VIRTIO_NET_F_GUEST_CSUM	|	\
 	 1ULL << VIRTIO_NET_F_GUEST_TSO4	|	\
 	 1ULL << VIRTIO_NET_F_GUEST_TSO6	|	\
-	 1ULL << VIRTIO_F_VERSION_1)
+	 1ULL << VIRTIO_F_VERSION_1		|	\
+	 1ULL << VIRTIO_F_VERSION_1_1)
 
 int
 virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
@@ -368,9 +369,9 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 	}
 
 	if (version_1_1)
-		dev->features |= (1ull << VIRTIO_F_VERSION_1_1);
+		dev->device_features |= (1ull << VIRTIO_F_VERSION_1_1);
 	else
-		dev->features &= ~(1ull << VIRTIO_F_VERSION_1_1);
+		dev->device_features &= ~(1ull << VIRTIO_F_VERSION_1_1);
 
 	if (dev->mac_specified)
 		dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
-- 
2.7.4


* [RFC 18/29] net/virtio: the Rx support for virtio1.1 has been added now
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (16 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 17/29] net/virtio: fix virtio1.1 feature negotiation Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 19/29] vhost: VIRTIO_NET_F_MRG_RXBUF is not supported for now Tiwei Bie
                   ` (10 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 8b754ac..334c4b8 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1757,10 +1757,6 @@ virtio_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
-	/*no rx support for virtio 1.1 yet*/
-	if (vtpci_version_1_1(hw))
-		return 0;
-
 	/*Notify the backend
 	 *Otherwise the tap backend might already stop its queue due to fullness.
 	 *vhost backend will have no chance to be waked up
-- 
2.7.4


* [RFC 19/29] vhost: VIRTIO_NET_F_MRG_RXBUF is not supported for now
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (17 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 18/29] net/virtio: the Rx support for virtio1.1 has been added now Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 20/29] vhost: fix vring addr setup Tiwei Bie
                   ` (9 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 lib/librte_vhost/vhost.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index f3b7ad5..7976621 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -146,7 +146,7 @@ struct vhost_virtqueue {
 #define VHOST_USER_F_PROTOCOL_FEATURES	30
 
 /* Features supported by this builtin vhost-user net driver. */
-#define VIRTIO_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
+#define VIRTIO_NET_SUPPORTED_FEATURES ( \
 				(1ULL << VIRTIO_NET_F_CTRL_VQ) | \
 				(1ULL << VIRTIO_NET_F_CTRL_RX) | \
 				(1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE) | \
-- 
2.7.4


* [RFC 20/29] vhost: fix vring addr setup
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (18 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 19/29] vhost: VIRTIO_NET_F_MRG_RXBUF is not supported for now Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 21/29] net/virtio: free mbuf when need to use Tiwei Bie
                   ` (8 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 lib/librte_vhost/vhost.c      |  4 ++++
 lib/librte_vhost/vhost_user.c | 17 +++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 19c5a43..b7bc1ee 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -441,6 +441,10 @@ rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 		return -1;
 	}
 
+	if (dev->features & (1ULL << VIRTIO_F_VERSION_1_1)) {
+		return 0;
+	}
+
 	dev->virtqueue[queue_id]->used->flags = VRING_USED_F_NO_NOTIFY;
 	return 0;
 }
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 3a2de79..9265dcb 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -342,6 +342,19 @@ vhost_user_set_vring_addr(struct virtio_net *dev, VhostUserMsg *msg)
 	/* addr->index refers to the queue index. The txq 1, rxq is 0. */
 	vq = dev->virtqueue[msg->payload.addr.index];
 
+	if (dev->features & (1ULL << VIRTIO_F_VERSION_1_1)) {
+		vq->desc_1_1 = (struct vring_desc_1_1 *)(uintptr_t)qva_to_vva
+					(dev, msg->payload.addr.desc_user_addr);
+		vq->desc = NULL;
+		vq->avail = NULL;
+		vq->used = NULL;
+		vq->log_guest_addr = 0;
+
+		assert(vq->last_used_idx == 0);
+
+		return 0;
+	}
+
 	/* The addresses are converted from QEMU virtual to Vhost virtual. */
 	vq->desc = (struct vring_desc *)(uintptr_t)qva_to_vva(dev,
 			msg->payload.addr.desc_user_addr);
@@ -351,7 +364,7 @@ vhost_user_set_vring_addr(struct virtio_net *dev, VhostUserMsg *msg)
 			dev->vid);
 		return -1;
 	}
-	vq->desc_1_1 = (struct vring_desc_1_1 *)vq->desc;
+	vq->desc_1_1 = NULL;
 
 	dev = numa_realloc(dev, msg->payload.addr.index);
 	vq = dev->virtqueue[msg->payload.addr.index];
@@ -617,7 +630,7 @@ vhost_user_set_mem_table(struct virtio_net *dev, struct VhostUserMsg *pmsg)
 static int
 vq_is_ready(struct vhost_virtqueue *vq)
 {
-	return vq && vq->desc   &&
+	return vq && (vq->desc || vq->desc_1_1) &&
 	       vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
 	       vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
 }
-- 
2.7.4


* [RFC 21/29] net/virtio: free mbuf when need to use
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (19 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 20/29] vhost: fix vring addr setup Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 22/29] vhost: don't copy descs during Rx Tiwei Bie
                   ` (7 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_rxtx_1.1.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx_1.1.c b/drivers/net/virtio/virtio_rxtx_1.1.c
index 05f9dc7..fdc7402 100644
--- a/drivers/net/virtio/virtio_rxtx_1.1.c
+++ b/drivers/net/virtio/virtio_rxtx_1.1.c
@@ -72,14 +72,6 @@ virtio_xmit_cleanup(struct virtqueue *vq)
 
 	idx = vq->vq_used_cons_idx & (size - 1);
 	while ((desc[idx].flags & DESC_HW) == 0) {
-		struct vq_desc_extra *dxp;
-
-		dxp = &vq->vq_descx[idx];
-		if (dxp->cookie != NULL) {
-			rte_pktmbuf_free(dxp->cookie);
-			dxp->cookie = NULL;
-		}
-
 		idx = (++vq->vq_used_cons_idx) & (size - 1);
 		vq->vq_free_cnt++;
 
@@ -96,10 +88,15 @@ virtio_xmit(struct virtnet_tx *txvq, struct rte_mbuf *mbuf, int first_mbuf)
 	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
 	uint16_t idx;
 	uint16_t head_idx = (vq->vq_avail_idx++) & (vq->vq_nentries - 1);
+	struct vq_desc_extra *dxp;
 
 	idx = head_idx;
 	vq->vq_free_cnt -= mbuf->nb_segs + 1;
-	vq->vq_descx[idx].cookie = mbuf;
+
+	dxp = &vq->vq_descx[idx];
+	if (dxp->cookie != NULL)
+		rte_pktmbuf_free(dxp->cookie);
+	dxp->cookie = mbuf;
 
 	desc[idx].addr  = txvq->virtio_net_hdr_mem +
 			  RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
-- 
2.7.4


* [RFC 22/29] vhost: don't copy descs during Rx
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (20 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 21/29] net/virtio: free mbuf when need to use Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:57 ` [RFC 23/29] vhost: fix mbuf leak Tiwei Bie
                   ` (6 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 lib/librte_vhost/virtio_net.c | 35 ++++++++++-------------------------
 1 file changed, 10 insertions(+), 25 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 7a978b9..c14582b 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1247,35 +1247,18 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			uint16_t count)
 {
 	uint16_t i;
-	uint16_t idx;
 	struct vring_desc_1_1 *desc = vq->desc_1_1;
 	uint16_t head_idx = vq->last_used_idx;
-	struct vring_desc_1_1 desc_cached[64];
-	uint16_t desc_idx = 0;
+	uint16_t desc_idx;
 
-	idx = vq->last_used_idx & (vq->size - 1);
-	if (!(desc[idx].flags & DESC_HW))
+	desc_idx = vq->last_used_idx;
+	if (!(desc[desc_idx & (vq->size - 1)].flags & DESC_HW))
 		return 0;
 
 	count = RTE_MIN(MAX_PKT_BURST, count);
 
-	{
-		uint16_t size = vq->size - idx;
-		if (size >= 64)
-			rte_memcpy(&desc_cached[0],    &desc[idx], 64 * sizeof(struct vring_desc_1_1));
-		else {
-			rte_memcpy(&desc_cached[0],    &desc[idx], size * sizeof(struct vring_desc_1_1));
-			rte_memcpy(&desc_cached[size], &desc[0],   (64 - size) * sizeof(struct vring_desc_1_1));
-		}
-	}
-
-	//for (i = 0; i < 64; i++) {
-	//	idx = (vq->last_used_idx + i) & (vq->size - 1);
-	//	desc_cached[i] = desc[idx];
-	//}
-
 	for (i = 0; i < count; i++) {
-		if (!(desc_cached[desc_idx].flags & DESC_HW))
+		if (!(desc[desc_idx & (vq->size - 1)].flags & DESC_HW))
 			break;
 
 		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
@@ -1285,13 +1268,15 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 		}
 
-		dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc_cached, &desc_idx);
+		dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc, &desc_idx);
 	}
 
-	vq->last_used_idx += desc_idx;
+	vq->last_used_idx = desc_idx;
 	if (likely(i)) {
-		for (idx = 1; idx < (uint16_t)(vq->last_used_idx - head_idx); idx++) {
-			desc[(idx + head_idx) & (vq->size - 1)].flags = 0;
+		for (desc_idx = 1;
+		     desc_idx < (uint16_t)(vq->last_used_idx - head_idx);
+		     desc_idx++) {
+			desc[(desc_idx + head_idx) & (vq->size - 1)].flags = 0;
 		}
 		rte_smp_wmb();
 		desc[head_idx & (vq->size - 1)].flags = 0;
-- 
2.7.4


* [RFC 23/29] vhost: fix mbuf leak
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (21 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 22/29] vhost: don't copy descs during Rx Tiwei Bie
@ 2017-06-21  2:57 ` Tiwei Bie
  2017-06-21  2:58 ` [RFC 24/29] net/virtio: cleanup txd when free count below threshold Tiwei Bie
                   ` (5 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:57 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 lib/librte_vhost/virtio_net.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index c14582b..2bd1298 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1250,6 +1250,7 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct vring_desc_1_1 *desc = vq->desc_1_1;
 	uint16_t head_idx = vq->last_used_idx;
 	uint16_t desc_idx;
+	int err;
 
 	desc_idx = vq->last_used_idx;
 	if (!(desc[desc_idx & (vq->size - 1)].flags & DESC_HW))
@@ -1268,7 +1269,11 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 		}
 
-		dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc, &desc_idx);
+		err = dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc, &desc_idx);
+		if (unlikely(err)) {
+			rte_pktmbuf_free(pkts[i]);
+			break;
+		}
 	}
 
 	vq->last_used_idx = desc_idx;
-- 
2.7.4


* [RFC 24/29] net/virtio: cleanup txd when free count below threshold
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (22 preceding siblings ...)
  2017-06-21  2:57 ` [RFC 23/29] vhost: fix mbuf leak Tiwei Bie
@ 2017-06-21  2:58 ` Tiwei Bie
  2017-06-21  2:58 ` [RFC 25/29] net/virtio: refill descs for vhost in batch Tiwei Bie
                   ` (4 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:58 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_rxtx_1.1.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/virtio/virtio_rxtx_1.1.c b/drivers/net/virtio/virtio_rxtx_1.1.c
index fdc7402..4602e6d 100644
--- a/drivers/net/virtio/virtio_rxtx_1.1.c
+++ b/drivers/net/virtio/virtio_rxtx_1.1.c
@@ -129,6 +129,9 @@ virtio_xmit_pkts_1_1(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
 
 	PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
 
+	if (likely(vq->vq_free_cnt < vq->vq_free_thresh))
+		virtio_xmit_cleanup(vq);
+
 	for (i = 0; i < nb_pkts; i++) {
 		struct rte_mbuf *txm = tx_pkts[i];
 
-- 
2.7.4


* [RFC 25/29] net/virtio: refill descs for vhost in batch
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (23 preceding siblings ...)
  2017-06-21  2:58 ` [RFC 24/29] net/virtio: cleanup txd when free count below threshold Tiwei Bie
@ 2017-06-21  2:58 ` Tiwei Bie
  2017-06-21  2:58 ` [RFC 26/29] vhost: remove dead code Tiwei Bie
                   ` (3 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:58 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_rxtx.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 93d564f..3dc5eaf 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -299,9 +299,6 @@ virtqueue_enqueue_recv_refill_1_1(struct virtqueue *vq, struct rte_mbuf *cookie)
 
 	vq->vq_free_cnt -= needed;
 
-	rte_smp_wmb();
-	desc[idx].flags |= DESC_HW;
-
 	return 0;
 }
 
@@ -381,6 +378,7 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	int error, nbufs;
 	struct rte_mbuf *m;
 	uint16_t desc_idx;
+	uint16_t head_idx;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -418,6 +416,8 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			&rxvq->fake_mbuf;
 	}
 
+	head_idx = vq->vq_desc_head_idx;
+
 	while (!virtqueue_full(vq)) {
 		m = rte_mbuf_raw_alloc(rxvq->mpool);
 		if (m == NULL)
@@ -438,6 +438,14 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	if (!vtpci_version_1_1(hw))
 		vq_update_avail_idx(vq);
+	else {
+		struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
+		int i;
+		for (i = 0; i < nbufs; i++) {
+			desc[head_idx & (vq->vq_nentries - 1)].flags |= DESC_HW;
+			head_idx++;
+		}
+	}
 
 	PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
 
@@ -701,6 +709,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint32_t hdr_size;
 	int offload;
 	struct virtio_net_hdr *hdr;
+	uint16_t head_idx, idx;
 
 	nb_rx = 0;
 	if (unlikely(hw->started == 0))
@@ -774,8 +783,11 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	rxvq->stats.packets += nb_rx;
 
+	head_idx = vq->vq_desc_head_idx;
+
 	/* Allocate new mbuf for the used descriptor */
 	error = ENOSPC;
+	int count = 0;
 	while (likely(!virtqueue_full(vq))) {
 		new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);
 		if (unlikely(new_mbuf == NULL)) {
@@ -790,9 +802,21 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			break;
 		}
 		nb_enqueued++;
+		count++;
 	}
 
-	if (likely(nb_enqueued) && !vtpci_version_1_1(hw)) {
+	if (vtpci_version_1_1(hw)) {
+		struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
+		if (count > 0) {
+			rte_smp_wmb();
+			idx = head_idx + 1;
+			while (--count) {
+				desc[idx & (vq->vq_nentries - 1)].flags |= DESC_HW;
+				idx++;
+			}
+			desc[head_idx & (vq->vq_nentries - 1)].flags |= DESC_HW;
+		}
+	} else if (likely(nb_enqueued)) {
 		vq_update_avail_idx(vq);
 
 		if (unlikely(virtqueue_kick_prepare(vq))) {
-- 
2.7.4
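
Condensing the refill change above: the new Rx buffers' addr/len are written
into the ring first, and DESC_HW is only set afterwards, behind a single write
barrier, with the head slot flipped last. A self-contained sketch of that
publish step follows; the descriptor stand-in and DESC_HW value are the same
assumptions as in the earlier sketch, and the barrier is a generic stand-in
for rte_smp_wmb().

#include <stdint.h>

#define DESC_HW 0x0080	/* assumed ownership bit: slot owned by the device */

struct vring_desc_1_1 {	/* stand-in layout, as in the earlier sketch */
	uint64_t addr;
	uint32_t len;
	uint16_t index;
	uint16_t flags;
};

/* Stand-in for rte_smp_wmb(): order buffer setup before the ownership flip. */
static inline void wmb_standin(void)
{
	__atomic_thread_fence(__ATOMIC_RELEASE);
}

/*
 * Hand 'count' freshly refilled Rx slots, starting at 'head', to the device.
 * addr/len of every slot are assumed to be filled in already; ring_size is a
 * power of two.
 */
void
publish_refilled_descs(struct vring_desc_1_1 *desc, uint16_t ring_size,
		       uint16_t head, uint16_t count)
{
	uint16_t i;

	if (count == 0)
		return;

	/* addr/len of every slot must be visible before any DESC_HW flip. */
	wmb_standin();

	/* Flip the trailing slots first... */
	for (i = 1; i < count; i++)
		desc[(uint16_t)(head + i) & (ring_size - 1)].flags |= DESC_HW;

	/*
	 * ...and the head slot last: the device scans forward and stops at
	 * the first slot without DESC_HW, so it only starts consuming once
	 * the whole batch has been published.
	 */
	desc[head & (ring_size - 1)].flags |= DESC_HW;
}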


* [RFC 26/29] vhost: remove dead code
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (24 preceding siblings ...)
  2017-06-21  2:58 ` [RFC 25/29] net/virtio: refill descs for vhost in batch Tiwei Bie
@ 2017-06-21  2:58 ` Tiwei Bie
  2017-06-21  2:58 ` [RFC 27/29] vhost: various optimizations for Tx Tiwei Bie
                   ` (2 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:58 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 lib/librte_vhost/virtio_net.c | 28 ++++------------------------
 1 file changed, 4 insertions(+), 24 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2bd1298..049b400 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1153,33 +1153,13 @@ dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	mbuf_offset = 0;
 	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
 	while (1) {
-		uint64_t hpa;
 
 		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
 
-		/*
-		 * A desc buf might across two host physical pages that are
-		 * not continuous. In such case (gpa_to_hpa returns 0), data
-		 * will be copied even though zero copy is enabled.
-		 */
-		if (unlikely(dev->dequeue_zero_copy && (hpa = gpa_to_hpa(dev,
-					desc->addr + desc_offset, cpy_len)))) {
-			cur->data_len = cpy_len;
-			cur->data_off = 0;
-			cur->buf_addr = (void *)(uintptr_t)desc_addr;
-			cur->buf_physaddr = hpa;
-
-			/*
-			 * In zero copy mode, one mbuf can only reference data
-			 * for one or partial of one desc buff.
-			 */
-			mbuf_avail = cpy_len;
-		} else {
-			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
-							   mbuf_offset),
-				(void *)((uintptr_t)(desc_addr + desc_offset)),
-				cpy_len);
-		}
+		rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
+						   mbuf_offset),
+			(void *)((uintptr_t)(desc_addr + desc_offset)),
+			cpy_len);
 
 		mbuf_avail  -= cpy_len;
 		mbuf_offset += cpy_len;
-- 
2.7.4


* [RFC 27/29] vhost: various optimizations for Tx
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (25 preceding siblings ...)
  2017-06-21  2:58 ` [RFC 26/29] vhost: remove dead code Tiwei Bie
@ 2017-06-21  2:58 ` Tiwei Bie
  2017-06-21  2:58 ` [RFC 28/29] vhost: make the code more readable Tiwei Bie
  2017-06-21  2:58 ` [RFC 29/29] vhost: update and return descs in batch Tiwei Bie
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:58 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 lib/librte_vhost/virtio_net.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 049b400..2d111a3 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -604,6 +604,8 @@ enqueue_pkt(struct virtio_net *dev, struct vring_desc_1_1 *descs,
 	if (unlikely(desc->len < dev->vhost_hlen) || !desc_addr)
 		return -1;
 
+	desc->len = m->pkt_len + dev->vhost_hlen;
+
 	rte_prefetch0((void *)(uintptr_t)desc_addr);
 
 	hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc_addr;
@@ -632,6 +634,7 @@ enqueue_pkt(struct virtio_net *dev, struct vring_desc_1_1 *descs,
 				return -1;
 			}
 
+			rte_panic("Shouldn't reach here\n");
 			/** NOTE: we should not come here with current
 			    virtio-user implementation **/
 			desc_idx = (desc_idx + 1); // & (vq->size - 1);
@@ -680,6 +683,8 @@ vhost_enqueue_burst_1_1(struct virtio_net *dev, uint16_t queue_id,
 
 	head_idx = vq->last_used_idx;
 	desc = vq->desc_1_1;
+	count = RTE_MIN(count, (uint32_t)MAX_PKT_BURST);
+
 	for (i = 0; i < count; i++) {
 		/* XXX: there is an assumption that no desc will be chained */
 		idx = vq->last_used_idx & (vq->size - 1);
@@ -693,11 +698,12 @@ vhost_enqueue_burst_1_1(struct virtio_net *dev, uint16_t queue_id,
 	}
 	count = i;
 
-	rte_smp_wmb();
-	for (i = 0; i < count; i++) {
-		idx = (head_idx + i) & (vq->size - 1);
-		desc[idx].flags &= ~DESC_HW;
-		desc[idx].len    = pkts[i]->pkt_len + dev->vhost_hlen;
+	if (count) {
+		rte_smp_wmb();
+		for (i = 0; i < count; i++) {
+			idx = (head_idx + i) & (vq->size - 1);
+			desc[idx].flags &= ~DESC_HW;
+		}
 	}
 
 	return count;
-- 
2.7.4


* [RFC 28/29] vhost: make the code more readable
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (26 preceding siblings ...)
  2017-06-21  2:58 ` [RFC 27/29] vhost: various optimizations for Tx Tiwei Bie
@ 2017-06-21  2:58 ` Tiwei Bie
  2017-06-21  2:58 ` [RFC 29/29] vhost: update and return descs in batch Tiwei Bie
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:58 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 lib/librte_vhost/virtio_net.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2d111a3..8344bcb 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1264,10 +1264,10 @@ vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	vq->last_used_idx = desc_idx;
 	if (likely(i)) {
-		for (desc_idx = 1;
-		     desc_idx < (uint16_t)(vq->last_used_idx - head_idx);
+		for (desc_idx = head_idx + 1;
+		     desc_idx != vq->last_used_idx;
 		     desc_idx++) {
-			desc[(desc_idx + head_idx) & (vq->size - 1)].flags = 0;
+			desc[desc_idx & (vq->size - 1)].flags = 0;
 		}
 		rte_smp_wmb();
 		desc[head_idx & (vq->size - 1)].flags = 0;
-- 
2.7.4


* [RFC 29/29] vhost: update and return descs in batch
  2017-06-21  2:57 [RFC 00/29] latest virtio1.1 prototype Tiwei Bie
                   ` (27 preceding siblings ...)
  2017-06-21  2:58 ` [RFC 28/29] vhost: make the code more readable Tiwei Bie
@ 2017-06-21  2:58 ` Tiwei Bie
  28 siblings, 0 replies; 30+ messages in thread
From: Tiwei Bie @ 2017-06-21  2:58 UTC (permalink / raw)
  To: dev

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 lib/librte_vhost/virtio_net.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 8344bcb..7f76b1a 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -604,8 +604,6 @@ enqueue_pkt(struct virtio_net *dev, struct vring_desc_1_1 *descs,
 	if (unlikely(desc->len < dev->vhost_hlen) || !desc_addr)
 		return -1;
 
-	desc->len = m->pkt_len + dev->vhost_hlen;
-
 	rte_prefetch0((void *)(uintptr_t)desc_addr);
 
 	hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc_addr;
@@ -699,11 +697,15 @@ vhost_enqueue_burst_1_1(struct virtio_net *dev, uint16_t queue_id,
 	count = i;
 
 	if (count) {
-		rte_smp_wmb();
-		for (i = 0; i < count; i++) {
+		for (i = 1; i < count; i++) {
 			idx = (head_idx + i) & (vq->size - 1);
+			desc[idx].len = pkts[i]->pkt_len + dev->vhost_hlen;
 			desc[idx].flags &= ~DESC_HW;
 		}
+		desc[head_idx & (vq->size - 1)].len =
+			pkts[0]->pkt_len + dev->vhost_hlen;
+		rte_smp_wmb();
+		desc[head_idx & (vq->size - 1)].flags &= ~DESC_HW;
 	}
 
 	return count;
-- 
2.7.4
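
The hunk above settles the completion order on the vhost side: used lengths
are written for the whole batch first, and DESC_HW is cleared on the head slot
only after a write barrier, so the guest (which dequeues in order starting at
the head) never observes a partially returned batch. A self-contained sketch
of that rule, using the same stand-in declarations and assumed DESC_HW value
as the earlier sketches:

#include <stdint.h>

#define DESC_HW 0x0080	/* assumed ownership bit: slot owned by the device */

struct vring_desc_1_1 {	/* stand-in layout, as in the earlier sketches */
	uint64_t addr;
	uint32_t len;
	uint16_t index;
	uint16_t flags;
};

/* Stand-in for rte_smp_wmb(). */
static inline void wmb_standin(void)
{
	__atomic_thread_fence(__ATOMIC_RELEASE);
}

/*
 * Return 'count' used slots, starting at 'head', to the driver.
 * used_len[i] is the length consumed for slot head + i; ring_size is a
 * power of two.
 */
void
return_used_descs(struct vring_desc_1_1 *desc, uint16_t ring_size,
		  uint16_t head, const uint32_t *used_len, uint16_t count)
{
	uint16_t i, idx;

	if (count == 0)
		return;

	/* Trailing slots: write the used length and give ownership back. */
	for (i = 1; i < count; i++) {
		idx = (uint16_t)(head + i) & (ring_size - 1);
		desc[idx].len = used_len[i];
		desc[idx].flags &= ~DESC_HW;
	}

	/* Head slot: length now, ownership only after the barrier. */
	desc[head & (ring_size - 1)].len = used_len[0];
	wmb_standin();
	desc[head & (ring_size - 1)].flags &= ~DESC_HW;
}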

