netdev.vger.kernel.org archive mirror
* [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions
@ 2017-04-15 16:38 Vladislav Yasevich
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 1/6] virtio-net: Remove the use of the padded vnet_header structure Vladislav Yasevich
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Vladislav Yasevich @ 2017-04-15 16:38 UTC (permalink / raw)
  To: netdev; +Cc: virtio-dev, mst, virtualization, maxime.coquelin

Currently the virtio net header is fixed size and adding things to it is
rather difficult.  This series attempts to add the infrastructure as well as
some extensions that try to resolve some deficiencies we currently have.

First, the vnet header only has space for 16 flags.  This may not be enough
in the future.  The extensions will provide space for 32 possible extension
flags and 32 possible extensions.  These flags will be carried in the
first pseudo extension header, whose presence is indicated by a flag in
the virtio net header.

The extensions themselves will immediately follow the extension header.
They will be added to the packet in the same order as they appear in the
extension flags.  No padding is placed between the extensions, and any
extension that was negotiated but is not used by a given packet turns
into trailing padding.

For example:
 | vnet mrg hdr | ext hdr | ext 1 | ext 2 | ext 5 | .. pad .. | packet data |
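The size accounting this layout implies can be sketched as follows.  This is
a stand-alone illustration, not the series' code: the flag values and the
4-byte extension sizes are assumptions chosen for the example.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative extension flags (bit order == order in the packet). */
#define EXT_F_IP6FRAG (1u << 0)   /* assumed 4-byte fragment id  */
#define EXT_F_VLAN    (1u << 1)   /* assumed 4-byte tci + proto  */

/* Bytes of extension data actually present for one packet.
 * Extensions appear back to back, in flag-bit order, no padding
 * between them. */
static size_t ext_bytes_used(uint32_t pkt_flags)
{
	size_t used = 0;

	if (pkt_flags & EXT_F_IP6FRAG)
		used += 4;
	if (pkt_flags & EXT_F_VLAN)
		used += 4;
	return used;
}

/* Space reserved after the extension header covers every negotiated
 * extension; whatever a given packet does not use is trailing pad. */
static size_t ext_bytes_reserved(uint32_t negotiated_flags)
{
	return ext_bytes_used(negotiated_flags);
}
```

So with both extensions negotiated but only the fragment id present, 4 of
the 8 reserved bytes become trailing padding, matching the diagram above.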

Extensions proposed in this series are:
 - IPv6 fragment id extension
   * Currently, the guest generated fragment id is discarded and the host
     generates an IPv6 fragment id if the packet has to be fragmented.  The
     code attempts to add time based perturbation to id generation to make
     it harder to guess the next fragment id to be used.  However, doing this
     on the host may result in less perturbation (due to different timing)
     and might make id guessing easier.  Ideally, the ids generated by the
     guest should be used.  One could also argue that we are "violating" the
     IPv6 protocol under a _strict_ interpretation of the spec.

 - VLAN header acceleration
   * Currently virtio does not do vlan header acceleration and instead
     uses software tagging.  One of the first things that the host will do is
     strip the vlan header out.  When passing the packet to a guest, the
     vlan header is re-inserted into the packet.  We can skip all that work
     if we can pass the vlan data in accelerated format; then the host will
     not do any extra work.  However, so far this has yielded only a very
     small perf bump (~1%).  I am still looking into this.

 - UDP tunnel offload
   * Similar to vlan acceleration, with this extension we can pass additional
     data to the host to support GSO with UDP tunnels and possibly other
     encapsulations.  This yields a significant performance improvement
     (still testing the remote checksum code).

An additional extension that is unfinished (due to still testing for any
side-effects) is checksum passthrough to support drivers that set
CHECKSUM_COMPLETE.  This would eliminate the need for guests to compute
the software checksum.

This series only takes care of virtio-net.  I have additional patches for the
host side (vhost and tap/macvtap, as well as qemu), but wanted to get feedback
on the general approach first.

Vladislav Yasevich (6):
  virtio-net: Remove the use of the padded vnet_header structure
  virtio-net: make header length handling uniform
  virtio_net: Add basic skeleton for handling vnet header extensions.
  virtio-net: Add support for IPv6 fragment id vnet header extension.
  virtio-net: Add support for vlan acceleration vnet header extension.
  virtio-net: Add support for UDP tunnel offload and extension.

 drivers/net/virtio_net.c        | 132 +++++++++++++++++++++++++++++++++-------
 include/linux/skbuff.h          |   5 ++
 include/linux/virtio_net.h      |  91 ++++++++++++++++++++++++++-
 include/uapi/linux/virtio_net.h |  38 ++++++++++++
 4 files changed, 242 insertions(+), 24 deletions(-)

-- 
2.7.4





* [PATCH RFC (resend) net-next 1/6] virtio-net: Remove the use of the padded vnet_header structure
  2017-04-15 16:38 [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Vladislav Yasevich
@ 2017-04-15 16:38 ` Vladislav Yasevich
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 2/6] virtio-net: make header length handling uniform Vladislav Yasevich
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Vladislav Yasevich @ 2017-04-15 16:38 UTC (permalink / raw)
  To: netdev; +Cc: virtio-dev, mst, virtualization, maxime.coquelin

We can replace the structure with a properly aligned size instead.
The current structure pads the header to a 16 byte boundary, so
preserve that alignment.
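The arithmetic the patch switches to can be sketched in isolation; this
assumes the classic virtio header sizes (10 bytes plain, 12 bytes with
mergeable rx buffers) and mirrors `__ALIGN_KERNEL_MASK(hdr_len, 15)`:

```c
#include <assert.h>

/* Stand-alone analogue of the new padded_vnet_hdr() helper: round the
 * negotiated header length up to the next 16-byte boundary unless
 * mergeable rx buffers are in use (where no padding is needed). */
static unsigned char padded_len(unsigned char hdr_len, int mergeable)
{
	if (!mergeable)
		hdr_len = (hdr_len + 15) & ~15;	/* align to 16 */
	return hdr_len;
}
```

Both the 10-byte and 12-byte headers round up to 16, which is exactly what
the old 12-byte-struct-plus-4-bytes-of-padding layout produced.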

Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
---
 drivers/net/virtio_net.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index b0d241d..2937a98 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -174,15 +174,20 @@ struct virtnet_info {
 	u32 speed;
 };
 
-struct padded_vnet_hdr {
-	struct virtio_net_hdr_mrg_rxbuf hdr;
+static inline u8 padded_vnet_hdr(struct virtnet_info *vi)
+{
+	u8 hdr_len = vi->hdr_len;
+
 	/*
 	 * hdr is in a separate sg buffer, and data sg buffer shares same page
 	 * with this header sg. This padding makes next sg 16 byte aligned
 	 * after the header.
 	 */
-	char padding[4];
-};
+	if (!vi->mergeable_rx_bufs)
+		hdr_len = __ALIGN_KERNEL_MASK(hdr_len, 15);
+
+	return hdr_len;
+}
 
 /* Converting between virtqueue no. and kernel tx/rx queue no.
  * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
@@ -289,10 +294,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 	hdr = skb_vnet_hdr(skb);
 
 	hdr_len = vi->hdr_len;
-	if (vi->mergeable_rx_bufs)
-		hdr_padded_len = sizeof *hdr;
-	else
-		hdr_padded_len = sizeof(struct padded_vnet_hdr);
+	hdr_padded_len = padded_vnet_hdr(vi);
 
 	memcpy(hdr, p, hdr_len);
 
@@ -840,7 +842,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 	sg_set_buf(&rq->sg[0], p, vi->hdr_len);
 
 	/* rq->sg[1] for data packet, from offset */
-	offset = sizeof(struct padded_vnet_hdr);
+	offset = padded_vnet_hdr(vi);
 	sg_set_buf(&rq->sg[1], p + offset, PAGE_SIZE - offset);
 
 	/* chain first in list head */
@@ -1790,8 +1792,8 @@ static int virtnet_reset(struct virtnet_info *vi, int curr_qp, int xdp_qp)
 
 static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
 {
-	unsigned long int max_sz = PAGE_SIZE - sizeof(struct padded_vnet_hdr);
 	struct virtnet_info *vi = netdev_priv(dev);
+	unsigned long int max_sz = PAGE_SIZE - padded_vnet_hdr(vi);
 	struct bpf_prog *old_prog;
 	u16 xdp_qp = 0, curr_qp;
 	int i, err;
-- 
2.7.4


* [PATCH RFC (resend) net-next 2/6] virtio-net: make header length handling uniform
  2017-04-15 16:38 [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Vladislav Yasevich
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 1/6] virtio-net: Remove the use of the padded vnet_header structure Vladislav Yasevich
@ 2017-04-15 16:38 ` Vladislav Yasevich
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 3/6] virtio_net: Add basic skeleton for handling vnet header extensions Vladislav Yasevich
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Vladislav Yasevich @ 2017-04-15 16:38 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, virtio-dev, mst, jasowang, maxime.coquelin,
	Vladislav Yasevich

Consistently use the hdr_len stored in the virtnet_info
structure instead of directly using specific sizes.
This will be required as the size of the virtio net
header grows due to future extensions.

Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
---
 drivers/net/virtio_net.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 2937a98..5ad6ee6 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -855,9 +855,9 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 	return err;
 }
 
-static unsigned int get_mergeable_buf_len(struct ewma_pkt_len *avg_pkt_len)
+static unsigned int get_mergeable_buf_len(struct ewma_pkt_len *avg_pkt_len,
+					  u8 hdr_len)
 {
-	const size_t hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
 	unsigned int len;
 
 	len = hdr_len + clamp_t(unsigned int, ewma_pkt_len_read(avg_pkt_len),
@@ -875,7 +875,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
 	int err;
 	unsigned int len, hole;
 
-	len = get_mergeable_buf_len(&rq->mrg_avg_pkt_len);
+	len = get_mergeable_buf_len(&rq->mrg_avg_pkt_len, vi->hdr_len);
 	if (unlikely(!skb_page_frag_refill(len + headroom, alloc_frag, gfp)))
 		return -ENOMEM;
 
@@ -2188,7 +2188,7 @@ static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue,
 
 	BUG_ON(queue_index >= vi->max_queue_pairs);
 	avg = &vi->rq[queue_index].mrg_avg_pkt_len;
-	return sprintf(buf, "%u\n", get_mergeable_buf_len(avg));
+	return sprintf(buf, "%u\n", get_mergeable_buf_len(avg, vi->hdr_len));
 }
 
 static struct rx_queue_attribute mergeable_rx_buffer_size_attribute =
-- 
2.7.4


* [PATCH RFC (resend) net-next 3/6] virtio_net: Add basic skeleton for handling vnet header extensions.
  2017-04-15 16:38 [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Vladislav Yasevich
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 1/6] virtio-net: Remove the use of the padded vnet_header structure Vladislav Yasevich
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 2/6] virtio-net: make header length handling uniform Vladislav Yasevich
@ 2017-04-15 16:38 ` Vladislav Yasevich
  2017-04-18  2:52   ` Jason Wang
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 4/6] virtio-net: Add support for IPv6 fragment id vnet header extension Vladislav Yasevich
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Vladislav Yasevich @ 2017-04-15 16:38 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, virtio-dev, mst, jasowang, maxime.coquelin,
	Vladislav Yasevich

This is the basic skeleton which will be fleshed out by individual
extensions.

Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
---
 drivers/net/virtio_net.c        | 21 +++++++++++++++++++++
 include/linux/virtio_net.h      | 12 ++++++++++++
 include/uapi/linux/virtio_net.h | 11 +++++++++++
 3 files changed, 44 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 5ad6ee6..08e2709 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -145,6 +145,10 @@ struct virtnet_info {
 	/* Packet virtio header size */
 	u8 hdr_len;
 
+	/* Header extensions were negotiated */
+	bool hdr_ext;
+	u32 ext_mask;
+
 	/* Active statistics */
 	struct virtnet_stats __percpu *stats;
 
@@ -174,6 +178,11 @@ struct virtnet_info {
 	u32 speed;
 };
 
+struct virtio_net_hdr_max {
+	struct virtio_net_hdr_mrg_rxbuf hdr;
+	struct virtio_net_ext_hdr ext_hdr;
+};
+
 static inline u8 padded_vnet_hdr(struct virtnet_info *vi)
 {
 	u8 hdr_len = vi->hdr_len;
@@ -214,6 +223,7 @@ static int rxq2vq(int rxq)
 
 static inline struct virtio_net_hdr_mrg_rxbuf *skb_vnet_hdr(struct sk_buff *skb)
 {
+	BUILD_BUG_ON(sizeof(struct virtio_net_hdr_max) > sizeof(skb->cb));
 	return (struct virtio_net_hdr_mrg_rxbuf *)skb->cb;
 }
 
@@ -767,6 +777,12 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
 		goto frame_err;
 	}
 
+	if (vi->hdr_ext &&
+	    virtio_net_ext_to_skb(skb,
+				  (struct virtio_net_ext_hdr *)(hdr + 1))) {
+		goto frame_err;
+	}
+
 	skb->protocol = eth_type_trans(skb, dev);
 	pr_debug("Receiving skb proto 0x%04x len %i type %i\n",
 		 ntohs(skb->protocol), skb->len, skb->pkt_type);
@@ -1106,6 +1122,11 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
 	if (vi->mergeable_rx_bufs)
 		hdr->num_buffers = 0;
 
+	if (vi->hdr_ext &&
+	    virtio_net_ext_from_skb(skb, (struct virtio_net_ext_hdr *)(hdr + 1),
+				    vi->ext_mask))
+		BUG();
+
 	sg_init_table(sq->sg, skb_shinfo(skb)->nr_frags + (can_push ? 1 : 2));
 	if (can_push) {
 		__skb_push(skb, hdr_len);
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index 5209b5e..eaa524f 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -100,4 +100,16 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
 	return 0;
 }
 
+static inline int virtio_net_ext_to_skb(struct sk_buff *skb,
+					struct virtio_net_ext_hdr *ext)
+{
+	return 0;
+}
+
+static inline int virtio_net_ext_from_skb(const struct sk_buff *skb,
+					  struct virtio_net_ext_hdr *ext,
+					  __u32 ext_mask)
+{
+	return 0;
+}
 #endif /* _LINUX_VIRTIO_NET_H */
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index fc353b5..0039b72 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -88,6 +88,7 @@ struct virtio_net_config {
 struct virtio_net_hdr_v1 {
 #define VIRTIO_NET_HDR_F_NEEDS_CSUM	1	/* Use csum_start, csum_offset */
 #define VIRTIO_NET_HDR_F_DATA_VALID	2	/* Csum is valid */
+#define VIRTIO_NET_HDR_F_VNET_EXT	4	/* Vnet extensions present */
 	__u8 flags;
 #define VIRTIO_NET_HDR_GSO_NONE		0	/* Not a GSO frame */
 #define VIRTIO_NET_HDR_GSO_TCPV4	1	/* GSO frame, IPv4 TCP (TSO) */
@@ -102,6 +103,16 @@ struct virtio_net_hdr_v1 {
 	__virtio16 num_buffers;	/* Number of merged rx buffers */
 };
 
+/* If the VIRTIO_NET_HDR_F_VNET_EXT flag is set, this header immediately
+ * follows the virtio_net_hdr.  The flags in this header will indicate
+ * which extensions will follow.  The extension data will immediately follow
+ * this header.
+ */
+struct virtio_net_ext_hdr {
+	__u32 flags;
+	__u8 extensions[];
+};
+
 #ifndef VIRTIO_NET_NO_LEGACY
 /* This header comes first in the scatter-gather list.
  * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
-- 
2.7.4


* [PATCH RFC (resend) net-next 4/6] virtio-net: Add support for IPv6 fragment id vnet header extension.
  2017-04-15 16:38 [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Vladislav Yasevich
                   ` (2 preceding siblings ...)
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 3/6] virtio_net: Add basic skeleton for handling vnet header extensions Vladislav Yasevich
@ 2017-04-15 16:38 ` Vladislav Yasevich
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 5/6] virtio-net: Add support for vlan acceleration " Vladislav Yasevich
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Vladislav Yasevich @ 2017-04-15 16:38 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, virtio-dev, mst, jasowang, maxime.coquelin,
	Vladislav Yasevich

This adds the ability to pass a guest generated IPv6 fragment id to the
host hypervisor.  The id is passed as big endian to eliminate unnecessary
conversions.  The host will be able to directly use this id instead of
attempting to generate its own.  This makes the IPv6 fragment id slightly
harder to predict.
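The "no unnecessary conversions" point can be shown concretely.  This is an
illustrative sketch, not the patch's code: the guest converts its id to
network byte order once when filling the extension, and the host then copies
those bytes straight into the IPv6 fragment header.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Guest side: store the fragment id as __be32 in the extension. */
static uint32_t ext_store_frag_id(uint32_t host_order_id)
{
	return htonl(host_order_id);	/* single conversion, guest side */
}

/* Host side: the wire format already matches the IPv6 fragment
 * header's Identification field, so it is a plain byte copy. */
static void host_use_frag_id(uint32_t be_id, uint8_t frag_hdr_id[4])
{
	memcpy(frag_hdr_id, &be_id, 4);	/* no byte swapping here */
}
```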

Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
---
 drivers/net/virtio_net.c        | 27 +++++++++++++++++++++++++--
 include/linux/virtio_net.h      | 20 ++++++++++++++++++++
 include/uapi/linux/virtio_net.h |  7 +++++++
 3 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 08e2709..18eb0dd 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -181,6 +181,7 @@ struct virtnet_info {
 struct virtio_net_hdr_max {
 	struct virtio_net_hdr_mrg_rxbuf hdr;
 	struct virtio_net_ext_hdr ext_hdr;
+	struct virtio_net_ext_ip6frag ip6f_ext;
 };
 
 static inline u8 padded_vnet_hdr(struct virtnet_info *vi)
@@ -2260,6 +2261,23 @@ static bool virtnet_validate_features(struct virtio_device *vdev)
 	return true;
 }
 
+static void virtnet_init_extensions(struct virtio_device *vdev)
+{
+	struct virtnet_info *vi = vdev->priv;
+
+	/* Start with V1 header size plus the extension header */
+	vi->hdr_len = sizeof(struct virtio_net_hdr_v1) +
+		      sizeof(struct virtio_net_ext_hdr);
+
+	/* now check all the negotiated extensions and add up
+	 * the sizes
+	 */
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID)) {
+		vi->hdr_len += sizeof(u32);
+		vi->ext_mask |= VIRTIO_NET_EXT_F_IP6FRAG;
+	}
+}
+
 #define MIN_MTU ETH_MIN_MTU
 #define MAX_MTU ETH_MAX_MTU
 
@@ -2377,8 +2395,13 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
 		vi->mergeable_rx_bufs = true;
 
-	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) ||
-	    virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID))
+		vi->hdr_ext = true;
+
+	if (vi->hdr_ext)
+		virtnet_init_extensions(vdev);
+	else if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) ||
+		 virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
 		vi->hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
 	else
 		vi->hdr_len = sizeof(struct virtio_net_hdr);
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index eaa524f..3b259dc 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -103,6 +103,16 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
 static inline int virtio_net_ext_to_skb(struct sk_buff *skb,
 					struct virtio_net_ext_hdr *ext)
 {
+	__u8 *ptr = ext->extensions;
+
+	if (ext->flags & VIRTIO_NET_EXT_F_IP6FRAG) {
+		struct virtio_net_ext_ip6frag *fhdr =
+					(struct virtio_net_ext_ip6frag *)ptr;
+
+		skb_shinfo(skb)->ip6_frag_id = fhdr->frag_id;
+		ptr += sizeof(struct virtio_net_ext_ip6frag);
+	}
+
 	return 0;
 }
 
@@ -110,6 +120,16 @@ static inline int virtio_net_ext_from_skb(const struct sk_buff *skb,
 					  struct virtio_net_ext_hdr *ext,
 					  __u32 ext_mask)
 {
+	__u8 *ptr = ext->extensions;
+
+	if ((ext_mask & VIRTIO_NET_EXT_F_IP6FRAG) &&
+	    skb_shinfo(skb)->ip6_frag_id) {
+		struct virtio_net_ext_ip6frag *fhdr =
+					(struct virtio_net_ext_ip6frag *)ptr;
+		fhdr->frag_id = skb_shinfo(skb)->ip6_frag_id;
+		ext->flags |= VIRTIO_NET_EXT_F_IP6FRAG;
+	}
+
 	return 0;
 }
 #endif /* _LINUX_VIRTIO_NET_H */
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 0039b72..eac8d94 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -56,6 +56,7 @@
 #define VIRTIO_NET_F_MQ	22	/* Device supports Receive Flow
 					 * Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
+#define VIRTIO_NET_F_IP6_FRAGID    24	/* Guest provides IPv6 fragment id */
 
 #ifndef VIRTIO_NET_NO_LEGACY
 #define VIRTIO_NET_F_GSO	6	/* Host handles pkts w/ any GSO type */
@@ -109,10 +110,16 @@ struct virtio_net_hdr_v1 {
  * this header.
  */
 struct virtio_net_ext_hdr {
+#define VIRTIO_NET_EXT_F_IP6FRAG	(1<<0)
 	__u32 flags;
 	__u8 extensions[];
 };
 
+/* Carries the guest generated IPv6 fragment id */
+struct virtio_net_ext_ip6frag {
+	__be32 frag_id;
+};
+
 #ifndef VIRTIO_NET_NO_LEGACY
 /* This header comes first in the scatter-gather list.
  * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
-- 
2.7.4


* [PATCH RFC (resend) net-next 5/6] virtio-net: Add support for vlan acceleration vnet header extension.
  2017-04-15 16:38 [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Vladislav Yasevich
                   ` (3 preceding siblings ...)
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 4/6] virtio-net: Add support for IPv6 fragment id vnet header extension Vladislav Yasevich
@ 2017-04-15 16:38 ` Vladislav Yasevich
  2017-04-16  0:28   ` Michael S. Tsirkin
  2017-04-18  2:54   ` Jason Wang
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 6/6] virtio: Add support for UDP tunnel offload and extension Vladislav Yasevich
  2017-04-18  3:01 ` [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Jason Wang
  6 siblings, 2 replies; 16+ messages in thread
From: Vladislav Yasevich @ 2017-04-15 16:38 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, virtio-dev, mst, jasowang, maxime.coquelin,
	Vladislav Yasevich

This extension allows us to pass vlan ID and vlan protocol data to the
host hypervisor as part of the vnet header and lets us take advantage
of HW accelerated vlan tagging in the host.  It requires support in the
host to negotiate the feature.  When the extension is enabled, the
virtio device will enable HW accelerated vlan features.
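The transmit side of this extension can be sketched outside the kernel.
This is an assumed, simplified shape (the real code uses `vlan_get_tag()`
and `skb_vlan_tag_present()` on a struct sk_buff): the extension record is
filled only when an accelerated tag is present, and is otherwise left as
trailing padding.

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the layout of struct virtio_net_ext_vlan in this patch
 * (two 16-bit fields: tag control info and vlan protocol). */
struct ext_vlan {
	uint16_t vlan_tci;
	uint16_t vlan_proto;
};

/* Returns 1 when the extension was filled (caller would then set
 * VIRTIO_NET_EXT_F_VLAN in the extension header flags), 0 when the
 * skb carries no accelerated tag and the record stays as padding. */
static int fill_vlan_ext(struct ext_vlan *v, int tag_present,
			 uint16_t tci, uint16_t proto)
{
	if (!tag_present)
		return 0;
	v->vlan_tci = tci;
	v->vlan_proto = proto;
	return 1;
}
```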

Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
---
 drivers/net/virtio_net.c        | 17 ++++++++++++++++-
 include/linux/virtio_net.h      | 17 +++++++++++++++++
 include/uapi/linux/virtio_net.h |  7 +++++++
 3 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 18eb0dd..696ef4a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -182,6 +182,7 @@ struct virtio_net_hdr_max {
 	struct virtio_net_hdr_mrg_rxbuf hdr;
 	struct virtio_net_ext_hdr ext_hdr;
 	struct virtio_net_ext_ip6frag ip6f_ext;
+	struct virtio_net_ext_vlan vlan_ext;
 };
 
 static inline u8 padded_vnet_hdr(struct virtnet_info *vi)
@@ -2276,6 +2277,11 @@ static void virtnet_init_extensions(struct virtio_device *vdev)
 		vi->hdr_len += sizeof(u32);
 		vi->ext_mask |= VIRTIO_NET_EXT_F_IP6FRAG;
 	}
+
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD)) {
+		vi->hdr_len += sizeof(struct virtio_net_ext_vlan);
+		vi->ext_mask |= VIRTIO_NET_EXT_F_VLAN;
+	}
 }
 
 #define MIN_MTU ETH_MIN_MTU
@@ -2352,6 +2358,14 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
 		dev->features |= NETIF_F_RXCSUM;
 
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD)) {
+		dev->features |= NETIF_F_HW_VLAN_CTAG_TX |
+				 NETIF_F_HW_VLAN_CTAG_RX |
+				 NETIF_F_HW_VLAN_STAG_TX |
+				 NETIF_F_HW_VLAN_STAG_RX;
+	}
+
+
 	dev->vlan_features = dev->features;
 
 	/* MTU range: 68 - 65535 */
@@ -2395,7 +2409,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
 		vi->mergeable_rx_bufs = true;
 
-	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID))
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID) ||
+	    virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD))
 		vi->hdr_ext = true;
 
 	if (vi->hdr_ext)
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index 3b259dc..e790191 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -113,6 +113,14 @@ static inline int virtio_net_ext_to_skb(struct sk_buff *skb,
 		ptr += sizeof(struct virtio_net_ext_ip6frag);
 	}
 
+	if (ext->flags & VIRTIO_NET_EXT_F_VLAN) {
+		struct virtio_net_ext_vlan *vhdr =
+					(struct virtio_net_ext_vlan *)ptr;
+
+		__vlan_hwaccel_put_tag(skb, vhdr->vlan_proto, vhdr->vlan_tci);
+		ptr += sizeof(struct virtio_net_ext_vlan);
+	}
+
 	return 0;
 }
 
@@ -130,6 +138,15 @@ static inline int virtio_net_ext_from_skb(const struct sk_buff *skb,
 		ext->flags |= VIRTIO_NET_EXT_F_IP6FRAG;
 	}
 
+	if (ext_mask & VIRTIO_NET_EXT_F_VLAN && skb_vlan_tag_present(skb)) {
+		struct virtio_net_ext_vlan *vhdr =
+					(struct virtio_net_ext_vlan *)ptr;
+
+		vlan_get_tag(skb, &vhdr->vlan_tci);
+		vhdr->vlan_proto = skb->vlan_proto;
+		ext->flags |= VIRTIO_NET_EXT_F_VLAN;
+	}
+
 	return 0;
 }
 #endif /* _LINUX_VIRTIO_NET_H */
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index eac8d94..6125de7 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -57,6 +57,7 @@
 					 * Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
 #define VIRTIO_NET_F_IP6_FRAGID    24	/* Guest provides IPv6 fragment id */
+#define VIRTIO_NET_F_VLAN_OFFLOAD 25	/* Host supports VLAN acceleration */
 
 #ifndef VIRTIO_NET_NO_LEGACY
 #define VIRTIO_NET_F_GSO	6	/* Host handles pkts w/ any GSO type */
@@ -111,6 +112,7 @@ struct virtio_net_hdr_v1 {
  */
 struct virtio_net_ext_hdr {
 #define VIRTIO_NET_EXT_F_IP6FRAG	(1<<0)
+#define VIRTIO_NET_EXT_F_VLAN		(1<<1)
 	__u32 flags;
 	__u8 extensions[];
 };
@@ -120,6 +122,11 @@ struct virtio_net_ext_ip6frag {
 	__be32 frag_id;
 };
 
+struct virtio_net_ext_vlan {
+	__be16 vlan_tci;
+	__be16 vlan_proto;
+};
+
 #ifndef VIRTIO_NET_NO_LEGACY
 /* This header comes first in the scatter-gather list.
  * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
-- 
2.7.4


* [PATCH RFC (resend) net-next 6/6] virtio: Add support for UDP tunnel offload and extension.
  2017-04-15 16:38 [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Vladislav Yasevich
                   ` (4 preceding siblings ...)
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 5/6] virtio-net: Add support for vlan acceleration " Vladislav Yasevich
@ 2017-04-15 16:38 ` Vladislav Yasevich
  2017-04-18  3:01 ` [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Jason Wang
  6 siblings, 0 replies; 16+ messages in thread
From: Vladislav Yasevich @ 2017-04-15 16:38 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, virtio-dev, mst, jasowang, maxime.coquelin,
	Vladislav Yasevich, Vlad Yasevich

This patch provides the ability to negotiate UDP tunnel offload features
in the virtio devices as well as the necessary extension to pass additional
information to the host.  This work is based on earlier work by Jarno
Rajahalme <jarno@ovn.org>.

The inner transport header offset is carried in a virtio net header
extension which is negotiated between the host and the guest.  When
the extension is enabled, device features to support UDP tunnel offload
are enabled.
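The inner header offset is computed the same way as the small
`skb_inner_mac_offset()` helper this patch adds to skbuff.h.  A stand-alone
analogue (the struct here is a stand-in for struct sk_buff, not the real
definition):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the two sk_buff fields involved. */
struct fake_skb {
	const uint8_t *data;		/* start of packet data       */
	const uint8_t *inner_mac_header;/* encapsulated frame's start */
};

/* Analogue of skb_inner_mac_offset(): byte offset of the inner mac
 * header from the start of packet data, which is what gets carried
 * in the UDP tunnel extension. */
static int inner_mac_offset(const struct fake_skb *skb)
{
	return (int)(skb->inner_mac_header - skb->data);
}
```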

Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
---
 drivers/net/virtio_net.c        | 36 ++++++++++++++++++++++++++------
 include/linux/skbuff.h          |  5 +++++
 include/linux/virtio_net.h      | 46 ++++++++++++++++++++++++++++++++++++++---
 include/uapi/linux/virtio_net.h | 13 ++++++++++++
 4 files changed, 91 insertions(+), 9 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 696ef4a..d122ecc 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -183,6 +183,7 @@ struct virtio_net_hdr_max {
 	struct virtio_net_ext_hdr ext_hdr;
 	struct virtio_net_ext_ip6frag ip6f_ext;
 	struct virtio_net_ext_vlan vlan_ext;
+	struct virtio_net_ext_udp_tunnel enc_ext;
 };
 
 static inline u8 padded_vnet_hdr(struct virtnet_info *vi)
@@ -738,6 +739,7 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
 	struct sk_buff *skb;
 	struct virtio_net_hdr_mrg_rxbuf *hdr;
 	int ret;
+	bool little_endian = virtio_is_little_endian(vi->vdev);
 
 	if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
 		pr_debug("%s: short packet %i\n", dev->name, len);
@@ -771,8 +773,7 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
 	if (hdr->hdr.flags & VIRTIO_NET_HDR_F_DATA_VALID)
 		skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-	if (virtio_net_hdr_to_skb(skb, &hdr->hdr,
-				  virtio_is_little_endian(vi->vdev))) {
+	if (virtio_net_hdr_to_skb(skb, &hdr->hdr, little_endian)) {
 		net_warn_ratelimited("%s: bad gso: type: %u, size: %u\n",
 				     dev->name, hdr->hdr.gso_type,
 				     hdr->hdr.gso_size);
@@ -781,7 +782,8 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
 
 	if (vi->hdr_ext &&
 	    virtio_net_ext_to_skb(skb,
-				  (struct virtio_net_ext_hdr *)(hdr + 1))) {
+				  (struct virtio_net_ext_hdr *)(hdr + 1),
+				  little_endian)) {
 		goto frame_err;
 	}
 
@@ -1104,6 +1106,7 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
 	unsigned num_sg;
 	unsigned hdr_len = vi->hdr_len;
 	bool can_push;
+	bool little_endian = virtio_is_little_endian(vi->vdev);
 
 	pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest);
 
@@ -1126,7 +1129,7 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
 
 	if (vi->hdr_ext &&
 	    virtio_net_ext_from_skb(skb, (struct virtio_net_ext_hdr *)(hdr + 1),
-				    vi->ext_mask))
+				    vi->ext_mask, little_endian))
 		BUG();
 
 	sg_init_table(sq->sg, skb_shinfo(skb)->nr_frags + (can_push ? 1 : 2));
@@ -2282,6 +2285,11 @@ static void virtnet_init_extensions(struct virtio_device *vdev)
 		vi->hdr_len += sizeof(struct virtio_net_ext_vlan);
 		vi->ext_mask |= VIRTIO_NET_EXT_F_VLAN;
 	}
+
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_UDP_TUNNEL)) {
+		vi->hdr_len += sizeof(struct virtio_net_ext_udp_tunnel);
+		vi->ext_mask |= VIRTIO_NET_EXT_F_UDP_TUNNEL;
+	}
 }
 
 #define MIN_MTU ETH_MIN_MTU
@@ -2338,6 +2346,12 @@ static int virtnet_probe(struct virtio_device *vdev)
 		if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) {
 			dev->hw_features |= NETIF_F_TSO | NETIF_F_UFO
 				| NETIF_F_TSO_ECN | NETIF_F_TSO6;
+
+			if (virtio_has_feature(vdev, VIRTIO_NET_F_UDP_TUNNEL)) {
+				dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
+						    NETIF_F_GSO_UDP_TUNNEL_CSUM |
+						    NETIF_F_GSO_TUNNEL_REMCSUM;
+			}
 		}
 		/* Individual feature bits: what can host handle? */
 		if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO4))
@@ -2348,11 +2362,20 @@ static int virtnet_probe(struct virtio_device *vdev)
 			dev->hw_features |= NETIF_F_TSO_ECN;
 		if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UFO))
 			dev->hw_features |= NETIF_F_UFO;
+		if (virtio_has_feature(vdev, VIRTIO_NET_F_UDP_TUNNEL)) {
+			dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
+					    NETIF_F_GSO_UDP_TUNNEL_CSUM |
+					    NETIF_F_GSO_TUNNEL_REMCSUM;
+		}
+
 
 		dev->features |= NETIF_F_GSO_ROBUST;
 
 		if (gso)
-			dev->features |= dev->hw_features & (NETIF_F_ALL_TSO|NETIF_F_UFO);
+			dev->features |= dev->hw_features &
+					 (NETIF_F_ALL_TSO |
+					  NETIF_F_UFO |
+					  NETIF_F_GSO_ENCAP_ALL);
 		/* (!csum && gso) case will be fixed by register_netdev() */
 	}
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
@@ -2410,7 +2433,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 		vi->mergeable_rx_bufs = true;
 
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID) ||
-	    virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD))
+	    virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD) ||
+	    virtio_has_feature(vdev, VIRTIO_NET_F_UDP_TUNNEL))
 		vi->hdr_ext = true;
 
 	if (vi->hdr_ext)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 741d75c..ad363da 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2270,6 +2270,11 @@ static inline int skb_inner_network_offset(const struct sk_buff *skb)
 	return skb_inner_network_header(skb) - skb->data;
 }
 
+static inline int skb_inner_mac_offset(const struct sk_buff *skb)
+{
+	return skb_inner_mac_header(skb) - skb->data;
+}
+
 static inline int pskb_network_may_pull(struct sk_buff *skb, unsigned int len)
 {
 	return pskb_may_pull(skb, skb_network_offset(skb) + len);
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index e790191..716ac32 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -11,7 +11,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 	unsigned short gso_type = 0;
 
 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
-		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
+		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_FLAGS) {
 		case VIRTIO_NET_HDR_GSO_TCPV4:
 			gso_type = SKB_GSO_TCPV4;
 			break;
@@ -27,6 +27,14 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 
 		if (hdr->gso_type & VIRTIO_NET_HDR_GSO_ECN)
 			gso_type |= SKB_GSO_TCP_ECN;
+		if (hdr->gso_type & VIRTIO_NET_HDR_GSO_UDP_TUNNEL)
+			gso_type |= SKB_GSO_UDP_TUNNEL;
+		if (hdr->gso_type & VIRTIO_NET_HDR_GSO_UDP_TUNNEL_CSUM)
+			gso_type |= SKB_GSO_UDP_TUNNEL_CSUM;
+		if (hdr->gso_type & VIRTIO_NET_HDR_GSO_TUNNEL_REMCSUM) {
+			gso_type |= SKB_GSO_TUNNEL_REMCSUM;
+			skb->remcsum_offload = true;
+		}
 
 		if (hdr->gso_size == 0)
 			return -EINVAL;
@@ -77,8 +85,15 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
 			hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP;
 		else
 			return -EINVAL;
+
 		if (sinfo->gso_type & SKB_GSO_TCP_ECN)
 			hdr->gso_type |= VIRTIO_NET_HDR_GSO_ECN;
+		if (sinfo->gso_type & SKB_GSO_UDP_TUNNEL)
+			hdr->gso_type |= VIRTIO_NET_HDR_GSO_UDP_TUNNEL;
+		if (sinfo->gso_type & SKB_GSO_UDP_TUNNEL_CSUM)
+			hdr->gso_type |= VIRTIO_NET_HDR_GSO_UDP_TUNNEL_CSUM;
+		if (sinfo->gso_type & SKB_GSO_TUNNEL_REMCSUM)
+			hdr->gso_type |= VIRTIO_NET_HDR_GSO_TUNNEL_REMCSUM;
 	} else
 		hdr->gso_type = VIRTIO_NET_HDR_GSO_NONE;
 
@@ -101,7 +116,8 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
 }
 
 static inline int virtio_net_ext_to_skb(struct sk_buff *skb,
-					struct virtio_net_ext_hdr *ext)
+					struct virtio_net_ext_hdr *ext,
+					bool little_endian)
 {
 	__u8 *ptr = ext->extensions;
 
@@ -121,12 +137,27 @@ static inline int virtio_net_ext_to_skb(struct sk_buff *skb,
 		ptr += sizeof(struct virtio_net_ext_vlan);
 	}
 
+	if (ext->flags & VIRTIO_NET_EXT_F_UDP_TUNNEL) {
+		struct virtio_net_ext_udp_tunnel *uhdr =
+					(struct virtio_net_ext_udp_tunnel *)ptr;
+		u16 inner_offset = __virtio16_to_cpu(little_endian,
+						     uhdr->inner_mac_offset);
+
+		skb->encapsulation = 1;
+		skb_set_inner_mac_header(skb, inner_offset);
+		skb_set_inner_network_header(skb, inner_offset + ETH_HLEN);
+		/* this would be set by skb_partial_csum_set */
+		skb_set_inner_transport_header(skb,
+					       skb_checksum_start_offset(skb));
+	}
+
 	return 0;
 }
 
 static inline int virtio_net_ext_from_skb(const struct sk_buff *skb,
 					  struct virtio_net_ext_hdr *ext,
-					  __u32 ext_mask)
+					  __u32 ext_mask,
+					  bool little_endian)
 {
 	__u8 *ptr = ext->extensions;
 
@@ -147,6 +178,15 @@ static inline int virtio_net_ext_from_skb(const struct sk_buff *skb,
 		ext->flags |= VIRTIO_NET_EXT_F_VLAN;
 	}
 
+	if (ext_mask & VIRTIO_NET_EXT_F_UDP_TUNNEL && skb_is_gso(skb) &&
+	    skb->encapsulation) {
+		struct virtio_net_ext_udp_tunnel *uhdr =
+					(struct virtio_net_ext_udp_tunnel *)ptr;
+
+		uhdr->inner_mac_offset = __cpu_to_virtio16(little_endian,
+						skb_inner_mac_offset(skb));
+		ext->flags |= VIRTIO_NET_EXT_F_UDP_TUNNEL;
+	}
+
 	return 0;
 }
 #endif /* _LINUX_VIRTIO_NET_H */
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 6125de7..26ddacf 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -58,6 +58,7 @@
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
 #define VIRTIO_NET_F_IP6_FRAGID    24	/* Host supports IPv6 fragment id */
 #define VIRTIO_NET_F_VLAN_OFFLOAD 25	/* Host supports VLAN acceleration */
+#define VIRTIO_NET_F_UDP_TUNNEL    26   /* Host supports UDP tunnel offload */
 
 #ifndef VIRTIO_NET_NO_LEGACY
 #define VIRTIO_NET_F_GSO	6	/* Host handles pkts w/ any GSO type */
@@ -96,7 +97,14 @@ struct virtio_net_hdr_v1 {
 #define VIRTIO_NET_HDR_GSO_TCPV4	1	/* GSO frame, IPv4 TCP (TSO) */
 #define VIRTIO_NET_HDR_GSO_UDP		3	/* GSO frame, IPv4 UDP (UFO) */
 #define VIRTIO_NET_HDR_GSO_TCPV6	4	/* GSO frame, IPv6 TCP */
+#define VIRTIO_NET_HDR_GSO_UDP_TUNNEL	0x10	/* GSO frame, UDP tunnel */
+#define VIRTIO_NET_HDR_GSO_UDP_TUNNEL_CSUM 0x20 /* GSO frame, UDP tnl + csum */
+#define VIRTIO_NET_HDR_GSO_TUNNEL_REMCSUM  0x40	/* tunnel with TSO + remcsum */
 #define VIRTIO_NET_HDR_GSO_ECN		0x80	/* TCP has ECN set */
+#define VIRTIO_NET_HDR_GSO_FLAGS	(VIRTIO_NET_HDR_GSO_UDP_TUNNEL | \
+					 VIRTIO_NET_HDR_GSO_UDP_TUNNEL_CSUM | \
+					 VIRTIO_NET_HDR_GSO_TUNNEL_REMCSUM | \
+					 VIRTIO_NET_HDR_GSO_ECN)
 	__u8 gso_type;
 	__virtio16 hdr_len;	/* Ethernet + IP + tcp/udp hdrs */
 	__virtio16 gso_size;	/* Bytes to append to hdr_len per frame */
@@ -113,6 +121,7 @@ struct virtio_net_hdr_v1 {
 struct virtio_net_ext_hdr {
 #define VIRTIO_NET_EXT_F_IP6FRAG	(1<<0)
 #define VIRTIO_NET_EXT_F_VLAN		(1<<1)
+#define VIRTIO_NET_EXT_F_UDP_TUNNEL	(1<<2)
 	__u32 flags;
 	__u8 extensions[];
 };
@@ -127,6 +136,10 @@ struct virtio_net_ext_vlan {
 	__be16 vlan_proto;
 };
 
+struct virtio_net_ext_udp_tunnel {
+	__virtio16 inner_mac_offset;
+};
+
 #ifndef VIRTIO_NET_NO_LEGACY
 /* This header comes first in the scatter-gather list.
  * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC (resend) net-next 5/6] virtio-net: Add support for vlan acceleration vnet header extension.
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 5/6] virtio-net: Add support for vlan acceleration " Vladislav Yasevich
@ 2017-04-16  0:28   ` Michael S. Tsirkin
  2017-04-18  2:54   ` Jason Wang
  1 sibling, 0 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2017-04-16  0:28 UTC (permalink / raw)
  To: Vladislav Yasevich
  Cc: netdev, virtualization, virtio-dev, jasowang, maxime.coquelin,
	Vladislav Yasevich

On Sat, Apr 15, 2017 at 12:38:17PM -0400, Vladislav Yasevich wrote:
> This extension allows us to pass vlan ID and vlan protocol data to the
> host hypervisor as part of the vnet header and lets us take advantage
> of HW accelerated vlan tagging in the host.  It requires support in the
> host to negotiate the feature.  When the extension is enabled, the
> virtio device will enable HW accelerated vlan features.
> 
> Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>

Performance data will be required to justify this.


> ---
>  drivers/net/virtio_net.c        | 17 ++++++++++++++++-
>  include/linux/virtio_net.h      | 17 +++++++++++++++++
>  include/uapi/linux/virtio_net.h |  7 +++++++
>  3 files changed, 40 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 18eb0dd..696ef4a 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -182,6 +182,7 @@ struct virtio_net_hdr_max {
>  	struct virtio_net_hdr_mrg_rxbuf hdr;
>  	struct virtio_net_ext_hdr ext_hdr;
>  	struct virtio_net_ext_ip6frag ip6f_ext;
> +	struct virtio_net_ext_vlan vlan_ext;
>  };
>  
>  static inline u8 padded_vnet_hdr(struct virtnet_info *vi)
> @@ -2276,6 +2277,11 @@ static void virtnet_init_extensions(struct virtio_device *vdev)
>  		vi->hdr_len += sizeof(u32);
>  		vi->ext_mask |= VIRTIO_NET_EXT_F_IP6FRAG;
>  	}
> +
> +	if (virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD)) {
> +		vi->hdr_len += sizeof(struct virtio_net_ext_vlan);
> +		vi->ext_mask |= VIRTIO_NET_EXT_F_VLAN;
> +	}
>  }
>  
>  #define MIN_MTU ETH_MIN_MTU
> @@ -2352,6 +2358,14 @@ static int virtnet_probe(struct virtio_device *vdev)
>  	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
>  		dev->features |= NETIF_F_RXCSUM;
>  
> +	if (virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD)) {
> +		dev->features |= NETIF_F_HW_VLAN_CTAG_TX |
> +				 NETIF_F_HW_VLAN_CTAG_RX |
> +				 NETIF_F_HW_VLAN_STAG_TX |
> +				 NETIF_F_HW_VLAN_STAG_RX;
> +	}
> +
> +
>  	dev->vlan_features = dev->features;
>  
>  	/* MTU range: 68 - 65535 */
> @@ -2395,7 +2409,8 @@ static int virtnet_probe(struct virtio_device *vdev)
>  	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
>  		vi->mergeable_rx_bufs = true;
>  
> -	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID))
> +	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID) ||
> +	    virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD))
>  		vi->hdr_ext = true;
>  
>  	if (vi->hdr_ext)
> diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
> index 3b259dc..e790191 100644
> --- a/include/linux/virtio_net.h
> +++ b/include/linux/virtio_net.h
> @@ -113,6 +113,14 @@ static inline int virtio_net_ext_to_skb(struct sk_buff *skb,
>  		ptr += sizeof(struct virtio_net_ext_ip6frag);
>  	}
>  
> +	if (ext->flags & VIRTIO_NET_EXT_F_VLAN) {
> +		struct virtio_net_ext_vlan *vhdr =
> +					(struct virtio_net_ext_vlan *)ptr;
> +
> +		__vlan_hwaccel_put_tag(skb, vhdr->vlan_proto, vhdr->vlan_tci);
> +		ptr += sizeof(struct virtio_net_ext_vlan);
> +	}
> +
>  	return 0;
>  }
>  
> @@ -130,6 +138,15 @@ static inline int virtio_net_ext_from_skb(const struct sk_buff *skb,
>  		ext->flags |= VIRTIO_NET_EXT_F_IP6FRAG;
>  	}
>  
> +	if (ext_mask & VIRTIO_NET_EXT_F_VLAN && skb_vlan_tag_present(skb)) {
> +		struct virtio_net_ext_vlan *vhdr =
> +					(struct virtio_net_ext_vlan *)ptr;
> +
> +		vlan_get_tag(skb, &vhdr->vlan_tci);
> +		vhdr->vlan_proto = skb->vlan_proto;
> +		ext->flags |= VIRTIO_NET_EXT_F_VLAN;
> +	}
> +
>  	return 0;
>  }
>  #endif /* _LINUX_VIRTIO_NET_H */
> diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
> index eac8d94..6125de7 100644
> --- a/include/uapi/linux/virtio_net.h
> +++ b/include/uapi/linux/virtio_net.h
> @@ -57,6 +57,7 @@
>  					 * Steering */
>  #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
>  #define VIRTIO_NET_F_IP6_FRAGID    24	/* Host supports IPv6 fragment id */
> +#define VIRTIO_NET_F_VLAN_OFFLOAD 25	/* Host supports VLAN acceleration */
>  
>  #ifndef VIRTIO_NET_NO_LEGACY
>  #define VIRTIO_NET_F_GSO	6	/* Host handles pkts w/ any GSO type */
> @@ -111,6 +112,7 @@ struct virtio_net_hdr_v1 {
>   */
>  struct virtio_net_ext_hdr {
>  #define VIRTIO_NET_EXT_F_IP6FRAG	(1<<0)
> +#define VIRTIO_NET_EXT_F_VLAN		(1<<1)
>  	__u32 flags;
>  	__u8 extensions[];
>  };
> @@ -120,6 +122,11 @@ struct virtio_net_ext_ip6frag {
>  	__be32 frag_id;
>  };
>  
> +struct virtio_net_ext_vlan {
> +	__be16 vlan_tci;
> +	__be16 vlan_proto;
> +};
> +
>  #ifndef VIRTIO_NET_NO_LEGACY
>  /* This header comes first in the scatter-gather list.
>   * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
> -- 
> 2.7.4

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC (resend) net-next 3/6] virtio_net: Add basic skeleton for handling vnet header extensions.
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 3/6] virtio_net: Add basic skeleton for handling vnet header extensions Vladislav Yasevich
@ 2017-04-18  2:52   ` Jason Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2017-04-18  2:52 UTC (permalink / raw)
  To: Vladislav Yasevich, netdev
  Cc: virtualization, virtio-dev, mst, maxime.coquelin, Vladislav Yasevich



On 2017年04月16日 00:38, Vladislav Yasevich wrote:
> This is the basic skeleton which will be fleshed out by individual
> extensions.
>
> Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
> ---
>   drivers/net/virtio_net.c        | 21 +++++++++++++++++++++
>   include/linux/virtio_net.h      | 12 ++++++++++++
>   include/uapi/linux/virtio_net.h | 11 +++++++++++
>   3 files changed, 44 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 5ad6ee6..08e2709 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -145,6 +145,10 @@ struct virtnet_info {
>   	/* Packet virtio header size */
>   	u8 hdr_len;
>   
> +	/* Header extensions were negotiated */
> +	bool hdr_ext;
> +	u32 ext_mask;
> +
>   	/* Active statistics */
>   	struct virtnet_stats __percpu *stats;
>   
> @@ -174,6 +178,11 @@ struct virtnet_info {
>   	u32 speed;
>   };
>   
> +struct virtio_net_hdr_max {
> +	struct virtio_net_hdr_mrg_rxbuf hdr;
> +	struct virtio_net_ext_hdr ext_hdr;
> +};
> +
>   static inline u8 padded_vnet_hdr(struct virtnet_info *vi)
>   {
>   	u8 hdr_len = vi->hdr_len;
> @@ -214,6 +223,7 @@ static int rxq2vq(int rxq)
>   
>   static inline struct virtio_net_hdr_mrg_rxbuf *skb_vnet_hdr(struct sk_buff *skb)
>   {
> +	BUILD_BUG_ON(sizeof(struct virtio_net_hdr_max) > sizeof(skb->cb));
>   	return (struct virtio_net_hdr_mrg_rxbuf *)skb->cb;
>   }
>   
> @@ -767,6 +777,12 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
>   		goto frame_err;
>   	}
>   
> +	if (vi->hdr_ext &&
> +	    virtio_net_ext_to_skb(skb,
> +				  (struct virtio_net_ext_hdr *)(hdr + 1))) {
> +		goto frame_err;
> +	}
> +
>   	skb->protocol = eth_type_trans(skb, dev);
>   	pr_debug("Receiving skb proto 0x%04x len %i type %i\n",
>   		 ntohs(skb->protocol), skb->len, skb->pkt_type);
> @@ -1106,6 +1122,11 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
>   	if (vi->mergeable_rx_bufs)
>   		hdr->num_buffers = 0;
>   
> +	if (vi->hdr_ext &&
> +	    virtio_net_ext_from_skb(skb, (struct virtio_net_ext_hdr *)(hdr + 1),
> +				    vi->ext_mask))
> +		BUG();
> +
>   	sg_init_table(sq->sg, skb_shinfo(skb)->nr_frags + (can_push ? 1 : 2));
>   	if (can_push) {
>   		__skb_push(skb, hdr_len);
> diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
> index 5209b5e..eaa524f 100644
> --- a/include/linux/virtio_net.h
> +++ b/include/linux/virtio_net.h
> @@ -100,4 +100,16 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
>   	return 0;
>   }
>   
> +static inline int virtio_net_ext_to_skb(struct sk_buff *skb,
> +					struct virtio_net_ext_hdr *ext)
> +{
> +	return 0;
> +}
> +
> +static inline int virtio_net_ext_from_skb(const struct sk_buff *skb,
> +					  struct virtio_net_ext_hdr *ext,
> +					  __u32 ext_mask)
> +{
> +	return 0;
> +}
>   #endif /* _LINUX_VIRTIO_NET_H */
> diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
> index fc353b5..0039b72 100644
> --- a/include/uapi/linux/virtio_net.h
> +++ b/include/uapi/linux/virtio_net.h
> @@ -88,6 +88,7 @@ struct virtio_net_config {
>   struct virtio_net_hdr_v1 {
>   #define VIRTIO_NET_HDR_F_NEEDS_CSUM	1	/* Use csum_start, csum_offset */
>   #define VIRTIO_NET_HDR_F_DATA_VALID	2	/* Csum is valid */
> +#define VIRTIO_NET_HDR_F_VNET_EXT	4	/* Vnet extensions present */
>   	__u8 flags;
>   #define VIRTIO_NET_HDR_GSO_NONE		0	/* Not a GSO frame */
>   #define VIRTIO_NET_HDR_GSO_TCPV4	1	/* GSO frame, IPv4 TCP (TSO) */
> @@ -102,6 +103,16 @@ struct virtio_net_hdr_v1 {
>   	__virtio16 num_buffers;	/* Number of merged rx buffers */
>   };
>   
> +/* If the VIRTIO_NET_HDR_F_VNET_EXT flag is set, this header immediately
> + * follows the virtio_net_hdr.  The flags in this header will indicate
> + * which extension will follow.  The extnsion data will immidiately follow

s/extnsion/extension/

> + * this header.
> + */
> +struct virtio_net_ext_hdr {
> +	__u32 flags;
> +	__u8 extensions[];
> +};
> +
>   #ifndef VIRTIO_NET_NO_LEGACY
>   /* This header comes first in the scatter-gather list.
>    * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC (resend) net-next 5/6] virtio-net: Add support for vlan acceleration vnet header extension.
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 5/6] virtio-net: Add support for vlan acceleration " Vladislav Yasevich
  2017-04-16  0:28   ` Michael S. Tsirkin
@ 2017-04-18  2:54   ` Jason Wang
  1 sibling, 0 replies; 16+ messages in thread
From: Jason Wang @ 2017-04-18  2:54 UTC (permalink / raw)
  To: Vladislav Yasevich, netdev
  Cc: virtualization, virtio-dev, mst, maxime.coquelin, Vladislav Yasevich



On 2017年04月16日 00:38, Vladislav Yasevich wrote:
> This extension allows us to pass vlan ID and vlan protocol data to the
> host hypervisor as part of the vnet header and lets us take advantage
> of HW accelerated vlan tagging in the host.  It requires support in the
> host to negotiate the feature.  When the extension is enabled, the
> virtio device will enable HW accelerated vlan features.
>
> Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
> ---
>   drivers/net/virtio_net.c        | 17 ++++++++++++++++-
>   include/linux/virtio_net.h      | 17 +++++++++++++++++
>   include/uapi/linux/virtio_net.h |  7 +++++++
>   3 files changed, 40 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 18eb0dd..696ef4a 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -182,6 +182,7 @@ struct virtio_net_hdr_max {
>   	struct virtio_net_hdr_mrg_rxbuf hdr;
>   	struct virtio_net_ext_hdr ext_hdr;
>   	struct virtio_net_ext_ip6frag ip6f_ext;
> +	struct virtio_net_ext_vlan vlan_ext;
>   };
>   
>   static inline u8 padded_vnet_hdr(struct virtnet_info *vi)
> @@ -2276,6 +2277,11 @@ static void virtnet_init_extensions(struct virtio_device *vdev)
>   		vi->hdr_len += sizeof(u32);
>   		vi->ext_mask |= VIRTIO_NET_EXT_F_IP6FRAG;
>   	}
> +
> +	if (virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD)) {
> +		vi->hdr_len += sizeof(struct virtio_net_ext_vlan);
> +		vi->ext_mask |= VIRTIO_NET_EXT_F_VLAN;
> +	}
>   }
>   
>   #define MIN_MTU ETH_MIN_MTU
> @@ -2352,6 +2358,14 @@ static int virtnet_probe(struct virtio_device *vdev)
>   	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
>   		dev->features |= NETIF_F_RXCSUM;
>   
> +	if (virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD)) {
> +		dev->features |= NETIF_F_HW_VLAN_CTAG_TX |
> +				 NETIF_F_HW_VLAN_CTAG_RX |
> +				 NETIF_F_HW_VLAN_STAG_TX |
> +				 NETIF_F_HW_VLAN_STAG_RX;
> +	}
> +
> +
>   	dev->vlan_features = dev->features;
>   
>   	/* MTU range: 68 - 65535 */
> @@ -2395,7 +2409,8 @@ static int virtnet_probe(struct virtio_device *vdev)
>   	if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
>   		vi->mergeable_rx_bufs = true;
>   
> -	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID))
> +	if (virtio_has_feature(vdev, VIRTIO_NET_F_IP6_FRAGID) ||
> +	    virtio_has_feature(vdev, VIRTIO_NET_F_VLAN_OFFLOAD))
>   		vi->hdr_ext = true;
>   
>   	if (vi->hdr_ext)
> diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
> index 3b259dc..e790191 100644
> --- a/include/linux/virtio_net.h
> +++ b/include/linux/virtio_net.h
> @@ -113,6 +113,14 @@ static inline int virtio_net_ext_to_skb(struct sk_buff *skb,
>   		ptr += sizeof(struct virtio_net_ext_ip6frag);
>   	}
>   
> +	if (ext->flags & VIRTIO_NET_EXT_F_VLAN) {
> +		struct virtio_net_ext_vlan *vhdr =
> +					(struct virtio_net_ext_vlan *)ptr;
> +
> +		__vlan_hwaccel_put_tag(skb, vhdr->vlan_proto, vhdr->vlan_tci);
> +		ptr += sizeof(struct virtio_net_ext_vlan);
> +	}
> +
>   	return 0;
>   }
>   
> @@ -130,6 +138,15 @@ static inline int virtio_net_ext_from_skb(const struct sk_buff *skb,
>   		ext->flags |= VIRTIO_NET_EXT_F_IP6FRAG;

Looks like you need to advance ptr here?

>   	}
>   
> +	if (ext_mask & VIRTIO_NET_EXT_F_VLAN && skb_vlan_tag_present(skb)) {
> +		struct virtio_net_ext_vlan *vhdr =
> +					(struct virtio_net_ext_vlan *)ptr;
> +
> +		vlan_get_tag(skb, &vhdr->vlan_tci);
> +		vhdr->vlan_proto = skb->vlan_proto;
> +		ext->flags |= VIRTIO_NET_EXT_F_VLAN;

And here?

Thanks
> +	}
> +
>   	return 0;
>   }
>   #endif /* _LINUX_VIRTIO_NET_H */
> diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
> index eac8d94..6125de7 100644
> --- a/include/uapi/linux/virtio_net.h
> +++ b/include/uapi/linux/virtio_net.h
> @@ -57,6 +57,7 @@
>   					 * Steering */
>   #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
>   #define VIRTIO_NET_F_IP6_FRAGID    24	/* Host supports IPv6 fragment id */
> +#define VIRTIO_NET_F_VLAN_OFFLOAD 25	/* Host supports VLAN acceleration */
>   
>   #ifndef VIRTIO_NET_NO_LEGACY
>   #define VIRTIO_NET_F_GSO	6	/* Host handles pkts w/ any GSO type */
> @@ -111,6 +112,7 @@ struct virtio_net_hdr_v1 {
>    */
>   struct virtio_net_ext_hdr {
>   #define VIRTIO_NET_EXT_F_IP6FRAG	(1<<0)
> +#define VIRTIO_NET_EXT_F_VLAN		(1<<1)
>   	__u32 flags;
>   	__u8 extensions[];
>   };
> @@ -120,6 +122,11 @@ struct virtio_net_ext_ip6frag {
>   	__be32 frag_id;
>   };
>   
> +struct virtio_net_ext_vlan {
> +	__be16 vlan_tci;
> +	__be16 vlan_proto;
> +};
> +
>   #ifndef VIRTIO_NET_NO_LEGACY
>   /* This header comes first in the scatter-gather list.
>    * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions
  2017-04-15 16:38 [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Vladislav Yasevich
                   ` (5 preceding siblings ...)
  2017-04-15 16:38 ` [PATCH RFC (resend) net-next 6/6] virtio: Add support for UDP tunnel offload and extension Vladislav Yasevich
@ 2017-04-18  3:01 ` Jason Wang
  2017-04-20 15:34   ` Vlad Yasevich
  6 siblings, 1 reply; 16+ messages in thread
From: Jason Wang @ 2017-04-18  3:01 UTC (permalink / raw)
  To: Vladislav Yasevich, netdev
  Cc: virtualization, virtio-dev, mst, maxime.coquelin, Vladislav Yasevich



On 2017年04月16日 00:38, Vladislav Yasevich wrote:
> Currently the virtio net header is fixed size and adding things to it is rather
> difficult to do.  This series attempts to add the infrastructure as well as some
> extensions that try to resolve some deficiencies we currently have.
>
> First, the vnet header only has space for 16 flags.  This may not be enough
> in the future.  The extensions will provide space for 32 possible extension
> flags and 32 possible extensions.  These flags will be carried in the
> first pseudo extension header, the presence of which will be determined by
> the flag in the virtio net header.
>
> The extensions themselves will immediately follow the extension header itself.
> They will be added to the packet in the same order as they appear in the
> extension flags.  No padding is placed between the extensions, and any
> extensions negotiated but not needed by a given packet will convert to
> trailing padding.

Do we need an explicit padding (e.g. an extension) which could be 
controlled by each side?

>
> For example:
>   | vnet mrg hdr | ext hdr | ext 1 | ext 2 | ext 5 | .. pad .. | packet data |

Just some rough thoughts:

- Would it be better to use TLV instead of a bitmap here? One advantage 
of TLV is that the number of extensions is not limited by the length of 
the bitmap.
- For 1.1, do we really want something like the vnet header? AFAIK, it 
is not used by modern NICs; would it be better to pack all metadata into 
the descriptor itself? This may need some changes in tun/macvtap, but 
looks more PCIe friendly.

Thanks

>
> Extensions proposed in this series are:
>   - IPv6 fragment id extension
>     * Currently, the guest generated fragment id is discarded and the host
>       generates an IPv6 fragment id if the packet has to be fragmented.  The
>       code attempts to add time based perturbation to id generation to make
>       it harder to guess the next fragment id to be used.  However, doing this
>       on the host may result in less perturbation (due to different timing)
>       and might make id guessing easier.  Ideally, the ids generated by the
>       guest should be used.  One could also argue that we are "violating" the
>       IPv6 protocol under a _strict_ interpretation of the spec.
>
>   - VLAN header acceleration
>     * Currently virtio does not do vlan header acceleration and instead
>       uses software tagging.  One of the first things that the host will do is
>       strip the vlan header out.  When passing the packet to a guest the
>       vlan header is re-inserted into the packet.  We can skip all that work
>       if we can pass the vlan data in accelerated format.  Then the host will
>       not do any extra work.  However, so far, this yielded a very small
>       perf bump (only ~1%).  I am still looking into this.
>
>   - UDP tunnel offload
>     * Similar to vlan acceleration, with this extension we can pass additional
>       data to the host to support GSO with udp tunnels and possibly other
>       encapsulations.  This yields a significant performance improvement
>      (still testing remote checksum code).
>
> An additional extension that is unfinished (due to still testing for any
> side-effects) is checksum passthrough to support drivers that set
> CHECKSUM_COMPLETE.  This would eliminate the need for guests to compute
> the software checksum.
>
> This series only takes care of virtio net.  I have additional patches for the
> host side (vhost and tap/macvtap as well as qemu), but wanted to get feedback
> on the general approach first.
>
> Vladislav Yasevich (6):
>    virtio-net: Remove the use the padded vnet_header structure
>    virtio-net: make header length handling uniform
>    virtio_net: Add basic skeleton for handling vnet header extensions.
>    virtio-net: Add support for IPv6 fragment id vnet header extension.
>    virtio-net: Add support for vlan acceleration vnet header extension.
>    virtio-net: Add support for UDP tunnel offload and extension.
>
>   drivers/net/virtio_net.c        | 132 +++++++++++++++++++++++++++++++++-------
>   include/linux/skbuff.h          |   5 ++
>   include/linux/virtio_net.h      |  91 ++++++++++++++++++++++++++-
>   include/uapi/linux/virtio_net.h |  38 ++++++++++++
>   4 files changed, 242 insertions(+), 24 deletions(-)
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions
  2017-04-18  3:01 ` [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Jason Wang
@ 2017-04-20 15:34   ` Vlad Yasevich
  2017-04-21  4:05     ` Jason Wang
  2017-04-24 17:04     ` Michael S. Tsirkin
  0 siblings, 2 replies; 16+ messages in thread
From: Vlad Yasevich @ 2017-04-20 15:34 UTC (permalink / raw)
  To: Jason Wang, Vladislav Yasevich, netdev
  Cc: virtio-dev, mst, maxime.coquelin, virtualization

On 04/17/2017 11:01 PM, Jason Wang wrote:
> 
> 
> On 2017年04月16日 00:38, Vladislav Yasevich wrote:
>> Currently the virtio net header is fixed size and adding things to it is rather
>> difficult to do.  This series attempts to add the infrastructure as well as some
>> extensions that try to resolve some deficiencies we currently have.
>>
>> First, the vnet header only has space for 16 flags.  This may not be enough
>> in the future.  The extensions will provide space for 32 possible extension
>> flags and 32 possible extensions.  These flags will be carried in the
>> first pseudo extension header, the presence of which will be determined by
>> the flag in the virtio net header.
>>
>> The extensions themselves will immediately follow the extension header itself.
>> They will be added to the packet in the same order as they appear in the
>> extension flags.  No padding is placed between the extensions, and any
>> extensions negotiated but not needed by a given packet will convert to
>> trailing padding.
> 
> Do we need an explicit padding (e.g. an extension) which could be controlled by each side?

I don't think so.  The size of the vnet header is set based on the extensions negotiated.
The one part I am not crazy about is that in the case of a packet not using any extensions,
the data is still placed after the entire vnet header, which essentially adds a lot
of padding.  However, that's really no different than if we simply grew the vnet header.

The other thing I've tried before is putting extensions into their own sg buffer, but that
made it slower.

> 
>>
>> For example:
>>   | vnet mrg hdr | ext hdr | ext 1 | ext 2 | ext 5 | .. pad .. | packet data |
> 
> Just some rough thoughts:
> 
> - Would it be better to use TLV instead of a bitmap here? One advantage of TLV is that
> the number of extensions is not limited by the length of the bitmap.

but the disadvantage is that we add at least 4 bytes of type/length data per extension.  That
makes the header even longer.

> - For 1.1, do we really want something like the vnet header? AFAIK, it is not used by modern
> NICs; would it be better to pack all metadata into the descriptor itself? This may need some
> changes in tun/macvtap, but looks more PCIe friendly.

That would really be ideal and I've looked at this.  There are small issues of exposing
the 'net metadata' of the descriptor to taps so they can be filled in.  The alternative
is to use a different control structure for the tap->qemu|vhost channel (that can be
implementation specific) and have qemu|vhost populate the 'net metadata' of the descriptor.

Thanks
-vlad

> 
> Thanks
> 
>>
>> Extensions proposed in this series are:
>>   - IPv6 fragment id extension
>>     * Currently, the guest generated fragment id is discarded and the host
>>       generates an IPv6 fragment id if the packet has to be fragmented.  The
>>       code attempts to add time based perturbation to id generation to make
>>       it harder to guess the next fragment id to be used.  However, doing this
>>       on the host may result in less perturbation (due to different timing)
>>       and might make id guessing easier.  Ideally, the ids generated by the
>>       guest should be used.  One could also argue that we are "violating" the
>>       IPv6 protocol under a _strict_ interpretation of the spec.
>>
>>   - VLAN header acceleration
>>     * Currently virtio does not do vlan header acceleration and instead
>>       uses software tagging.  One of the first things that the host will do is
>>       strip the vlan header out.  When passing the packet to a guest the
>>       vlan header is re-inserted into the packet.  We can skip all that work
>>       if we can pass the vlan data in accelerated format.  Then the host will
>>       not do any extra work.  However, so far, this yielded a very small
>>       perf bump (only ~1%).  I am still looking into this.
>>
>>   - UDP tunnel offload
>>     * Similar to vlan acceleration, with this extension we can pass additional
>>       data to the host to support GSO with udp tunnels and possibly other
>>       encapsulations.  This yields a significant performance improvement
>>      (still testing remote checksum code).
>>
>> An additional extension that is unfinished (due to still testing for any
>> side-effects) is checksum passthrough to support drivers that set
>> CHECKSUM_COMPLETE.  This would eliminate the need for guests to compute
>> the software checksum.
>>
>> This series only takes care of virtio net.  I have additional patches for the
>> host side (vhost and tap/macvtap as well as qemu), but wanted to get feedback
>> on the general approach first.
>>
>> Vladislav Yasevich (6):
>>    virtio-net: Remove the use the padded vnet_header structure
>>    virtio-net: make header length handling uniform
>>    virtio_net: Add basic skeleton for handling vnet header extensions.
>>    virtio-net: Add support for IPv6 fragment id vnet header extension.
>>    virtio-net: Add support for vlan acceleration vnet header extension.
>>    virtio-net: Add support for UDP tunnel offload and extension.
>>
>>   drivers/net/virtio_net.c        | 132 +++++++++++++++++++++++++++++++++-------
>>   include/linux/skbuff.h          |   5 ++
>>   include/linux/virtio_net.h      |  91 ++++++++++++++++++++++++++-
>>   include/uapi/linux/virtio_net.h |  38 ++++++++++++
>>   4 files changed, 242 insertions(+), 24 deletions(-)
>>
> 
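[Editor's note: the extension layout described in the cover letter quoted above (a flag word followed by extensions packed in ascending flag-bit order, with unused negotiated extensions becoming trailing padding) can be sketched as below.  The flag names and per-extension sizes are assumptions for illustration; the series' actual uapi definitions are not shown in this thread.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical extension flag bits; the real values would live in
 * include/uapi/linux/virtio_net.h. */
#define VNET_EXT_F_IP6FRAG  (1u << 0)   /* assumed 4-byte IPv6 fragment id */
#define VNET_EXT_F_VLAN     (1u << 1)   /* assumed 4-byte vlan data        */
#define VNET_EXT_F_UDPTNL   (1u << 2)   /* assumed 8-byte tunnel metadata  */

/* Assumed payload size of each extension, indexed by flag-bit number. */
static const uint8_t ext_size[32] = { 4, 4, 8 };

/* Offset of extension 'bit' within the extension area: extensions sit
 * back to back in ascending flag-bit order, with no padding between. */
static size_t ext_offset(uint32_t negotiated, unsigned bit)
{
	size_t off = 0;
	unsigned i;

	for (i = 0; i < bit; i++)
		if (negotiated & (1u << i))
			off += ext_size[i];
	return off;
}

/* Total extension area after the flag word; extensions negotiated but
 * not used by a given packet simply become trailing padding. */
static size_t ext_area_size(uint32_t negotiated)
{
	return ext_offset(negotiated, 32);
}
```

For example, with all three extensions negotiated the UDP tunnel data (bit 2) starts at offset 8; if only the fragment id and tunnel extensions are negotiated, it starts at offset 4.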

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions
  2017-04-20 15:34   ` Vlad Yasevich
@ 2017-04-21  4:05     ` Jason Wang
  2017-04-21 13:08       ` Vlad Yasevich
  2017-04-24 17:04     ` Michael S. Tsirkin
  1 sibling, 1 reply; 16+ messages in thread
From: Jason Wang @ 2017-04-21  4:05 UTC (permalink / raw)
  To: vyasevic, Vladislav Yasevich, netdev
  Cc: virtio-dev, mst, maxime.coquelin, virtualization



On 04/20/2017 23:34, Vlad Yasevich wrote:
> On 04/17/2017 11:01 PM, Jason Wang wrote:
>>
>> On 04/16/2017 00:38, Vladislav Yasevich wrote:
>>> Currently the virtio net header is a fixed size and adding things to it is
>>> rather difficult to do.  This series attempts to add the infrastructure as
>>> well as some extensions that try to resolve some deficiencies we currently
>>> have.
>>>
>>> First, the vnet header only has space for 16 flags.  This may not be enough
>>> in the future.  The extensions will provide space for 32 possible extension
>>> flags and 32 possible extensions.  These flags will be carried in the
>>> first pseudo extension header, the presence of which will be determined by
>>> a flag in the virtio net header.
>>>
>>> The extensions themselves will immediately follow the extension header
>>> itself.  They will be added to the packet in the same order as they appear
>>> in the extension flags.  No padding is placed between the extensions, and
>>> any extensions negotiated but not needed by a given packet will turn into
>>> trailing padding.
>> Do we need an explicit padding (e.g. an extension) which could be controlled by each side?
> I don't think so.  The size of the vnet header is set based on the extensions negotiated.
> The one part I am not crazy about is that in the case of a packet not using any extensions,
> the data is still placed after the entire vnet header, which essentially adds a lot
> of padding.  However, that's really no different than if we simply grew the vnet header.
>
> The other thing I've tried before is putting the extensions into their own sg buffer, but
> that made it slower.

Yes.

>
>>> For example:
>>>    | vnet mrg hdr | ext hdr | ext 1 | ext 2 | ext 5 | .. pad .. | packet data |
>> Just some rough thoughts:
>>
>> - Would it be better to use TLV instead of a bitmap here? One advantage of TLV is that
>> the length is not limited by the length of the bitmap.
> but the disadvantage is that we add at least 4 bytes of pure TL data per extension.  That
> makes this thing even longer.

Yes, and it looks like the length is still limited by, e.g., the length of T.

>
>> - For 1.1, do we really want something like the vnet header? AFAIK, it is not used by
>> modern NICs; would it be better to pack all metadata into the descriptor itself? This may
>> need some changes in tun/macvtap, but looks more PCIe friendly.
> That would really be ideal, and I've looked at this.  There are small issues with exposing
> the 'net metadata' of the descriptor to taps so they can be filled in.  The alternative
> is to use a different control structure for the tap->qemu|vhost channel (that can be
> implementation specific) and have qemu|vhost populate the 'net metadata' of the descriptor.

Yes, this needs some thought.  For vhost, things look a little bit
easier; we can probably use msg_control.

Thanks



* Re: [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions
  2017-04-21  4:05     ` Jason Wang
@ 2017-04-21 13:08       ` Vlad Yasevich
  2017-04-24  3:22         ` Jason Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Vlad Yasevich @ 2017-04-21 13:08 UTC (permalink / raw)
  To: Jason Wang, Vladislav Yasevich, netdev
  Cc: virtio-dev, maxime.coquelin, virtualization, mst

On 04/21/2017 12:05 AM, Jason Wang wrote:
> 
> 
> On 04/20/2017 23:34, Vlad Yasevich wrote:
>> On 04/17/2017 11:01 PM, Jason Wang wrote:
>>>
>>> On 04/16/2017 00:38, Vladislav Yasevich wrote:
>>>> Currently the virtio net header is a fixed size and adding things to it is
>>>> rather difficult to do.  This series attempts to add the infrastructure as
>>>> well as some extensions that try to resolve some deficiencies we currently
>>>> have.
>>>>
>>>> First, the vnet header only has space for 16 flags.  This may not be enough
>>>> in the future.  The extensions will provide space for 32 possible extension
>>>> flags and 32 possible extensions.  These flags will be carried in the
>>>> first pseudo extension header, the presence of which will be determined by
>>>> a flag in the virtio net header.
>>>>
>>>> The extensions themselves will immediately follow the extension header
>>>> itself.  They will be added to the packet in the same order as they appear
>>>> in the extension flags.  No padding is placed between the extensions, and
>>>> any extensions negotiated but not needed by a given packet will turn into
>>>> trailing padding.
>>> Do we need an explicit padding (e.g. an extension) which could be controlled by each side?
>> I don't think so.  The size of the vnet header is set based on the extensions negotiated.
>> The one part I am not crazy about is that in the case of a packet not using any extensions,
>> the data is still placed after the entire vnet header, which essentially adds a lot
>> of padding.  However, that's really no different than if we simply grew the vnet header.
>>
>> The other thing I've tried before is putting the extensions into their own sg buffer, but
>> that made it slower.
> 
> Yes.
> 
>>
>>>> For example:
>>>>    | vnet mrg hdr | ext hdr | ext 1 | ext 2 | ext 5 | .. pad .. | packet data |
>>> Just some rough thoughts:
>>>
>>> - Would it be better to use TLV instead of a bitmap here? One advantage of TLV is that
>>> the length is not limited by the length of the bitmap.
>> but the disadvantage is that we add at least 4 bytes of pure TL data per extension.  That
>> makes this thing even longer.
> 
> Yes, and it looks like the length is still limited by, e.g., the length of T.

Not only that, but it is also limited by the size of skb->cb as a whole.  So putting
extensions into a TLV style means we have fewer extensions for now, until we get rid of
the skb->cb usage.
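[Editor's note: the overhead trade-off being discussed can be made concrete with a back-of-the-envelope sketch.  The 4-byte type/length cost comes from the thread; the per-extension payload sizes are assumptions, not values from the series.]

```c
#include <assert.h>
#include <stddef.h>

/* Assumed payload sizes for the three proposed extensions:
 * IPv6 fragment id, vlan acceleration, UDP tunnel metadata. */
static const size_t payload[] = { 4, 4, 8 };

/* Bitmap scheme: one 4-byte flag word up front, then the payloads of the
 * first 'negotiated' extensions packed back to back.  Unused ones become
 * trailing padding, so the cost is paid whether or not they are used. */
static size_t bitmap_bytes(size_t negotiated)
{
	size_t total = 4, i;

	for (i = 0; i < negotiated; i++)
		total += payload[i];
	return total;
}

/* TLV scheme: no flag word, but each extension actually carried adds
 * its own type + length, at least 4 bytes as noted above. */
static size_t tlv_bytes(size_t used)
{
	size_t total = 0, i;

	for (i = 0; i < used; i++)
		total += 4 + payload[i];
	return total;
}
```

With all three extensions carried, the TLV form is 8 bytes longer (28 vs 20); TLV only pulls ahead when most negotiated extensions go unused on a given packet.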

> 
>>
>>> - For 1.1, do we really want something like the vnet header? AFAIK, it is not used by
>>> modern NICs; would it be better to pack all metadata into the descriptor itself? This may
>>> need some changes in tun/macvtap, but looks more PCIe friendly.
>> That would really be ideal, and I've looked at this.  There are small issues with exposing
>> the 'net metadata' of the descriptor to taps so they can be filled in.  The alternative
>> is to use a different control structure for the tap->qemu|vhost channel (that can be
>> implementation specific) and have qemu|vhost populate the 'net metadata' of the descriptor.
> 
> Yes, this needs some thought.  For vhost, things look a little bit easier; we can probably
> use msg_control.
> 

We can use msg_control in qemu as well, can't we?  It really is a question of who is doing
the work and the number of copies.

I can take a closer look at how it would look if we extend the descriptor with
type-specific data.  I don't know if other users of virtio would benefit from it?

-vlad



* Re: [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions
  2017-04-21 13:08       ` Vlad Yasevich
@ 2017-04-24  3:22         ` Jason Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2017-04-24  3:22 UTC (permalink / raw)
  To: vyasevic, Vladislav Yasevich, netdev
  Cc: virtio-dev, mst, maxime.coquelin, virtualization



On 04/21/2017 21:08, Vlad Yasevich wrote:
> On 04/21/2017 12:05 AM, Jason Wang wrote:
>> On 04/20/2017 23:34, Vlad Yasevich wrote:
>>> On 04/17/2017 11:01 PM, Jason Wang wrote:
>>>> On 04/16/2017 00:38, Vladislav Yasevich wrote:
>>>>> Currently the virtio net header is a fixed size and adding things to it is
>>>>> rather difficult to do.  This series attempts to add the infrastructure as
>>>>> well as some extensions that try to resolve some deficiencies we currently
>>>>> have.
>>>>>
>>>>> First, the vnet header only has space for 16 flags.  This may not be enough
>>>>> in the future.  The extensions will provide space for 32 possible extension
>>>>> flags and 32 possible extensions.  These flags will be carried in the
>>>>> first pseudo extension header, the presence of which will be determined by
>>>>> a flag in the virtio net header.
>>>>>
>>>>> The extensions themselves will immediately follow the extension header
>>>>> itself.  They will be added to the packet in the same order as they appear
>>>>> in the extension flags.  No padding is placed between the extensions, and
>>>>> any extensions negotiated but not needed by a given packet will turn into
>>>>> trailing padding.
>>>> Do we need an explicit padding (e.g. an extension) which could be controlled by each side?
>>> I don't think so.  The size of the vnet header is set based on the extensions negotiated.
>>> The one part I am not crazy about is that in the case of a packet not using any extensions,
>>> the data is still placed after the entire vnet header, which essentially adds a lot
>>> of padding.  However, that's really no different than if we simply grew the vnet header.
>>>
>>> The other thing I've tried before is putting the extensions into their own sg buffer, but
>>> that made it slower.
>> Yes.
>>
>>>>> For example:
>>>>>     | vnet mrg hdr | ext hdr | ext 1 | ext 2 | ext 5 | .. pad .. | packet data |
>>>> Just some rough thoughts:
>>>>
>>>> - Would it be better to use TLV instead of a bitmap here? One advantage of TLV is that
>>>> the length is not limited by the length of the bitmap.
>>> but the disadvantage is that we add at least 4 bytes of pure TL data per extension.  That
>>> makes this thing even longer.
>> Yes, and it looks like the length is still limited by, e.g., the length of T.
> Not only that, but it is also limited by the size of skb->cb as a whole.  So putting
> extensions into a TLV style means we have fewer extensions for now, until we get rid of
> the skb->cb usage.
>
>>>> - For 1.1, do we really want something like the vnet header? AFAIK, it is not used by
>>>> modern NICs; would it be better to pack all metadata into the descriptor itself? This may
>>>> need some changes in tun/macvtap, but looks more PCIe friendly.
>>> That would really be ideal, and I've looked at this.  There are small issues with exposing
>>> the 'net metadata' of the descriptor to taps so they can be filled in.  The alternative
>>> is to use a different control structure for the tap->qemu|vhost channel (that can be
>>> implementation specific) and have qemu|vhost populate the 'net metadata' of the descriptor.
>> Yes, this needs some thought.  For vhost, things look a little bit easier; we can probably
>> use msg_control.
>>
> We can use msg_control in qemu as well, can't we?

AFAIK, it needs some changes since we don't export the socket to userspace.

>   It really is a question of who is doing
> the work and the number of copies.
>
> I can take a closer look of how it would look if we extend the descriptor with type
> specific data.  I don't know if other users of virtio would benefit from it?

Not sure, but we can have a common descriptor header followed by
device-specific metadata.  This probably needs some prototype benchmarking
to see the benefits first.

Thanks


* Re: [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions
  2017-04-20 15:34   ` Vlad Yasevich
  2017-04-21  4:05     ` Jason Wang
@ 2017-04-24 17:04     ` Michael S. Tsirkin
  1 sibling, 0 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2017-04-24 17:04 UTC (permalink / raw)
  To: Vlad Yasevich
  Cc: Jason Wang, Vladislav Yasevich, netdev, virtio-dev,
	maxime.coquelin, virtualization

On Thu, Apr 20, 2017 at 11:34:57AM -0400, Vlad Yasevich wrote:
> > - For 1.1, do we really want something like the vnet header? AFAIK, it is not used by
> > modern NICs; would it be better to pack all metadata into the descriptor itself? This
> > may need some changes in tun/macvtap, but looks more PCIe friendly.
> 
> That would really be ideal and I've looked at this.

We already have at least 16 unused bits in the used ring
(the head is 16 bits but we are using 32 for it).
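[Editor's note: the spare bits come from the used ring element, whose `id` field is 32 bits wide while descriptor heads are 16-bit ring indices (ring sizes are capped at 32768).  Below is a sketch of how the top half of `id` could carry per-buffer metadata; the fields are little-endian on the wire, shown as plain integers here for brevity, and `used_meta` is purely hypothetical.]

```c
#include <assert.h>
#include <stdint.h>

/* Used ring element as laid out by the virtio spec: a 32-bit descriptor
 * chain head and the total bytes written into the buffer. */
struct vring_used_elem {
	uint32_t id;	/* head of the used descriptor chain */
	uint32_t len;	/* bytes written into the buffer */
};

/* Descriptor heads fit in 16 bits, so the low half is the real head ... */
static uint16_t used_head(uint32_t id)
{
	return (uint16_t)(id & 0xffffu);
}

/* ... and the high half is currently always zero, so it could carry
 * 16 bits of per-buffer metadata (hypothetical accessor). */
static uint16_t used_meta(uint32_t id)
{
	return (uint16_t)(id >> 16);
}
```

Whether 16 bits is enough for the metadata the series wants to convey (GSO type, csum offsets, tunnel data) is exactly the open question in this subthread.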

-- 
MST


end of thread, other threads:[~2017-04-24 17:04 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-15 16:38 [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Vladislav Yasevich
2017-04-15 16:38 ` [PATCH RFC (resend) net-next 1/6] virtio-net: Remove the use the padded vnet_header structure Vladislav Yasevich
2017-04-15 16:38 ` [PATCH RFC (resend) net-next 2/6] virtio-net: make header length handling uniform Vladislav Yasevich
2017-04-15 16:38 ` [PATCH RFC (resend) net-next 3/6] virtio_net: Add basic skeleton for handling vnet header extensions Vladislav Yasevich
2017-04-18  2:52   ` Jason Wang
2017-04-15 16:38 ` [PATCH RFC (resend) net-next 4/6] virtio-net: Add support for IPv6 fragment id vnet header extension Vladislav Yasevich
2017-04-15 16:38 ` [PATCH RFC (resend) net-next 5/6] virtio-net: Add support for vlan acceleration " Vladislav Yasevich
2017-04-16  0:28   ` Michael S. Tsirkin
2017-04-18  2:54   ` Jason Wang
2017-04-15 16:38 ` [PATCH RFC (resend) net-next 6/6] virtio: Add support for UDP tunnel offload and extension Vladislav Yasevich
2017-04-18  3:01 ` [PATCH RFC (resend) net-next 0/6] virtio-net: Add support for virtio-net header extensions Jason Wang
2017-04-20 15:34   ` Vlad Yasevich
2017-04-21  4:05     ` Jason Wang
2017-04-21 13:08       ` Vlad Yasevich
2017-04-24  3:22         ` Jason Wang
2017-04-24 17:04     ` Michael S. Tsirkin
