* [RFC PATCH 0/4] Implement local checksum offload support for tunnel segmentation
@ 2016-01-14  5:11 Alexander Duyck
  2016-01-14  5:12 ` [RFC PATCH 1/4] net: Move GSO csum into SKB_GSO_CB Alexander Duyck
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Alexander Duyck @ 2016-01-14  5:11 UTC (permalink / raw)
  To: ecree, netdev, tom

This patch series updates the existing segmentation offload code for
tunnels to make better use of the existing and updated GSO checksum
computation.  The core of the change is that we now maintain a separate
checksum in the GSO context block of the sk_buff.  This allows us to track
two checksum values: one offloaded, with its location stored in csum_start
and csum_offset, and one computed and tracked in SKB_GSO_CB(skb)->csum.
By maintaining these two values we are able to take advantage of the same
sort of math used in local checksum offload, so that we can provide both
inner and outer checksums with minimal overhead.
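
For anyone not familiar with the math involved, the short user-space
program below sketches the split-sum property this relies on.  The
csum_partial() and csum_fold() here are simplified stand-ins for the
kernel helpers of the same name, and the buffer contents are made up;
the point is only that a ones-complement checksum over a region can be
split into a stored partial sum plus a later pass over the remaining
bytes:

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's csum_partial()/csum_fold():
 * a 16-bit ones-complement sum with carries kept in a 32-bit accumulator.
 */
static uint32_t csum_partial(const uint8_t *buf, size_t len, uint32_t sum)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)buf[i] << 8) | buf[i + 1];
	if (len & 1)
		sum += (uint32_t)buf[len - 1] << 8;
	return sum;
}

static uint16_t csum_fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

int main(void)
{
	/* Made-up stand-ins for the bytes an outer header covers and the
	 * encapsulated region that follows it.
	 */
	uint8_t outer[8]  = { 0x08, 0x35, 0x12, 0xb5, 0x00, 0x28, 0x00, 0x00 };
	uint8_t inner[32] = { 0xde, 0xad, 0xbe, 0xef, 0x01, 0x02, 0x03, 0x04 };

	/* One pass over the whole region... */
	uint16_t full = csum_fold(csum_partial(inner, sizeof(inner),
				  csum_partial(outer, sizeof(outer), 0)));

	/* ...equals a stored partial sum over the encapsulated region (what
	 * this series keeps in SKB_GSO_CB(skb)->csum) folded together with
	 * a later pass over just the outer bytes.
	 */
	uint32_t stored = csum_partial(inner, sizeof(inner), 0);
	uint16_t split  = csum_fold(csum_partial(outer, sizeof(outer), stored));

	printf("%04x %04x\n", (unsigned)full, (unsigned)split); /* same value twice */
	return 0;
}

The same property is what lets gso_make_checksum() finish an outer
checksum from a stored partial value without walking the data that the
partial sum already covers.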

Below are the results of a netperf session between an ixgbe PF and VF on
the same host but in different namespaces.  As can be seen, a significant
performance gain comes from using Tx checksum offload on the inner
headers while computing the outer header checksum in software:

 Recv   Send   Send                       Utilization  Service Demand
 Socket Socket Message Elapsed            Send  Recv   Send  Recv
 Size   Size   Size    Time    Throughput local remote local remote
 bytes  bytes  bytes   secs.   10^6bits/s % S   % U    us/KB us/KB

Before:
 87380  16384  16384   10.00   12844.38   9.30  -1.00  0.712 -1.00
After:
 87380  16384  16384   10.00   13216.63   6.78  -1.00  0.504 -1.00

The one piece of this series that I believe may be controversial is the
use of CHECKSUM_UNNECESSARY in GSO to flag that we are computing a
checksum, but that it does not belong to the current transport layer.
For example, in the case of remote checksum offload we may be asked to
generate the outer header checksum without any hardware offload being
available; we then need to compute the checksum for the data, but not
for the TCP headers.  So, in order to not populate the TCP headers in
this case, and to not flag TCP as being offloaded by the hardware, I
have added the use of CHECKSUM_UNNECESSARY to indicate that a checksum
is present, but that it does not apply to the current headers.
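
To make the intended semantics concrete, here is an annotated sketch (not
a complete function) of how the tcp_gso_segment() change in patch 3 ends
up treating the three ip_summed states:

	if (skb->ip_summed == CHECKSUM_PARTIAL) {
		/* This layer's checksum is offloaded to hardware; just
		 * reseed the running GSO checksum so an outer layer can
		 * finish its own checksum in software later.
		 */
		gso_reset_checksum(skb, ~th->check);
	} else if (skb->ip_summed != CHECKSUM_UNNECESSARY) {
		/* No offload at all: finish this layer's checksum now. */
		th->check = gso_make_checksum(skb, ~th->check);
	}
	/* CHECKSUM_UNNECESSARY: a checksum is being computed, but it
	 * belongs to an outer header (remote checksum offload), so
	 * th->check is left untouched and not reported as offloaded.
	 */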

---

Alexander Duyck (4):
      net: Move GSO csum into SKB_GSO_CB
      net: Update remote checksum segmentation to support use of GSO checksum
      net: Store checksum result for offloaded GSO checksums
      net: Allow UDP and GRE to use inner checksum offloads with outer checksums needed


 include/linux/skbuff.h |   27 +++++++++++++++++------
 net/core/skbuff.c      |   32 +++++++++++++++------------
 net/ipv4/gre_offload.c |    2 --
 net/ipv4/tcp_offload.c |    8 +++++--
 net/ipv4/udp_offload.c |   57 +++++++++++++++++-------------------------------
 5 files changed, 64 insertions(+), 62 deletions(-)


* [RFC PATCH 1/4] net: Move GSO csum into SKB_GSO_CB
  2016-01-14  5:11 [RFC PATCH 0/4] Implement local checksum offload support for tunnel segmentation Alexander Duyck
@ 2016-01-14  5:12 ` Alexander Duyck
  2016-01-14 11:10   ` Edward Cree
  2016-01-14  5:12 ` [RFC PATCH 2/4] net: Update remote checksum segmentation to support use of GSO checksum Alexander Duyck
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Alexander Duyck @ 2016-01-14  5:12 UTC (permalink / raw)
  To: ecree, netdev, tom

This patch moves the checksum maintained by GSO out of skb->csum and into
the GSO context block so that we can work on outer checksums while
maintaining the inner checksum offsets for the case where the inner
checksum is offloaded and the outer checksum is computed in software.

While updating the code I also did a minor clean-up of gso_make_checksum.
The change is mostly to make it so that we store the values and then
compute the checksum, instead of computing the checksum and then storing
the values we needed to update.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
---
 include/linux/skbuff.h |   14 +++++++-------
 net/core/skbuff.c      |   16 +++++++++-------
 2 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 6b6bd42d6134..0150abb81929 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3532,6 +3532,7 @@ static inline struct sec_path *skb_sec_path(struct sk_buff *skb)
 struct skb_gso_cb {
 	int	mac_offset;
 	int	encap_level;
+	__wsum	csum;
 	__u16	csum_start;
 };
 #define SKB_GSO_CB(skb) ((struct skb_gso_cb *)(skb)->cb)
@@ -3567,15 +3568,14 @@ static inline int gso_pskb_expand_head(struct sk_buff *skb, int extra)
  */
 static inline __sum16 gso_make_checksum(struct sk_buff *skb, __wsum res)
 {
-	int plen = SKB_GSO_CB(skb)->csum_start - skb_headroom(skb) -
-		   skb_transport_offset(skb);
-	__wsum partial;
+	unsigned char *csum_start = skb_transport_header(skb);
+	int plen = (skb->head + SKB_GSO_CB(skb)->csum_start) - csum_start;
+	__wsum partial = SKB_GSO_CB(skb)->csum;
 
-	partial = csum_partial(skb_transport_header(skb), plen, skb->csum);
-	skb->csum = res;
-	SKB_GSO_CB(skb)->csum_start -= plen;
+	SKB_GSO_CB(skb)->csum = res;
+	SKB_GSO_CB(skb)->csum_start = csum_start - skb->head;
 
-	return csum_fold(partial);
+	return csum_fold(csum_partial(csum_start, plen, partial));
 }
 
 static inline bool skb_is_gso(const struct sk_buff *skb)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b2df375ec9c2..02c638a643ea 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3100,11 +3100,12 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 
 		if (!sg && !nskb->remcsum_offload) {
 			nskb->ip_summed = CHECKSUM_NONE;
-			nskb->csum = skb_copy_and_csum_bits(head_skb, offset,
-							    skb_put(nskb, len),
-							    len, 0);
+			SKB_GSO_CB(nskb)->csum =
+				skb_copy_and_csum_bits(head_skb, offset,
+						       skb_put(nskb, len),
+						       len, 0);
 			SKB_GSO_CB(nskb)->csum_start =
-			    skb_headroom(nskb) + doffset;
+				skb_headroom(nskb) + doffset;
 			continue;
 		}
 
@@ -3171,11 +3172,12 @@ skip_fraglist:
 
 perform_csum_check:
 		if (!csum && !nskb->remcsum_offload) {
-			nskb->csum = skb_checksum(nskb, doffset,
-						  nskb->len - doffset, 0);
 			nskb->ip_summed = CHECKSUM_NONE;
+			SKB_GSO_CB(nskb)->csum =
+				skb_checksum(nskb, doffset,
+					     nskb->len - doffset, 0);
 			SKB_GSO_CB(nskb)->csum_start =
-			    skb_headroom(nskb) + doffset;
+				skb_headroom(nskb) + doffset;
 		}
 	} while ((offset += len) < head_skb->len);
 


* [RFC PATCH 2/4] net: Update remote checksum segmentation to support use of GSO checksum
  2016-01-14  5:11 [RFC PATCH 0/4] Implement local checksum offload support for tunnel segmentation Alexander Duyck
  2016-01-14  5:12 ` [RFC PATCH 1/4] net: Move GSO csum into SKB_GSO_CB Alexander Duyck
@ 2016-01-14  5:12 ` Alexander Duyck
  2016-01-14  5:12 ` [RFC PATCH 3/4] net: Store checksum result for offloaded GSO checksums Alexander Duyck
  2016-01-14  5:12 ` [RFC PATCH 4/4] net: Allow UDP and GRE to use inner checksum offloads with outer checksums needed Alexander Duyck
  3 siblings, 0 replies; 7+ messages in thread
From: Alexander Duyck @ 2016-01-14  5:12 UTC (permalink / raw)
  To: ecree, netdev, tom

This patch addresses two main issues.

First, in the case of remote checksum offload we were not accounting for
scatter-gather.  As a result it was possible to assemble a series of
frames that used frags when they should have been linearized, as remote
checksum offload requires.

Second, I have updated the code so that GSO now takes care of
checksumming the data itself, and dropped the special case that had been
added for remote checksum offload.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
---
 net/core/skbuff.c      |   16 +++++++++-------
 net/ipv4/udp_offload.c |   12 ------------
 2 files changed, 9 insertions(+), 19 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 02c638a643ea..58ff3afdf79a 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2991,7 +2991,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 	unsigned int headroom;
 	unsigned int len;
 	__be16 proto;
-	bool csum;
+	bool csum = !head_skb->encap_hdr_csum;
 	int sg = !!(features & NETIF_F_SG);
 	int nfrags = skb_shinfo(head_skb)->nr_frags;
 	int err = -ENOMEM;
@@ -3004,8 +3004,8 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 	if (unlikely(!proto))
 		return ERR_PTR(-EINVAL);
 
-	csum = !head_skb->encap_hdr_csum &&
-	    !!can_checksum_protocol(features, proto);
+	if (!head_skb->remcsum_offload)
+		csum &= !!can_checksum_protocol(features, proto);
 
 	headroom = skb_headroom(head_skb);
 	pos = skb_headlen(head_skb);
@@ -3098,8 +3098,9 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 		if (nskb->len == len + doffset)
 			goto perform_csum_check;
 
-		if (!sg && !nskb->remcsum_offload) {
-			nskb->ip_summed = CHECKSUM_NONE;
+		if (!sg) {
+			if (!nskb->remcsum_offload)
+				nskb->ip_summed = CHECKSUM_NONE;
 			SKB_GSO_CB(nskb)->csum =
 				skb_copy_and_csum_bits(head_skb, offset,
 						       skb_put(nskb, len),
@@ -3171,8 +3172,9 @@ skip_fraglist:
 		nskb->truesize += nskb->data_len;
 
 perform_csum_check:
-		if (!csum && !nskb->remcsum_offload) {
-			nskb->ip_summed = CHECKSUM_NONE;
+		if (!csum) {
+			if (!nskb->remcsum_offload)
+				nskb->ip_summed = CHECKSUM_NONE;
 			SKB_GSO_CB(nskb)->csum =
 				skb_checksum(nskb, doffset,
 					     nskb->len - doffset, 0);
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 130042660181..ab7531b4dd24 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -115,18 +115,6 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
 			skb->ip_summed = CHECKSUM_PARTIAL;
 			skb->csum_start = skb_transport_header(skb) - skb->head;
 			skb->csum_offset = offsetof(struct udphdr, check);
-		} else if (remcsum) {
-			/* Need to calculate checksum from scratch,
-			 * inner checksums are never when doing
-			 * remote_checksum_offload.
-			 */
-
-			skb->csum = skb_checksum(skb, udp_offset,
-						 skb->len - udp_offset,
-						 0);
-			uh->check = csum_fold(skb->csum);
-			if (uh->check == 0)
-				uh->check = CSUM_MANGLED_0;
 		} else {
 			uh->check = gso_make_checksum(skb, ~uh->check);
 


* [RFC PATCH 3/4] net: Store checksum result for offloaded GSO checksums
  2016-01-14  5:11 [RFC PATCH 0/4] Implement local checksum offload support for tunnel segmentation Alexander Duyck
  2016-01-14  5:12 ` [RFC PATCH 1/4] net: Move GSO csum into SKB_GSO_CB Alexander Duyck
  2016-01-14  5:12 ` [RFC PATCH 2/4] net: Update remote checksum segmentation to support use of GSO checksum Alexander Duyck
@ 2016-01-14  5:12 ` Alexander Duyck
  2016-01-14 12:02   ` Edward Cree
  2016-01-14  5:12 ` [RFC PATCH 4/4] net: Allow UDP and GRE to use inner checksum offloads with outer checksums needed Alexander Duyck
  3 siblings, 1 reply; 7+ messages in thread
From: Alexander Duyck @ 2016-01-14  5:12 UTC (permalink / raw)
  To: ecree, netdev, tom

This patch makes it so that we can offload the checksums for a packet up
to a certain point and then compute the remaining checksums in software.
Setting this up is fairly straightforward, as all we need to do is reset
the values stored in csum and csum_start in the GSO context block.

One complication is remote checksum offload.  In order to allow the inner
checksums to be offloaded while the outer checksum is computed manually,
we need some way of indicating that the offload isn't real.  To do that I
replaced CHECKSUM_PARTIAL with CHECKSUM_UNNECESSARY in the case where we
compute the checksum for the outer header but skip the checksums for the
inner headers.  We clean up the ip_summed flag and set it to either
CHECKSUM_PARTIAL or CHECKSUM_NONE once we hand the packet off to the next
lower layer.
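
As a quick sanity check on the ~check seeding used by gso_reset_checksum()
and gso_make_checksum() in the diffs below, the small user-space program
below verifies the ones-complement identity involved; csum_fold() here is
a simplified stand-in for the kernel helper and the starting sum is
arbitrary:

#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's csum_fold(). */
static uint16_t csum_fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

int main(void)
{
	uint32_t running = 0x2a9f3;		/* arbitrary partial sum */
	uint16_t check = csum_fold(running);	/* what would land in uh->check */

	/* ~check is the wrapped 16-bit sum itself (0x2 + 0xa9f3 = 0xa9f5),
	 * so it can be handed back to the checksum code as a new seed...
	 */
	assert((uint16_t)~check == 0xa9f5);
	/* ...and folding it with no further data reproduces the original
	 * checksum, i.e. nothing is lost by storing ~check in the GSO
	 * context block.
	 */
	assert(csum_fold((uint16_t)~check) == check);
	return 0;
}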

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
---
 include/linux/skbuff.h |   13 +++++++++++++
 net/core/skbuff.c      |    8 ++++----
 net/ipv4/tcp_offload.c |    8 ++++++--
 net/ipv4/udp_offload.c |    1 +
 4 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 0150abb81929..8a73cb4c7e5a 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2161,6 +2161,11 @@ static inline int skb_checksum_start_offset(const struct sk_buff *skb)
 	return skb->csum_start - skb_headroom(skb);
 }
 
+static inline unsigned char *skb_checksum_start(const struct sk_buff *skb)
+{
+	return skb->head + skb->csum_start;
+}
+
 static inline int skb_transport_offset(const struct sk_buff *skb)
 {
 	return skb_transport_header(skb) - skb->data;
@@ -3558,6 +3563,14 @@ static inline int gso_pskb_expand_head(struct sk_buff *skb, int extra)
 	return 0;
 }
 
+static inline void gso_reset_checksum(struct sk_buff *skb, __wsum res)
+{
+	unsigned char *csum_start = skb_checksum_start(skb);
+
+	SKB_GSO_CB(skb)->csum = res;
+	SKB_GSO_CB(skb)->csum_start = csum_start - skb->head;
+}
+
 /* Compute the checksum for a gso segment. First compute the checksum value
  * from the start of transport header to SKB_GSO_CB(skb)->csum_start, and
  * then add in skb->csum (checksum from csum_start to end of packet).
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 58ff3afdf79a..25ce87932cb9 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3099,8 +3099,8 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 			goto perform_csum_check;
 
 		if (!sg) {
-			if (!nskb->remcsum_offload)
-				nskb->ip_summed = CHECKSUM_NONE;
+			nskb->ip_summed = nskb->remcsum_offload ?
+					  CHECKSUM_UNNECESSARY : CHECKSUM_NONE;
 			SKB_GSO_CB(nskb)->csum =
 				skb_copy_and_csum_bits(head_skb, offset,
 						       skb_put(nskb, len),
@@ -3173,8 +3173,8 @@ skip_fraglist:
 
 perform_csum_check:
 		if (!csum) {
-			if (!nskb->remcsum_offload)
-				nskb->ip_summed = CHECKSUM_NONE;
+			nskb->ip_summed = nskb->remcsum_offload ?
+					  CHECKSUM_UNNECESSARY : CHECKSUM_NONE;
 			SKB_GSO_CB(nskb)->csum =
 				skb_checksum(nskb, doffset,
 					     nskb->len - doffset, 0);
diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
index 9864a2dbadce..894f87b23094 100644
--- a/net/ipv4/tcp_offload.c
+++ b/net/ipv4/tcp_offload.c
@@ -135,7 +135,9 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
 		th->fin = th->psh = 0;
 		th->check = newcheck;
 
-		if (skb->ip_summed != CHECKSUM_PARTIAL)
+		if (skb->ip_summed == CHECKSUM_PARTIAL)
+			gso_reset_checksum(skb, ~th->check);
+		else if (skb->ip_summed != CHECKSUM_UNNECESSARY)
 			th->check = gso_make_checksum(skb, ~th->check);
 
 		seq += mss;
@@ -169,7 +171,9 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
 		      skb->data_len);
 	th->check = ~csum_fold((__force __wsum)((__force u32)th->check +
 				(__force u32)delta));
-	if (skb->ip_summed != CHECKSUM_PARTIAL)
+	if (skb->ip_summed == CHECKSUM_PARTIAL)
+		gso_reset_checksum(skb, ~th->check);
+	else if (skb->ip_summed != CHECKSUM_UNNECESSARY)
 		th->check = gso_make_checksum(skb, ~th->check);
 out:
 	return segs;
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index ab7531b4dd24..45824f9b81c4 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -115,6 +115,7 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
 			skb->ip_summed = CHECKSUM_PARTIAL;
 			skb->csum_start = skb_transport_header(skb) - skb->head;
 			skb->csum_offset = offsetof(struct udphdr, check);
+			gso_reset_checksum(skb, ~uh->check);
 		} else {
 			uh->check = gso_make_checksum(skb, ~uh->check);
 


* [RFC PATCH 4/4] net: Allow UDP and GRE to use inner checksum offloads with outer checksums needed
  2016-01-14  5:11 [RFC PATCH 0/4] Implement local checksum offload support for tunnel segmentation Alexander Duyck
                   ` (2 preceding siblings ...)
  2016-01-14  5:12 ` [RFC PATCH 3/4] net: Store checksum result for offloaded GSO checksums Alexander Duyck
@ 2016-01-14  5:12 ` Alexander Duyck
  3 siblings, 0 replies; 7+ messages in thread
From: Alexander Duyck @ 2016-01-14  5:12 UTC (permalink / raw)
  To: ecree, netdev, tom

This patch enables the use of inner checksum offloads, when provided by
the hardware, while the outer checksums are computed in software.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
---
 net/ipv4/gre_offload.c |    2 --
 net/ipv4/udp_offload.c |   44 +++++++++++++++++++-------------------------
 2 files changed, 19 insertions(+), 27 deletions(-)

diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
index 5a8ee3282550..61608fe975fc 100644
--- a/net/ipv4/gre_offload.c
+++ b/net/ipv4/gre_offload.c
@@ -53,8 +53,6 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
 		goto out;
 
 	csum = !!(greh->flags & GRE_CSUM);
-	if (csum)
-		skb->encap_hdr_csum = 1;
 
 	/* setup inner skb. */
 	skb->protocol = greh->protocol;
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 45824f9b81c4..0a7f1207f3d3 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -39,31 +39,30 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
 	netdev_features_t enc_features;
 	int udp_offset, outer_hlen;
 	unsigned int oldlen;
-	bool need_csum = !!(skb_shinfo(skb)->gso_type &
-			    SKB_GSO_UDP_TUNNEL_CSUM);
 	bool remcsum = !!(skb_shinfo(skb)->gso_type & SKB_GSO_TUNNEL_REMCSUM);
-	bool offload_csum = false, dont_encap = (need_csum || remcsum);
+	bool need_csum, load_csum;
 
 	oldlen = (u16)~skb->len;
 
 	if (unlikely(!pskb_may_pull(skb, tnl_hlen)))
 		goto out;
 
+	/* Try to offload checksum if possible */
+	need_csum = !!(skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM);
+	load_csum = need_csum &&
+		    !(skb->dev->features &
+		      (is_ipv6 ? (NETIF_F_HW_CSUM | NETIF_F_IPV6_CSUM) :
+				 (NETIF_F_HW_CSUM | NETIF_F_IP_CSUM)));
+
 	skb->encapsulation = 0;
 	__skb_pull(skb, tnl_hlen);
 	skb_reset_mac_header(skb);
 	skb_set_network_header(skb, skb_inner_network_offset(skb));
 	skb->mac_len = skb_inner_network_offset(skb);
 	skb->protocol = new_protocol;
-	skb->encap_hdr_csum = need_csum;
+	skb->encap_hdr_csum = remcsum & load_csum;
 	skb->remcsum_offload = remcsum;
 
-	/* Try to offload checksum if possible */
-	offload_csum = !!(need_csum &&
-			  ((skb->dev->features & NETIF_F_HW_CSUM) ||
-			   (skb->dev->features & (is_ipv6 ?
-			    NETIF_F_IPV6_CSUM : NETIF_F_IP_CSUM))));
-
 	/* segment inner packet. */
 	enc_features = skb->dev->hw_enc_features & features;
 	segs = gso_inner_segment(skb, enc_features);
@@ -81,16 +80,11 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
 		int len;
 		__be32 delta;
 
-		if (dont_encap) {
-			skb->encapsulation = 0;
+		if (remcsum)
 			skb->ip_summed = CHECKSUM_NONE;
-		} else {
-			/* Only set up inner headers if we might be offloading
-			 * inner checksum.
-			 */
-			skb_reset_inner_headers(skb);
-			skb->encapsulation = 1;
-		}
+
+		skb_reset_inner_headers(skb);
+		skb->encapsulation = skb->ip_summed == CHECKSUM_PARTIAL;
 
 		skb->mac_len = mac_len;
 		skb->protocol = protocol;
@@ -111,16 +105,16 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
 		uh->check = ~csum_fold((__force __wsum)
 				       ((__force u32)uh->check +
 					(__force u32)delta));
-		if (offload_csum) {
+
+		if (skb->encapsulation || load_csum) {
+			uh->check = gso_make_checksum(skb, ~uh->check);
+			if (uh->check == 0)
+				uh->check = CSUM_MANGLED_0;
+		} else {
 			skb->ip_summed = CHECKSUM_PARTIAL;
 			skb->csum_start = skb_transport_header(skb) - skb->head;
 			skb->csum_offset = offsetof(struct udphdr, check);
 			gso_reset_checksum(skb, ~uh->check);
-		} else {
-			uh->check = gso_make_checksum(skb, ~uh->check);
-
-			if (uh->check == 0)
-				uh->check = CSUM_MANGLED_0;
 		}
 	} while ((skb = skb->next));
 out:


* Re: [RFC PATCH 1/4] net: Move GSO csum into SKB_GSO_CB
  2016-01-14  5:12 ` [RFC PATCH 1/4] net: Move GSO csum into SKB_GSO_CB Alexander Duyck
@ 2016-01-14 11:10   ` Edward Cree
  0 siblings, 0 replies; 7+ messages in thread
From: Edward Cree @ 2016-01-14 11:10 UTC (permalink / raw)
  To: Alexander Duyck, netdev, tom

On 14/01/16 05:12, Alexander Duyck wrote:
> This patch moves the checksum maintained by GSO out of skb->csum and into
> the GSO context block so that we can work on outer checksums while
> maintaining the inner checksum offsets for the case where the inner
> checksum is offloaded and the outer checksum is computed in software.
>
> While updating the code I also did a minor clean-up of gso_make_checksum.
> The change is mostly to make it so that we store the values and then
> compute the checksum, instead of computing the checksum and then storing
> the values we needed to update.
>
> Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
> ---
>  include/linux/skbuff.h |   14 +++++++-------
>  net/core/skbuff.c      |   16 +++++++++-------
>  2 files changed, 16 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 6b6bd42d6134..0150abb81929 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -3532,6 +3532,7 @@ static inline struct sec_path *skb_sec_path(struct sk_buff *skb)
>  struct skb_gso_cb {
>  	int	mac_offset;
>  	int	encap_level;
> +	__wsum	csum;
>  	__u16	csum_start;
>  };
>  #define SKB_GSO_CB(skb) ((struct skb_gso_cb *)(skb)->cb)
> @@ -3567,15 +3568,14 @@ static inline int gso_pskb_expand_head(struct sk_buff *skb, int extra)
>   */
>  static inline __sum16 gso_make_checksum(struct sk_buff *skb, __wsum res)
>  {
> -	int plen = SKB_GSO_CB(skb)->csum_start - skb_headroom(skb) -
> -		   skb_transport_offset(skb);
> -	__wsum partial;
> +	unsigned char *csum_start = skb_transport_header(skb);
> +	int plen = (skb->head + SKB_GSO_CB(skb)->csum_start) - csum_start;
> +	__wsum partial = SKB_GSO_CB(skb)->csum;
>  
> -	partial = csum_partial(skb_transport_header(skb), plen, skb->csum);
> -	skb->csum = res;
> -	SKB_GSO_CB(skb)->csum_start -= plen;
> +	SKB_GSO_CB(skb)->csum = res;
> +	SKB_GSO_CB(skb)->csum_start = csum_start - skb->head;
>  
> -	return csum_fold(partial);
> +	return csum_fold(csum_partial(csum_start, plen, partial));
>  }
Update the comment above this function?

Apart from that this looks good.


* Re: [RFC PATCH 3/4] net: Store checksum result for offloaded GSO checksums
  2016-01-14  5:12 ` [RFC PATCH 3/4] net: Store checksum result for offloaded GSO checksums Alexander Duyck
@ 2016-01-14 12:02   ` Edward Cree
  0 siblings, 0 replies; 7+ messages in thread
From: Edward Cree @ 2016-01-14 12:02 UTC (permalink / raw)
  To: Alexander Duyck, netdev, tom

On 14/01/16 05:12, Alexander Duyck wrote:
> To do that I replaced CHECKSUM_PARTIAL with CHECKSUM_UNNECESSARY in the
> case where we compute the checksum for the outer header but skip the
> checksums for the inner headers.  We clean up the ip_summed flag and set
> it to either CHECKSUM_PARTIAL or CHECKSUM_NONE once we hand the packet
> off to the next lower layer.
I feel like this should probably be mentioned in the comment at the top of
skbuff.h.  I think section E of that comment will need to be more or less
completely re-written with this patch series.  In particular, if I'm
understanding correctly, csum_start and csum_offset will point to the
inner checksum (unless using RCO), same as they do without GSO.  And GSO
will never offload multiple checksums (although TSO will still have to
_edit_ multiple checksums).

Or better still, proper documentation of GSO and TSO would be nice :)


