* [PATCH RFC net-next] vxlan: GRO support at tunnel layer
@ 2015-06-26 23:09 Tom Herbert
  2015-06-27  0:46 ` Rick Jones
  0 siblings, 1 reply; 9+ messages in thread
From: Tom Herbert @ 2015-06-26 23:09 UTC (permalink / raw)
  To: davem, netdev, sramamur

Add calls to gro_cells infrastructure to do GRO when receiving on a tunnel.

Testing:

Ran 200 netperf TCP_STREAM instances

- With fix (GRO enabled on VXLAN interface)

  Verified GRO is happening.

  9084 Mbps tput
  3.44% CPU utilization

- Without fix (GRO disabled on VXLAN interface)

  Verified no GRO is happening.

  9084 Mbps tput
  5.54% CPU utilization
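
  (A rough sketch of this kind of setup - the device names, VNI, and
  addresses below are placeholders rather than the exact configuration
  used for the numbers above:

    ip link add vxlan0 type vxlan id 42 remote 192.168.0.2 dstport 4789 dev eth0
    ip addr add 10.1.1.1/24 dev vxlan0
    ip link set vxlan0 up
    ethtool -K vxlan0 gro on     # 'gro off' for the no-GRO comparison run
    for i in $(seq 1 200); do netperf -t TCP_STREAM -l 30 -H 10.1.1.2 & done; wait)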

Signed-off-by: Tom Herbert <tom@herbertland.com>
---
 drivers/net/vxlan.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index 34c519e..d433f8a 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -28,6 +28,7 @@
 #include <linux/hash.h>
 #include <linux/ethtool.h>
 #include <net/arp.h>
+#include <net/gro_cells.h>
 #include <net/ndisc.h>
 #include <net/ip.h>
 #include <net/ip_tunnels.h>
@@ -132,6 +133,7 @@ struct vxlan_dev {
 	spinlock_t	  hash_lock;
 	unsigned int	  addrcnt;
 	unsigned int	  addrmax;
+	struct gro_cells  gro_cells;
 
 	struct hlist_head fdb_head[FDB_HASH_SIZE];
 };
@@ -1326,7 +1328,7 @@ static void vxlan_rcv(struct vxlan_sock *vs, struct sk_buff *skb,
 	stats->rx_bytes += skb->len;
 	u64_stats_update_end(&stats->syncp);
 
-	netif_rx(skb);
+	gro_cells_receive(&vxlan->gro_cells, skb);
 
 	return;
 drop:
@@ -2384,6 +2386,8 @@ static void vxlan_setup(struct net_device *dev)
 
 	vxlan->dev = dev;
 
+	gro_cells_init(&vxlan->gro_cells, dev);
+
 	for (h = 0; h < FDB_HASH_SIZE; ++h)
 		INIT_HLIST_HEAD(&vxlan->fdb_head[h]);
 }
@@ -2759,6 +2763,7 @@ static void vxlan_dellink(struct net_device *dev, struct list_head *head)
 		hlist_del_rcu(&vxlan->hlist);
 	spin_unlock(&vn->sock_lock);
 
+	gro_cells_destroy(&vxlan->gro_cells);
 	list_del(&vxlan->next);
 	unregister_netdevice_queue(dev, head);
 }
@@ -2964,8 +2969,10 @@ static void __net_exit vxlan_exit_net(struct net *net)
 		/* If vxlan->dev is in the same netns, it has already been added
 		 * to the list by the previous loop.
 		 */
-		if (!net_eq(dev_net(vxlan->dev), net))
+		if (!net_eq(dev_net(vxlan->dev), net)) {
+			gro_cells_destroy(&vxlan->gro_cells);
 			unregister_netdevice_queue(vxlan->dev, &list);
+		}
 	}
 
 	unregister_netdevice_many(&list);
-- 
1.8.1

* Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
  2015-06-26 23:09 [PATCH RFC net-next] vxlan: GRO support at tunnel layer Tom Herbert
@ 2015-06-27  0:46 ` Rick Jones
  2015-06-28 17:20   ` Ramu Ramamurthy
  0 siblings, 1 reply; 9+ messages in thread
From: Rick Jones @ 2015-06-27  0:46 UTC (permalink / raw)
  To: Tom Herbert, davem, netdev, sramamur

On 06/26/2015 04:09 PM, Tom Herbert wrote:
> Add calls to gro_cells infrastructure to do GRO when receiving on a tunnel.
>
> Testing:
>
> Ran 200 netperf TCP_STREAM instance
>
> - With fix (GRO enabled on VXLAN interface)
>
>    Verify GRO is happening.
>
>    9084 MBps tput
>    3.44% CPU utilization
>
> - Without fix (GRO disabled on VXLAN interface)
>
>    Verified no GRO is happening.
>
>    9084 MBps tput
>    5.54% CPU utilization

This has been an area of interest so:

Tested-by: Rick Jones <rick.jones2@hp.com>

Some single-stream results between two otherwise identical systems with
82599ES NICs in them: one running a 4.1.0-rc1+ kernel from a davem tree
from a while ago, the other running 4.1.0+ from a davem tree pulled
yesterday, to which I've applied the patch.

Netperf command used:

netperf -l 30 -H <IP> -t TCP_MAERTS -c -- -O 
throughput,local_cpu_util,local_cpu_peak_util,local_cpu_peak_id,local_sd

First, inbound to the unpatched system from the patched:


MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.0.21 () port 0 AF_INET : demo
Throughput Local Local   Local   Local
            CPU   Peak    Peak    Service
            Util  Per CPU Per CPU Demand
            %     Util %  ID
5487.42    6.01  99.83   0       2.872
5580.83    6.20  99.16   0       2.911
5445.52    5.68  98.92   0       2.734
5653.36    6.24  99.80   0       2.891
5187.56    5.66  97.41   0       2.858

Second, inbound to the patched system from the unpatched:

MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.0.22 () port 0 AF_INET : demo
Throughput Local Local   Local   Local
            CPU   Peak    Peak    Service
            Util  Per CPU Per CPU Demand
            %     Util %  ID
6933.29    3.19  93.67   3       1.208
7031.35    3.34  95.08   3       1.244
7006.28    3.27  94.55   3       1.223
6948.62    3.09  93.20   3       1.165
7007.80    3.22  94.34   3       1.206

Comparing the service demands shows a > 50% reduction in overhead.
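For the first runs, for example, that is (2.872 - 1.208) / 2.872, or
roughly a 58% drop.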

* Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
  2015-06-27  0:46 ` Rick Jones
@ 2015-06-28 17:20   ` Ramu Ramamurthy
  2015-06-28 21:31     ` Tom Herbert
                       ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Ramu Ramamurthy @ 2015-06-28 17:20 UTC (permalink / raw)
  To: Rick Jones; +Cc: Tom Herbert, davem, netdev

On 2015-06-26 17:46, Rick Jones wrote:
> On 06/26/2015 04:09 PM, Tom Herbert wrote:
>> Add calls to gro_cells infrastructure to do GRO when receiving on a 
>> tunnel.
>> 
>> Testing:
>> 
>> Ran 200 netperf TCP_STREAM instance
>> 
>> - With fix (GRO enabled on VXLAN interface)
>> 
>>    Verify GRO is happening.
>> 
>>    9084 MBps tput
>>    3.44% CPU utilization
>> 
>> - Without fix (GRO disabled on VXLAN interface)
>> 
>>    Verified no GRO is happening.
>> 
>>    9084 MBps tput
>>    5.54% CPU utilization
> 
> This has been an area of interest so:
> 
> Tested-by: Rick Jones <rick.jones2@hp.com>
> 
> Some single-stream results between two otherwise identical systems
> with 82599ES NICs in them, one running a 4.1.0-rc1+ kernel from a
> davem tree from a while ago, the other running 4.1.0+ from a davem
> tree pulled yesterday upon which I've applied the patch.
> 
> Netperf command used:
> 
> netperf -l 30 -H <IP> -t TCP_MAERTS -c -- -O
> throughput,local_cpu_util,local_cpu_peak_util,local_cpu_peak_id,local_sd
> 
> First, inbound to the unpatched system from the patched:
> 
> 
> MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.0.21 () port 0 AF_INET : demo
> Throughput Local Local   Local   Local
>            CPU   Peak    Peak    Service
>            Util  Per CPU Per CPU Demand
>            %     Util %  ID
> 5487.42    6.01  99.83   0       2.872
> 5580.83    6.20  99.16   0       2.911
> 5445.52    5.68  98.92   0       2.734
> 5653.36    6.24  99.80   0       2.891
> 5187.56    5.66  97.41   0       2.858
> 
> Second, inbound to the patched system from the unpatched:
> 
> MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.0.22 () port 0 AF_INET : demo
> Throughput Local Local   Local   Local
>            CPU   Peak    Peak    Service
>            Util  Per CPU Per CPU Demand
>            %     Util %  ID
> 6933.29    3.19  93.67   3       1.208
> 7031.35    3.34  95.08   3       1.244
> 7006.28    3.27  94.55   3       1.223
> 6948.62    3.09  93.20   3       1.165
> 7007.80    3.22  94.34   3       1.206
> 
> Comparing the service demands shows a > 50% reduction in overhead.

Rick, in your test, are you seeing GRO becoming effective on the vxlan
interface with the 82599ES NIC? (i.e., tcpdump on the vxlan interface
shows frames larger than the MTU of that interface, and a kernel trace
shows vxlan_gro_receive() being hit)

Throughputs of 5.5 Gbps (or the improved 7 Gbps) lead me to suspect that
GRO is still not effective in your test on the vxlan interface with the
82599ES NIC - because when vxlan GRO became effective with the patch I
suggested earlier, I could see throughput of ~8.5 Gbps on that NIC.
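
A quick way to check, roughly (the interface name is a placeholder, and
the probe only works if vxlan_gro_receive() is not inlined and debug
info is available):

  ethtool -k vxlan0 | grep generic-receive-offload
  tcpdump -i vxlan0 -nn -c 20 tcp      # look for 'length' well above the tunnel MTU
  perf probe --add vxlan_gro_receive && perf stat -e probe:vxlan_gro_receive -a sleep 10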

* Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
  2015-06-28 17:20   ` Ramu Ramamurthy
@ 2015-06-28 21:31     ` Tom Herbert
  2015-06-29 15:40       ` Rick Jones
  2015-06-29 15:45     ` Rick Jones
  2015-06-29 20:04     ` Rick Jones
  2 siblings, 1 reply; 9+ messages in thread
From: Tom Herbert @ 2015-06-28 21:31 UTC (permalink / raw)
  To: Ramu Ramamurthy
  Cc: Rick Jones, David S. Miller, Linux Kernel Network Developers

On Sun, Jun 28, 2015 at 10:20 AM, Ramu Ramamurthy
<sramamur@linux.vnet.ibm.com> wrote:
> On 2015-06-26 17:46, Rick Jones wrote:
>>
>> On 06/26/2015 04:09 PM, Tom Herbert wrote:
>>>
>>> Add calls to gro_cells infrastructure to do GRO when receiving on a
>>> tunnel.
>>>
>>> Testing:
>>>
>>> Ran 200 netperf TCP_STREAM instance
>>>
>>> - With fix (GRO enabled on VXLAN interface)
>>>
>>>    Verify GRO is happening.
>>>
>>>    9084 MBps tput
>>>    3.44% CPU utilization
>>>
>>> - Without fix (GRO disabled on VXLAN interface)
>>>
>>>    Verified no GRO is happening.
>>>
>>>    9084 MBps tput
>>>    5.54% CPU utilization
>>
>>
>> This has been an area of interest so:
>>
>> Tested-by: Rick Jones <rick.jones2@hp.com>
>>
>> Some single-stream results between two otherwise identical systems
>> with 82599ES NICs in them, one running a 4.1.0-rc1+ kernel from a
>> davem tree from a while ago, the other running 4.1.0+ from a davem
>> tree pulled yesterday upon which I've applied the patch.
>>
>> Netperf command used:
>>
>> netperf -l 30 -H <IP> -t TCP_MAERTS -c -- -O
>> throughput,local_cpu_util,local_cpu_peak_util,local_cpu_peak_id,local_sd
>>
>> First, inbound to the unpatched system from the patched:
>>
>>
>> MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 192.168.0.21 () port 0 AF_INET : demo
>> Throughput Local Local   Local   Local
>>            CPU   Peak    Peak    Service
>>            Util  Per CPU Per CPU Demand
>>            %     Util %  ID
>> 5487.42    6.01  99.83   0       2.872
>> 5580.83    6.20  99.16   0       2.911
>> 5445.52    5.68  98.92   0       2.734
>> 5653.36    6.24  99.80   0       2.891
>> 5187.56    5.66  97.41   0       2.858
>>
>> Second, inbound to the patched system from the unpatched:
>>
>> MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 192.168.0.22 () port 0 AF_INET : demo
>> Throughput Local Local   Local   Local
>>            CPU   Peak    Peak    Service
>>            Util  Per CPU Per CPU Demand
>>            %     Util %  ID
>> 6933.29    3.19  93.67   3       1.208
>> 7031.35    3.34  95.08   3       1.244
>> 7006.28    3.27  94.55   3       1.223
>> 6948.62    3.09  93.20   3       1.165
>> 7007.80    3.22  94.34   3       1.206
>>
>> Comparing the service demands shows a > 50% reduction in overhead.
>
>
> Rick, in your test, are you seeing gro becoming effective on the vxlan
> interface
> with the 82599ES nic ? (ie, tcpdump on the vxlan interface shows larger
> frames
> than the mtu of that interface, and kernel trace shows vxlan_gro_receive()
> being hit)
>
> Throughputs of 5.5 Gbps (or the improved 7Gbs) leads me to suspect that gro
> is still not effective
> in your test on the vxlan interface with the 82588ES nic - Because, when
> vxlan gro became effective with the patch
> I suggested earlier, I could see throughput of ~8.5 Gbps on that nic.
>
You're comparing apples to oranges. Please test the patch I posted in
your environment and report results. Please also test with multiple
connections; single-connection performance can be misleading and does
not really reflect what real production servers are doing.

Tom

* Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
  2015-06-28 21:31     ` Tom Herbert
@ 2015-06-29 15:40       ` Rick Jones
  0 siblings, 0 replies; 9+ messages in thread
From: Rick Jones @ 2015-06-29 15:40 UTC (permalink / raw)
  To: Tom Herbert, Ramu Ramamurthy
  Cc: David S. Miller, Linux Kernel Network Developers

On 06/28/2015 02:31 PM, Tom Herbert wrote:
> You're comparing apples to oranges. Please test the patch in your
> environment I posted and report results. Please also test with
> multiple connections, single connection performance can be misleading
> and does not really reflect what real production servers are doing.

Slight drift - Linux is, for lack of a better expression, a complete 
fruit stand.  One customer might indeed be into oranges, but I've had 
customers coming to me wanting to see shiny apples.

happy benchmarking,

rick jones

* Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
  2015-06-28 17:20   ` Ramu Ramamurthy
  2015-06-28 21:31     ` Tom Herbert
@ 2015-06-29 15:45     ` Rick Jones
  2015-06-29 20:04     ` Rick Jones
  2 siblings, 0 replies; 9+ messages in thread
From: Rick Jones @ 2015-06-29 15:45 UTC (permalink / raw)
  To: Ramu Ramamurthy; +Cc: Tom Herbert, davem, netdev

On 06/28/2015 10:20 AM, Ramu Ramamurthy wrote:
> Rick, in your test, are you seeing gro becoming effective on the
> vxlan interface with the 82599ES nic ? (ie, tcpdump on the vxlan
> interface shows larger frames than the mtu of that interface, and
> kernel trace shows vxlan_gro_receive() being hit)
>
> Throughputs of 5.5 Gbps (or the improved 7Gbs) leads me to suspect
> that gro is still not effective in your test on the vxlan interface
> with the 82588ES nic - Because, when vxlan gro became effective with
> the patch I suggested earlier, I could see throughput of ~8.5 Gbps on
> that nic.

For the 5.X Gbit/s test, where I am not getting GRO, I am seeing
1398-byte data packets when I trace vxlan0.

For the other direction, at 7-ish Gbit/s, I am seeing 64XXX-byte packets
on vxlan0, with the occasional 25XXX-byte packet.  If I disable GRO on
that receiving vxlan0 interface, the throughput is more like 4.X Gbit/s.

happy benchmarking,

rick

* Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
  2015-06-28 17:20   ` Ramu Ramamurthy
  2015-06-28 21:31     ` Tom Herbert
  2015-06-29 15:45     ` Rick Jones
@ 2015-06-29 20:04     ` Rick Jones
  2015-06-30  1:06       ` Jesse Gross
  2015-06-30  5:06       ` Eric Dumazet
  2 siblings, 2 replies; 9+ messages in thread
From: Rick Jones @ 2015-06-29 20:04 UTC (permalink / raw)
  To: Ramu Ramamurthy; +Cc: Tom Herbert, davem, netdev

I went ahead and put the patched kernel on both systems.  I was getting
mixed results - in one direction, results in the 8 Gbit/s range, in the
other, in the 7 Gbit/s range.  I noticed that interrupts were going to
different CPUs, so I started playing with IRQ assignments and bound all
interrupts of the 82599ES to CPU0 to remove that variable.  At that
point I started getting 8.X Gbit/s consistently in either direction.
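
That was just manual affinity setting, something along these lines (with
the interface name being whatever the 82599ES port shows up as in
/proc/interrupts):

  service irqbalance stop
  for irq in $(awk '/eth2/ {sub(":","",$1); print $1}' /proc/interrupts); do
      echo 1 > /proc/irq/$irq/smp_affinity    # mask 0x1 == CPU0
  done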

root@qu-stbaz1-perf0000:~# HDR="-P 1"; for i in `seq 1 5`; do netperf -H 
192.168.0.22 -l 30 $HDR -c -C -- -O 
throughput,local_cpu_util,local_sd,local_cpu_peak_util,local_cpu_peak_id,remote_cpu_util,remote_sd,remote_cpu_peak_util,remote_cpu_peak_id; 
HDR="-P 0"; done
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.0.22 () port 0 AF_INET : demo
Throughput Local Local   Local   Local   Remote Remote  Remote  Remote
            CPU   Service Peak    Peak    CPU    Service Peak    Peak
            Util  Demand  Per CPU Per CPU Util   Demand  Per CPU Per CPU
            %             Util %  ID      %              Util %  ID
8768.48    1.95  0.582   62.22   0       4.36   1.304   99.97   0
8757.99    1.95  0.583   62.27   0       4.37   1.307   100.00  0
8793.86    2.01  0.600   64.32   0       4.23   1.262   100.00  0
8720.98    1.93  0.580   61.67   0       4.45   1.337   99.97   0
8380.49    1.84  0.575   58.74   0       4.39   1.374   100.00  0


root@qu-stbaz1-perf0001:~# HDR="-P 1"; for i in `seq 1 5`; do netperf -H 
192.168.0.21 -l 30 $HDR -c -C -- -O 
throughput,local_cpu_util,local_sd,local_cpu_peak_util,local_cpu_peak_id,remote_cpu_util,remote_sd,remote_cpu_peak_util,remote_cpu_peak_id; 
HDR="-P 0"; done
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.0.21 () port 0 AF_INET : demo
Throughput Local Local   Local   Local   Remote Remote  Remote  Remote
            CPU   Service Peak    Peak    CPU    Service Peak    Peak
            Util  Demand  Per CPU Per CPU Util   Demand  Per CPU Per CPU
            %             Util %  ID      %              Util %  ID
8365.16    1.93  0.604   61.64   0       4.57   1.431   99.97   0
8724.08    2.01  0.604   64.31   0       4.66   1.401   100.00  0
8653.70    1.98  0.600   63.37   0       4.67   1.414   99.90   0
8748.05    1.99  0.596   63.62   0       4.62   1.383   99.97   0
8756.66    1.99  0.595   63.55   0       4.52   1.354   99.97   0

If I switch the interrupts to a core on the other socket, throughput 
drops to 7.5 Gbit/s or so either way.

I'm still trying to get onto the consoles to check the power management
settings.  The processors in use are Intel(R) Xeon(R) CPU E5-2670 0 @
2.60GHz.

happy benchmarking,

rick

PS  FWIW, if I shift from using just the native Linux vxlan device to a
"mostly full" set of OpenStack compute-node plumbing - two OVS bridges
and a Linux bridge with the associated plumbing, a vxlan tunnel defined
in OVS, but nothing above the Linux bridge (and no VMs) - I see more
like 4.9 Gbit/s.  The veth pair connecting the Linux bridge to the top
OVS bridge shows rx checksum and GRO enabled.  The Linux bridge itself
shows GRO on but rx checksum off (fixed).  I'm not sure how to go about
checking the OVS constructs.

root@qu-stbaz1-perf0000:/home/raj# cat raj_full_stack.sh
ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 type=vxlan \
    options:remote_ip=$1 options:key=99 options:dst_port=4789
ovs-vsctl add-port br-tun patch-tun -- set interface patch-tun \
    type=patch options:peer=patch-int

ovs-vsctl add-br br-int
ovs-vsctl add-port br-int patch-int -- set interface patch-int \
    type=patch options:peer=patch-tun

brctl addbr qbr
ip link add dev qvb type veth peer name qvo
brctl addif qbr qvb

ovs-vsctl add-port br-int qvo

ifconfig qbr $2
ifconfig qbr mtu 1450
ifconfig qvb up
ifconfig qvo up
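
(The offload settings mentioned above were checked with something like

  ethtool -k qvb | egrep 'generic-receive-offload|rx-checksumming'
  ethtool -k qbr | egrep 'generic-receive-offload|rx-checksumming'

and likewise for qvo; I don't know of an equivalent check for the OVS
patch ports.)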

* Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
  2015-06-29 20:04     ` Rick Jones
@ 2015-06-30  1:06       ` Jesse Gross
  2015-06-30  5:06       ` Eric Dumazet
  1 sibling, 0 replies; 9+ messages in thread
From: Jesse Gross @ 2015-06-30  1:06 UTC (permalink / raw)
  To: Rick Jones; +Cc: Ramu Ramamurthy, Tom Herbert, David Miller, netdev

On Mon, Jun 29, 2015 at 1:04 PM, Rick Jones <rick.jones2@hp.com> wrote:
> PS  FWIW, if I shift from using just the linux native vxlan to a "mostly
> full" set of OpenStack compute node plumbing - two OVS bridges and a linux
> bridge and associated plumbing with a vxlan tunnel defined in OVS, but
> nothing above the Linux bridge (and no VMs) I see more like 4.9 Gbit/s.  The
> veth pair connecting the linux bridge to the top ovs bridge show rx checksum
> and gro enabled.  the linux bridge itself shows GRO but rx checksum off
> (fixed).  I'm not sure how to go about checking the OVS constructs.

This is because the OVS path won't go through the VXLAN device receive
routines, so the code from this patch won't be executed. Your results
make sense then, because this is similar to the original no-GRO case.

This should hopefully be resolved soon - there are some patches in
progress that will make OVS use the normal tunnel device receive
paths. Once those are in, the performance should be equal in both
cases.

* Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
  2015-06-29 20:04     ` Rick Jones
  2015-06-30  1:06       ` Jesse Gross
@ 2015-06-30  5:06       ` Eric Dumazet
  1 sibling, 0 replies; 9+ messages in thread
From: Eric Dumazet @ 2015-06-30  5:06 UTC (permalink / raw)
  To: Rick Jones; +Cc: Ramu Ramamurthy, Tom Herbert, davem, netdev

On Mon, 2015-06-29 at 13:04 -0700, Rick Jones wrote:

> PS  FWIW, if I shift from using just the linux native vxlan to a "mostly 
> full" set of OpenStack compute node plumbing - two OVS bridges and a 
> linux bridge and associated plumbing with a vxlan tunnel defined in OVS, 
> but nothing above the Linux bridge (and no VMs) I see more like 4.9 
> Gbit/s.  The veth pair connecting the linux bridge to the top ovs bridge 
> show rx checksum and gro enabled.  the linux bridge itself shows GRO but 
> rx checksum off (fixed).  I'm not sure how to go about checking the OVS 
> constructs.
> 

This is exactly why we had to implement GRO at the first stage.

We cannot assume a proper netdev tunnel is there where we can use
gro_cells.
