* some veth related issues
@ 2009-08-04 14:30 Or Gerlitz
  2009-08-04 15:47 ` Ben Greear
  0 siblings, 1 reply; 10+ messages in thread
From: Or Gerlitz @ 2009-08-04 14:30 UTC (permalink / raw)
  To: netdev

I'm trying to do some veth testing and ran into a couple of issues:

1. When doing a veth(1,0)->bridge->veth(2,3) test using pktgen, the packet size reported
by the veth and bridge statistics is eight bytes, whereas the pkt_size param to pktgen is 64.

However, when doing ping -s 22 on a veth(1,0)->bridge->NIC config, the reported packet size
is 50, which makes sense since the NIC adds/removes the 14-byte L2 header.
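
(For reference, assuming a standard ICMP echo the arithmetic would be: 22 bytes of ping payload
+ 8 bytes ICMP header + 20 bytes IPv4 header = 50 bytes as counted on veth1/0, and 50 + 14 bytes
of Ethernet header = 64 bytes as counted on eth1.)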

2. The veth(1,0)->bridge->veth(2,3) pktgen test works in the sense that packets are forwarded
by the bridge to the veth-(2,3) device pair, but the pktgen veth(1,0)->bridge->NIC test doesn't work
- no TX counters are increased on the NIC (I ran some traffic "from" veth-3 and the NIC devices
beforehand to avoid the bridge flooding path; see the sketch just below).
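
(By "ran some traffic" I mean something along these lines - the exact commands here are only
illustrative, assuming the bridge is br0 and 1.1.1.1 is just some unused address:

 ping -I veth3 1.1.1.1 -i 0.05 -c 10 -q    # so the bridge learns veth3's MAC on the veth2 port
 brctl showmacs br0                        # verify the FDB entry is there
)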

To debug the failure to get the veth(1,0)->bridge->NIC config working, I removed the bridge,
ran pktgen over veth1 and opened a tcpdump on veth0; the resulting dump looks quite bad, see below.
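
(The capture itself was nothing special - something like the following, with the flags being
illustrative rather than the literal command line I used:

 tcpdump -i veth0 -e -x
)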

Here's some data; any ideas would be very much appreciated. This is 2.6.30.

Or.

ping on veth(1,0)->bridge->NIC: the reported packet size (bytes/packets) is 64 on eth1 and 50 on veth1/0

Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets
  eth1:24911352  389240    0    0    0     0          0         0 24926644  389304
 veth1:19466002  389256    0    0    0     0          0         0 19468999  389270
 veth0:19468999  389270    0    0    0     0          0         0 19466002  389256

pktgen veth(1,0)->bridge->veth(2,3): the reported packet size is 8 on veth1 and veth3, but
the pkt_size param was 64

Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets
 veth1:    4066      17    0    0    0     0          0         0 80007992 10000030
 veth0:80007992 10000030    0    0    0     0          0         0     4066      17
 veth3:80004818 10000012    0    0    0     0          0         0     6370      30
 veth2:    6370      30    0    0    0     0          0         0 80004818 10000012


Here's the tcpdump output. I told pktgen to send 10 packets with a one-second delay between packets,
to make sure tcpdump captures everything. The first packet is what I was expecting, but none of the
ones that follow... Below is the pktgen script I used and some config info.

72:ec:8e:4f:89:01 > 72:ec:8e:4f:89:03, ethertype IPv4 (0x0800), length 64: 20.20.49.11.discard > 20.20.49.13.discard: UDP, length 22
00:00:20:11:ab:09 > 45:00:00:32:65:72, ethertype Unknown (0x1414), length 50:
	0x0000:  310b 1414 310d 0009 0009 001e 0000 be9b  1...1...........
	0x0010:  e955 0000 0001 4a78 a286 0001 5afc 6d6f  .U....Jx....Z.mo
	0x0020:  6465 000a                                de..
00:09:00:09:00:1e > 31:0b:14:14:31:0d, 802.3, length 36: LLC, dsap Unknown (0xbe), ssap Unknown (0x9a), cmd 0x55e9: Supervisory, Receiver Ready, rcv seq 42, Flags [Command, Poll], length 22
00:01:4a:78:a2:86 > be:9b:e9:55:00:00, 802.3, length 22: LLC, dsap Unknown (0x5a), ssap Unknown (0xfc), cmd 0x6f6d: Supervisory, Receiver not Ready, rcv seq 55, Flags [Command, Poll], length 8
[|ether]
[|ether]
[|ether]
[|ether]
[|ether]
[|ether]

# ifconfig | grep veth
veth0     Link encap:Ethernet  HWaddr 72:EC:8E:4F:89:00
veth1     Link encap:Ethernet  HWaddr 72:EC:8E:4F:89:01
veth2     Link encap:Ethernet  HWaddr 72:EC:8E:4F:89:02
veth3     Link encap:Ethernet  HWaddr 72:EC:8E:4F:89:03

#! /bin/sh

#modprobe pktgen


function pgset() {
    local result

    echo $1 > $PGDEV

    result=`cat $PGDEV | fgrep "Result: OK:"`
    if [ "$result" = "" ]; then
         cat $PGDEV | fgrep Result:
    fi
}

function pg() {
    echo inject > $PGDEV
    cat $PGDEV
}

# Config Start Here -----------------------------------------------------------


# thread config
# Each CPU has its own thread. Two-CPU example. We add veth1 and eth2, respectively.

PGDEV=/proc/net/pktgen/kpktgend_0
  echo "Removing all devices"
 pgset "rem_device_all"
  echo "Adding veth1"
 pgset "add_device veth1"
  echo "Setting max_before_softirq 10000"
 pgset "max_before_softirq 10000"


# device config
# delay 0 means maximum speed.

CLONE_SKB="clone_skb 1000000"
# NIC adds 4 bytes CRC
PKT_SIZE="pkt_size 64"

# COUNT 0 means forever
#COUNT="count 0"
COUNT="count 10"
DELAY="delay 1000000000"

PGDEV=/proc/net/pktgen/veth1
  echo "Configuring $PGDEV"
 pgset "$COUNT"
 pgset "$CLONE_SKB"
 pgset "$PKT_SIZE"
 pgset "$DELAY"

 pgset "src_min 20.20.49.11"
 pgset "src_max 20.20.49.11"
 pgset "src_mac 72:ec:8e:4f:89:01"

 pgset "dst 20.20.49.13"
 pgset "dst_mac 72:ec:8e:4f:89:03"

# Time to run
PGDEV=/proc/net/pktgen/pgctrl

 echo "Running... ctrl^C to stop"
 pgset "start"
 echo "Done"

# Result can be viewed in /proc/net/pktgen/veth1
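# For example, to pull out just the result line (same fgrep pattern used in pgset above):
#   cat /proc/net/pktgen/veth1 | fgrep Result: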


* Re: some veth related issues
  2009-08-04 14:30 some veth related issues Or Gerlitz
@ 2009-08-04 15:47 ` Ben Greear
  2009-08-05  4:40   ` Or Gerlitz
  0 siblings, 1 reply; 10+ messages in thread
From: Ben Greear @ 2009-08-04 15:47 UTC (permalink / raw)
  To: Or Gerlitz; +Cc: netdev

Or Gerlitz wrote:
> I'm trying to do some veth testing and ran into a couple of issues:
>
> 1. When doing a veth(1,0)->bridge->veth(2,3) test using pktgen, the packet size reported
> by the veth and bridge statistics is eight bytes, whereas the pkt_size param to pktgen is 64.
>
> However, if doing ping -s 22 on a veth(1,0)->bridge->NIC config, the reported packet size
> is 50 which makes sense since the NIC adds/removes the L2 header of 14 bytes.
>
> 2. The veth(1,0)->bridge->veth(2,3) pktgen test works in the sense that packets are forwarded
> by the bridge to the veth-(2,3) device pair, but the pktgen veth(1,0)->bridge->NIC test doesn't work
> - no TX counters are increased on the NIC (I ran some traffic "from" veth-3 and the NIC devices
> to avoid the bridge flooding path).
>   

Try setting clone to 0.  Might be that sending cloned pkts over veth is 
a bad idea.
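
Concretely, in the script you posted that would be something like:

 CLONE_SKB="clone_skb 0"    # fresh skb for every packet instead of re-sending a cloned one
 pgset "$CLONE_SKB"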

Thanks,
Ben

> To debug the failure to get the veth(1,0)->bridge->NIC config working, I removed the bridge,
> ran pktgen over veth1 and opened a tcpdump on veth0; the resulting dump looks quite bad, see below.
>
> Here's some data, any ideas will be very much appreciated, this is 2.6.30
>
> Or.
>
> ping on veth(1,0)->bridge->NIC the reported packet size (bytes/packets) is 64 on eth1 and 50 on veth1/0
>
> Inter-|   Receive                                                |  Transmit
>  face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets
>   eth1:24911352  389240    0    0    0     0          0         0 24926644  389304
>  veth1:19466002  389256    0    0    0     0          0         0 19468999  389270
>  veth0:19468999  389270    0    0    0     0          0         0 19466002  389256
>
> pktgen veth(1,0)->bridge->veth(2,3) the reported packet size is 8 on veth1 and veth3 but
> the pkt_size param was 64
>
> Inter-|   Receive                                                |  Transmit
>  face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets
>  veth1:    4066      17    0    0    0     0          0         0 80007992 10000030
>  veth0:80007992 10000030    0    0    0     0          0         0     4066      17
>  veth3:80004818 10000012    0    0    0     0          0         0     6370      30
>  veth2:    6370      30    0    0    0     0          0         0 80004818 10000012
>
>
> here's the tcpdump output, I told pktgen to send 10 packets and delay for a second between packets,
> to make sure tcpdump captures everything. The first packet is what I was expecting, but none of the
> ones that follow... below is the pktgen script I used and some config info
>
> 72:ec:8e:4f:89:01 > 72:ec:8e:4f:89:03, ethertype IPv4 (0x0800), length 64: 20.20.49.11.discard > 20.20.49.13.discard: UDP, length 22
> 00:00:20:11:ab:09 > 45:00:00:32:65:72, ethertype Unknown (0x1414), length 50:
> 	0x0000:  310b 1414 310d 0009 0009 001e 0000 be9b  1...1...........
> 	0x0010:  e955 0000 0001 4a78 a286 0001 5afc 6d6f  .U....Jx....Z.mo
> 	0x0020:  6465 000a                                de..
> 00:09:00:09:00:1e > 31:0b:14:14:31:0d, 802.3, length 36: LLC, dsap Unknown (0xbe), ssap Unknown (0x9a), cmd 0x55e9: Supervisory, Receiver Ready, rcv seq 42, Flags [Command, Poll], length 22
> 00:01:4a:78:a2:86 > be:9b:e9:55:00:00, 802.3, length 22: LLC, dsap Unknown (0x5a), ssap Unknown (0xfc), cmd 0x6f6d: Supervisory, Receiver not Ready, rcv seq 55, Flags [Command, Poll], length 8
> [|ether]
> [|ether]
> [|ether]
> [|ether]
> [|ether]
> [|ether]
>
> # ifconfig | grep veth
> veth0     Link encap:Ethernet  HWaddr 72:EC:8E:4F:89:00
> veth1     Link encap:Ethernet  HWaddr 72:EC:8E:4F:89:01
> veth2     Link encap:Ethernet  HWaddr 72:EC:8E:4F:89:02
> veth3     Link encap:Ethernet  HWaddr 72:EC:8E:4F:89:03
>
> #! /bin/sh
>
> #modprobe pktgen
>
>
> function pgset() {
>     local result
>
>     echo $1 > $PGDEV
>
>     result=`cat $PGDEV | fgrep "Result: OK:"`
>     if [ "$result" = "" ]; then
>          cat $PGDEV | fgrep Result:
>     fi
> }
>
> function pg() {
>     echo inject > $PGDEV
>     cat $PGDEV
> }
>
> # Config Start Here -----------------------------------------------------------
>
>
> # thread config
> # Each CPU has its own thread. Two-CPU example. We add veth1 and eth2, respectively.
>
> PGDEV=/proc/net/pktgen/kpktgend_0
>   echo "Removing all devices"
>  pgset "rem_device_all"
>   echo "Adding veth1"
>  pgset "add_device veth1"
>   echo "Setting max_before_softirq 10000"
>  pgset "max_before_softirq 10000"
>
>
> # device config
> # delay 0 means maximum speed.
>
> CLONE_SKB="clone_skb 1000000"
> # NIC adds 4 bytes CRC
> PKT_SIZE="pkt_size 64"
>
> # COUNT 0 means forever
> #COUNT="count 0"
> COUNT="count 10"
> DELAY="delay 1000000000"
>
> PGDEV=/proc/net/pktgen/veth1
>   echo "Configuring $PGDEV"
>  pgset "$COUNT"
>  pgset "$CLONE_SKB"
>  pgset "$PKT_SIZE"
>  pgset "$DELAY"
>
>  pgset "src_min 20.20.49.11"
>  pgset "src_max 20.20.49.11"
>  pgset "src_mac 72:ec:8e:4f:89:01"
>
>  pgset "dst 20.20.49.13"
>  pgset "dst_mac 72:ec:8e:4f:89:03"
>
> # Time to run
> PGDEV=/proc/net/pktgen/pgctrl
>
>  echo "Running... ctrl^C to stop"
>  pgset "start"
>  echo "Done"
>
> # Result can be viewed in /proc/net/pktgen/veth1


-- 
Ben Greear <greearb@candelatech.com> 
Candela Technologies Inc  http://www.candelatech.com




* Re: some veth related issues
  2009-08-04 15:47 ` Ben Greear
@ 2009-08-05  4:40   ` Or Gerlitz
  2009-08-05  4:48     ` Ben Greear
  0 siblings, 1 reply; 10+ messages in thread
From: Or Gerlitz @ 2009-08-05  4:40 UTC (permalink / raw)
  To: Ben Greear; +Cc: netdev

Ben Greear wrote:
> Try setting clone to 0. Might be that sending cloned pkts over veth is 
> a bad idea.

Okay, thanks, this fixed the problem. However, there seems to be some 
performance loss: with cloning I saw numbers of 500K packets-per-second 
and above, whereas without cloning I get about 400K PPS.

Or.



* Re: some veth related issues
  2009-08-05  4:40   ` Or Gerlitz
@ 2009-08-05  4:48     ` Ben Greear
  2009-08-05  5:41       ` bridge vs macvlan performance (was: some veth related issues) Or Gerlitz
  2009-08-05  5:41       ` bridge vs macvlan performance (was: some veth related issues) Or Gerlitz
  0 siblings, 2 replies; 10+ messages in thread
From: Ben Greear @ 2009-08-05  4:48 UTC (permalink / raw)
  To: Or Gerlitz; +Cc: netdev

Or Gerlitz wrote:
> Ben Greear wrote:
>> Try setting clone to 0. Might be that sending cloned pkts over veth 
>> is a bad idea.
>
> Okay, thanks, this fixed the problem. However, there seems to be some 
> performance loss: with cloning I saw numbers of 500K 
> packets-per-second and above, whereas without cloning I get about 400K PPS.
Well, it seems we could and should fix veth to work, but it will most 
likely have to do the equivalent work of copying an skb, so either way 
you'll probably get a big performance hit.

Thanks,
Ben
>
> Or.


-- 
Ben Greear <greearb@candelatech.com> 
Candela Technologies Inc  http://www.candelatech.com




* bridge vs macvlan performance (was: some veth related issues)
  2009-08-05  4:48     ` Ben Greear
@ 2009-08-05  5:41       ` Or Gerlitz
  2009-08-05  5:50         ` bridge vs macvlan performance Ben Greear
  2009-08-05  5:50         ` Ben Greear
  2009-08-05  5:41       ` bridge vs macvlan performance (was: some veth related issues) Or Gerlitz
  1 sibling, 2 replies; 10+ messages in thread
From: Or Gerlitz @ 2009-08-05  5:41 UTC (permalink / raw)
  To: Ben Greear, Stephen Hemminger
  Cc: netdev, Vytautas Valancius, Sapan Bhatia, virtualization

Ben Greear wrote:
> Well, it seems we could and should fix veth to work, but it will have 
> to do equivalent work of copying  an skb most likely, so either way 
> you'll probably get a big performance hit.
Using the same pktgen script (i.e. with clone=0) I see that a 
veth-->bridge-->veth configuration gives about 400K PPS forwarding 
performance, whereas macvlan-->veth-->macvlan gives 680K PPS (again, I made 
sure that the bridge had completed learning before I started the test). 
Basically, both the bridge and macvlan use a hash on the destination MAC 
in order to know to which device to forward the packet; is there anything 
in the bridge logic that can explain the gap? Is there something which 
isn't really apples-to-apples in this comparison?

Or.



* Re: bridge vs macvlan performance
  2009-08-05  5:41       ` bridge vs macvlan performance (was: some veth related issues) Or Gerlitz
@ 2009-08-05  5:50         ` Ben Greear
  2009-08-05  7:02           ` Or Gerlitz
  2009-08-05  7:02           ` Or Gerlitz
  2009-08-05  5:50         ` Ben Greear
  1 sibling, 2 replies; 10+ messages in thread
From: Ben Greear @ 2009-08-05  5:50 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Stephen Hemminger, netdev, Vytautas Valancius, Sapan Bhatia,
	virtualization

Or Gerlitz wrote:
> Ben Greear wrote:
>> Well, it seems we could and should fix veth to work, but it will have 
>> to do equivalent work of copying  an skb most likely, so either way 
>> you'll probably get a big performance hit.
> Using the same pktgen script (i.e. with clone=0) I see that a 
> veth-->bridge-->veth configuration gives about 400K PPS forwarding 
> performance, whereas macvlan-->veth-->macvlan gives 680K PPS (again, I 
> made sure that the bridge had completed learning before I started the 
> test). Basically, both the bridge and macvlan use a hash on the 
> destination MAC in order to know to which device to forward the packet; 
> is there anything in the bridge logic that can explain the gap? Is 
> there something which isn't really apples-to-apples in this comparison?
A VETH has to send to its peer, so your descriptions are a bit vague.

What are you really configuring?  Maybe show us your script or commands 
that set up each of these tests?

Ben

>
> Or.
>


-- 
Ben Greear <greearb@candelatech.com> 
Candela Technologies Inc  http://www.candelatech.com




* Re: bridge vs macvlan performance
  2009-08-05  5:50         ` bridge vs macvlan performance Ben Greear
@ 2009-08-05  7:02           ` Or Gerlitz
  2009-08-05  7:02           ` Or Gerlitz
  1 sibling, 0 replies; 10+ messages in thread
From: Or Gerlitz @ 2009-08-05  7:02 UTC (permalink / raw)
  To: Ben Greear
  Cc: Stephen Hemminger, netdev, Vytautas Valancius, Sapan Bhatia,
	virtualization

Ben Greear wrote:
> Or Gerlitz wrote:

>> Using the same pktgen script (i.e. with clone=0) I see that a
>> veth-->bridge-->veth configuration gives about 400K PPS forwarding
>> performance, whereas macvlan-->veth-->macvlan gives 680K PPS (again, I
>> made sure that the bridge had completed learning before I started the test).

(It's interesting how many times the same mistake can be made...) Setting net.bridge.bridge-nf-call-iptables=0 made the veth-->bridge-->veth test deliver 600K PPS, reducing the gain of the macvlan-->veth-->macvlan test from 70% to 20% - much smaller, but still notable.
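
(For anyone repeating this, the knob can be checked and set like so:

 sysctl net.bridge.bridge-nf-call-iptables          # 1 means bridged frames traverse iptables
 sysctl -w net.bridge.bridge-nf-call-iptables=0     # skip iptables for bridged frames
)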

> A VETH has to send to it's peer, so your descriptions are a bit vague.
> What are you really configuring?  Maybe show us your script or commands
> that set up each of these tests?

Yes, VETH has to send to its peer, so the veth/bridge/veth test actually has two more hops than the macvlan/veth/macvlan test; maybe this can explain the difference. As for your question, see my configuration below.

I am looking for the simplest setup to test Linux bridge forwarding performance. I could do a tap-->bridge-->tap test with two processes sitting in user space, but I tend to think that user/kernel switches and the tap code may become the bottleneck in that case, whereas the kernel pktgen is much more efficient.
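
(If I were to try the tap variant, the plumbing would look roughly like the following - untested,
and assuming tunctl from uml-utilities and the br0 bridge from the script below:

 tunctl -t tap0
 tunctl -t tap1
 brctl addif br0 tap0
 brctl addif br0 tap1
 ifconfig tap0 up
 ifconfig tap1 up
 # plus one user-space process injecting frames on tap0 and another draining tap1
)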

Or.

------> for the veth/bridge/veth test I do the following, so that my config is
------> pktgen --> veth1 --> veth0 --> br0 --> veth2 --> veth3

BRIDGE=br0

brctl addbr $BRIDGE
ifconfig $BRIDGE up

# set the bridge such that it does NOT call iptables 
sysctl -w net.bridge.bridge-nf-call-iptables=0

DEV_A=veth0
DEV_B=veth1
MAC_A=72:EC:8E:4F:89:00
MAC_B=72:EC:8E:4F:89:01

DEV_B_IP=20.20.49.11
MASK=16

# create the 1st veth device pair
ip link add name $DEV_A address $MAC_A type veth peer name $DEV_B address $MAC_B

# bring up and connect one veth device to the bridge
ifconfig $DEV_A up
brctl addif $BRIDGE $DEV_A

# configure the other veth device as NIC
ifconfig $DEV_B $DEV_B_IP/$MASK up

DEV_C=veth2
DEV_D=veth3
MAC_C=72:EC:8E:4F:89:02
MAC_D=72:EC:8E:4F:89:03

DEV_D_IP=20.20.49.13

# create the 2nd veth device pair
ip link add name $DEV_C address $MAC_C type veth peer name $DEV_D address $MAC_D

# bring up and connect the other veth device to the bridge
ifconfig $DEV_C up
brctl addif $BRIDGE $DEV_C

# configure the other veth device as NIC
ifconfig $DEV_D $DEV_D_IP/$MASK up

# make local Linux bridge learning come into play, populate the bridge FDB 
REMOTE=1.1.1.1
ping -I $DEV_B $REMOTE -i 0.05 -c 10 -q
ping -I $DEV_D $REMOTE -i 0.05 -c 10 -q

# examine the bridge FDB to make sure learning happened
brctl showmacs $BRIDGE

------> for the macvlan/veth/macvlan test I do the following, so that my config is
------> pktgen --> mv0 --> veth1 --> veth0 --> mv1

DEV_A=veth0
DEV_B=veth1
MAC_A=72:EC:8E:4F:89:00
MAC_B=72:EC:8E:4F:89:01

# create the 1st veth device pair
ip link add name $DEV_A address $MAC_A type veth peer name $DEV_B address $MAC_B

# bring up both veth devices (no bridge in this test)
ifconfig $DEV_A up
ifconfig $DEV_B up

UPLINK_DEV_A=veth1
UPLINK_DEV_B=veth0

DEV_A=mv1
DEV_B=mv0

MAC_A=00:19:d1:29:d2:01
MAC_B=00:19:d1:29:d2:00

ip link add link $UPLINK_DEV_A address $MAC_A $DEV_A type macvlan
ip link add link $UPLINK_DEV_B address $MAC_B $DEV_B type macvlan
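
# bring the macvlan devices up as well - I believe pktgen will not transmit on a
# device that is down
ifconfig $DEV_B up
ifconfig $DEV_A up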
