From: wenxu <wenxu@ucloud.cn>
To: Roi Dayan <roid@mellanox.com>
Cc: netdev@vger.kernel.org, saeedm@mellanox.com
Subject: Bad performance for VF outgoing in offloaded mode
Date: Thu, 28 Nov 2019 13:03:06 +0800 [thread overview]
Message-ID: <fc909cd7-3e82-89a6-9fe8-8eba546686d8@ucloud.cn> (raw)
In-Reply-To: <84874b42-c525-2149-539d-e7510d15f6a6@mellanox.com>
Hi Mellanox team,

I ran a performance test for tc offload with an upstream kernel.

I set up a VM with a VF as eth0. In the VM:

ifconfig eth0 10.0.0.75/24 up

On the host, mlx_p0 is the PF representor and mlx_pf0vf0 is the VF representor. The device is in switchdev mode:
# grep -ri "" /sys/class/net/*/phys_* 2>/dev/null
/sys/class/net/mlx_p0/phys_port_name:p0
/sys/class/net/mlx_p0/phys_switch_id:34ebc100034b6b50
/sys/class/net/mlx_pf0vf0/phys_port_name:pf0vf0
/sys/class/net/mlx_pf0vf0/phys_switch_id:34ebc100034b6b50
/sys/class/net/mlx_pf0vf1/phys_port_name:pf0vf1
/sys/class/net/mlx_pf0vf1/phys_switch_id:34ebc100034b6b50
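For reference, switchdev mode is typically enabled with devlink; a minimal sketch (not part of the original setup, and the PCI address is a placeholder you must substitute for your PF):

```shell
# Hypothetical helper: switch a PF's eswitch to switchdev mode.
# The pci/ADDR argument is an assumption; find yours with lspci.
setup_switchdev() {
    local pf=${1:?usage: setup_switchdev pci/ADDR}   # e.g. pci/0000:03:00.0
    devlink dev eswitch set "$pf" mode switchdev
    devlink dev eswitch show "$pf"                   # should report "mode switchdev"
}

# Helper: all representors of one eswitch must share the same phys_switch_id,
# as in the grep output above.
same_switch_id() {
    [ "$(printf '%s\n' "$@" | sort -u | wc -l)" -eq 1 ]
}
# e.g.: same_switch_id $(cat /sys/class/net/mlx_*/phys_switch_id)
```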
The tc filters are as follows; they simply redirect IP/ARP packets between mlx_p0 and mlx_pf0vf0 in both directions:
tc qdisc add dev mlx_p0 ingress
tc qdisc add dev mlx_pf0vf0 ingress
tc filter add dev mlx_pf0vf0 pref 2 ingress protocol ip flower skip_sw action mirred egress redirect dev mlx_p0
tc filter add dev mlx_p0 pref 2 ingress protocol ip flower skip_sw action mirred egress redirect dev mlx_pf0vf0
tc filter add dev mlx_pf0vf0 pref 1 ingress protocol arp flower skip_sw action mirred egress redirect dev mlx_p0
tc filter add dev mlx_p0 pref 1 ingress protocol arp flower skip_sw action mirred egress redirect dev mlx_pf0vf0
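The four rules above follow one pattern (per protocol, one rule in each direction). As a sketch, not part of the original mail, they can be generated from a single helper; DRY_RUN=1 prints the commands instead of executing them, so the logic can be checked without the hardware:

```shell
# Hypothetical helper: build the four skip_sw redirect rules for a
# representor pair (ARP at pref 1, IP at pref 2, both directions).
add_redirect_rules() {
    local a=$1 b=$2 proto pair cmd
    for proto in arp:1 ip:2; do
        local p=${proto%%:*} pref=${proto##*:}
        for pair in "$a $b" "$b $a"; do
            set -- $pair   # $1 = ingress dev, $2 = redirect target
            cmd="tc filter add dev $1 pref $pref ingress protocol $p flower skip_sw action mirred egress redirect dev $2"
            if [ "${DRY_RUN:-0}" = 1 ]; then echo "$cmd"; else $cmd; fi
        done
    done
}
# e.g.: DRY_RUN=1 add_redirect_rules mlx_p0 mlx_pf0vf0
```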
On the remote server, device eth0:

ifconfig eth0 10.0.0.241/24

Test case 1: TCP receive on the VF (traffic flows from the remote PF side to the VF)

In the VM: iperf -s

On the remote server:
iperf -c 10.0.0.75 -t 10 -i 2
------------------------------------------------------------
Client connecting to 10.0.0.75, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.241 port 59708 connected with 10.0.0.75 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 5.40 GBytes 23.2 Gbits/sec
[ 3] 2.0- 4.0 sec 5.35 GBytes 23.0 Gbits/sec
[ 3] 4.0- 6.0 sec 5.46 GBytes 23.5 Gbits/sec
[ 3] 6.0- 8.0 sec 5.10 GBytes 21.9 Gbits/sec
[ 3] 8.0-10.0 sec 5.36 GBytes 23.0 Gbits/sec
[ 3] 0.0-10.0 sec 26.7 GBytes 22.9 Gbits/sec
Good performance with offload.
# tc -s filter ls dev mlx_p0 ingress
filter protocol arp pref 1 flower chain 0
filter protocol arp pref 1 flower chain 0 handle 0x1
eth_type arp
skip_sw
in_hw in_hw_count 1
action order 1: mirred (Egress Redirect to device mlx_pf0vf0) stolen
index 4 ref 1 bind 1 installed 971 sec used 82 sec
Action statistics:
Sent 420 bytes 7 pkt (dropped 0, overlimits 0 requeues 0)
Sent software 0 bytes 0 pkt
Sent hardware 420 bytes 7 pkt
backlog 0b 0p requeues 0
filter protocol ip pref 2 flower chain 0
filter protocol ip pref 2 flower chain 0 handle 0x1
eth_type ipv4
skip_sw
in_hw in_hw_count 1
action order 1: mirred (Egress Redirect to device mlx_pf0vf0) stolen
index 2 ref 1 bind 1 installed 972 sec used 67 sec
Action statistics:
Sent 79272204362 bytes 91511261 pkt (dropped 0, overlimits 0 requeues 0)
Sent software 0 bytes 0 pkt
Sent hardware 79272204362 bytes 91511261 pkt
backlog 0b 0p requeues 0
# tc -s filter ls dev mlx_pf0vf0 ingress
filter protocol arp pref 1 flower chain 0
filter protocol arp pref 1 flower chain 0 handle 0x1
eth_type arp
skip_sw
in_hw in_hw_count 1
action order 1: mirred (Egress Redirect to device mlx_p0) stolen
index 3 ref 1 bind 1 installed 978 sec used 88 sec
Action statistics:
Sent 600 bytes 10 pkt (dropped 0, overlimits 0 requeues 0)
Sent software 0 bytes 0 pkt
Sent hardware 600 bytes 10 pkt
backlog 0b 0p requeues 0
filter protocol ip pref 2 flower chain 0
filter protocol ip pref 2 flower chain 0 handle 0x1
eth_type ipv4
skip_sw
in_hw in_hw_count 1
action order 1: mirred (Egress Redirect to device mlx_p0) stolen
index 1 ref 1 bind 1 installed 978 sec used 73 sec
Action statistics:
Sent 71556027574 bytes 47805525 pkt (dropped 0, overlimits 0 requeues 0)
Sent software 0 bytes 0 pkt
Sent hardware 71556027574 bytes 47805525 pkt
backlog 0b 0p requeues 0
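The "Sent software" / "Sent hardware" split in the counters above is what confirms the offload; with skip_sw rules the software counter should stay at 0. As a small sketch (not part of the original mail), the split can be summed from the tc output:

```shell
# Sketch: sum the "Sent software"/"Sent hardware" byte counters from
# `tc -s filter show DEV ingress` output read on stdin.
offload_bytes() {
    awk '/Sent software/ { sw += $3 }
         /Sent hardware/ { hw += $3 }
         END { printf "software=%d hardware=%d\n", sw, hw }'
}
# Usage: tc -s filter show dev mlx_pf0vf0 ingress | offload_bytes
```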
Test case 2: TCP send from the VF to the PF

On the remote server: iperf -s

In the VM:
# iperf -c 10.0.0.241 -t 10 -i 2
------------------------------------------------------------
Client connecting to 10.0.0.241, TCP port 5001
TCP window size: 230 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.75 port 53166 connected with 10.0.0.241 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 939 MBytes 3.94 Gbits/sec
[ 3] 2.0- 4.0 sec 944 MBytes 3.96 Gbits/sec
[ 3] 4.0- 6.0 sec 1.01 GBytes 4.34 Gbits/sec
[ 3] 6.0- 8.0 sec 1.03 GBytes 4.44 Gbits/sec
[ 3] 8.0-10.0 sec 1.02 GBytes 4.39 Gbits/sec
[ 3] 0.0-10.0 sec 4.90 GBytes 4.21 Gbits/sec
Bad performance with offload, even though all the packets are offloaded.

Is this an offload problem in the hardware?
BR
wenxu