From: wenxu <wenxu@ucloud.cn>
To: Eli Cohen <elic@nvidia.com>
Cc: Linux Kernel Network Developers <netdev@vger.kernel.org>
Subject: Re: mlx5_vdpa problem
Date: Fri, 30 Oct 2020 15:50:21 +0800
Message-ID: <40968d30-f4c4-5f9c-5c6c-fe3d7e5571a3@ucloud.cn>
In-Reply-To: <20201029124544.GC139728@mtl-vdi-166.wap.labs.mlnx>

Hi Eli,


Thanks for your reply.


I updated the firmware to the latest one:

firmware-version: 22.28.4000 (MT_0000000430)


I still see the same problems as described.

The result is the same for the test you suggested.

I did the experiment with two hosts: on one host, the VF representor and
the uplink representor are attached to an OVS bridge, and the other host
is configured with IP address 10.0.0.7/24.

OVS has hw-offload enabled, so the packets do not go through the VF
representor port, yet the same problems occur. I think it may be a FW bug. Thanks.

On 10/29/2020 8:45 PM, Eli Cohen wrote:
> On Thu, Oct 22, 2020 at 06:40:56PM +0800, wenxu wrote:
>
> Please make sure your firmware is updated.
>
> https://www.mellanox.com/support/firmware/connectx6dx
>
>> Hi mellanox team,
>>
>>
>> I tested mlx5 vdpa in linux-5.9 and hit several problems.
>>
>>
>> # lspci | grep Ether | grep Dx
>> b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
>> b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
>>
>> # ethtool -i net2
>> driver: mlx5e_rep
>> version: 5.9.0
>> firmware-version: 22.28.1002 (MT_0000000430)
>> expansion-rom-version:
>> bus-info: 0000:b3:00.0
>> supports-statistics: yes
>> supports-test: no
>> supports-eeprom-access: no
>> supports-register-dump: no
>> supports-priv-flags: no
>>
>>
>> init switchdev:
>>
>>
>> # echo 1 > /sys/class/net/net2/device/sriov_numvfs
>> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
>> # devlink dev eswitch set pci/0000:b3:00.0  mode switchdev encap enable
>>
>> # modprobe vdpa vhost-vdpa mlx5_vdpa
>>
>> # ip l set dev net2 vf 0 mac 52:90:01:00:02:13
>> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
>>
>>
>> setup vm:
>>
>> # qemu-system-x86_64 -name test  -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0
>>
>>
>> In the VM: virtio net device eth0 with IP address 10.0.0.75/24
>>
>> On the host: VF0 rep device pf0vf0 with IP address 10.0.0.7/24
>>
>>
>> problem 1:
>>
>> On the host:
>>
>> # ping 10.0.0.75
>>
>> Sometimes packets are lost.
>>
>> And in the VM:
>>
>> dmesg shows:
>>
>> eth0: bad tso: type 100, size: 0
>>
>> eth0: bad tso: type 10, size: 28
>>
>>
>> So I suspect the vnet header is not initialized to 0. After I cleared the gso_type, gso_size and flags in the virtio_net driver, no more packets were dropped.
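
A minimal sketch of that debugging hack, assuming it is a guest-side change
in the virtio_net RX path (scrub_vnet_hdr is a hypothetical helper name,
not upstream code):

#include <linux/virtio_net.h>

/*
 * Hypothetical workaround sketch: scrub the vnet header before it is
 * parsed, so uninitialized flags/gso fields handed up by the device
 * cannot mark a plain packet as a malformed GSO one.
 */
static void scrub_vnet_hdr(struct virtio_net_hdr *hdr)
{
	hdr->flags = 0;				/* clear VIRTIO_NET_HDR_F_* bits */
	hdr->gso_type = VIRTIO_NET_HDR_GSO_NONE;
	hdr->gso_size = 0;
}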
>>
>>
>> problem 2:
>>
>> In the vm: iperf -s
>>
>> On the host: iperf -c 10.0.0.75 -t 100 -i 2
>>
>>
>> The TCP connection can't be established: the SYN+ACK is sent with a partial checksum, but the checksum is not completed correctly by the hardware.
>>
>> After I turned checksum offload off for eth0 in the VM, the problem was resolved, even though mlx5_vnet advertises the VIRTIO_NET_F_CSUM feature.
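
For reference, a sketch of how a partial-checksum TX packet such as that
SYN+ACK is described to the device in the vnet header (field values are
illustrative, for an IPv4 TCP frame):

#include <linux/virtio_net.h>

/*
 * The device must compute the checksum over the data starting at
 * csum_start and store the result at csum_start + csum_offset. If the
 * hardware forwards the frame without completing this, the peer sees a
 * bad TCP checksum and the handshake never completes.
 */
struct virtio_net_hdr hdr = {
	.flags       = VIRTIO_NET_HDR_F_NEEDS_CSUM,
	.gso_type    = VIRTIO_NET_HDR_GSO_NONE,
	.csum_start  = 34,	/* ETH_HLEN + 20-byte IPv4 header */
	.csum_offset = 16,	/* offsetof(struct tcphdr, check) */
};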
>>
>>
>>
>> problem 3:
>>
>>
>> The iperf performance is not stable until I disable TSO offload on pf0vf0:
>>
>> # ethtool -K pf0vf0 tso off
>>
>>
>> I know mlx5_vnet does not support the VIRTIO_NET_F_GUEST_TSO4 feature. But can't the hardware segment the big TSO packet into several small TCP packets and send those to the virtio net device?
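
For comparison, a software datapath handles a peer that did not negotiate
VIRTIO_NET_F_GUEST_TSO4 by segmenting before delivery. An illustrative
sketch only (not mlx5 or vhost code; deliver() is a hypothetical
per-segment callback) of what the question asks the hardware to do:

#include <linux/err.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Split a GSO skb into MSS-sized packets before handing each one to a
 * receiver that cannot accept TSO frames. */
static int deliver_segmented(struct sk_buff *skb,
			     int (*deliver)(struct sk_buff *seg))
{
	struct sk_buff *segs, *seg, *next;

	if (!skb_is_gso(skb))
		return deliver(skb);

	segs = skb_gso_segment(skb, 0);	/* assume no offload features */
	if (IS_ERR_OR_NULL(segs))
		return segs ? PTR_ERR(segs) : -EINVAL;

	consume_skb(skb);		/* original superframe is done */
	skb_list_walk_safe(segs, seg, next) {
		skb_mark_not_on_list(seg);
		deliver(seg);
	}
	return 0;
}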
>>
>>
>>
>>
>> BR
>>
>> wenxu
>>
>>
>>
