netdev.vger.kernel.org archive mirror
* Re: [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config
       [not found] <1603098438-20200-1-git-send-email-wenxu@ucloud.cn>
@ 2020-10-20  2:03 ` Jason Wang
  2020-10-20  7:44   ` Eli Cohen
  0 siblings, 1 reply; 11+ messages in thread
From: Jason Wang @ 2020-10-20  2:03 UTC (permalink / raw)
  To: wenxu, netdev, eli


On 2020/10/19 5:07 PM, wenxu@ucloud.cn wrote:
> From: wenxu <wenxu@ucloud.cn>
>
> Qemu get virtio_net_config from the vdpa driver. So The vdpa driver
> should set the VIRTIO_NET_S_LINK_UP flag to virtio_net_config like
> vdpa_sim. Or the link of virtio net NIC in the virtual machine will
> never up.
>
> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> Signed-off-by: wenxu <wenxu@ucloud.cn>
> ---
>   drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
>   1 file changed, 2 insertions(+)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 74264e59..af6c74c 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1537,6 +1537,8 @@ static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features)
>   	ndev->mvdev.actual_features = features & ndev->mvdev.mlx_features;
>   	ndev->config.mtu = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
>   					     ndev->mtu);
> +	ndev->config.status = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
> +					       VIRTIO_NET_S_LINK_UP);
>   	return err;
>   }
>   


Other than the small issue pointed out by Jakub.

Acked-by: Jason Wang <jasowang@redhat.com>
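For context on why this single bit matters: the guest's virtio-net driver reads the 16-bit status word from the device config space and reports the carrier as up only when VIRTIO_NET_S_LINK_UP is set, so a driver that never writes the bit leaves the guest NIC down forever. A minimal model of that check (an illustrative sketch, not the actual kernel code):

```c
#include <assert.h>
#include <stdint.h>

#define VIRTIO_NET_S_LINK_UP 1u   /* bit 0 of virtio_net_config.status */

/* Before the fix the mlx5 driver left config.status at its zeroed
 * default; the fix sets VIRTIO_NET_S_LINK_UP in set_features(). */
static uint16_t model_config_status(int link_up_bit_set)
{
    return link_up_bit_set ? (uint16_t)VIRTIO_NET_S_LINK_UP : 0;
}

/* What the guest effectively does when deciding carrier state. */
static int guest_carrier_ok(uint16_t status)
{
    return (status & VIRTIO_NET_S_LINK_UP) != 0;
}
```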



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config
  2020-10-20  2:03 ` [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config Jason Wang
@ 2020-10-20  7:44   ` Eli Cohen
  2020-10-20  7:50     ` Jason Wang
  2020-10-22  9:59     ` wenxu
  0 siblings, 2 replies; 11+ messages in thread
From: Eli Cohen @ 2020-10-20  7:44 UTC (permalink / raw)
  To: Jason Wang; +Cc: wenxu, netdev, eli

On Tue, Oct 20, 2020 at 10:03:00AM +0800, Jason Wang wrote:
> 
> On 2020/10/19 5:07 PM, wenxu@ucloud.cn wrote:
> > From: wenxu <wenxu@ucloud.cn>
> > 
> > Qemu get virtio_net_config from the vdpa driver. So The vdpa driver
> > should set the VIRTIO_NET_S_LINK_UP flag to virtio_net_config like
> > vdpa_sim. Or the link of virtio net NIC in the virtual machine will
> > never up.
> > 
> > Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> > Signed-off-by: wenxu <wenxu@ucloud.cn>
> > ---
> >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
> >   1 file changed, 2 insertions(+)
> > 
> > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > index 74264e59..af6c74c 100644
> > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > @@ -1537,6 +1537,8 @@ static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features)
> >   	ndev->mvdev.actual_features = features & ndev->mvdev.mlx_features;
> >   	ndev->config.mtu = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
> >   					     ndev->mtu);
> > +	ndev->config.status = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
> > +					       VIRTIO_NET_S_LINK_UP);
> >   	return err;
> >   }
> 
> 
> Other than the small issue pointed out by Jakub.
> 
> Acked-by: Jason Wang <jasowang@redhat.com>
> 
> 

I already posted a fix for this a while ago and Michael should merge it.

https://lkml.org/lkml/2020/9/17/543

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config
  2020-10-20  7:44   ` Eli Cohen
@ 2020-10-20  7:50     ` Jason Wang
  2020-10-22  9:59     ` wenxu
  1 sibling, 0 replies; 11+ messages in thread
From: Jason Wang @ 2020-10-20  7:50 UTC (permalink / raw)
  To: Eli Cohen; +Cc: wenxu, netdev, eli


On 2020/10/20 3:44 PM, Eli Cohen wrote:
> On Tue, Oct 20, 2020 at 10:03:00AM +0800, Jason Wang wrote:
>> On 2020/10/19 5:07 PM, wenxu@ucloud.cn wrote:
>>> From: wenxu <wenxu@ucloud.cn>
>>>
>>> Qemu get virtio_net_config from the vdpa driver. So The vdpa driver
>>> should set the VIRTIO_NET_S_LINK_UP flag to virtio_net_config like
>>> vdpa_sim. Or the link of virtio net NIC in the virtual machine will
>>> never up.
>>>
>>> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
>>> Signed-off-by: wenxu <wenxu@ucloud.cn>
>>> ---
>>>    drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
>>>    1 file changed, 2 insertions(+)
>>>
>>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> index 74264e59..af6c74c 100644
>>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>> @@ -1537,6 +1537,8 @@ static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features)
>>>    	ndev->mvdev.actual_features = features & ndev->mvdev.mlx_features;
>>>    	ndev->config.mtu = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
>>>    					     ndev->mtu);
>>> +	ndev->config.status = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
>>> +					       VIRTIO_NET_S_LINK_UP);
>>>    	return err;
>>>    }
>>
>> Other than the small issue pointed out by Jakub.
>>
>> Acked-by: Jason Wang <jasowang@redhat.com>
>>
>>
> I already posted a fix for this a while ago and Michael should merge it.
>
> https://lkml.org/lkml/2020/9/17/543


Aha, I just forgot about this.

It's queued here: 
https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/commit/?h=linux-next

And with my ack actually.

Thanks



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config
  2020-10-20  7:44   ` Eli Cohen
  2020-10-20  7:50     ` Jason Wang
@ 2020-10-22  9:59     ` wenxu
  2020-10-22 10:40       ` mlx5_vdpa problem wenxu
  1 sibling, 1 reply; 11+ messages in thread
From: wenxu @ 2020-10-22  9:59 UTC (permalink / raw)
  To: Eli Cohen; +Cc: netdev, eli

Hi mellanox team,


I tested mlx5 vdpa on linux-5.9 and ran into several problems.


# lspci | grep Ether | grep Dx
b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]

# ethtool -i net2
driver: mlx5e_rep
version: 5.9.0
firmware-version: 22.28.1002 (MT_0000000430)
expansion-rom-version:
bus-info: 0000:b3:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no


init switchdev:


# echo 1 > /sys/class/net/net2/device/sriov_numvfs
# echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:b3:00.0  mode switchdev encap enable

# modprobe vdpa vhost-vdpa mlx5_vdpa

# ip l set dev net2 vf 0 mac 52:90:01:00:02:13
# echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind


setup vm:

# qemu-system-x86_64 -name test  -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0


In the VM: virtio-net device eth0 with IP address 10.0.0.75/24

On the host: VF0 rep device pf0vf0 with IP address 10.0.0.7/24


problem 1:

On the host:

# ping 10.0.0.75

Sometimes there is packet loss.

And in the VM:

dmesg shows:

eth0: bad tso: type 100, size: 0

eth0: bad tso: type 10, size: 28


So I suspect the vnet header is not initialized to 0. After I cleared gso_type, gso_size and flags in the virtio_net driver, no packets were dropped.
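The workaround described here, clearing the GSO fields of a suspect virtio-net header in the guest, looks roughly like this (a hypothetical sketch of the mitigation, not the actual virtio_net patch; field layout per the virtio spec, mergeable-buffers field omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Layout of the virtio-net header preceding each received packet. */
struct vnet_hdr {
    uint8_t  flags;
    uint8_t  gso_type;    /* VIRTIO_NET_HDR_GSO_NONE == 0 */
    uint16_t hdr_len;
    uint16_t gso_size;
    uint16_t csum_start;
    uint16_t csum_offset;
};

/* Mitigation: if the device hands us a header with garbage GSO state
 * (e.g. "bad tso: type 100, size: 0"), force it back to a plain
 * non-GSO, no-csum packet instead of dropping it. */
static void sanitize_vnet_hdr(struct vnet_hdr *h)
{
    h->flags = 0;
    h->gso_type = 0;      /* VIRTIO_NET_HDR_GSO_NONE */
    h->gso_size = 0;
}
```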


problem 2:

In the vm: iperf -s

On the host: iperf -c 10.0.0.75 -t 100 -i 2.


The TCP connection can't be established, because the SYN+ACK is sent with a partial checksum that is not handled correctly by the hardware.

After I turned checksum offload off for eth0 in the VM, the problem was resolved, even though mlx5_vnet advertises the VIRTIO_NET_F_CSUM feature.
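For reference, a partial checksum here means the sender fills in only the pseudo-header sum, sets VIRTIO_NET_HDR_F_NEEDS_CSUM, and expects the device to finish the sum starting at csum_start and store it at csum_start + csum_offset. A minimal model of that completion step (illustrative RFC 1071 arithmetic only, not mlx5 code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Standard Internet checksum (RFC 1071) over a byte buffer. */
static uint16_t csum_fold(const uint8_t *data, size_t len, uint32_t seed)
{
    uint32_t sum = seed;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((data[i] << 8) | data[i + 1]);
    if (len & 1)
        sum += (uint32_t)(data[len - 1] << 8);
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Completion step the device is expected to perform: sum the bytes
 * from csum_start to the end and write the folded result at
 * csum_start + csum_offset, in network byte order. */
static void complete_csum(uint8_t *pkt, size_t len,
                          size_t csum_start, size_t csum_offset)
{
    uint16_t c = csum_fold(pkt + csum_start, len - csum_start, 0);
    pkt[csum_start + csum_offset]     = (uint8_t)(c >> 8);
    pkt[csum_start + csum_offset + 1] = (uint8_t)(c & 0xff);
}
```

A completed checksum has the property that re-summing the covered region yields zero; a receiver checking a SYN+ACK whose checksum was never completed will see a nonzero residue and drop it.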



problem 3:


The iperf performance is not stable until I disable TSO offload on pf0vf0:

# ethtool -K pf0vf0 tso off


I know mlx5_vnet does not support the VIRTIO_NET_F_GUEST_TSO4 feature, but shouldn't the hardware segment the big TSO packet into several small TCP packets before sending them to the virtio-net device?
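When TSO is off, the host stack does this segmentation in software, chopping the super-packet's payload into MSS-sized TCP segments. A rough model of that split (pure illustration, hypothetical helper names):

```c
#include <assert.h>
#include <stddef.h>

/* Number of MSS-sized segments an L4 payload of plen bytes becomes
 * when a TSO super-packet is software-segmented (what the host stack
 * does after `ethtool -K pf0vf0 tso off`). */
static size_t gso_segment_count(size_t plen, size_t mss)
{
    return mss ? (plen + mss - 1) / mss : 0;
}

/* Payload length of segment i (0-based); the last one may be short. */
static size_t gso_segment_len(size_t plen, size_t mss, size_t i)
{
    size_t off = i * mss;
    if (off >= plen)
        return 0;
    return (plen - off < mss) ? plen - off : mss;
}
```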




BR

wenxu


^ permalink raw reply	[flat|nested] 11+ messages in thread

* mlx5_vdpa problem
  2020-10-22  9:59     ` wenxu
@ 2020-10-22 10:40       ` wenxu
  2020-10-29 11:29         ` Eli Cohen
  2020-10-29 12:45         ` Eli Cohen
  0 siblings, 2 replies; 11+ messages in thread
From: wenxu @ 2020-10-22 10:40 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Linux Kernel Network Developers


Hi mellanox team,


I tested mlx5 vdpa on linux-5.9 and ran into several problems.


# lspci | grep Ether | grep Dx
b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]

# ethtool -i net2
driver: mlx5e_rep
version: 5.9.0
firmware-version: 22.28.1002 (MT_0000000430)
expansion-rom-version:
bus-info: 0000:b3:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no


init switchdev:


# echo 1 > /sys/class/net/net2/device/sriov_numvfs
# echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:b3:00.0  mode switchdev encap enable

# modprobe vdpa vhost-vdpa mlx5_vdpa

# ip l set dev net2 vf 0 mac 52:90:01:00:02:13
# echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind


setup vm:

# qemu-system-x86_64 -name test  -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0


In the VM: virtio-net device eth0 with IP address 10.0.0.75/24

On the host: VF0 rep device pf0vf0 with IP address 10.0.0.7/24


problem 1:

On the host:

# ping 10.0.0.75

Sometimes there is packet loss.

And in the VM:

dmesg shows:

eth0: bad tso: type 100, size: 0

eth0: bad tso: type 10, size: 28


So I suspect the vnet header is not initialized to 0. After I cleared gso_type, gso_size and flags in the virtio_net driver, no packets were dropped.


problem 2:

In the vm: iperf -s

On the host: iperf -c 10.0.0.75 -t 100 -i 2


The TCP connection can't be established, because the SYN+ACK is sent with a partial checksum that is not handled correctly by the hardware.

After I turned checksum offload off for eth0 in the VM, the problem was resolved, even though mlx5_vnet advertises the VIRTIO_NET_F_CSUM feature.



problem 3:


The iperf performance is not stable until I disable TSO offload on pf0vf0:

# ethtool -K pf0vf0 tso off


I know mlx5_vnet does not support the VIRTIO_NET_F_GUEST_TSO4 feature, but shouldn't the hardware segment the big TSO packet into several small TCP packets before sending them to the virtio-net device?




BR

wenxu




^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: mlx5_vdpa problem
  2020-10-22 10:40       ` mlx5_vdpa problem wenxu
@ 2020-10-29 11:29         ` Eli Cohen
  2020-10-29 12:45         ` Eli Cohen
  1 sibling, 0 replies; 11+ messages in thread
From: Eli Cohen @ 2020-10-29 11:29 UTC (permalink / raw)
  To: wenxu; +Cc: Linux Kernel Network Developers

On Thu, Oct 22, 2020 at 06:40:56PM +0800, wenxu wrote:
> 
> Hi mellanox team,
> 
> 
> I test the mlx5 vdpa  in linux-5.9 and meet several problem.
> 
> 
> # lspci | grep Ether | grep Dx
> b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
> b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
> 
> # ethtool -i net2
> driver: mlx5e_rep
> version: 5.9.0
> firmware-version: 22.28.1002 (MT_0000000430)
> expansion-rom-version:
> bus-info: 0000:b3:00.0
> supports-statistics: yes
> supports-test: no
> supports-eeprom-access: no
> supports-register-dump: no
> supports-priv-flags: no
> 
> 
> init switchdev:
> 
> 
> # echo 1 > /sys/class/net/net2/device/sriov_numvfs
> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
> # devlink dev eswitch set pci/0000:b3:00.0  mode switchdev encap enable
> 
> # modprobe vdpa vhost-vdpa mlx5_vdpa
> 
> # ip l set dev net2 vf 0 mac 52:90:01:00:02:13
> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
> 
> 
> setup vm:
> 
> # qemu-system-x86_64 -name test  -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0
> 
> 
> In the vm:  virtio net device  eth0 with ip address 10.0.0.75/24
> 
> On the host: VF0 rep device pf0vf0 with ip address 10.0.0.7/24
> 
> 
> problem 1:
> 
> On the host:
> 
> # ping 10.0.0.75
> 
> Some times there will be loss packets.
> 
> And in the VM:
> 
> dmesg shows:
> 
> eth0: bad tso: type 100, size: 0
> 
> eth0: bad tso: type 10, size: 28
> 
> 
> So I think maybe the vnet header is not init with 0?  And Then I clear the gso_type, gso_size and flags in the virtio_net driver.  There is no packets dropped.

Hi wenxu, thanks for reporting this.

Usually, you would not assign an IP address to the representor, as it
represents a switch port. Nevertheless, I will try to reproduce this
here.
Could you repeat your experiment with two hosts, so that on one host the
representor for your VF and the uplink representor are connected to an
OVS switch, and the other host is configured with IP address 10.0.0.7/24?

Also, can you send the firmware version you're using?

> 
> 
> problem 2:
> 
> In the vm: iperf -s
> 
> On the host: iperf -c 10.0.0.75 -t 100 -i 2.
> 
> 
> The tcp connection can't established for the syn+ack with partail cum but not handle correct by hardware.
> 
> After I set the csum off  for eth0 in the vm, the problem is resolved. Although the mlx5_vnet support VIRTIO_NET_F_CSUM feature.
> 
> 
> 
> problem 3:
> 
> 
> The iperf perofrmance not stable before I disable the pf0vf0 tso offload
> 
> #ethtool -K pf0vf0 tso off
> 
> 
> I know the mlx5_vnet did not support feature VIRTIO_NET_F_GUEST_TSO4. But the hardware can't  cut the big tso packet to several small tcp packet and send to virtio  net device?
> 
> 
> 
> 
> BR
> 
> wenxu
> 
> 
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: mlx5_vdpa problem
  2020-10-22 10:40       ` mlx5_vdpa problem wenxu
  2020-10-29 11:29         ` Eli Cohen
@ 2020-10-29 12:45         ` Eli Cohen
  2020-10-30  7:50           ` wenxu
  1 sibling, 1 reply; 11+ messages in thread
From: Eli Cohen @ 2020-10-29 12:45 UTC (permalink / raw)
  To: wenxu; +Cc: Linux Kernel Network Developers

On Thu, Oct 22, 2020 at 06:40:56PM +0800, wenxu wrote:

Please make sure your firmware is updated.

https://www.mellanox.com/support/firmware/connectx6dx

> 
> Hi mellanox team,
> 
> 
> I test the mlx5 vdpa  in linux-5.9 and meet several problem.
> 
> 
> # lspci | grep Ether | grep Dx
> b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
> b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
> 
> # ethtool -i net2
> driver: mlx5e_rep
> version: 5.9.0
> firmware-version: 22.28.1002 (MT_0000000430)
> expansion-rom-version:
> bus-info: 0000:b3:00.0
> supports-statistics: yes
> supports-test: no
> supports-eeprom-access: no
> supports-register-dump: no
> supports-priv-flags: no
> 
> 
> init switchdev:
> 
> 
> # echo 1 > /sys/class/net/net2/device/sriov_numvfs
> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
> # devlink dev eswitch set pci/0000:b3:00.0  mode switchdev encap enable
> 
> # modprobe vdpa vhost-vdpa mlx5_vdpa
> 
> # ip l set dev net2 vf 0 mac 52:90:01:00:02:13
> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
> 
> 
> setup vm:
> 
> # qemu-system-x86_64 -name test  -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0
> 
> 
> In the vm:  virtio net device  eth0 with ip address 10.0.0.75/24
> 
> On the host: VF0 rep device pf0vf0 with ip address 10.0.0.7/24
> 
> 
> problem 1:
> 
> On the host:
> 
> # ping 10.0.0.75
> 
> Some times there will be loss packets.
> 
> And in the VM:
> 
> dmesg shows:
> 
> eth0: bad tso: type 100, size: 0
> 
> eth0: bad tso: type 10, size: 28
> 
> 
> So I think maybe the vnet header is not init with 0?  And Then I clear the gso_type, gso_size and flags in the virtio_net driver.  There is no packets dropped.
> 
> 
> problem 2:
> 
> In the vm: iperf -s
> 
> On the host: iperf -c 10.0.0.75 -t 100 -i 2.
> 
> 
> The tcp connection can't established for the syn+ack with partail cum but not handle correct by hardware.
> 
> After I set the csum off  for eth0 in the vm, the problem is resolved. Although the mlx5_vnet support VIRTIO_NET_F_CSUM feature.
> 
> 
> 
> problem 3:
> 
> 
> The iperf perofrmance not stable before I disable the pf0vf0 tso offload
> 
> #ethtool -K pf0vf0 tso off
> 
> 
> I know the mlx5_vnet did not support feature VIRTIO_NET_F_GUEST_TSO4. But the hardware can't  cut the big tso packet to several small tcp packet and send to virtio  net device?
> 
> 
> 
> 
> BR
> 
> wenxu
> 
> 
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: mlx5_vdpa problem
  2020-10-29 12:45         ` Eli Cohen
@ 2020-10-30  7:50           ` wenxu
  2020-11-01 11:50             ` Eli Cohen
  0 siblings, 1 reply; 11+ messages in thread
From: wenxu @ 2020-10-30  7:50 UTC (permalink / raw)
  To: Eli Cohen; +Cc: Linux Kernel Network Developers

Hi Eli,


Thanks for your reply.


I updated the firmware to the latest one:

firmware-version: 22.28.4000 (MT_0000000430)


I still see the same problems as described.

The results are the same for the test you suggested:
I did the experiment with two hosts, where on one host the representor
for my VF and the uplink representor are connected to an OVS switch,
and the other host is configured with IP address 10.0.0.7/24.

OVS has hw-offload enabled, so the packets don't go through the VF's
rep port, but the same problems remain. I think it may be a FW bug. Thx.

On 10/29/2020 8:45 PM, Eli Cohen wrote:
> On Thu, Oct 22, 2020 at 06:40:56PM +0800, wenxu wrote:
>
> Please make sure your firmware is updated.
>
> https://www.mellanox.com/support/firmware/connectx6dx
>
>> Hi mellanox team,
>>
>>
>> I test the mlx5 vdpa  in linux-5.9 and meet several problem.
>>
>>
>> # lspci | grep Ether | grep Dx
>> b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
>> b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
>>
>> # ethtool -i net2
>> driver: mlx5e_rep
>> version: 5.9.0
>> firmware-version: 22.28.1002 (MT_0000000430)
>> expansion-rom-version:
>> bus-info: 0000:b3:00.0
>> supports-statistics: yes
>> supports-test: no
>> supports-eeprom-access: no
>> supports-register-dump: no
>> supports-priv-flags: no
>>
>>
>> init switchdev:
>>
>>
>> # echo 1 > /sys/class/net/net2/device/sriov_numvfs
>> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
>> # devlink dev eswitch set pci/0000:b3:00.0  mode switchdev encap enable
>>
>> # modprobe vdpa vhost-vdpa mlx5_vdpa
>>
>> # ip l set dev net2 vf 0 mac 52:90:01:00:02:13
>> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
>>
>>
>> setup vm:
>>
>> # qemu-system-x86_64 -name test  -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0
>>
>>
>> In the vm:  virtio net device  eth0 with ip address 10.0.0.75/24
>>
>> On the host: VF0 rep device pf0vf0 with ip address 10.0.0.7/24
>>
>>
>> problem 1:
>>
>> On the host:
>>
>> # ping 10.0.0.75
>>
>> Some times there will be loss packets.
>>
>> And in the VM:
>>
>> dmesg shows:
>>
>> eth0: bad tso: type 100, size: 0
>>
>> eth0: bad tso: type 10, size: 28
>>
>>
>> So I think maybe the vnet header is not init with 0?  And Then I clear the gso_type, gso_size and flags in the virtio_net driver.  There is no packets dropped.
>>
>>
>> problem 2:
>>
>> In the vm: iperf -s
>>
>> On the host: iperf -c 10.0.0.75 -t 100 -i 2.
>>
>>
>> The tcp connection can't established for the syn+ack with partail cum but not handle correct by hardware.
>>
>> After I set the csum off  for eth0 in the vm, the problem is resolved. Although the mlx5_vnet support VIRTIO_NET_F_CSUM feature.
>>
>>
>>
>> problem 3:
>>
>>
>> The iperf perofrmance not stable before I disable the pf0vf0 tso offload
>>
>> #ethtool -K pf0vf0 tso off
>>
>>
>> I know the mlx5_vnet did not support feature VIRTIO_NET_F_GUEST_TSO4. But the hardware can't  cut the big tso packet to several small tcp packet and send to virtio  net device?
>>
>>
>>
>>
>> BR
>>
>> wenxu
>>
>>
>>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: mlx5_vdpa problem
  2020-10-30  7:50           ` wenxu
@ 2020-11-01 11:50             ` Eli Cohen
  0 siblings, 0 replies; 11+ messages in thread
From: Eli Cohen @ 2020-11-01 11:50 UTC (permalink / raw)
  To: wenxu; +Cc: Linux Kernel Network Developers

On Fri, Oct 30, 2020 at 03:50:21PM +0800, wenxu wrote:
> Hi Eli,
> 
> 
> Thanks for your reply.
> 
> 
> I update the firmware to the lasted one
> 
> firmware-version: 22.28.4000 (MT_0000000430)
> 
> 
> I find there are the same problems as my description.
> 
> It is the same for the test what your suggestion.
> 
> I did the experiment with two hosts so the host the
> representor for your VF and the uplink representor are connected to an
> ovs switch and other host is configured with ip address 10.0.0.7/24.
> 
> And the ovs enable hw-offload, So the packet don't go through rep port of VF.
> But there are the same problems. I think it maybe a FW bug. Thx.

I will refer you to a firmware engineer to work with.

> 
> On 10/29/2020 8:45 PM, Eli Cohen wrote:
> > On Thu, Oct 22, 2020 at 06:40:56PM +0800, wenxu wrote:
> >
> > Please make sure your firmware is updated.
> >
> > https://www.mellanox.com/support/firmware/connectx6dx
> >
> >> Hi mellanox team,
> >>
> >>
> >> I test the mlx5 vdpa  in linux-5.9 and meet several problem.
> >>
> >>
> >> # lspci | grep Ether | grep Dx
> >> b3:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
> >> b3:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
> >>
> >> # ethtool -i net2
> >> driver: mlx5e_rep
> >> version: 5.9.0
> >> firmware-version: 22.28.1002 (MT_0000000430)
> >> expansion-rom-version:
> >> bus-info: 0000:b3:00.0
> >> supports-statistics: yes
> >> supports-test: no
> >> supports-eeprom-access: no
> >> supports-register-dump: no
> >> supports-priv-flags: no
> >>
> >>
> >> init switchdev:
> >>
> >>
> >> # echo 1 > /sys/class/net/net2/device/sriov_numvfs
> >> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
> >> # devlink dev eswitch set pci/0000:b3:00.0  mode switchdev encap enable
> >>
> >> # modprobe vdpa vhost-vdpa mlx5_vdpa
> >>
> >> # ip l set dev net2 vf 0 mac 52:90:01:00:02:13
> >> # echo 0000:b3:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
> >>
> >>
> >> setup vm:
> >>
> >> # qemu-system-x86_64 -name test  -enable-kvm -smp 16,sockets=2,cores=8,threads=1 -m 8192 -drive file=./centos7.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on,iommu_platform=on,id=net0,bus=pci.0,addr=0x3,disable-legacy=on -vnc 0.0.0.0:0
> >>
> >>
> >> In the vm:  virtio net device  eth0 with ip address 10.0.0.75/24
> >>
> >> On the host: VF0 rep device pf0vf0 with ip address 10.0.0.7/24
> >>
> >>
> >> problem 1:
> >>
> >> On the host:
> >>
> >> # ping 10.0.0.75
> >>
> >> Some times there will be loss packets.
> >>
> >> And in the VM:
> >>
> >> dmesg shows:
> >>
> >> eth0: bad tso: type 100, size: 0
> >>
> >> eth0: bad tso: type 10, size: 28
> >>
> >>
> >> So I think maybe the vnet header is not init with 0?  And Then I clear the gso_type, gso_size and flags in the virtio_net driver.  There is no packets dropped.
> >>
> >>
> >> problem 2:
> >>
> >> In the vm: iperf -s
> >>
> >> On the host: iperf -c 10.0.0.75 -t 100 -i 2.
> >>
> >>
> >> The tcp connection can't established for the syn+ack with partail cum but not handle correct by hardware.
> >>
> >> After I set the csum off  for eth0 in the vm, the problem is resolved. Although the mlx5_vnet support VIRTIO_NET_F_CSUM feature.
> >>
> >>
> >>
> >> problem 3:
> >>
> >>
> >> The iperf perofrmance not stable before I disable the pf0vf0 tso offload
> >>
> >> #ethtool -K pf0vf0 tso off
> >>
> >>
> >> I know the mlx5_vnet did not support feature VIRTIO_NET_F_GUEST_TSO4. But the hardware can't  cut the big tso packet to several small tcp packet and send to virtio  net device?
> >>
> >>
> >>
> >>
> >> BR
> >>
> >> wenxu
> >>
> >>
> >>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config
  2020-10-19 14:17 [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config wenxu
@ 2020-10-19 22:02 ` Jakub Kicinski
  0 siblings, 0 replies; 11+ messages in thread
From: Jakub Kicinski @ 2020-10-19 22:02 UTC (permalink / raw)
  To: wenxu; +Cc: eli, netdev, jasowang, parav, virtualization

On Mon, 19 Oct 2020 22:17:07 +0800 wenxu@ucloud.cn wrote:
> From: wenxu <wenxu@ucloud.cn>
> 
> Qemu get virtio_net_config from the vdpa driver. So The vdpa driver
> should set the VIRTIO_NET_S_LINK_UP flag to virtio_net_config like
> vdpa_sim. Or the link of virtio net NIC in the virtual machine will
> never up.
> 
> Fixes:1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")

        ^ missing space

Please keep CCing netdev on future submissions, but the vdpa patches
actually go through Michael's tree, so make sure to CC
virtualization@lists.linux-foundation.org

> Signed-off-by: wenxu <wenxu@ucloud.cn>
> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 74264e59..af6c74c 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1537,6 +1537,8 @@ static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features)
>  	ndev->mvdev.actual_features = features & ndev->mvdev.mlx_features;
>  	ndev->config.mtu = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
>  					     ndev->mtu);
> +	ndev->config.status = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
> +					       VIRTIO_NET_S_LINK_UP);
>  	return err;
>  }
>  


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config
@ 2020-10-19 14:17 wenxu
  2020-10-19 22:02 ` Jakub Kicinski
  0 siblings, 1 reply; 11+ messages in thread
From: wenxu @ 2020-10-19 14:17 UTC (permalink / raw)
  To: eli; +Cc: netdev, jasowang, parav

From: wenxu <wenxu@ucloud.cn>

Qemu gets virtio_net_config from the vdpa driver, so the vdpa driver
should set the VIRTIO_NET_S_LINK_UP flag in virtio_net_config, as
vdpa_sim does. Otherwise the link of the virtio-net NIC in the virtual
machine will never come up.

Fixes:1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
Signed-off-by: wenxu <wenxu@ucloud.cn>
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 74264e59..af6c74c 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1537,6 +1537,8 @@ static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features)
 	ndev->mvdev.actual_features = features & ndev->mvdev.mlx_features;
 	ndev->config.mtu = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
 					     ndev->mtu);
+	ndev->config.status = __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev),
+					       VIRTIO_NET_S_LINK_UP);
 	return err;
 }
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2020-11-01 11:50 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <1603098438-20200-1-git-send-email-wenxu@ucloud.cn>
2020-10-20  2:03 ` [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config Jason Wang
2020-10-20  7:44   ` Eli Cohen
2020-10-20  7:50     ` Jason Wang
2020-10-22  9:59     ` wenxu
2020-10-22 10:40       ` mlx5_vdpa problem wenxu
2020-10-29 11:29         ` Eli Cohen
2020-10-29 12:45         ` Eli Cohen
2020-10-30  7:50           ` wenxu
2020-11-01 11:50             ` Eli Cohen
2020-10-19 14:17 [PATCH net] vdpa/mlx5: Fix miss to set VIRTIO_NET_S_LINK_UP for virtio_net_config wenxu
2020-10-19 22:02 ` Jakub Kicinski
