* Re: how to test the Translation Support for SMMUv3?
       [not found] <61332e87.404e.18caa3d6e15.Coremail.figure1802@126.com>
@ 2023-12-27 21:30 ` Nicolin Chen
  2023-12-31 14:18   ` Ben
  0 siblings, 1 reply; 8+ messages in thread
From: Nicolin Chen @ 2023-12-27 21:30 UTC (permalink / raw)
  To: Ben; +Cc: eric.auger, linux-arm-kernel

On Wed, Dec 27, 2023 at 03:46:41PM +0800, Ben wrote:
> Hi Nicolin,
> I saw your patchset on Add Nested Translation Support for SMMUv3
> [LWN.net]<https://lwn.net/Articles/931502/>; I have built the
> kernel and qemu you provided in the cover letter.
> I run a QEMU system as the host and launch a VM with kvmtool, but I
> cannot find an IOMMU in the VM. How do you test your patchset?

Make sure you build the kernel with CONFIG_IOMMUFD enabled, and
likely QEMU as well.
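
i.e. at least these host kernel options (a minimal sketch on my side;
=m should work too where applicable):

CONFIG_IOMMUFD=y
CONFIG_VFIO=y
CONFIG_VFIO_PCI=y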

> would you mind providing your steps?

# A set of sample commands
# Unbind the device from its host driver first
echo 0002:01:00.0 | tee /sys/bus/pci/devices/0002\:01\:00.0/driver/unbind; dmesg | tail
# Then bind it to vfio-pci via its vendor/device ID
echo 8086 0111 | tee /sys/bus/pci/drivers/vfio-pci/new_id

qemu-1214 -object iommufd,id=iommufd0 \
-machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 \
-cpu host -m 256m -nographic  -kernel /root/Image -bios /root/AAVMF_CODE.fd \
-initrd /root/buildroot-20200422-aarch64-qemu-test-rootfs.cpio \
-object memory-backend-ram,size=256m,id=m0 -numa node,cpus=0,nodeid=0,memdev=m0 \
-device vfio-pci,host=0002:01:00.0,rombar=0,iommufd=iommufd0,id="test0"

> BTW, I added a printk in arm_smmu_device_hw_probe() to print out smmu->features; it looks like your QEMU still only supports ARM_SMMU_FEAT_TRANS_S1.

QEMU provides the SMMU that the guest sees, which should only support
S1. The host kernel, on the other hand, should support both S1 and S2
for nesting.
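
Those feature flags come straight from SMMU_IDR0 in
arm_smmu_device_hw_probe(), roughly (mainline driver code):

	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);

	if (reg & IDR0_S1P)
		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
	if (reg & IDR0_S2P)
		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;

The vSMMU advertises only S1P to the guest, while the host SMMU needs
both S1P and S2P for nesting.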

> Does your kernel and qemu patchset provide an SMMUv3 vIOMMU?

Once you boot with "iommu=nested-smmuv3", there should be one.

Nicolin


* Re: Re: how to test the Translation Support for SMMUv3?
  2023-12-27 21:30 ` how to test the Translation Support for SMMUv3? Nicolin Chen
@ 2023-12-31 14:18   ` Ben
  2024-01-02 19:51     ` Nicolin Chen
  0 siblings, 1 reply; 8+ messages in thread
From: Ben @ 2023-12-31 14:18 UTC (permalink / raw)
  To: Nicolin Chen; +Cc: eric.auger, linux-arm-kernel

At 2023-12-28 05:30:56, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>On Wed, Dec 27, 2023 at 03:46:41PM +0800, Ben wrote:
>> Hi Nicolin,
>> I saw your patchset on Add Nested Translation Support for SMMUv3
>> [LWN.net]<https://lwn.net/Articles/931502/>; I have built the
>> kernel and qemu you provided in the cover letter.
>> I run a QEMU system as the host and launch a VM with kvmtool, but I
>> cannot find an IOMMU in the VM. How do you test your patchset?
>
>Make sure you build the kernel with CONFIG_IOMMUFD enabled, and
>likely QEMU as well.
>
>> would you mind providing your steps?
>
># A set of sample commands
>echo 0002:01:00.0 | tee /sys/bus/pci/devices/0002\:01\:00.0/driver/unbind; dmesg | tail
>echo 8086 0111 | tee /sys/bus/pci/drivers/vfio-pci/new_id
>
>qemu-1214 -object iommufd,id=iommufd0 \
>-machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 \
>-cpu host -m 256m -nographic  -kernel /root/Image -bios /root/AAVMF_CODE.fd \
>-initrd /root/buildroot-20200422-aarch64-qemu-test-rootfs.cpio \
>-object memory-backend-ram,size=256m,id=m0 -numa node,cpus=0,nodeid=0,memdev=m0 \
>-device vfio-pci,host=0002:01:00.0,rombar=0,iommufd=iommufd0,id="test0"


Thanks!

I am trying your patchset on FVP (Fixed Virtual Platforms) but it failed.

Here is the host side running on FVP (the platform is rdn1edge):

master:~# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind
master:~# echo 0abc aced > /sys/bus/pci/drivers/vfio-pci/new_id

When I run QEMU to launch the VM, it fails like below:

root@master:/# cat qemu-iommufd.sh 
./build/qemu-system-aarch64 -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel /Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=/busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id="test0"
root@master:/# sh qemu-iommufd.sh 
WARNING: Image format was not specified for '/busybox_arm64.ext4' and probing guessed raw.
         Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
         Specify the 'raw' format explicitly to remove the restrictions.
qemu-system-aarch64: -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id=test0: vfio 0000:05:00.0: vfio /sys/bus/pci/devices/0000:05:00.0/vfio-dev: failed to load "/sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev"

It looks like it cannot find /sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev for this device.

root@master:/# ls -l /sys/bus/pci/devices/0000\:05\:00.0/vfio-dev/vfio0/
total 0
lrwxrwxrwx 1 root root    0 Dec 31 13:29 device -> ../../../0000:05:00.0
drwxr-xr-x 2 root root    0 Dec 31 13:29 power
lrwxrwxrwx 1 root root    0 Dec 31 13:29 subsystem -> ../../../../../../../../class/vfio-dev
-rw-r--r-- 1 root root 4096 Dec 31 13:20 uevent

Any suggestion on that?

BTW, two other questions:
1. Can a device assigned to the VM via VFIO leverage the nested IOMMU? And how about a virtual device emulated by QEMU that is not assigned via VFIO?
2. When are the S1 and S2 page tables filled for a device in the nested IOMMU scenario? Is there a shadow page table for the vIOMMU in the VM, with traps into the hypervisor to refill the real S1 and S2 page tables? The workflow of your patchset is not clear to me.


* Re: Re: how to test the Translation Support for SMMUv3?
  2023-12-31 14:18   ` Ben
@ 2024-01-02 19:51     ` Nicolin Chen
  2024-01-03 14:09       ` Ben
  0 siblings, 1 reply; 8+ messages in thread
From: Nicolin Chen @ 2024-01-02 19:51 UTC (permalink / raw)
  To: Ben; +Cc: eric.auger, linux-arm-kernel

On Sun, Dec 31, 2023 at 10:18:12PM +0800, Ben wrote:
 
> I am trying your patchset on FVP (Fixed Virtual Platforms) but it failed.
> 
> Here is the host side running on FVP (the platform is rdn1edge):
> 
> master:~# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind
> master:~# echo 0abc aced > /sys/bus/pci/drivers/vfio-pci/new_id
> 
> When I run QEMU to launch the VM, it fails like below:
> 
> root@master:/# cat qemu-iommufd.sh
> ./build/qemu-system-aarch64 -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel /Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=/busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id="test0"
> root@master:/# sh qemu-iommufd.sh
> WARNING: Image format was not specified for '/busybox_arm64.ext4' and probing guessed raw.
>          Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
>          Specify the 'raw' format explicitly to remove the restrictions.
> qemu-system-aarch64: -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id=test0: vfio 0000:05:00.0: vfio /sys/bus/pci/devices/0000:05:00.0/vfio-dev: failed to load "/sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev"
> 
> It looks like it cannot find /sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev for this device.
> 
> root@master:/# ls -l /sys/bus/pci/devices/0000\:05\:00.0/vfio-dev/vfio0/
> total 0
> lrwxrwxrwx 1 root root    0 Dec 31 13:29 device -> ../../../0000:05:00.0
> drwxr-xr-x 2 root root    0 Dec 31 13:29 power
> lrwxrwxrwx 1 root root    0 Dec 31 13:29 subsystem -> ../../../../../../../../class/vfio-dev
> -rw-r--r-- 1 root root 4096 Dec 31 13:20 uevent
> 
> Any suggestion on that?

CONFIG_VFIO_DEVICE_CDEV=y

Do you have this enabled in kernel config?

> BTW, two other questions:
> 1. Can a device assigned to the VM via VFIO leverage the nested IOMMU?

I think so, as long as it's behind IOMMU hardware that supports
nesting (and it requires both the host kernel and VMM/qemu patches).

> And how about a virtual device emulated by QEMU that is not assigned via VFIO?

The basic nesting feature is about the 2-stage translation setup (STE
configuration in SMMU terms) and cache invalidation. An emulated
device doesn't exist in the host kernel, so there is no nesting
IMHO.

> 2. When are the S1 and S2 page tables filled for a device in the nested IOMMU
> scenario? Is there a shadow page table for the vIOMMU in the VM, with traps
> into the hypervisor to refill the real S1 and S2 page tables? The workflow of
> your patchset is not clear to me.

The S2 page table is created/filled at VM creation time; it's basically
managed by the hypervisor or host kernel. The S1 page table, on the other
hand, is created inside guest memory and thus managed by the guest
OS. As I mentioned above, nesting is all about STE configuration
plus cache invalidation. The VMM traps the S1 page table pointer from
the guest and forwards it to the host kernel, which then sets up
the device's STE for the 2-stage translation mode.
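
In iommufd terms, that trap-and-forward is roughly two HWPT
allocations in the VMM. A sketch against the uAPI (exact struct and
field names differ across revisions of this patchset; variable names
are illustrative):

	/* S2: nesting parent over the IOAS that maps guest RAM (GPA->PA) */
	struct iommu_hwpt_alloc s2 = {
		.size = sizeof(s2),
		.flags = IOMMU_HWPT_ALLOC_NEST_PARENT,
		.dev_id = vfio_dev_id,
		.pt_id = ioas_id,
	};
	ioctl(iommufd, IOMMU_HWPT_ALLOC, &s2);

	/* S1: on a trapped guest STE write, forward the guest's config
	 * (the STE words carry the S1 page table pointer) to the host */
	struct iommu_hwpt_arm_smmuv3 arg = {
		.ste = { guest_ste[0], guest_ste[1] },
	};
	struct iommu_hwpt_alloc s1 = {
		.size = sizeof(s1),
		.dev_id = vfio_dev_id,
		.pt_id = s2.out_hwpt_id,	/* nested on top of the S2 */
		.data_type = IOMMU_HWPT_DATA_ARM_SMMUV3,
		.data_len = sizeof(arg),
		.data_uptr = (uintptr_t)&arg,
	};
	ioctl(iommufd, IOMMU_HWPT_ALLOC, &s1);

The device is then attached to s1.out_hwpt_id, and guest TLB
maintenance is relayed via IOMMU_HWPT_INVALIDATE.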

Nicolin


* Re: Re: Re: how to test the Translation Support for SMMUv3?
  2024-01-02 19:51     ` Nicolin Chen
@ 2024-01-03 14:09       ` Ben
  2024-01-03 17:38         ` Nicolin Chen
  0 siblings, 1 reply; 8+ messages in thread
From: Ben @ 2024-01-03 14:09 UTC (permalink / raw)
  To: Nicolin Chen; +Cc: eric.auger, linux-arm-kernel




At 2024-01-03 03:51:24, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>On Sun, Dec 31, 2023 at 10:18:12PM +0800, Ben wrote:
> 
>> I am trying your patchset on FVP (Fixed Virtual Platforms) but it failed.
>> 
>> Here is the host side running on FVP (the platform is rdn1edge):
>> 
>> master:~# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind
>> master:~# echo 0abc aced > /sys/bus/pci/drivers/vfio-pci/new_id
>> 
>> When I run QEMU to launch the VM, it fails like below:
>> 
>> root@master:/# cat qemu-iommufd.sh
>> ./build/qemu-system-aarch64 -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel /Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=/busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id="test0"
>> root@master:/# sh qemu-iommufd.sh
>> WARNING: Image format was not specified for '/busybox_arm64.ext4' and probing guessed raw.
>>          Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
>>          Specify the 'raw' format explicitly to remove the restrictions.
>> qemu-system-aarch64: -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id=test0: vfio 0000:05:00.0: vfio /sys/bus/pci/devices/0000:05:00.0/vfio-dev: failed to load "/sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev"
>> 
>> It looks like it cannot find /sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev for this device.
>> 
>> root@master:/# ls -l /sys/bus/pci/devices/0000\:05\:00.0/vfio-dev/vfio0/
>> total 0
>> lrwxrwxrwx 1 root root    0 Dec 31 13:29 device -> ../../../0000:05:00.0
>> drwxr-xr-x 2 root root    0 Dec 31 13:29 power
>> lrwxrwxrwx 1 root root    0 Dec 31 13:29 subsystem -> ../../../../../../../../class/vfio-dev
>> -rw-r--r-- 1 root root 4096 Dec 31 13:20 uevent
>> 
>> Any suggestion on that?
>
>CONFIG_VFIO_DEVICE_CDEV=y
>
>Do you have this enabled in kernel config?

Thanks for your suggestion. Now I can run QEMU to launch a VM.
But after assigning a device to the VM and binding the vfio-pci driver to it in the VM,
it failed to open the "/dev/vfio/x" device file. Any suggestion?

Here is the log and steps:

On host side:
root@master:~# lspci -k
02:00.0 Unassigned class [ff00]: ARM Device ff80
        Subsystem: ARM Device 0000

echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id

./qemu-system-aarch64-iommufd -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel ./Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=./busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:02:00.0,iommufd=iommufd0


On the VM side:

/ # echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
/ # ./vfio_test 0000:00:02.0
Failed to open /dev/vfio/1, -1 (No such file or directory)



>
>> BTW, two other questions:
>> 1. Can a device assigned to the VM via VFIO leverage the nested IOMMU?
>
>I think so, as long as it's behind IOMMU hardware that supports
>nesting (and it requires both the host kernel and VMM/qemu patches).
>
>> And how about a virtual device emulated by QEMU that is not assigned via VFIO?
>
>The basic nesting feature is about the 2-stage translation setup (STE
>configuration in SMMU terms) and cache invalidation. An emulated
>device doesn't exist in the host kernel, so there is no nesting
>IMHO.
>
>> 2. When are the S1 and S2 page tables filled for a device in the nested IOMMU
>> scenario? Is there a shadow page table for the vIOMMU in the VM, with traps
>> into the hypervisor to refill the real S1 and S2 page tables? The workflow of
>> your patchset is not clear to me.
>
>The S2 page table is created/filled at VM creation time; it's basically
>managed by the hypervisor or host kernel. The S1 page table, on the other
>hand, is created inside guest memory and thus managed by the guest
>OS. As I mentioned above, nesting is all about STE configuration
>plus cache invalidation. The VMM traps the S1 page table pointer from
>the guest and forwards it to the host kernel, which then sets up
>the device's STE for the 2-stage translation mode.
>

Thanks a lot!

* Re: Re: Re: how to test the Translation Support for SMMUv3?
  2024-01-03 14:09       ` Ben
@ 2024-01-03 17:38         ` Nicolin Chen
  2024-01-04 13:13           ` Ben
  0 siblings, 1 reply; 8+ messages in thread
From: Nicolin Chen @ 2024-01-03 17:38 UTC (permalink / raw)
  To: Ben; +Cc: eric.auger, linux-arm-kernel

On Wed, Jan 03, 2024 at 10:09:34PM +0800, Ben wrote:
> At 2024-01-03 03:51:24, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
> >On Sun, Dec 31, 2023 at 10:18:12PM +0800, Ben wrote:
> >
> >> I am trying your patchset on FVP (Fixed Virtual Platforms) but it failed.
> >>
> >> Here is the host side running on FVP (the platform is rdn1edge):
> >>
> >> master:~# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind
> >> master:~# echo 0abc aced > /sys/bus/pci/drivers/vfio-pci/new_id
> >>
> >> When I run QEMU to launch the VM, it fails like below:
> >>
> >> root@master:/# cat qemu-iommufd.sh
> >> ./build/qemu-system-aarch64 -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel /Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=/busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id="test0"
> >> root@master:/# sh qemu-iommufd.sh
> >> WARNING: Image format was not specified for '/busybox_arm64.ext4' and probing guessed raw.
> >>          Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
> >>          Specify the 'raw' format explicitly to remove the restrictions.
> >> qemu-system-aarch64: -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id=test0: vfio 0000:05:00.0: vfio /sys/bus/pci/devices/0000:05:00.0/vfio-dev: failed to load "/sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev"
> >>
> >> It looks like it cannot find /sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev for this device.
> >>
> >> root@master:/# ls -l /sys/bus/pci/devices/0000\:05\:00.0/vfio-dev/vfio0/
> >> total 0
> >> lrwxrwxrwx 1 root root    0 Dec 31 13:29 device -> ../../../0000:05:00.0
> >> drwxr-xr-x 2 root root    0 Dec 31 13:29 power
> >> lrwxrwxrwx 1 root root    0 Dec 31 13:29 subsystem -> ../../../../../../../../class/vfio-dev
> >> -rw-r--r-- 1 root root 4096 Dec 31 13:20 uevent
> >>
> >> Any suggestion on that?
> >
> >CONFIG_VFIO_DEVICE_CDEV=y
> >
> >Do you have this enabled in kernel config?
> 
> Thanks for your suggestion. Now I can run QEMU to launch a VM.
> But after assigning a device to the VM and binding the vfio-pci driver to it in the VM,
> it failed to open the "/dev/vfio/x" device file. Any suggestion?
> 
> Here is the log and steps:
> 
> On host side:
> root@master:~# lspci -k
> 02:00.0 Unassigned class [ff00]: ARM Device ff80
>         Subsystem: ARM Device 0000
> 
> echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
> 
> ./qemu-system-aarch64-iommufd -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel ./Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=./busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:02:00.0,iommufd=iommufd0
> 
> 
> On the VM side:
> 
> / # echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
> / # ./vfio_test 0000:00:02.0
> Failed to open /dev/vfio/1, -1 (No such file or directory)

VM side? You mean in the guest? No, you shouldn't configure
it as a VFIO device in the guest. Just treat it as a native
PCI device and let its driver in the guest kernel probe it.

The "Unassigned class" returned by the lspci running in the
host is likely telling you that your kernel doesn't support
the device at all?

Try a simpler device that's supported first. What happened
to the 0000:05:00.0 that you passed through previously?

Nicolin


* Re: Re: Re: Re: how to test the Translation Support for SMMUv3?
  2024-01-03 17:38         ` Nicolin Chen
@ 2024-01-04 13:13           ` Ben
  2024-01-04 22:35             ` Nicolin Chen
  0 siblings, 1 reply; 8+ messages in thread
From: Ben @ 2024-01-04 13:13 UTC (permalink / raw)
  To: Nicolin Chen; +Cc: eric.auger, linux-arm-kernel



At 2024-01-04 01:38:11, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>On Wed, Jan 03, 2024 at 10:09:34PM +0800, Ben wrote:
>> At 2024-01-03 03:51:24, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>> >On Sun, Dec 31, 2023 at 10:18:12PM +0800, Ben wrote:
>> >
>> >> I am trying your patchset on FVP (Fixed Virtual Platforms) but it failed.
>> >>
>> >> Here is the host side running on FVP (the platform is rdn1edge):
>> >>
>> >> master:~# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind
>> >> master:~# echo 0abc aced > /sys/bus/pci/drivers/vfio-pci/new_id
>> >>
>> >> When I run QEMU to launch the VM, it fails like below:
>> >>
>> >> root@master:/# cat qemu-iommufd.sh
>> >> ./build/qemu-system-aarch64 -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel /Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=/busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id="test0"
>> >> root@master:/# sh qemu-iommufd.sh
>> >> WARNING: Image format was not specified for '/busybox_arm64.ext4' and probing guessed raw.
>> >>          Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
>> >>          Specify the 'raw' format explicitly to remove the restrictions.
>> >> qemu-system-aarch64: -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id=test0: vfio 0000:05:00.0: vfio /sys/bus/pci/devices/0000:05:00.0/vfio-dev: failed to load "/sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev"
>> >>
>> >> It looks like it cannot find /sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev for this device.
>> >>
>> >> root@master:/# ls -l /sys/bus/pci/devices/0000\:05\:00.0/vfio-dev/vfio0/
>> >> total 0
>> >> lrwxrwxrwx 1 root root    0 Dec 31 13:29 device -> ../../../0000:05:00.0
>> >> drwxr-xr-x 2 root root    0 Dec 31 13:29 power
>> >> lrwxrwxrwx 1 root root    0 Dec 31 13:29 subsystem -> ../../../../../../../../class/vfio-dev
>> >> -rw-r--r-- 1 root root 4096 Dec 31 13:20 uevent
>> >>
>> >> Any suggestion on that?
>> >
>> >CONFIG_VFIO_DEVICE_CDEV=y
>> >
>> >Do you have this enabled in kernel config?
>> 
>> Thanks for your suggestion. Now I can run QEMU to launch a VM.
>> But after assigning a device to the VM and binding the vfio-pci driver to it in the VM,
>> it failed to open the "/dev/vfio/x" device file. Any suggestion?
>> 
>> Here is the log and steps:
>> 
>> On host side:
>> root@master:~# lspci -k
>> 02:00.0 Unassigned class [ff00]: ARM Device ff80
>>         Subsystem: ARM Device 0000
>> 
>> echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
>> 
>> ./qemu-system-aarch64-iommufd -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel ./Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=./busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:02:00.0,iommufd=iommufd0
>> 
>> 
>> On the VM side:
>> 
>> / # echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
>> / # ./vfio_test 0000:00:02.0
>> Failed to open /dev/vfio/1, -1 (No such file or directory)
>
>VM side? You mean in the guest? 

Yes, the guest side.


>No, you shouldn't configure
>it as a VFIO device in the guest. Just treat it as a native
>PCI device and let its driver in the guest kernel probe it.
>
>The "Unassigned class" returned by the lspci running in the
>host is likely telling you that your kernel doesn't support
>the device at all?

The device (13b5 ff80) is a special device (SMMUv3TestEngine) implemented in the FVP.
I wrote a simple PCI driver for it that just probes and calls the dma_alloc_coherent() API to allocate a DMA buffer.
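
The probe path is basically this (a simplified sketch of the module;
error handling and the device ID table are trimmed, names illustrative):

	#include <linux/pci.h>
	#include <linux/dma-mapping.h>

	static int smmu_test_pci_probe(struct pci_dev *pdev,
				       const struct pci_device_id *id)
	{
		dma_addr_t dma_addr;
		void *buf;
		int ret;

		ret = pcim_enable_device(pdev);
		if (ret)
			return ret;
		pci_set_master(pdev);

		pr_info("smmu_test_pci_probe === reg_phy 0x%llx, len 0x%llx\n",
			(u64)pci_resource_start(pdev, 0),
			(u64)pci_resource_len(pdev, 0));

		/* dma_addr is the IOVA the device will emit; it is what
		 * goes through the (v)SMMU translation */
		buf = dma_alloc_coherent(&pdev->dev, 0x1000, &dma_addr,
					 GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		pr_info("smmu_test_alloc_dma ---- dma_addr 0x%llx\n",
			(u64)dma_addr);
		return 0;
	}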

Here is the log on the guest side:
/ # insmod smmu_test.ko 
[ 8251.668308] smmu_test: module verification failed: signature and/or required key missing - tainting kernel
[ 8251.671198] smmu_test 0000:00:02.0: Adding to iommu group 1
[ 8251.672748] arm_smmu_attach_dev========
[ 8251.673823] arm_smmu_domain_finalise_s1 ====
[17991.095955] arm-smmu-v3 arm-smmu-v3.0.auto: arm_smmu_domain_finalise_nested ======
[ 8251.675278] smmu_test 0000:00:02.0: enabling device (0000 -> 0002)
qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaaf4b56c60, 0x8000000000, 0x40000, 0xffffbc6a4000) = -14 (Bad address)
qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaaf4b56c60, 0x800004c000, 0x1000, 0xffffbf49d000) = -14 (Bad address)
[ 8251.678163] smmu_test_pci_probe === reg_phy 0x8000000000, len 0x40000
[ 8251.679978] smmu_test_pci_probe === reg 0xffff800009300000
[ 8251.681908] smmu_test_alloc_dma ---- iova 0xffff800008008000   dma_addr 0xfffff000
/ # 

So some qemu error logs are observed here; does the nested SMMU work fine?
On the guest side, is the address (dma_addr) returned by the dma_alloc_coherent() API a GPA or an SPA?


>
>Try a simpler device that's supported first. What happened
>to the 0000:05:00.0 that you passed through previously?

05:00.0 is a SATA device on the FVP, but strangely it failed to be assigned to the VM ("device busy" was reported), so I switched to the SMMUv3TestEngine device.

* Re: Re: Re: Re: how to test the Translation Support for SMMUv3?
  2024-01-04 13:13           ` Ben
@ 2024-01-04 22:35             ` Nicolin Chen
  2024-01-05  1:50               ` Ben
  0 siblings, 1 reply; 8+ messages in thread
From: Nicolin Chen @ 2024-01-04 22:35 UTC (permalink / raw)
  To: Ben; +Cc: eric.auger, linux-arm-kernel

On Thu, Jan 04, 2024 at 09:13:39PM +0800, Ben wrote:
> At 2024-01-04 01:38:11, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
> >The "Unassigned class" returned by the lspci running in the
> >host is likely telling you that your kernel doesn't support
> >the device at all?
> 
> The device (13b5 ff80) is a special device (SMMUv3TestEngine) implemented in the FVP.
> I wrote a simple PCI driver for it that just probes and calls the dma_alloc_coherent() API to allocate a DMA buffer.
> 
> Here is the log on the guest side:
> / # insmod smmu_test.ko
> [ 8251.668308] smmu_test: module verification failed: signature and/or required key missing - tainting kernel
> [ 8251.671198] smmu_test 0000:00:02.0: Adding to iommu group 1
> [ 8251.672748] arm_smmu_attach_dev========
> [ 8251.673823] arm_smmu_domain_finalise_s1 ====
> [17991.095955] arm-smmu-v3 arm-smmu-v3.0.auto: arm_smmu_domain_finalise_nested ======
> [ 8251.675278] smmu_test 0000:00:02.0: enabling device (0000 -> 0002)
> qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
> qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaaf4b56c60, 0x8000000000, 0x40000, 0xffffbc6a4000) = -14 (Bad address)
> qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
> qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaaf4b56c60, 0x800004c000, 0x1000, 0xffffbf49d000) = -14 (Bad address)
> [ 8251.678163] smmu_test_pci_probe === reg_phy 0x8000000000, len 0x40000
> [ 8251.679978] smmu_test_pci_probe === reg 0xffff800009300000
> [ 8251.681908] smmu_test_alloc_dma ---- iova 0xffff800008008000   dma_addr 0xfffff000

Given that the dma_addr looks good to me, I think the result is a "pass"?

> So some qemu error logs are observed here; does the nested SMMU work fine?

The 0x8000000000 addresses are in the PCI BAR memory space, which isn't
supported well yet; that shouldn't break a DMA test with 2-stage translation.
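
FWIW, those errors are QEMU trying to IOMMU_IOAS_MAP its mmap of the
BAR into the IOAS, i.e. roughly this ioctl failing (a sketch against
the mainline uAPI, variable names illustrative):

	struct iommu_ioas_map map = {
		.size = sizeof(map),
		.flags = IOMMU_IOAS_MAP_FIXED_IOVA |
			 IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
		.ioas_id = ioas_id,
		.user_va = (uintptr_t)vaddr,	/* QEMU's mmap of the BAR */
		.iova = 0x8000000000,		/* guest PA of the BAR */
		.length = 0x40000,
	};
	/* returns EFAULT ("Bad address") when user_va can't be pinned */
	ioctl(iommufd, IOMMU_IOAS_MAP, &map);

QEMU logs the failure and carries on, so the DMA test itself is not
affected.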

> On the guest side, is the address (dma_addr) returned by the dma_alloc_coherent() API a GPA or an SPA?

I think so. Try the same test in the host and see if it returns a
similar print, just ignoring those QEMU logs.

> >Try a simpler device that's supported first. What happened
> >to the 0000:05:00.0 that you passed through previously?
> 
> 05:00.0 is a SATA device on the FVP, but strangely it failed to be assigned to the VM ("device busy" was reported), so I switched to the SMMUv3TestEngine device.

OK.

Nicolin


* Re: Re: Re: Re: Re: how to test the Translation Support for SMMUv3?
  2024-01-04 22:35             ` Nicolin Chen
@ 2024-01-05  1:50               ` Ben
  0 siblings, 0 replies; 8+ messages in thread
From: Ben @ 2024-01-05  1:50 UTC (permalink / raw)
  To: Nicolin Chen; +Cc: eric.auger, linux-arm-kernel

At 2024-01-05 06:35:12, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>On Thu, Jan 04, 2024 at 09:13:39PM +0800, Ben wrote:
>> At 2024-01-04 01:38:11, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>> >The "Unassigned class" returned by the lspci running in the
>> >host is likely telling you that your kernel doesn't support
>> >the device at all?
>> 
>> The device (13b5 ff80) is a special device (SMMUv3TestEngine) implemented in the FVP.
>> I wrote a simple PCI driver for it that just probes and calls the dma_alloc_coherent() API to allocate a DMA buffer.
>> 
>> Here is the log on the guest side:
>> / # insmod smmu_test.ko
>> [ 8251.668308] smmu_test: module verification failed: signature and/or required key missing - tainting kernel
>> [ 8251.671198] smmu_test 0000:00:02.0: Adding to iommu group 1
>> [ 8251.672748] arm_smmu_attach_dev========
>> [ 8251.673823] arm_smmu_domain_finalise_s1 ====
>> [17991.095955] arm-smmu-v3 arm-smmu-v3.0.auto: arm_smmu_domain_finalise_nested ======
>> [ 8251.675278] smmu_test 0000:00:02.0: enabling device (0000 -> 0002)
>> qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
>> qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaaf4b56c60, 0x8000000000, 0x40000, 0xffffbc6a4000) = -14 (Bad address)
>> qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
>> qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaaf4b56c60, 0x800004c000, 0x1000, 0xffffbf49d000) = -14 (Bad address)
>> [ 8251.678163] smmu_test_pci_probe === reg_phy 0x8000000000, len 0x40000
>> [ 8251.679978] smmu_test_pci_probe === reg 0xffff800009300000
>> [ 8251.681908] smmu_test_alloc_dma ---- iova 0xffff800008008000   dma_addr 0xfffff000
>
>Given that the dma_addr looks good to me, I think the result is a "pass"?

Do you think this dma_addr is a GPA or an SPA?

I think it looks like a pass.


>
>> So some qemu error logs are observed here; does the nested SMMU work fine?
>
>The 0x8000000000 addresses are in the PCI BAR memory space, which isn't
>supported well yet;

Why do you say this PCI BAR memory space isn't supported well? Is it a hardware issue or a software issue? This is the physical address of PCI BAR 0; it is obtained from the code below:

iommu->reg_phys = pci_resource_start(pdev, 0);



>that shouldn't break a DMA test with 2-stage translation.
>
>> On the guest side, is the address (dma_addr) returned by the dma_alloc_coherent() API a GPA or an SPA?
>
>I think so. Try the same test in the host and see if it returns a
>similar print, just ignoring those QEMU logs.


Here is the log for the host side:

[    0.720409] smmu_test 0000:02:00.0: Adding to iommu group 1
[    0.720427] smmu_test 0000:02:00.0: enabling device (0000 -> 0002)
[    0.720433] smmu_test_pci_probe === reg_phy 0x70100000, len 0x40000
[    0.720439] smmu_test_pci_probe === reg 0xffff800009f80000
[    0.720447] smmu_test_alloc_dma ---- iova 0xffff800009669000   dma_addr 0xfffff000


