* Question: KVM: Failed to bind vfio with PCI-e / SMMU on Juno-r2
@ 2019-03-11  6:42 Leo Yan
  2019-03-11  6:57 ` Leo Yan
  2019-03-11  8:23 ` Auger Eric
  0 siblings, 2 replies; 20+ messages in thread
From: Leo Yan @ 2019-03-11  6:42 UTC (permalink / raw)
  To: kvmarm, eric.auger; +Cc: Daniel Thompson

Hi all,

I am trying to enable PCI-e device pass-through mode with KVM.  Since
the Juno-r2 board has a PCI-e bus, I first tried to use vfio to pass
through the network card on the PCI-e bus.

According to the Juno-r2 board TRM [1], there is a CoreLink MMU-401 (SMMU)
between the PCI-e devices and the CCI bus; IIUC, the PCI-e devices and the
SMMU can be used by vfio for address isolation, so from the hardware
perspective this should be sufficient to support pass-through mode.

I followed Eric's blog [2] for 'VFIO-PCI driver binding' and executed
the commands below on the Juno-r2 board:

  echo vfio-pci > /sys/bus/pci/devices/0000\:08\:00.0/driver_override
  echo 0000:08:00.0 > /sys/bus/pci/drivers/sky2/unbind
  echo 0000:08:00.0 > /sys/bus/pci/drivers_probe
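For reference, the three steps can be wrapped into a small helper (a
sketch only; the device address and the sky2 driver are from my setup
above, and the sysfs root is a parameter so the logic can be tried
against a test tree instead of the real /sys):

```shell
# Bind a PCI device to vfio-pci: set driver_override, unbind the
# current driver (if any), then ask the PCI core to re-probe.
bind_vfio() {
    dev="$1"
    sysfs="${SYSFS:-/sys}"
    echo vfio-pci > "$sysfs/bus/pci/devices/$dev/driver_override"
    # Unbind from whatever driver currently owns the device
    if [ -e "$sysfs/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "$sysfs/bus/pci/devices/$dev/driver/unbind"
    fi
    # Re-probe; vfio-pci should now claim the device
    echo "$dev" > "$sysfs/bus/pci/drivers_probe"
}
```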

But the last command, which should trigger the vfio-pci probe, reports
the failure below:

[   21.553889] sky2 0000:08:00.0 enp8s0: disabling interface
[   21.616720] vfio-pci: probe of 0000:08:00.0 failed with error -22

I looked into the code: although 'dev->bus->iommu_ops' points to the
structure 'arm_smmu_ops', 'dev->iommu_group' is NULL, so the probe
function returns failure through the flow below:

  vfio_pci_probe()
    `-> vfio_iommu_group_get()
          `-> iommu_group_get()
                `-> return NULL;

Alternatively, if I enable the kconfig CONFIG_VFIO_NOIOMMU and set the
global variable 'noiommu' to true, the probe function still returns an
error; since iommu_present(dev->bus) finds 'arm_smmu_ops', the code runs
into the logic below:

vfio_iommu_group_get()
{
	group = iommu_group_get(dev);

#ifdef CONFIG_VFIO_NOIOMMU

	/*
	 * With noiommu enabled, an IOMMU group will be created for a device
	 * that doesn't already have one and doesn't have an iommu_ops on their
	 * bus.  We set iommudata simply to be able to identify these groups
	 * as special use and for reclamation later.
	 */
	if (group || !noiommu || iommu_present(dev->bus))
		return group;    ==> return 'group' and 'group' is NULL

	[...]
}

So neither using the SMMU nor setting the kernel config
CONFIG_VFIO_NOIOMMU can bind the vfio driver to the network card
on the Juno-r2 board.

P.s. I also checked sysfs and found the device's node doesn't contain
an 'iommu_group' entry:

# ls /sys/bus/pci/devices/0000\:08\:00.0/iommu_group
ls: cannot access '/sys/bus/pci/devices/0000:08:00.0/iommu_group': No
such file or directory
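To see at a glance which PCI devices did get an IOMMU group, I use a
small helper (a sketch; it only assumes the standard sysfs layout, and
takes the sysfs root as a parameter so it can be exercised against a
test tree):

```shell
# For each PCI device under the given sysfs root (default /sys), print
# its IOMMU group number, or "no group" if the IOMMU driver never
# attached the device as a master.
list_pci_iommu_groups() {
    sysfs="${1:-/sys}"
    for d in "$sysfs"/bus/pci/devices/*; do
        [ -e "$d" ] || continue
        if [ -e "$d/iommu_group" ]; then
            # iommu_group is a symlink to /sys/kernel/iommu_groups/<N>
            echo "$(basename "$d") group $(basename "$(readlink -f "$d/iommu_group")")"
        else
            echo "$(basename "$d") no group"
        fi
    done
}
```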

Could you give some suggestions so that I can proceed?  Any comments
are much appreciated.

Thanks,
Leo Yan

[1] http://infocenter.arm.com/help/topic/com.arm.doc.ddi0515f/DDI0515F_juno_arm_development_platform_soc_trm.pdf
[2] https://www.linaro.org/blog/kvm-pciemsi-passthrough-armarm64/


Thread overview: 20+ messages
2019-03-11  6:42 Question: KVM: Failed to bind vfio with PCI-e / SMMU on Juno-r2 Leo Yan
2019-03-11  6:57 ` Leo Yan
2019-03-11  8:23 ` Auger Eric
2019-03-11  9:39   ` Leo Yan
2019-03-11  9:47     ` Auger Eric
2019-03-11 14:35       ` Leo Yan
2019-03-13  8:00         ` Leo Yan
2019-03-13 10:01           ` Leo Yan
2019-03-13 10:16             ` Auger Eric
2019-03-13 10:01           ` Auger Eric
2019-03-13 10:24             ` Auger Eric
2019-03-13 11:52               ` Leo Yan
2019-03-15  9:37               ` Leo Yan
2019-03-15 11:03                 ` Auger Eric
2019-03-15 12:54                   ` Robin Murphy
2019-03-16  4:56                     ` Leo Yan
2019-03-18 12:25                       ` Robin Murphy
2019-03-19  1:33                         ` Leo Yan
2019-03-20  8:42                           ` Leo Yan
2019-03-13 11:35             ` Leo Yan
