* VFIO on ARM64
@ 2017-09-12 18:01 valmiki
  2017-09-12 18:27   ` Alex Williamson
  2017-09-13  1:20 ` Jean-Philippe Brucker
  0 siblings, 2 replies; 10+ messages in thread
From: valmiki @ 2017-09-12 18:01 UTC (permalink / raw)
  To: iommu, kvm, linux-pci; +Cc: Jean-Philippe Brucker, Alex Williamson, kevin.tian

Hi, as per the VFIO documentation we need to look at
"/sys/bus/pci/devices/0000:06:0d.0/iommu_group" in order to find the
group to which a PCI bus is attached.
But in drivers/pci/pci-sysfs.c, in static struct attribute
*pci_dev_attrs[], I don't see any such attribute.
I tried enabling the SMMUv2 driver and the SMMU for the PCIe node on
our SoC, but this file doesn't show up, and in /sys/kernel/iommu_groups
I do not see a "/sys/kernel/iommu_groups/17/devices/0000:00:1f.00"
entry; I see only the PCIe root port device tree node in that group and
not the individual buses.
So on ARM64, to get these per-bus paths, does the SMMU need any
particular configuration (we have SMMUv2)?
Do we need any specific kernel configuration?


Regards,
Valmiki

* Re: VFIO on ARM64
@ 2017-09-12 18:27   ` Alex Williamson
  0 siblings, 0 replies; 10+ messages in thread
From: Alex Williamson @ 2017-09-12 18:27 UTC (permalink / raw)
  To: valmiki
  Cc: iommu, kvm, linux-pci, Jean-Philippe Brucker, kevin.tian, Auger Eric

[Cc +Eric Auger]

On Tue, 12 Sep 2017 23:31:00 +0530
valmiki <valmikibow@gmail.com> wrote:

> Hi, as per the VFIO documentation we need to look at
> "/sys/bus/pci/devices/0000:06:0d.0/iommu_group" in order to find the
> group to which a PCI bus is attached.
> But in drivers/pci/pci-sysfs.c, in static struct attribute
> *pci_dev_attrs[], I don't see any such attribute.
> I tried enabling the SMMUv2 driver and the SMMU for the PCIe node on
> our SoC, but this file doesn't show up, and in /sys/kernel/iommu_groups
> I do not see a "/sys/kernel/iommu_groups/17/devices/0000:00:1f.00"
> entry; I see only the PCIe root port device tree node in that group and
> not the individual buses.
> So on ARM64, to get these per-bus paths, does the SMMU need any
> particular configuration (we have SMMUv2)?
> Do we need any specific kernel configuration?
> 
> 
> Regards,
> Valmiki

* Re: VFIO on ARM64
  2017-09-12 18:01 VFIO on ARM64 valmiki
  2017-09-12 18:27   ` Alex Williamson
@ 2017-09-13  1:20 ` Jean-Philippe Brucker
  2017-09-13 17:38   ` valmiki
  1 sibling, 1 reply; 10+ messages in thread
From: Jean-Philippe Brucker @ 2017-09-13  1:20 UTC (permalink / raw)
  To: valmiki, iommu, kvm, linux-pci; +Cc: Alex Williamson, kevin.tian

Hi Valmiki,

On 12/09/17 19:01, valmiki wrote:
> Hi, as per the VFIO documentation we need to look at
> "/sys/bus/pci/devices/0000:06:0d.0/iommu_group" in order to find the
> group to which a PCI bus is attached.
> But in drivers/pci/pci-sysfs.c, in static struct attribute
> *pci_dev_attrs[], I don't see any such attribute.

This iommu_group attribute is created by
drivers/iommu/iommu.c:iommu_group_add_device. It is a symbolic link to
/sys/kernel/iommu_groups/<group>.
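
Roughly, it does the following (a simplified, kernel-internal sketch of
that function; the device list handling, name de-duplication and most
error paths are omitted):

int iommu_group_add_device(struct iommu_group *group, struct device *dev)
{
        int ret;

        /* /sys/.../<device>/iommu_group -> /sys/kernel/iommu_groups/<n> */
        ret = sysfs_create_link(&dev->kobj, &group->kobj, "iommu_group");
        if (ret)
                return ret;

        /* /sys/kernel/iommu_groups/<n>/devices/<device> -> the device */
        ret = sysfs_create_link_nowarn(group->devices_kobj, &dev->kobj,
                                       kobject_name(&dev->kobj));
        if (ret) {
                sysfs_remove_link(&dev->kobj, "iommu_group");
                return ret;
        }

        dev->iommu_group = group;
        return 0;
}

So both of the paths you mention come from the same place: the link in
the device's sysfs directory and the entry under the group's devices/
directory are two views of the same membership.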

> I tried enabling the SMMUv2 driver and the SMMU for the PCIe node on
> our SoC, but this file doesn't show up, and in /sys/kernel/iommu_groups
> I do not see a "/sys/kernel/iommu_groups/17/devices/0000:00:1f.00"
> entry; I see only the PCIe root port device tree node in that group and
> not the individual buses.
> So on ARM64, to get these per-bus paths, does the SMMU need any
> particular configuration (we have SMMUv2)?
> Do we need any specific kernel configuration?

I don't think so. If you're able to see the root complex in an IOMMU
group, then the configuration is probably fine. Could you provide a little
more information about your system, for example lspci along with "find
/sys/kernel/iommu_groups/*/devices/*"?

Ideally, each PCIe device will be in its own IOMMU group. So you shouldn't
have each bus in a group, but rather one device per group. Linux puts
multiple devices in a group if the IOMMU cannot properly isolate them. In
general it's not something you want in your system, because all devices in
a group will have the same address space and cannot be passed to a guest
separately.
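
For PCIe the decision looks roughly like this (a very rough sketch of
the policy implemented by pci_device_group() in drivers/iommu/iommu.c;
get_group_of_function_0() and get_group_of_upstream_bridge() are
hypothetical placeholders for the real lookup code, and REQ_ACS_FLAGS is
the set of ACS capabilities that iommu.c requires for isolation):

struct iommu_group *pci_device_group_sketch(struct pci_dev *pdev)
{
        /*
         * A multifunction device without ACS cannot stop its functions
         * from reaching each other, so all functions share one group.
         */
        if (pdev->multifunction && !pci_acs_enabled(pdev, REQ_ACS_FLAGS))
                return get_group_of_function_0(pdev);

        /*
         * If every bridge between the device and the root complex
         * implements ACS, peer-to-peer DMA cannot bypass the IOMMU, so
         * the device can safely get a group of its own.
         */
        if (pci_acs_path_enabled(pdev, NULL, REQ_ACS_FLAGS))
                return iommu_group_alloc();

        /*
         * Otherwise the device shares a group with whatever devices it
         * can reach without IOMMU translation.
         */
        return get_group_of_upstream_bridge(pdev);
}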

Thanks,
Jean

* Re: VFIO on ARM64
  2017-09-13  1:20 ` Jean-Philippe Brucker
@ 2017-09-13 17:38   ` valmiki
  2017-09-13 18:57     ` Jean-Philippe Brucker
  2017-12-03 13:56       ` valmiki
  0 siblings, 2 replies; 10+ messages in thread
From: valmiki @ 2017-09-13 17:38 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, kvm, linux-pci; +Cc: Alex Williamson, kevin.tian

On 9/13/2017 6:50 AM, Jean-Philippe Brucker wrote:
> Hi Valmiki,
>
> On 12/09/17 19:01, valmiki wrote:
>> Hi, as per the VFIO documentation we need to look at
>> "/sys/bus/pci/devices/0000:06:0d.0/iommu_group" in order to find the
>> group to which a PCI bus is attached.
>> But in drivers/pci/pci-sysfs.c, in static struct attribute
>> *pci_dev_attrs[], I don't see any such attribute.
>
> This iommu_group attribute is created by
> drivers/iommu/iommu.c:iommu_group_add_device. It is a symbolic link to
> /sys/kernel/iommu_groups/<group>.
>
>> I tried enabling the SMMUv2 driver and the SMMU for the PCIe node on
>> our SoC, but this file doesn't show up, and in /sys/kernel/iommu_groups
>> I do not see a "/sys/kernel/iommu_groups/17/devices/0000:00:1f.00"
>> entry; I see only the PCIe root port device tree node in that group and
>> not the individual buses.
>> So on ARM64, to get these per-bus paths, does the SMMU need any
>> particular configuration (we have SMMUv2)?
>> Do we need any specific kernel configuration?
>
> I don't think so. If you're able to see the root complex in an IOMMU
> group, then the configuration is probably fine. Could you provide a little
> more information about your system, for example lspci along with "find
> /sys/kernel/iommu_groups/*/devices/*"?
>
Here is the log:
root@:~# lspci
00:00.0 PCI bridge: Corporation Device a023
01:00.0 Memory controller: Corporation Device a024
root@:~# find /sys/kernel/iommu_groups/*/devices/*
/sys/kernel/iommu_groups/0/devices/ad0c0000.pcie
/sys/kernel/iommu_groups/1/devices/ad0f0000.spi
/sys/kernel/iommu_groups/2/devices/adc70000.sdhci
/sys/kernel/iommu_groups/3/devices/ad9d0000.usb0
root@:~#
> Ideally, each PCIe device will be in its own IOMMU group. So you shouldn't
> have each bus in a group, but rather one device per group. Linux puts
> multiple devices in a group if the IOMMU cannot properly isolate them. In
> general it's not something you want in your system, because all devices in
> a group will have the same address space and cannot be passed to a guest
> separately.
>
So I don't see a separate group per PCI device. When you say one PCI
device per group, when does the SMMU create one group per PCI device?
As per the boot log, the SMMU driver gets probed first and then the
PCIe root port driver, so how will the SMMU know the number of PCI
devices present downstream and create a group for each device?

Regards,
Valmiki

* Re: VFIO on ARM64
  2017-09-13 17:38   ` valmiki
@ 2017-09-13 18:57     ` Jean-Philippe Brucker
  2017-12-03 13:56       ` valmiki
  1 sibling, 0 replies; 10+ messages in thread
From: Jean-Philippe Brucker @ 2017-09-13 18:57 UTC (permalink / raw)
  To: valmiki, iommu, kvm, linux-pci; +Cc: Alex Williamson, kevin.tian

On 13/09/17 18:38, valmiki wrote:
> On 9/13/2017 6:50 AM, Jean-Philippe Brucker wrote:
>> Hi Valmiki,
>>
>> On 12/09/17 19:01, valmiki wrote:
>>> Hi, as per the VFIO documentation we need to look at
>>> "/sys/bus/pci/devices/0000:06:0d.0/iommu_group" in order to find the
>>> group to which a PCI bus is attached.
>>> But in drivers/pci/pci-sysfs.c, in static struct attribute
>>> *pci_dev_attrs[], I don't see any such attribute.
>>
>> This iommu_group attribute is created by
>> drivers/iommu/iommu.c:iommu_group_add_device. It is a symbolic link to
>> /sys/kernel/iommu_groups/<group>.
>>
>>> I tried enabling the SMMUv2 driver and the SMMU for the PCIe node on
>>> our SoC, but this file doesn't show up, and in /sys/kernel/iommu_groups
>>> I do not see a "/sys/kernel/iommu_groups/17/devices/0000:00:1f.00"
>>> entry; I see only the PCIe root port device tree node in that group and
>>> not the individual buses.
>>> So on ARM64, to get these per-bus paths, does the SMMU need any
>>> particular configuration (we have SMMUv2)?
>>> Do we need any specific kernel configuration?
>>
>> I don't think so. If you're able to see the root complex in an IOMMU
>> group, then the configuration is probably fine. Could you provide a little
>> more information about your system, for example lspci along with "find
>> /sys/kernel/iommu_groups/*/devices/*"?
>>
> Here is the log:
> root@:~# lspci
> 00:00.0 PCI bridge: Corporation Device a023
> 01:00.0 Memory controller: Corporation Device a024
> root@:~# find /sys/kernel/iommu_groups/*/devices/*
> /sys/kernel/iommu_groups/0/devices/ad0c0000.pcie
> /sys/kernel/iommu_groups/1/devices/ad0f0000.spi
> /sys/kernel/iommu_groups/2/devices/adc70000.sdhci
> /sys/kernel/iommu_groups/3/devices/ad9d0000.usb0
> root@:~#
>> Ideally, each PCIe device will be in its own IOMMU group. So you shouldn't
>> have each bus in a group, but rather one device per group. Linux puts
>> multiple devices in a group if the IOMMU cannot properly isolate them. In
>> general it's not something you want in your system, because all devices in
>> a group will have the same address space and cannot be passed to a guest
>> separately.
>>
> So I don't see a separate group per PCI device. When you say one PCI
> device per group, when does the SMMU create one group per PCI device?
> As per the boot log, the SMMU driver gets probed first and then the
> PCIe root port driver, so how will the SMMU know the number of PCI
> devices present downstream and create a group for each device?

(I'm assuming you're using device-tree since you mentioned it in your
initial post.) Are you using the iommu-map property in your root complex
node? The "iommus" property in device-tree nodes defines one or more
static SIDs of a device, and doesn't work with PCI. iommu-map is a
wildcard for the whole PCI bus. It defines how PCI Requester IDs are
translated to SIDs. See Documentation/devicetree/bindings/pci/pci-iommu.txt
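
As a toy illustration of what an iommu-map entry expresses (a standalone
example, not kernel code; the phandle argument is omitted and the
RID/SID values are made up): each entry is (rid-base, iommu-phandle,
sid-base, length), and any RID inside the window maps linearly to
sid-base + (rid - rid-base).

#include <stdint.h>
#include <stdio.h>

struct iommu_map_entry {
        uint32_t rid_base;   /* first Requester ID covered by this entry */
        uint32_t sid_base;   /* stream ID that rid_base maps to */
        uint32_t length;     /* number of consecutive RIDs covered */
};

static int rid_to_sid(const struct iommu_map_entry *map, int n,
                      uint32_t rid, uint32_t *sid)
{
        for (int i = 0; i < n; i++) {
                if (rid >= map[i].rid_base &&
                    rid < map[i].rid_base + map[i].length) {
                        *sid = map[i].sid_base + (rid - map[i].rid_base);
                        return 0;
                }
        }
        return -1;  /* RID not covered: no translation for that device */
}

int main(void)
{
        /* e.g. iommu-map = <0x0 &smmu 0x0 0x10000>;  (identity mapping) */
        struct iommu_map_entry map[] = { { 0x0000, 0x0000, 0x10000 } };
        uint32_t sid;

        /* The Requester ID of device 01:00.0 is 0x0100 */
        if (!rid_to_sid(map, 1, 0x0100, &sid))
                printf("RID 0x0100 -> SID 0x%x\n", sid);
        return 0;
}

Without such a property the SMMU driver has no way to derive stream IDs
for devices discovered on the PCI bus, which would explain why only the
root complex platform device shows up in an IOMMU group.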

Thanks,
Jean

* Invalidation in SMMU v3
@ 2017-12-03 13:56       ` valmiki
  0 siblings, 0 replies; 10+ messages in thread
From: valmiki @ 2017-12-03 13:56 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, kvm, linux-pci; +Cc: Alex Williamson, kevin.tian

Hi Jean,

In the PASID flow, arm_smmu_atc_inv_master_all() is called with the
size and iova arguments of arm_smmu_atc_inv_to_cmd() set to zero, and
no address is filled into struct arm_smmu_cmdq_ent->atc.addr.
So how will the SMMU hardware know whether any ATS translations were
requested or not?
How are invalidations carried out in the PASID flow with respect to
address and size?


Regards,
Valmiki

* Re: Invalidation in SMMU v3
@ 2017-12-04 11:12         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 10+ messages in thread
From: Jean-Philippe Brucker @ 2017-12-04 11:12 UTC (permalink / raw)
  To: valmiki, iommu, kvm, linux-pci; +Cc: Alex Williamson, kevin.tian

Hi Valmiki,

On 03/12/17 13:56, valmiki wrote:
> Hi Jean,
> 
> In the PASID flow, arm_smmu_atc_inv_master_all() is called with the
> size and iova arguments of arm_smmu_atc_inv_to_cmd() set to zero, and
> no address is filled into struct arm_smmu_cmdq_ent->atc.addr.
> So how will the SMMU hardware know whether any ATS translations were
> requested or not?
> How are invalidations carried out in the PASID flow with respect to
> address and size?

arm_smmu_atc_inv_master_all() is used to invalidate the whole address
space, for example when unbinding a process address space from the master.

The encoding is a bit special: when addr and size are 0,
arm_smmu_atc_inv_to_cmd() sets cmd->atc.size to 52, which according to the
SMMUv3 spec corresponds to a 2^64 byte span, meaning invalidate all. The
SMMU then converts this command into a PCIe ATC invalidation (with bits
62:12 all 1b and bit 63 = 0b, according to section 2.3.2 of the ATS
spec).

Smaller invalidations will go via arm_smmu_atc_inv_domain(), and
arm_smmu_atc_inv_to_cmd() will compute the appropriate range that covers
the requested address and size.
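
A standalone sketch of that encoding (illustrative only, not the driver
code itself): the span is the smallest naturally aligned power-of-two
range of 4KB pages covering the requested region, and size 0 becomes
the "invalidate everything" encoding.

#include <stdint.h>
#include <stdio.h>

#define ATC_INV_SIZE_ALL        52      /* 2^(52+12) = a 2^64-byte span */
#define ATC_PAGE_SHIFT          12      /* ATC invalidations use 4KB granules */

struct atc_inv_range {
        uint64_t addr;  /* aligned start address */
        uint8_t  size;  /* log2 of the span, in 4KB pages */
};

static void atc_inv_to_range(uint64_t iova, uint64_t size,
                             struct atc_inv_range *r)
{
        uint64_t page_start, page_end, log2_span, span_mask;

        if (!size) {
                /* "Invalidate everything": address bits are ignored */
                r->addr = 0;
                r->size = ATC_INV_SIZE_ALL;
                return;
        }

        page_start = iova >> ATC_PAGE_SHIFT;
        page_end   = (iova + size - 1) >> ATC_PAGE_SHIFT;

        /* Smallest naturally aligned power-of-two span covering both ends */
        log2_span = page_start == page_end ?
                    0 : 64 - __builtin_clzll(page_start ^ page_end);
        span_mask = (1ULL << log2_span) - 1;

        r->addr = (page_start & ~span_mask) << ATC_PAGE_SHIFT;
        r->size = log2_span;
}

int main(void)
{
        struct atc_inv_range r;

        atc_inv_to_range(0x12345000, 0x3000, &r);       /* 3 pages */
        printf("addr=%#llx size=%u\n", (unsigned long long)r.addr, r.size);
        return 0;
}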

In more detail, when a range is unmapped from a process (with munmap()
for example), the MMU notifier calls our invalidate_range() callback,
which calls arm_smmu_atc_inv_domain() with the right PASID. When a process
exits or unbind() is called, we use arm_smmu_atc_inv_master_all() with its
PASID.
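
In code, that flow looks roughly like the following (a hypothetical
sketch based on the description above; struct arm_smmu_mm, mn_to_smmu_mm()
and the exact arguments of the two arm_smmu_atc_inv_* calls are
assumptions, not the actual SVM patches):

struct arm_smmu_mm {                    /* hypothetical per-bond state */
        struct mmu_notifier     mn;
        struct arm_smmu_domain  *domain;
        struct arm_smmu_master  *master;
        int                     ssid;
};

#define mn_to_smmu_mm(x)        container_of(x, struct arm_smmu_mm, mn)

static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
                                         struct mm_struct *mm,
                                         unsigned long start,
                                         unsigned long end)
{
        struct arm_smmu_mm *smmu_mm = mn_to_smmu_mm(mn);

        /* Range invalidation for this PASID only, e.g. after munmap() */
        arm_smmu_atc_inv_domain(smmu_mm->domain, smmu_mm->ssid,
                                start, end - start);
}

static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
{
        struct arm_smmu_mm *smmu_mm = mn_to_smmu_mm(mn);

        /* Process exit / unbind: drop every ATC entry for this PASID */
        arm_smmu_atc_inv_master_all(smmu_mm->master, smmu_mm->ssid);
}

static const struct mmu_notifier_ops arm_smmu_mm_ops = {
        .invalidate_range       = arm_smmu_mm_invalidate_range,
        .release                = arm_smmu_mm_release,
};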

Thanks,
Jean

end of thread

Thread overview:
2017-09-12 18:01 VFIO on ARM64 valmiki
2017-09-12 18:27 ` Alex Williamson
2017-09-13  1:20 ` Jean-Philippe Brucker
2017-09-13 17:38   ` valmiki
2017-09-13 18:57     ` Jean-Philippe Brucker
2017-12-03 13:56     ` Invalidation in SMMU v3 valmiki
2017-12-04 11:12       ` Jean-Philippe Brucker
