From: Auger Eric <eric.auger@redhat.com>
To: "Vincent Stehlé" <vincent.stehle@arm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
	eric.auger.pro@gmail.com, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu, joro@8bytes.org,
	jacob.jun.pan@linux.intel.com, yi.l.liu@linux.intel.com,
	jean-philippe.brucker@arm.com, will.deacon@arm.com,
	robin.murphy@arm.com, kevin.tian@intel.com, ashok.raj@intel.com,
	marc.zyngier@arm.com, christoffer.dall@arm.com,
	peter.maydell@linaro.org
Subject: Re: [PATCH v6 09/22] vfio: VFIO_IOMMU_BIND/UNBIND_MSI
Date: Wed, 10 Apr 2019 15:02:23 +0200	[thread overview]
Message-ID: <2cdd4142-98e5-14de-2f34-264244f24d01@redhat.com> (raw)
In-Reply-To: <20190410123531.GA19023@debian>

Hi Vincent,

On 4/10/19 2:35 PM, Vincent Stehlé wrote:
> On Thu, Apr 04, 2019 at 08:55:25AM +0200, Auger Eric wrote:
>> Hi Marc, Robin, Alex,
> (..)
>> Do you think it is a reasonable assumption to consider that devices within
>> the same host iommu group share the same MSI doorbell?
> 
> Hi Eric,
> 
> I am not sure this assumption always holds.
> 
> Marc, Robin and Alex can correct me, but for example I think the following
> topology is valid for Arm systems:
> 
>  +------------+  +------------+
>  | Endpoint A |  | Endpoint B |
>  +------------+  +------------+
>             v     v
>           /---------\
>          |  Non-ACS  |
>          |  Switch   |
>           \---------/
>                v
>        +---------------+
>        |     PCIe      |
>        | Root Complex  |
>        +---------------+
>                v
>          +-----------+
>          |   SMMU    |
>          +-----------+
>                v
>   +--------------------------+
>   |   System interconnect    |
>   +--------------------------+
>         v              v
>   +-----------+  +-----------+
>   |   ITS A   |  |   ITS B   |
>   +-----------+  +-----------+
> 
> All PCIe Endpoints and ITSes could be in the same ITS Group 0, meaning
> devices could send their MSIs to any ITS in hardware.
> 
> For Linux the two PCIe Endpoints would be in the same iommu group, because
> the switch in this example does not support ACS.
> 
> I think the devicetree msi-map property could be used to "map" the RID of
> Endpoint A to ITS A and the RID of Endpoint B to ITS B, which would violate
> the assumption.
> 
> See the monolithic example in [1], the example system in [2], appendices
> D, E and F in [3] and the msi-map property in [4].

Thank you for the review & links.

I understand the above topology is perfectly valid. Now the question is:
is it sufficiently common to care about it?

At the moment, VFIO/vIOMMU assignment of devices belonging to the same
group isn't upstream yet. Work is ongoing by Alex to support it. It uses
a PCIe-to-PCI bridge on the guest side, and it looks like this topology
is not supported by the SMMUv3 driver. On top of that comes the trouble
of using several ITSes in nested mode.

If this topology is sufficiently rare, I propose we do not support it
in this VFIO/vIOMMU use case. In v7 I introduced a check that aims to
verify that devices attached to the same nested iommu_domain share the
same msi_domain.
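
For illustration only, here is a rough sketch of the kind of check I
have in mind. This is not the actual v7 code: the function names below
are made up and the check is keyed on the iommu_group rather than the
nested iommu_domain, but it only relies on the existing
iommu_group_for_each_dev() and dev_get_msi_domain() helpers:

#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/irqdomain.h>

static int msi_domain_match(struct device *dev, void *data)
{
        struct irq_domain **ref = data;
        struct irq_domain *d = dev_get_msi_domain(dev);

        /* The first device we visit provides the reference MSI domain. */
        if (!*ref) {
                *ref = d;
                return 0;
        }

        return d == *ref ? 0 : -EINVAL;
}

/* 0 if all devices of @group share one MSI domain, -EINVAL otherwise. */
static int check_group_msi_domain(struct iommu_group *group)
{
        struct irq_domain *ref = NULL;

        return iommu_group_for_each_dev(group, &ref, msi_domain_match);
}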

Thanks

Eric
> 
> Best regards,
> Vincent.
> 
> [1] https://static.docs.arm.com/100336/0102/corelink_gic600_generic_interrupt_controller_technical_reference_manual_100336_0102_00_en.pdf
> [2] http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf
> [3] https://static.docs.arm.com/den0029/50/Q1-DEN0029B_SBSA_5.0.pdf
> [4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/pci/pci-msi.txt
> 
