From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alex Williamson
Date: Thu, 01 Sep 2011 13:50:37 -0600
Message-ID: <20110901195037.2391.62303.stgit@s20.home>
In-Reply-To: <20110901194915.2391.97400.stgit@s20.home>
References: <20110901194915.2391.97400.stgit@s20.home>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Subject: [Qemu-devel] [RFC PATCH 2/5] intel-iommu: Implement iommu_device_group
To: chrisw@sous-sol.org, aik@au1.ibm.com, pmac@au1.ibm.com, dwg@au1.ibm.com,
	joerg.roedel@amd.com, agraf@suse.de, benve@cisco.com, aafabbri@cisco.com,
	B08248@freescale.com, B07421@freescale.com, avi@redhat.com,
	kvm@vger.kernel.org, qemu-devel@nongnu.org,
	iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org
Cc: alex.williamson@redhat.com

We generally have BDF (bus/device/function) granularity for devices, so we
just need to make sure devices aren't hidden behind PCIe-to-PCI bridges.
We can then make up a group number that's simply the concatenated
seg|bus|dev|fn, so we don't have to track the numbers we hand out (not
that users should depend on that encoding).

Also add an option to group multi-function (non-SR-IOV) devices together.
It is, disturbingly, not uncommon for functions on the same device to have
dependencies on one another, and systems may wish to enforce that such
functions are grouped together.

Signed-off-by: Alex Williamson
---

 drivers/pci/intel-iommu.c |   52 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 52 insertions(+), 0 deletions(-)

diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
index f02c34d..a4d9a1a 100644
--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -404,6 +404,7 @@ static int dmar_map_gfx = 1;
 static int dmar_forcedac;
 static int intel_iommu_strict;
 static int intel_iommu_superpage = 1;
+static int intel_iommu_no_mf_groups;
 
 #define DUMMY_DEVICE_DOMAIN_INFO ((struct device_domain_info *)(-1))
 static DEFINE_SPINLOCK(device_domain_lock);
@@ -438,6 +439,10 @@ static int __init intel_iommu_setup(char *str)
 			printk(KERN_INFO
 				"Intel-IOMMU: disable supported super page\n");
 			intel_iommu_superpage = 0;
+		} else if (!strncmp(str, "no_mf_groups", 12)) {
+			printk(KERN_INFO
+				"Intel-IOMMU: disable separate groups for multifunction devices\n");
+			intel_iommu_no_mf_groups = 1;
 		}
 
 		str += strcspn(str, ",");
@@ -3902,6 +3907,52 @@ static int intel_iommu_domain_has_cap(struct iommu_domain *domain,
 	return 0;
 }
 
+/* Group numbers are arbitrary.  Devices with the same group number
+ * indicate the iommu cannot differentiate between them.  To avoid
+ * tracking used groups we just use the seg|bus|devfn of the lowest
+ * level we're able to differentiate devices */
+static int intel_iommu_device_group(struct device *dev, unsigned int *groupid)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct pci_dev *bridge;
+	union {
+		struct {
+			u8 devfn;
+			u8 bus;
+			u16 segment;
+		} pci;
+		u32 group;
+	} id;
+
+	if (iommu_no_mapping(dev))
+		return -ENODEV;
+
+	id.pci.segment = pci_domain_nr(pdev->bus);
+	id.pci.bus = pdev->bus->number;
+	id.pci.devfn = pdev->devfn;
+
+	if (!device_to_iommu(id.pci.segment, id.pci.bus, id.pci.devfn))
+		return -ENODEV;
+
+	bridge = pci_find_upstream_pcie_bridge(pdev);
+	if (bridge) {
+		if (pci_is_pcie(bridge)) {
+			id.pci.bus = bridge->subordinate->number;
+			id.pci.devfn = 0;
+		} else {
+			id.pci.bus = bridge->bus->number;
+			id.pci.devfn = bridge->devfn;
+		}
+	}
+
+	/* Virtual functions always get their own group */
+	if (!pdev->is_virtfn && intel_iommu_no_mf_groups)
+		id.pci.devfn = PCI_DEVFN(PCI_SLOT(id.pci.devfn), 0);
+
+	*groupid = id.group;
+	return 0;
+}
+
 static struct iommu_ops intel_iommu_ops = {
 	.domain_init	= intel_iommu_domain_init,
 	.domain_destroy = intel_iommu_domain_destroy,
@@ -3911,6 +3962,7 @@ static struct iommu_ops intel_iommu_ops = {
 	.unmap		= intel_iommu_unmap,
 	.iova_to_phys	= intel_iommu_iova_to_phys,
 	.domain_has_cap = intel_iommu_domain_has_cap,
+	.device_group	= intel_iommu_device_group,
 };
 
 static void __devinit quirk_iommu_rwbf(struct pci_dev *dev)
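
For illustration only, not part of the patch: the group number is just the
u32 view of the union above, so on a little-endian layout it packs devfn,
bus and segment from low byte to high.  The pack_group() helper below is a
made-up name used purely to demonstrate that packing in a standalone
userspace sketch:

/* Sketch of the group id composition, assuming the little-endian layout
 * of the union in intel_iommu_device_group(). */
#include <stdint.h>
#include <stdio.h>

static uint32_t pack_group(uint16_t segment, uint8_t bus, uint8_t devfn)
{
	/* devfn in the low byte, bus next, 16-bit segment on top */
	return ((uint32_t)segment << 16) | ((uint32_t)bus << 8) | devfn;
}

int main(void)
{
	/* 0000:03:00.1 -> devfn = (slot << 3) | func = 1 -> group 0x00000301 */
	printf("group = 0x%08x\n", pack_group(0x0000, 0x03, (0x00 << 3) | 1));
	return 0;
}

Since intel_iommu= options are comma separated (see the strcspn() loop
above), something like intel_iommu=on,no_mf_groups on the kernel command
line should enable the new behaviour, folding the functions of a
multi-function device into the group of function 0 (virtual functions
excepted).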