From: Jon Derrick <jonathan.derrick@intel.com>
To: <linux-pci@vger.kernel.org>,
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Bjorn Helgaas <helgaas@kernel.org>,
Christoph Hellwig <hch@lst.de>,
Andrzej Jakowski <andrzej.jakowski@linux.intel.com>,
Sushma Kalakota <sushmax.kalakota@intel.com>,
<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
Jon Derrick <jonathan.derrick@intel.com>,
Andy Shevchenko <andriy.shevchenko@intel.com>
Subject: [PATCH 3/6] PCI: vmd: Create IRQ Domain configuration helper
Date: Tue, 28 Jul 2020 13:49:42 -0600
Message-ID: <20200728194945.14126-4-jonathan.derrick@intel.com>
In-Reply-To: <20200728194945.14126-1-jonathan.derrick@intel.com>
Move the IRQ and MSI domain configuration code into new
vmd_create_irq_domain() and vmd_remove_irq_domain() helpers. No
functional changes.
Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com>
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
drivers/pci/controller/vmd.c | 52 ++++++++++++++++++++++++------------
1 file changed, 35 insertions(+), 17 deletions(-)
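
The refactoring pattern in this patch -- a create helper that unwinds its own
partial setup, paired with a remove helper shared by the error path and the
device-remove path -- is a common kernel idiom. Below is a minimal, generic
sketch of the same pairing in plain user-space C; the names and structures are
hypothetical stand-ins for the fwnode/irq_domain pair, not actual VMD or
kernel code.

#include <stdio.h>
#include <stdlib.h>

struct resource_a { int id; };
struct resource_b { struct resource_a *backing; };

struct device_ctx {
	struct resource_a *a;
	struct resource_b *b;
};

/* Create helper: acquire both resources, unwinding A if B fails. */
static int ctx_create_resources(struct device_ctx *ctx)
{
	ctx->a = malloc(sizeof(*ctx->a));
	if (!ctx->a)
		return -1;

	ctx->b = malloc(sizeof(*ctx->b));
	if (!ctx->b) {
		free(ctx->a);		/* unwind the partial setup */
		ctx->a = NULL;
		return -1;
	}
	ctx->b->backing = ctx->a;
	return 0;
}

/* Remove helper: one teardown path shared by error and remove paths. */
static void ctx_remove_resources(struct device_ctx *ctx)
{
	if (ctx->b) {
		free(ctx->b);
		free(ctx->a);
		ctx->b = NULL;
		ctx->a = NULL;
	}
}

int main(void)
{
	struct device_ctx ctx = { 0 };
	int ret = ctx_create_resources(&ctx);

	if (ret)
		fprintf(stderr, "setup failed\n");
	/* else: the "enabled" lifetime of the device would go here */

	/* Safe to call even if setup failed: the helper checks ctx.b itself. */
	ctx_remove_resources(&ctx);
	return ret ? 1 : 0;
}

In the diff that follows, vmd_create_irq_domain() plays the role of the create
helper (freeing the fwnode itself if pci_msi_create_irq_domain() fails), and
vmd_remove_irq_domain() is the single teardown called from both
vmd_enable_domain()'s error path and vmd_remove().
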
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index a462719af12a..703c48171993 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -298,6 +298,34 @@ static struct msi_domain_info vmd_msi_domain_info = {
.chip = &vmd_msi_controller,
};
+static int vmd_create_irq_domain(struct vmd_dev *vmd)
+{
+ struct fwnode_handle *fn;
+
+ fn = irq_domain_alloc_named_id_fwnode("VMD-MSI", vmd->sysdata.domain);
+ if (!fn)
+ return -ENODEV;
+
+ vmd->irq_domain = pci_msi_create_irq_domain(fn, &vmd_msi_domain_info,
+ x86_vector_domain);
+ if (!vmd->irq_domain) {
+ irq_domain_free_fwnode(fn);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static void vmd_remove_irq_domain(struct vmd_dev *vmd)
+{
+ if (vmd->irq_domain) {
+ struct fwnode_handle *fn = vmd->irq_domain->fwnode;
+
+ irq_domain_remove(vmd->irq_domain);
+ irq_domain_free_fwnode(fn);
+ }
+}
+
static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
unsigned int devfn, int reg, int len)
{
@@ -503,7 +531,6 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd)
static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
{
struct pci_sysdata *sd = &vmd->sysdata;
- struct fwnode_handle *fn;
struct resource *res;
u32 upper_bits;
unsigned long flags;
@@ -598,16 +625,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
sd->node = pcibus_to_node(vmd->dev->bus);
- fn = irq_domain_alloc_named_id_fwnode("VMD-MSI", vmd->sysdata.domain);
- if (!fn)
- return -ENODEV;
-
- vmd->irq_domain = pci_msi_create_irq_domain(fn, &vmd_msi_domain_info,
- x86_vector_domain);
- if (!vmd->irq_domain) {
- irq_domain_free_fwnode(fn);
- return -ENODEV;
- }
+ ret = vmd_create_irq_domain(vmd);
+ if (ret)
+ return ret;
pci_add_resource(&resources, &vmd->resources[0]);
pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
@@ -617,13 +637,13 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
&vmd_ops, sd, &resources);
if (!vmd->bus) {
pci_free_resource_list(&resources);
- irq_domain_remove(vmd->irq_domain);
- irq_domain_free_fwnode(fn);
+ vmd_remove_irq_domain(vmd);
return -ENODEV;
}
vmd_attach_resources(vmd);
- dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
+ if (vmd->irq_domain)
+ dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
pci_scan_child_bus(vmd->bus);
pci_assign_unassigned_bus_resources(vmd->bus);
@@ -732,15 +752,13 @@ static void vmd_cleanup_srcu(struct vmd_dev *vmd)
static void vmd_remove(struct pci_dev *dev)
{
struct vmd_dev *vmd = pci_get_drvdata(dev);
- struct fwnode_handle *fn = vmd->irq_domain->fwnode;
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
pci_stop_root_bus(vmd->bus);
pci_remove_root_bus(vmd->bus);
vmd_cleanup_srcu(vmd);
vmd_detach_resources(vmd);
- irq_domain_remove(vmd->irq_domain);
- irq_domain_free_fwnode(fn);
+ vmd_remove_irq_domain(vmd);
}
#ifdef CONFIG_PM_SLEEP
--
2.27.0