linux-pci.vger.kernel.org archive mirror
* [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64
@ 2021-07-14 10:27 Boqun Feng
  2021-07-14 10:27 ` [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge Boqun Feng
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Boqun Feng @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Boqun Feng,
	linux-arm-kernel, linux-kernel, linux-hyperv, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

Hi Bjorn, Arnd and Marc,

This is v4 of the preparation for virtual PCI support on Hyper-V ARM64.
Previous versions:

v1:	https://lore.kernel.org/lkml/20210319161956.2838291-1-boqun.feng@gmail.com/
v2:	https://lore.kernel.org/lkml/20210503144635.2297386-1-boqun.feng@gmail.com/
v3:	https://lore.kernel.org/lkml/20210609163211.3467449-1-boqun.feng@gmail.com/

Changes since last version:

*	Rebase to 5.14-rc1.

*	Change the ordering of patches: the first three patches are now
	the non-Hyper-V-specific ones (i.e. patches touching generic
	code), which makes the series more convenient to review.

The basic problem we need to resolve is that ARM64 is an arch with
PCI_DOMAINS_GENERIC=y, so the bus sysdata is a pci_config_window.
However, Hyper-V PCI provides a paravirtualized PCI interface, so there
is no actual pci_config_window for a PCI host bridge, and therefore no
information can be retrieved from the pci_config_window of a Hyper-V
virtual PCI bus. Also, there is no corresponding ACPI device for the
Hyper-V PCI root bridge, which introduces a special case when trying to
find the ACPI device from the sysdata (see patch #3).

With this patchset, plus other code still under development, we can
enable virtual PCI for ARM64 guests on Hyper-V.
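
To give an idea of the resulting flow, a host bridge driver without a
config window would do roughly the following at probe time (an
illustrative sketch only, not code from the series; error handling is
trimmed, and "dev", "dom", "msi_irq_domain", "sysdata" and "my_pci_ops"
are placeholders):

	bridge = devm_pci_alloc_host_bridge(dev, 0);
	if (!bridge)
		return -ENOMEM;

	bridge->domain_nr = dom;				/* patch #1 */
	dev_set_msi_domain(&bridge->dev, msi_irq_domain);	/* patch #2 */
	bridge->sysdata = sysdata;
	bridge->ops = &my_pci_ops;

	return pci_scan_root_bus_bridge(bridge);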

Comments and suggestions are welcome.

Regards,
Boqun

Arnd Bergmann (1):
  PCI: hv: Generify PCI probing

Boqun Feng (6):
  PCI: Introduce domain_nr in pci_host_bridge
  PCI: Allow msi domain set-up at host probing time
  arm64: PCI: Support root bridge preparation for Hyper-V PCI
  PCI: hv: Use pci_host_bridge::domain_nr for PCI domain
  PCI: hv: Set up msi domain at bridge probing time
  PCI: hv: Turn on the host bridge probing on ARM64

 arch/arm64/kernel/pci.c             |  8 ++-
 drivers/pci/controller/pci-hyperv.c | 86 +++++++++++++++++------------
 drivers/pci/probe.c                 | 11 +++-
 include/linux/pci.h                 | 10 ++++
 4 files changed, 76 insertions(+), 39 deletions(-)

-- 
2.30.2



* [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge
  2021-07-14 10:27 [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64 Boqun Feng
@ 2021-07-14 10:27 ` Boqun Feng
  2021-07-14 19:33   ` Bjorn Helgaas
  2021-07-14 10:27 ` [RFC v4 2/7] PCI: Allow msi domain set-up at host probing time Boqun Feng
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Boqun Feng @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Boqun Feng,
	linux-arm-kernel, linux-kernel, linux-hyperv, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

Currently we retrieve the PCI domain number of the host bridge from the
bus sysdata (or pci_config_window if PCI_DOMAINS_GENERIC=y). Actually
we have the information at PCI host bridge probing time, and it makes
sense to store it in pci_host_bridge. One motivation for doing so is
the need to support PCI on Hyper-V for ARM64: the host bridge of
Hyper-V doesn't have a pci_config_window, whereas ARM64 is a
PCI_DOMAINS_GENERIC=y arch, so we cannot retrieve the PCI domain number
from a pci_config_window on an ARM64 Hyper-V guest.

As preparation for ARM64 Hyper-V PCI support, we introduce a domain_nr
field in pci_host_bridge and a sentinel value to allow drivers to set
domain numbers properly at probing time. Currently
CONFIG_PCI_DOMAINS_GENERIC=y archs are the only users of this
newly-introduced field.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 drivers/pci/probe.c |  6 +++++-
 include/linux/pci.h | 10 ++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 79177ac37880..60c50d4f156f 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -594,6 +594,7 @@ static void pci_init_host_bridge(struct pci_host_bridge *bridge)
 	bridge->native_pme = 1;
 	bridge->native_ltr = 1;
 	bridge->native_dpc = 1;
+	bridge->domain_nr = PCI_DOMAIN_NR_NOT_SET;
 
 	device_initialize(&bridge->dev);
 }
@@ -898,7 +899,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
 	bus->ops = bridge->ops;
 	bus->number = bus->busn_res.start = bridge->busnr;
 #ifdef CONFIG_PCI_DOMAINS_GENERIC
-	bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
+	if (bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
+		bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
+	else
+		bus->domain_nr = bridge->domain_nr;
 #endif
 
 	b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 540b377ca8f6..952bb7d46576 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -526,6 +526,15 @@ static inline int pci_channel_offline(struct pci_dev *pdev)
 	return (pdev->error_state != pci_channel_io_normal);
 }
 
+/*
+ * PCI Conventional has at most 256 PCI bus segments and PCI Express has at
+ * most 65536 "PCI Segments Groups", therefore -1 is not a valid PCI domain
+ * number, and can be used as a sentinel value indicating ->domain_nr is not
+ * set by the driver (and CONFIG_PCI_DOMAINS_GENERIC=y can set it in generic
+ * code).
+ */
+#define PCI_DOMAIN_NR_NOT_SET (-1)
+
 struct pci_host_bridge {
 	struct device	dev;
 	struct pci_bus	*bus;		/* Root bus */
@@ -533,6 +542,7 @@ struct pci_host_bridge {
 	struct pci_ops	*child_ops;
 	void		*sysdata;
 	int		busnr;
+	int		domain_nr;
 	struct list_head windows;	/* resource_entry */
 	struct list_head dma_ranges;	/* dma ranges resource list */
 	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
-- 
2.30.2



* [RFC v4 2/7] PCI: Allow msi domain set-up at host probing time
  2021-07-14 10:27 [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64 Boqun Feng
  2021-07-14 10:27 ` [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge Boqun Feng
@ 2021-07-14 10:27 ` Boqun Feng
  2021-07-14 19:39   ` Bjorn Helgaas
  2021-07-14 10:27 ` [RFC v4 3/7] arm64: PCI: Support root bridge preparation for Hyper-V PCI Boqun Feng
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Boqun Feng @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Boqun Feng,
	linux-arm-kernel, linux-kernel, linux-hyperv, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

For GENERIC_MSI_IRQ_DOMAIN drivers, we can set up the msi domain via
dev_set_msi_domain() at probing time, which gives drivers a more
generic way to set up the msi domain for the host bridge.

This is the preparation for ARM64 Hyper-V PCI support.

Originally-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
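For illustration, the driver-side counterpart looks roughly like this
(a sketch, not part of this patch; "fwnode", "info" and "parent" stand
for whatever the driver already uses to build its MSI irq_domain):

	struct irq_domain *d;

	d = pci_msi_create_irq_domain(fwnode, &info, parent);
	if (!d)
		return -ENODEV;

	/* picked up as the default by pci_host_bridge_msi_domain() */
	dev_set_msi_domain(&bridge->dev, d);
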
 drivers/pci/probe.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 60c50d4f156f..539b5139e376 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -829,11 +829,14 @@ static struct irq_domain *pci_host_bridge_msi_domain(struct pci_bus *bus)
 {
 	struct irq_domain *d;
 
+	/* Default set by host bridge driver */
+	d = dev_get_msi_domain(bus->bridge);
 	/*
 	 * Any firmware interface that can resolve the msi_domain
 	 * should be called from here.
 	 */
-	d = pci_host_bridge_of_msi_domain(bus);
+	if (!d)
+		d = pci_host_bridge_of_msi_domain(bus);
 	if (!d)
 		d = pci_host_bridge_acpi_msi_domain(bus);
 
-- 
2.30.2



* [RFC v4 3/7] arm64: PCI: Support root bridge preparation for Hyper-V PCI
  2021-07-14 10:27 [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64 Boqun Feng
  2021-07-14 10:27 ` [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge Boqun Feng
  2021-07-14 10:27 ` [RFC v4 2/7] PCI: Allow msi domain set-up at host probing time Boqun Feng
@ 2021-07-14 10:27 ` Boqun Feng
  2021-07-14 19:46   ` Bjorn Helgaas
  2021-07-14 10:27 ` [RFC v4 4/7] PCI: hv: Generify PCI probing Boqun Feng
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Boqun Feng @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Boqun Feng,
	linux-arm-kernel, linux-kernel, linux-hyperv, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

Currently at root bridge preparation time, the corresponding ACPI
device is set as the companion. However, a Hyper-V virtual PCI root
bridge has no corresponding ACPI device, because it is discovered via
VMBus rather than from the ACPI tables. In order to support this, we
need to make pcibios_root_bridge_prepare() work with cfg->parent being
NULL.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 arch/arm64/kernel/pci.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
index 1006ed2d7c60..3b81ac42bc1f 100644
--- a/arch/arm64/kernel/pci.c
+++ b/arch/arm64/kernel/pci.c
@@ -84,7 +84,13 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
 {
 	if (!acpi_disabled) {
 		struct pci_config_window *cfg = bridge->bus->sysdata;
-		struct acpi_device *adev = to_acpi_device(cfg->parent);
+		/*
+		 * On Hyper-V there is no corresponding APCI device for a root
+		 * bridge, therefore ->parent is set as NULL by the driver. And
+		 * set 'adev` as NULL in this case because there is no proper
+		 * ACPI device.
+		 */
+		struct acpi_device *adev = cfg->parent ? to_acpi_device(cfg->parent) : NULL;
 		struct device *bus_dev = &bridge->bus->dev;
 
 		ACPI_COMPANION_SET(&bridge->dev, adev);
-- 
2.30.2



* [RFC v4 4/7] PCI: hv: Generify PCI probing
  2021-07-14 10:27 [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64 Boqun Feng
                   ` (2 preceding siblings ...)
  2021-07-14 10:27 ` [RFC v4 3/7] arm64: PCI: Support root bridge preparation for Hyper-V PCI Boqun Feng
@ 2021-07-14 10:27 ` Boqun Feng
  2021-07-14 19:49   ` Bjorn Helgaas
  2021-07-14 10:27 ` [RFC v4 5/7] PCI: hv: Use pci_host_bridge::domain_nr for PCI domain Boqun Feng
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Boqun Feng @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Boqun Feng,
	linux-arm-kernel, linux-kernel, linux-hyperv, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

From: Arnd Bergmann <arnd@arndb.de>

In order to support ARM64 Hyper-V PCI, we need to set up the bridge at
probing time, because ARM64 is a PCI_DOMAINS_GENERIC=y arch and we
don't have a pci_config_window (the ARM64 sysdata) for a PCI root bus
on Hyper-V, so it's impossible to retrieve the information (e.g. PCI
domains, irq domains) from bus sysdata on ARM64 after creation.

Originally in create_root_hv_pci_bus(), pci_create_root_bus() was used
to create the root bus and the corresponding bridge based on the x86
sysdata. Now we create a bridge first and then call
pci_scan_root_bus_bridge(), which allows us to do the necessary set-up
for the bridge.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
Please note that the whole commit message is from me (Boqun), while
most of the code is from Arnd; I believe the initial purpose of the
changes was clean-up and code unification.

I added the Hyper-V-related rationale to the commit message because I
use this change to support Hyper-V virtual PCI on ARM64. If the SoB
tags should be adjusted, please let me know.

 drivers/pci/controller/pci-hyperv.c | 57 +++++++++++++++--------------
 1 file changed, 30 insertions(+), 27 deletions(-)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index a53bd8728d0d..8d42da5dd1d4 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -449,6 +449,7 @@ enum hv_pcibus_state {
 
 struct hv_pcibus_device {
 	struct pci_sysdata sysdata;
+	struct pci_host_bridge *bridge;
 	/* Protocol version negotiated with the host */
 	enum pci_protocol_version_t protocol_version;
 	enum hv_pcibus_state state;
@@ -464,8 +465,6 @@ struct hv_pcibus_device {
 	spinlock_t device_list_lock;	/* Protect lists below */
 	void __iomem *cfg_addr;
 
-	struct list_head resources_for_children;
-
 	struct list_head children;
 	struct list_head dr_list;
 
@@ -1797,7 +1796,7 @@ static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)
 
 		slot_nr = PCI_SLOT(wslot_to_devfn(hpdev->desc.win_slot.slot));
 		snprintf(name, SLOT_NAME_SIZE, "%u", hpdev->desc.ser);
-		hpdev->pci_slot = pci_create_slot(hbus->pci_bus, slot_nr,
+		hpdev->pci_slot = pci_create_slot(hbus->bridge->bus, slot_nr,
 					  name, NULL);
 		if (IS_ERR(hpdev->pci_slot)) {
 			pr_warn("pci_create slot %s failed\n", name);
@@ -1827,7 +1826,7 @@ static void hv_pci_remove_slots(struct hv_pcibus_device *hbus)
 static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
 {
 	struct pci_dev *dev;
-	struct pci_bus *bus = hbus->pci_bus;
+	struct pci_bus *bus = hbus->bridge->bus;
 	struct hv_pci_dev *hv_dev;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
@@ -1850,21 +1849,22 @@ static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
  */
 static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus)
 {
-	/* Register the device */
-	hbus->pci_bus = pci_create_root_bus(&hbus->hdev->device,
-					    0, /* bus number is always zero */
-					    &hv_pcifront_ops,
-					    &hbus->sysdata,
-					    &hbus->resources_for_children);
-	if (!hbus->pci_bus)
-		return -ENODEV;
+	int error;
+	struct pci_host_bridge *bridge = hbus->bridge;
+
+	bridge->dev.parent = &hbus->hdev->device;
+	bridge->sysdata = &hbus->sysdata;
+	bridge->ops = &hv_pcifront_ops;
+
+	error = pci_scan_root_bus_bridge(bridge);
+	if (error)
+		return error;
 
 	pci_lock_rescan_remove();
-	pci_scan_child_bus(hbus->pci_bus);
 	hv_pci_assign_numa_node(hbus);
-	pci_bus_assign_resources(hbus->pci_bus);
+	pci_bus_assign_resources(bridge->bus);
 	hv_pci_assign_slots(hbus);
-	pci_bus_add_devices(hbus->pci_bus);
+	pci_bus_add_devices(bridge->bus);
 	pci_unlock_rescan_remove();
 	hbus->state = hv_pcibus_installed;
 	return 0;
@@ -2127,7 +2127,7 @@ static void pci_devices_present_work(struct work_struct *work)
 		 * because there may have been changes.
 		 */
 		pci_lock_rescan_remove();
-		pci_scan_child_bus(hbus->pci_bus);
+		pci_scan_child_bus(hbus->bridge->bus);
 		hv_pci_assign_numa_node(hbus);
 		hv_pci_assign_slots(hbus);
 		pci_unlock_rescan_remove();
@@ -2295,8 +2295,8 @@ static void hv_eject_device_work(struct work_struct *work)
 	/*
 	 * Ejection can come before or after the PCI bus has been set up, so
 	 * attempt to find it and tear down the bus state, if it exists.  This
-	 * must be done without constructs like pci_domain_nr(hbus->pci_bus)
-	 * because hbus->pci_bus may not exist yet.
+	 * must be done without constructs like pci_domain_nr(hbus->bridge->bus)
+	 * because hbus->bridge->bus may not exist yet.
 	 */
 	wslot = wslot_to_devfn(hpdev->desc.win_slot.slot);
 	pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot);
@@ -2662,8 +2662,7 @@ static int hv_pci_allocate_bridge_windows(struct hv_pcibus_device *hbus)
 		/* Modify this resource to become a bridge window. */
 		hbus->low_mmio_res->flags |= IORESOURCE_WINDOW;
 		hbus->low_mmio_res->flags &= ~IORESOURCE_BUSY;
-		pci_add_resource(&hbus->resources_for_children,
-				 hbus->low_mmio_res);
+		pci_add_resource(&hbus->bridge->windows, hbus->low_mmio_res);
 	}
 
 	if (hbus->high_mmio_space) {
@@ -2682,8 +2681,7 @@ static int hv_pci_allocate_bridge_windows(struct hv_pcibus_device *hbus)
 		/* Modify this resource to become a bridge window. */
 		hbus->high_mmio_res->flags |= IORESOURCE_WINDOW;
 		hbus->high_mmio_res->flags &= ~IORESOURCE_BUSY;
-		pci_add_resource(&hbus->resources_for_children,
-				 hbus->high_mmio_res);
+		pci_add_resource(&hbus->bridge->windows, hbus->high_mmio_res);
 	}
 
 	return 0;
@@ -3002,6 +3000,7 @@ static void hv_put_dom_num(u16 dom)
 static int hv_pci_probe(struct hv_device *hdev,
 			const struct hv_vmbus_device_id *dev_id)
 {
+	struct pci_host_bridge *bridge;
 	struct hv_pcibus_device *hbus;
 	u16 dom_req, dom;
 	char *name;
@@ -3014,6 +3013,10 @@ static int hv_pci_probe(struct hv_device *hdev,
 	 */
 	BUILD_BUG_ON(sizeof(*hbus) > HV_HYP_PAGE_SIZE);
 
+	bridge = devm_pci_alloc_host_bridge(&hdev->device, 0);
+	if (!bridge)
+		return -ENOMEM;
+
 	/*
 	 * With the recent 59bb47985c1d ("mm, sl[aou]b: guarantee natural
 	 * alignment for kmalloc(power-of-two)"), kzalloc() is able to allocate
@@ -3035,6 +3038,8 @@ static int hv_pci_probe(struct hv_device *hdev,
 	hbus = kzalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL);
 	if (!hbus)
 		return -ENOMEM;
+
+	hbus->bridge = bridge;
 	hbus->state = hv_pcibus_init;
 	hbus->wslot_res_allocated = -1;
 
@@ -3071,7 +3076,6 @@ static int hv_pci_probe(struct hv_device *hdev,
 	hbus->hdev = hdev;
 	INIT_LIST_HEAD(&hbus->children);
 	INIT_LIST_HEAD(&hbus->dr_list);
-	INIT_LIST_HEAD(&hbus->resources_for_children);
 	spin_lock_init(&hbus->config_lock);
 	spin_lock_init(&hbus->device_list_lock);
 	spin_lock_init(&hbus->retarget_msi_interrupt_lock);
@@ -3295,9 +3299,9 @@ static int hv_pci_remove(struct hv_device *hdev)
 
 		/* Remove the bus from PCI's point of view. */
 		pci_lock_rescan_remove();
-		pci_stop_root_bus(hbus->pci_bus);
+		pci_stop_root_bus(hbus->bridge->bus);
 		hv_pci_remove_slots(hbus);
-		pci_remove_root_bus(hbus->pci_bus);
+		pci_remove_root_bus(hbus->bridge->bus);
 		pci_unlock_rescan_remove();
 	}
 
@@ -3307,7 +3311,6 @@ static int hv_pci_remove(struct hv_device *hdev)
 
 	iounmap(hbus->cfg_addr);
 	hv_free_config_window(hbus);
-	pci_free_resource_list(&hbus->resources_for_children);
 	hv_pci_free_bridge_windows(hbus);
 	irq_domain_remove(hbus->irq_domain);
 	irq_domain_free_fwnode(hbus->sysdata.fwnode);
@@ -3390,7 +3393,7 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
  */
 static void hv_pci_restore_msi_state(struct hv_pcibus_device *hbus)
 {
-	pci_walk_bus(hbus->pci_bus, hv_pci_restore_msi_msg, NULL);
+	pci_walk_bus(hbus->bridge->bus, hv_pci_restore_msi_msg, NULL);
 }
 
 static int hv_pci_resume(struct hv_device *hdev)
-- 
2.30.2



* [RFC v4 5/7] PCI: hv: Use pci_host_bridge::domain_nr for PCI domain
  2021-07-14 10:27 [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64 Boqun Feng
                   ` (3 preceding siblings ...)
  2021-07-14 10:27 ` [RFC v4 4/7] PCI: hv: Generify PCI probing Boqun Feng
@ 2021-07-14 10:27 ` Boqun Feng
  2021-07-14 17:04   ` [EXTERNAL] " Sunil Muthuswamy
  2021-07-14 10:27 ` [RFC v4 6/7] PCI: hv: Set up msi domain at bridge probing time Boqun Feng
  2021-07-14 10:27 ` [RFC v4 7/7] PCI: hv: Turn on the host bridge probing on ARM64 Boqun Feng
  6 siblings, 1 reply; 17+ messages in thread
From: Boqun Feng @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Boqun Feng,
	linux-arm-kernel, linux-kernel, linux-hyperv, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

No functional change, just store and maintain the PCI domain number in
the generic pci_host_bridge instead of the x86-specific pci_sysdata.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 drivers/pci/controller/pci-hyperv.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index 8d42da5dd1d4..5741b1dd3c14 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -2299,7 +2299,7 @@ static void hv_eject_device_work(struct work_struct *work)
 	 * because hbus->bridge->bus may not exist yet.
 	 */
 	wslot = wslot_to_devfn(hpdev->desc.win_slot.slot);
-	pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot);
+	pdev = pci_get_domain_bus_and_slot(hbus->bridge->domain_nr, 0, wslot);
 	if (pdev) {
 		pci_lock_rescan_remove();
 		pci_stop_and_remove_bus_device(pdev);
@@ -3071,6 +3071,7 @@ static int hv_pci_probe(struct hv_device *hdev,
 			 "PCI dom# 0x%hx has collision, using 0x%hx",
 			 dom_req, dom);
 
+	hbus->bridge->domain_nr = dom;
 	hbus->sysdata.domain = dom;
 
 	hbus->hdev = hdev;
@@ -3080,7 +3081,7 @@ static int hv_pci_probe(struct hv_device *hdev,
 	spin_lock_init(&hbus->device_list_lock);
 	spin_lock_init(&hbus->retarget_msi_interrupt_lock);
 	hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0,
-					   hbus->sysdata.domain);
+					   hbus->bridge->domain_nr);
 	if (!hbus->wq) {
 		ret = -ENOMEM;
 		goto free_dom;
@@ -3207,7 +3208,7 @@ static int hv_pci_probe(struct hv_device *hdev,
 destroy_wq:
 	destroy_workqueue(hbus->wq);
 free_dom:
-	hv_put_dom_num(hbus->sysdata.domain);
+	hv_put_dom_num(hbus->bridge->domain_nr);
 free_bus:
 	kfree(hbus);
 	return ret;
@@ -3315,7 +3316,7 @@ static int hv_pci_remove(struct hv_device *hdev)
 	irq_domain_remove(hbus->irq_domain);
 	irq_domain_free_fwnode(hbus->sysdata.fwnode);
 
-	hv_put_dom_num(hbus->sysdata.domain);
+	hv_put_dom_num(hbus->bridge->domain_nr);
 
 	kfree(hbus);
 	return ret;
-- 
2.30.2



* [RFC v4 6/7] PCI: hv: Set up msi domain at bridge probing time
  2021-07-14 10:27 [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64 Boqun Feng
                   ` (4 preceding siblings ...)
  2021-07-14 10:27 ` [RFC v4 5/7] PCI: hv: Use pci_host_bridge::domain_nr for PCI domain Boqun Feng
@ 2021-07-14 10:27 ` Boqun Feng
  2021-07-14 10:27 ` [RFC v4 7/7] PCI: hv: Turn on the host bridge probing on ARM64 Boqun Feng
  6 siblings, 0 replies; 17+ messages in thread
From: Boqun Feng @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Boqun Feng,
	linux-arm-kernel, linux-kernel, linux-hyperv, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

Since PCI_HYPERV depends on PCI_MSI_IRQ_DOMAIN, which selects
GENERIC_MSI_IRQ_DOMAIN, we can use dev_set_msi_domain() to set up the
msi irq domain at probing time, and this works for both x86 and ARM64.

Therefore use it as preparation for ARM64 Hyper-V PCI support.

As a result, we no longer need to maintain ->fwnode in the x86-specific
pci_sysdata; make hv_pcibus_device own it instead.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 drivers/pci/controller/pci-hyperv.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index 5741b1dd3c14..e6276aaa4659 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -450,6 +450,7 @@ enum hv_pcibus_state {
 struct hv_pcibus_device {
 	struct pci_sysdata sysdata;
 	struct pci_host_bridge *bridge;
+	struct fwnode_handle *fwnode;
 	/* Protocol version negotiated with the host */
 	enum pci_protocol_version_t protocol_version;
 	enum hv_pcibus_state state;
@@ -1565,7 +1566,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
 	hbus->msi_info.handler = handle_edge_irq;
 	hbus->msi_info.handler_name = "edge";
 	hbus->msi_info.data = hbus;
-	hbus->irq_domain = pci_msi_create_irq_domain(hbus->sysdata.fwnode,
+	hbus->irq_domain = pci_msi_create_irq_domain(hbus->fwnode,
 						     &hbus->msi_info,
 						     x86_vector_domain);
 	if (!hbus->irq_domain) {
@@ -1574,6 +1575,8 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
 		return -ENODEV;
 	}
 
+	dev_set_msi_domain(&hbus->bridge->dev, hbus->irq_domain);
+
 	return 0;
 }
 
@@ -3118,9 +3121,9 @@ static int hv_pci_probe(struct hv_device *hdev,
 		goto unmap;
 	}
 
-	hbus->sysdata.fwnode = irq_domain_alloc_named_fwnode(name);
+	hbus->fwnode = irq_domain_alloc_named_fwnode(name);
 	kfree(name);
-	if (!hbus->sysdata.fwnode) {
+	if (!hbus->fwnode) {
 		ret = -ENOMEM;
 		goto unmap;
 	}
@@ -3198,7 +3201,7 @@ static int hv_pci_probe(struct hv_device *hdev,
 free_irq_domain:
 	irq_domain_remove(hbus->irq_domain);
 free_fwnode:
-	irq_domain_free_fwnode(hbus->sysdata.fwnode);
+	irq_domain_free_fwnode(hbus->fwnode);
 unmap:
 	iounmap(hbus->cfg_addr);
 free_config:
@@ -3314,7 +3317,7 @@ static int hv_pci_remove(struct hv_device *hdev)
 	hv_free_config_window(hbus);
 	hv_pci_free_bridge_windows(hbus);
 	irq_domain_remove(hbus->irq_domain);
-	irq_domain_free_fwnode(hbus->sysdata.fwnode);
+	irq_domain_free_fwnode(hbus->fwnode);
 
 	hv_put_dom_num(hbus->bridge->domain_nr);
 
-- 
2.30.2



* [RFC v4 7/7] PCI: hv: Turn on the host bridge probing on ARM64
  2021-07-14 10:27 [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64 Boqun Feng
                   ` (5 preceding siblings ...)
  2021-07-14 10:27 ` [RFC v4 6/7] PCI: hv: Set up msi domain at bridge probing time Boqun Feng
@ 2021-07-14 10:27 ` Boqun Feng
  6 siblings, 0 replies; 17+ messages in thread
From: Boqun Feng @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Boqun Feng,
	linux-arm-kernel, linux-kernel, linux-hyperv, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

Now that we have everything we need, just provide a proper sysdata type
for the bus to use on ARM64, and everything else works.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 drivers/pci/controller/pci-hyperv.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index e6276aaa4659..62dbe98d1fe1 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -40,6 +40,7 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/pci.h>
+#include <linux/pci-ecam.h>
 #include <linux/delay.h>
 #include <linux/semaphore.h>
 #include <linux/irqdomain.h>
@@ -448,7 +449,11 @@ enum hv_pcibus_state {
 };
 
 struct hv_pcibus_device {
+#ifdef CONFIG_X86
 	struct pci_sysdata sysdata;
+#elif defined(CONFIG_ARM64)
+	struct pci_config_window sysdata;
+#endif
 	struct pci_host_bridge *bridge;
 	struct fwnode_handle *fwnode;
 	/* Protocol version negotiated with the host */
@@ -3075,7 +3080,9 @@ static int hv_pci_probe(struct hv_device *hdev,
 			 dom_req, dom);
 
 	hbus->bridge->domain_nr = dom;
+#ifdef CONFIG_X86
 	hbus->sysdata.domain = dom;
+#endif
 
 	hbus->hdev = hdev;
 	INIT_LIST_HEAD(&hbus->children);
-- 
2.30.2



* RE: [EXTERNAL] [RFC v4 5/7] PCI: hv: Use pci_host_bridge::domain_nr for PCI domain
  2021-07-14 10:27 ` [RFC v4 5/7] PCI: hv: Use pci_host_bridge::domain_nr for PCI domain Boqun Feng
@ 2021-07-14 17:04   ` Sunil Muthuswamy
  2021-07-15  3:59     ` Boqun Feng
  0 siblings, 1 reply; 17+ messages in thread
From: Sunil Muthuswamy @ 2021-07-14 17:04 UTC (permalink / raw)
  To: Boqun Feng, Bjorn Helgaas, Arnd Bergmann, Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, KY Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, linux-arm-kernel,
	linux-kernel, linux-hyperv, linux-pci, Mike Rapoport

> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index 8d42da5dd1d4..5741b1dd3c14 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -2299,7 +2299,7 @@ static void hv_eject_device_work(struct work_struct *work)
>  	 * because hbus->bridge->bus may not exist yet.
>  	 */
>  	wslot = wslot_to_devfn(hpdev->desc.win_slot.slot);
> -	pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot);
> +	pdev = pci_get_domain_bus_and_slot(hbus->bridge->domain_nr, 0, wslot);
>  	if (pdev) {
>  		pci_lock_rescan_remove();
>  		pci_stop_and_remove_bus_device(pdev);
> @@ -3071,6 +3071,7 @@ static int hv_pci_probe(struct hv_device *hdev,
>  			 "PCI dom# 0x%hx has collision, using 0x%hx",
>  			 dom_req, dom);
> 
> +	hbus->bridge->domain_nr = dom;
>  	hbus->sysdata.domain = dom;
With your other patches, everything is moving over to being based on
bridge->domain_nr. Do we still need to update sysdata.domain?



* Re: [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge
  2021-07-14 10:27 ` [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge Boqun Feng
@ 2021-07-14 19:33   ` Bjorn Helgaas
  2021-07-15 17:30     ` Boqun Feng
  0 siblings, 1 reply; 17+ messages in thread
From: Bjorn Helgaas @ 2021-07-14 19:33 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier, Catalin Marinas,
	Will Deacon, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Wei Liu, Dexuan Cui, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-arm-kernel, linux-kernel,
	linux-hyperv, linux-pci, Sunil Muthuswamy, Mike Rapoport

On Wed, Jul 14, 2021 at 06:27:31PM +0800, Boqun Feng wrote:
> Currently we retrieve the PCI domain number of the host bridge from the
> bus sysdata (or pci_config_window if PCI_DOMAINS_GENERIC=y). Actually
> we have the information at PCI host bridge probing time, and it makes
> sense to store it in pci_host_bridge. One motivation for doing so is
> the need to support PCI on Hyper-V for ARM64: the host bridge of
> Hyper-V doesn't have a pci_config_window, whereas ARM64 is a
> PCI_DOMAINS_GENERIC=y arch, so we cannot retrieve the PCI domain number
> from a pci_config_window on an ARM64 Hyper-V guest.
> 
> As preparation for ARM64 Hyper-V PCI support, we introduce a domain_nr
> field in pci_host_bridge and a sentinel value to allow drivers to set
> domain numbers properly at probing time. Currently
> CONFIG_PCI_DOMAINS_GENERIC=y archs are the only users of this
> newly-introduced field.

Thanks for pushing on this.  PCI_DOMAINS_GENERIC is really not very
generic today and it will be good to make it more so.

> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  drivers/pci/probe.c |  6 +++++-
>  include/linux/pci.h | 10 ++++++++++
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> index 79177ac37880..60c50d4f156f 100644
> --- a/drivers/pci/probe.c
> +++ b/drivers/pci/probe.c
> @@ -594,6 +594,7 @@ static void pci_init_host_bridge(struct pci_host_bridge *bridge)
>  	bridge->native_pme = 1;
>  	bridge->native_ltr = 1;
>  	bridge->native_dpc = 1;
> +	bridge->domain_nr = PCI_DOMAIN_NR_NOT_SET;
>  
>  	device_initialize(&bridge->dev);
>  }
> @@ -898,7 +899,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
>  	bus->ops = bridge->ops;
>  	bus->number = bus->busn_res.start = bridge->busnr;
>  #ifdef CONFIG_PCI_DOMAINS_GENERIC
> -	bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
> +	if (bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
> +		bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
> +	else
> +		bus->domain_nr = bridge->domain_nr;

The domain_nr in struct pci_bus is really only used by
pci_domain_nr().  It seems like it really belongs in the struct
pci_host_bridge and probably doesn't need to be duplicated in the
struct pci_bus.  But that's probably a project for the future.

>  #endif
>  
>  	b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 540b377ca8f6..952bb7d46576 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -526,6 +526,15 @@ static inline int pci_channel_offline(struct pci_dev *pdev)
>  	return (pdev->error_state != pci_channel_io_normal);
>  }
>  
> +/*
> + * PCI Conventional has at most 256 PCI bus segments and PCI Express has at
> + * most 65536 "PCI Segments Groups", therefore -1 is not a valid PCI domain

s/Segments/Segment/

Do you have a reference for these limits?  I don't think either
Conventional PCI or PCIe actually specifies a hardware limit on the
number of domains (I think PCI uses "segment group" to mean the same
thing).

"Segment" in the Conventional PCI spec, r3.0, means a bus segment,
which connects all the devices on a single bus.  Obviously there's a
limit of 256 buses under a single host bridge, but that's different
concept than a domain/segment group.

The PCI Firmware spec, r3.3, defines "Segment Group Number" as being
in the range 0..65535, but as far as I know, that's just a firmware
issue, and it applies equally to Conventional PCI and PCIe.

I think you're right that -1 is a reasonable sentinel; I just don't
want to claim a difference here unless there really is one.

> + * number, and can be used as a sentinel value indicating ->domain_nr is not
> + * set by the driver (and CONFIG_PCI_DOMAINS_GENERIC=y can set it in generic
> + * code).
> + */
> +#define PCI_DOMAIN_NR_NOT_SET (-1)
> +
>  struct pci_host_bridge {
>  	struct device	dev;
>  	struct pci_bus	*bus;		/* Root bus */
> @@ -533,6 +542,7 @@ struct pci_host_bridge {
>  	struct pci_ops	*child_ops;
>  	void		*sysdata;
>  	int		busnr;
> +	int		domain_nr;
>  	struct list_head windows;	/* resource_entry */
>  	struct list_head dma_ranges;	/* dma ranges resource list */
>  	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
> -- 
> 2.30.2
> 


* Re: [RFC v4 2/7] PCI: Allow msi domain set-up at host probing time
  2021-07-14 10:27 ` [RFC v4 2/7] PCI: Allow msi domain set-up at host probing time Boqun Feng
@ 2021-07-14 19:39   ` Bjorn Helgaas
  0 siblings, 0 replies; 17+ messages in thread
From: Bjorn Helgaas @ 2021-07-14 19:39 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier, Catalin Marinas,
	Will Deacon, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Wei Liu, Dexuan Cui, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-arm-kernel, linux-kernel,
	linux-hyperv, linux-pci, Sunil Muthuswamy, Mike Rapoport

Capitalize "MSI" in subject and commit log.

On Wed, Jul 14, 2021 at 06:27:32PM +0800, Boqun Feng wrote:
> For GENERIC_MSI_IRQ_DOMAIN drivers, we can set up the msi domain via
> dev_set_msi_domain() at probing time, which gives drivers a more
> generic way to set up the msi domain for the host bridge.

This doesn't tell me what this patch *does* or why we need it.

> This is the preparation for ARM64 Hyper-V PCI support.
> 
> Originally-by: Arnd Bergmann <arnd@arndb.de>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  drivers/pci/probe.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> index 60c50d4f156f..539b5139e376 100644
> --- a/drivers/pci/probe.c
> +++ b/drivers/pci/probe.c
> @@ -829,11 +829,14 @@ static struct irq_domain *pci_host_bridge_msi_domain(struct pci_bus *bus)
>  {
>  	struct irq_domain *d;
>  
> +	/* Default set by host bridge driver */
> +	d = dev_get_msi_domain(bus->bridge);

Add blank line here between executable code and the following comment.

>  	/*
>  	 * Any firmware interface that can resolve the msi_domain
>  	 * should be called from here.
>  	 */
> -	d = pci_host_bridge_of_msi_domain(bus);
> +	if (!d)
> +		d = pci_host_bridge_of_msi_domain(bus);
>  	if (!d)
>  		d = pci_host_bridge_acpi_msi_domain(bus);
>  
> -- 
> 2.30.2
> 


* Re: [RFC v4 3/7] arm64: PCI: Support root bridge preparation for Hyper-V PCI
  2021-07-14 10:27 ` [RFC v4 3/7] arm64: PCI: Support root bridge preparation for Hyper-V PCI Boqun Feng
@ 2021-07-14 19:46   ` Bjorn Helgaas
  0 siblings, 0 replies; 17+ messages in thread
From: Bjorn Helgaas @ 2021-07-14 19:46 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier, Catalin Marinas,
	Will Deacon, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Wei Liu, Dexuan Cui, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-arm-kernel, linux-kernel,
	linux-hyperv, linux-pci, Sunil Muthuswamy, Mike Rapoport

On Wed, Jul 14, 2021 at 06:27:33PM +0800, Boqun Feng wrote:
> Currently at root bridge preparation time, the corresponding ACPI
> device is set as the companion. However, a Hyper-V virtual PCI root
> bridge has no corresponding ACPI device, because it is discovered via
> VMBus rather than from the ACPI tables. In order to support this, we
> need to make pcibios_root_bridge_prepare() work with cfg->parent being
> NULL.

It would be nice to have a hint about why we don't actually need the
ACPI companion device in this case.

> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  arch/arm64/kernel/pci.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
> index 1006ed2d7c60..3b81ac42bc1f 100644
> --- a/arch/arm64/kernel/pci.c
> +++ b/arch/arm64/kernel/pci.c
> @@ -84,7 +84,13 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
>  {
>  	if (!acpi_disabled) {
>  		struct pci_config_window *cfg = bridge->bus->sysdata;
> -		struct acpi_device *adev = to_acpi_device(cfg->parent);
> +		/*
> +		 * On Hyper-V there is no corresponding APCI device for a root
> +		 * bridge, therefore ->parent is set as NULL by the driver. And
> +		 * set 'adev` as NULL in this case because there is no proper
> +		 * ACPI device.
> +		 */
> +		struct acpi_device *adev = cfg->parent ? to_acpi_device(cfg->parent) : NULL;
>  		struct device *bus_dev = &bridge->bus->dev;
>  
>  		ACPI_COMPANION_SET(&bridge->dev, adev);

s/APCI/ACPI/ above.

I think this would be more readable like this:

  struct pci_config_window *cfg = bridge->bus->sysdata;
  ...

  if (acpi_disabled)
    return 0;

  /*
   * On Hyper-V there is no corresponding ACPI device for a root
   * ...
   */
  cfg = bridge->bus->sysdata;
  if (!cfg->parent)
    return 0;

  adev = to_acpi_device(cfg->parent);
  bus_dev = &bridge->bus->dev;
  ACPI_COMPANION_SET(&bridge->dev, adev);
  ...

This could be done in two steps: the first to restructure the code
without making any functional change, and a second to return when
there's no cfg->parent.  If you do it in one step, the patch will be
much harder to read.

Bjorn


* Re: [RFC v4 4/7] PCI: hv: Generify PCI probing
  2021-07-14 10:27 ` [RFC v4 4/7] PCI: hv: Generify PCI probing Boqun Feng
@ 2021-07-14 19:49   ` Bjorn Helgaas
  0 siblings, 0 replies; 17+ messages in thread
From: Bjorn Helgaas @ 2021-07-14 19:49 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier, Catalin Marinas,
	Will Deacon, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Wei Liu, Dexuan Cui, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-arm-kernel, linux-kernel,
	linux-hyperv, linux-pci, Sunil Muthuswamy, Mike Rapoport

On Wed, Jul 14, 2021 at 06:27:34PM +0800, Boqun Feng wrote:
> From: Arnd Bergmann <arnd@arndb.de>
> 
> In order to support ARM64 Hyper-V PCI, we need to set up the bridge at
> probing time, because ARM64 is a PCI_DOMAINS_GENERIC=y arch and we
> don't have a pci_config_window (the ARM64 sysdata) for a PCI root bus
> on Hyper-V, so it's impossible to retrieve the information (e.g. PCI
> domains, irq domains) from bus sysdata on ARM64 after creation.
> 
> Originally in create_root_hv_pci_bus(), pci_create_root_bus() was used
> to create the root bus and the corresponding bridge based on the x86
> sysdata. Now we create a bridge first and then call
> pci_scan_root_bus_bridge(), which allows us to do the necessary set-up
> for the bridge.

Also here and other patches in this series:

  s/irq/IRQ/
  s/msi/MSI/

in subject lines, commit logs, comments.


* Re: [EXTERNAL] [RFC v4 5/7] PCI: hv: Use pci_host_bridge::domain_nr for PCI domain
  2021-07-14 17:04   ` [EXTERNAL] " Sunil Muthuswamy
@ 2021-07-15  3:59     ` Boqun Feng
  0 siblings, 0 replies; 17+ messages in thread
From: Boqun Feng @ 2021-07-15  3:59 UTC (permalink / raw)
  To: Sunil Muthuswamy
  Cc: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier, Catalin Marinas,
	Will Deacon, KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Wei Liu, Dexuan Cui, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-arm-kernel, linux-kernel,
	linux-hyperv, linux-pci, Mike Rapoport

On Wed, Jul 14, 2021 at 05:04:38PM +0000, Sunil Muthuswamy wrote:
> > diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> > index 8d42da5dd1d4..5741b1dd3c14 100644
> > --- a/drivers/pci/controller/pci-hyperv.c
> > +++ b/drivers/pci/controller/pci-hyperv.c
> > @@ -2299,7 +2299,7 @@ static void hv_eject_device_work(struct work_struct *work)
> >  	 * because hbus->bridge->bus may not exist yet.
> >  	 */
> >  	wslot = wslot_to_devfn(hpdev->desc.win_slot.slot);
> > -	pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot);
> > +	pdev = pci_get_domain_bus_and_slot(hbus->bridge->domain_nr, 0, wslot);
> >  	if (pdev) {
> >  		pci_lock_rescan_remove();
> >  		pci_stop_and_remove_bus_device(pdev);
> > @@ -3071,6 +3071,7 @@ static int hv_pci_probe(struct hv_device *hdev,
> >  			 "PCI dom# 0x%hx has collision, using 0x%hx",
> >  			 dom_req, dom);
> > 
> > +	hbus->bridge->domain_nr = dom;
> >  	hbus->sysdata.domain = dom;
> With your other patches, everything is moving over to being based on
> bridge->domain_nr. Do we still need to update sysdata.domain?

Yes, we still need it, because x86 is not a CONFIG_PCI_DOMAINS_GENERIC=y
architecture, and this patchset only makes CONFIG_PCI_DOMAINS_GENERIC=y
archs work with bridge->domain_nr. x86 still uses the arch-specific
pci_domain_nr(), so we need to set the field in sysdata.
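
For reference, the x86 arch-specific helper boils down to something
like the following (a simplified sketch), which is why sysdata.domain
has to stay in sync:

	static inline int pci_domain_nr(struct pci_bus *bus)
	{
		struct pci_sysdata *sd = bus->sysdata;

		return sd->domain;
	}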

Regards,
Boqun

> 


* Re: [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge
  2021-07-14 19:33   ` Bjorn Helgaas
@ 2021-07-15 17:30     ` Boqun Feng
  2021-07-15 19:07       ` Bjorn Helgaas
  2021-07-15 19:11       ` Arnd Bergmann
  0 siblings, 2 replies; 17+ messages in thread
From: Boqun Feng @ 2021-07-15 17:30 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier, Catalin Marinas,
	Will Deacon, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Wei Liu, Dexuan Cui, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-arm-kernel, linux-kernel,
	linux-hyperv, linux-pci, Sunil Muthuswamy, Mike Rapoport

On Wed, Jul 14, 2021 at 02:33:19PM -0500, Bjorn Helgaas wrote:
> On Wed, Jul 14, 2021 at 06:27:31PM +0800, Boqun Feng wrote:
> > Currently we retrieve the PCI domain number of the host bridge from the
> > bus sysdata (or pci_config_window if PCI_DOMAINS_GENERIC=y). Actually
> > we have the information at PCI host bridge probing time, and it makes
> > sense to store it in pci_host_bridge. One motivation for doing so is
> > the need to support PCI on Hyper-V for ARM64: the host bridge of
> > Hyper-V doesn't have a pci_config_window, whereas ARM64 is a
> > PCI_DOMAINS_GENERIC=y arch, so we cannot retrieve the PCI domain number
> > from a pci_config_window on an ARM64 Hyper-V guest.
> > 
> > As preparation for ARM64 Hyper-V PCI support, we introduce a domain_nr
> > field in pci_host_bridge and a sentinel value to allow drivers to set
> > domain numbers properly at probing time. Currently
> > CONFIG_PCI_DOMAINS_GENERIC=y archs are the only users of this
> > newly-introduced field.
> 
> Thanks for pushing on this.  PCI_DOMAINS_GENERIC is really not very
> generic today and it will be good to make it more so.
> 
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> >  drivers/pci/probe.c |  6 +++++-
> >  include/linux/pci.h | 10 ++++++++++
> >  2 files changed, 15 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> > index 79177ac37880..60c50d4f156f 100644
> > --- a/drivers/pci/probe.c
> > +++ b/drivers/pci/probe.c
> > @@ -594,6 +594,7 @@ static void pci_init_host_bridge(struct pci_host_bridge *bridge)
> >  	bridge->native_pme = 1;
> >  	bridge->native_ltr = 1;
> >  	bridge->native_dpc = 1;
> > +	bridge->domain_nr = PCI_DOMAIN_NR_NOT_SET;
> >  
> >  	device_initialize(&bridge->dev);
> >  }
> > @@ -898,7 +899,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
> >  	bus->ops = bridge->ops;
> >  	bus->number = bus->busn_res.start = bridge->busnr;
> >  #ifdef CONFIG_PCI_DOMAINS_GENERIC
> > -	bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
> > +	if (bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
> > +		bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
> > +	else
> > +		bus->domain_nr = bridge->domain_nr;
> 
> The domain_nr in struct pci_bus is really only used by
> pci_domain_nr().  It seems like it really belongs in the struct
> pci_host_bridge and probably doesn't need to be duplicated in the
> struct pci_bus.  But that's probably a project for the future.
> 

Agreed. Maybe we can define pci_domain_nr() as:

	static inline int pci_domain_nr(struct pci_bus *bus)
	{
		struct device *bridge = bus->bridge;
		struct pci_host_bridge *b = container_of(bridge, struct pci_host_bridge, dev);

		return b->domain_nr;
	}

but apart from correctness (e.g. should we use get_device() for
bus->bridge?), it makes more sense if ->domain_nr of pci_host_bridge
is used (as a way to set the domain number at probing time) for most
drivers and archs. ;-)

> >  #endif
> >  
> >  	b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);
> > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > index 540b377ca8f6..952bb7d46576 100644
> > --- a/include/linux/pci.h
> > +++ b/include/linux/pci.h
> > @@ -526,6 +526,15 @@ static inline int pci_channel_offline(struct pci_dev *pdev)
> >  	return (pdev->error_state != pci_channel_io_normal);
> >  }
> >  
> > +/*
> > + * PCI Conventional has at most 256 PCI bus segments and PCI Express has at
> > + * most 65536 "PCI Segments Groups", therefore -1 is not a valid PCI domain
> 
> s/Segments/Segment/
> 
> Do you have a reference for these limits?  I don't think either
> Conventional PCI or PCIe actually specifies a hardware limit on the
> number of domains (I think PCI uses "segment group" to mean the same
> thing).
> 
> "Segment" in the Conventional PCI spec, r3.0, means a bus segment,
> which connects all the devices on a single bus.  Obviously there's a
> limit of 256 buses under a single host bridge, but that's different
> concept than a domain/segment group.
> 
> The PCI Firmware spec, r3.3, defines "Segment Group Number" as being
> in the range 0..65535, but as far as I know, that's just a firmware
> issue, and it applies equally to Conventional PCI and PCIe.
> 
> I think you're right that -1 is a reasonable sentinel; I just don't
> want to claim a difference here unless there really is one.
> 

I think you're right, I got confused about the concepts of "Segment"
and "Segment Group".

After digging into the specs, I haven't found any difference in the
limits between Conventional PCI and PCIe. The PCI Firmware spec, r3.2,
refers to the ACPI (3.0 and later) spec for the details of "Segment
Group", and in the ACPI spec v6.3, the description of the _SEG object
says:

"""
The lower 16 bits of _SEG returned integer is the PCI Segment Group
number. Other bits are reserved.
"""

So I'm thinking of replacing the comment with:

Currently in the ACPI spec, for each PCI host bridge, the PCI Segment Group
number is limited to a 16-bit value, therefore (int)-1 is not a valid
PCI domain number, and can be used as a sentinel value indicating
->domain_nr is not set by the driver (and CONFIG_PCI_DOMAINS_GENERIC=y
archs will set it with pci_bus_find_domain_nr()).

Thoughts?

Regards,
Boqun

> > + * number, and can be used as a sentinel value indicating ->domain_nr is not
> > + * set by the driver (and CONFIG_PCI_DOMAINS_GENERIC=y can set it in generic
> > + * code).
> > + */
> > +#define PCI_DOMAIN_NR_NOT_SET (-1)
> > +
> >  struct pci_host_bridge {
> >  	struct device	dev;
> >  	struct pci_bus	*bus;		/* Root bus */
> > @@ -533,6 +542,7 @@ struct pci_host_bridge {
> >  	struct pci_ops	*child_ops;
> >  	void		*sysdata;
> >  	int		busnr;
> > +	int		domain_nr;
> >  	struct list_head windows;	/* resource_entry */
> >  	struct list_head dma_ranges;	/* dma ranges resource list */
> >  	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
> > -- 
> > 2.30.2
> > 


* Re: [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge
  2021-07-15 17:30     ` Boqun Feng
@ 2021-07-15 19:07       ` Bjorn Helgaas
  2021-07-15 19:11       ` Arnd Bergmann
  1 sibling, 0 replies; 17+ messages in thread
From: Bjorn Helgaas @ 2021-07-15 19:07 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Bjorn Helgaas, Arnd Bergmann, Marc Zyngier, Catalin Marinas,
	Will Deacon, K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Wei Liu, Dexuan Cui, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-arm-kernel, linux-kernel,
	linux-hyperv, linux-pci, Sunil Muthuswamy, Mike Rapoport

On Fri, Jul 16, 2021 at 01:30:52AM +0800, Boqun Feng wrote:
> On Wed, Jul 14, 2021 at 02:33:19PM -0500, Bjorn Helgaas wrote:
> > On Wed, Jul 14, 2021 at 06:27:31PM +0800, Boqun Feng wrote:
> > > Currently we retrieve the PCI domain number of the host bridge from the
> > > bus sysdata (or pci_config_window if PCI_DOMAINS_GENERIC=y). Actually
> > > we have the information at PCI host bridge probing time, and it makes
> > > sense to store it in pci_host_bridge. One motivation for doing so is
> > > the need to support PCI on Hyper-V for ARM64: the host bridge of
> > > Hyper-V doesn't have a pci_config_window, whereas ARM64 is a
> > > PCI_DOMAINS_GENERIC=y arch, so we cannot retrieve the PCI domain number
> > > from a pci_config_window on an ARM64 Hyper-V guest.
> > > 
> > > As preparation for ARM64 Hyper-V PCI support, we introduce a domain_nr
> > > field in pci_host_bridge and a sentinel value to allow drivers to set
> > > domain numbers properly at probing time. Currently
> > > CONFIG_PCI_DOMAINS_GENERIC=y archs are the only users of this
> > > newly-introduced field.
> > 
> > Thanks for pushing on this.  PCI_DOMAINS_GENERIC is really not very
> > generic today and it will be good to make it more so.
> > 
> > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > ---
> > >  drivers/pci/probe.c |  6 +++++-
> > >  include/linux/pci.h | 10 ++++++++++
> > >  2 files changed, 15 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> > > index 79177ac37880..60c50d4f156f 100644
> > > --- a/drivers/pci/probe.c
> > > +++ b/drivers/pci/probe.c
> > > @@ -594,6 +594,7 @@ static void pci_init_host_bridge(struct pci_host_bridge *bridge)
> > >  	bridge->native_pme = 1;
> > >  	bridge->native_ltr = 1;
> > >  	bridge->native_dpc = 1;
> > > +	bridge->domain_nr = PCI_DOMAIN_NR_NOT_SET;
> > >  
> > >  	device_initialize(&bridge->dev);
> > >  }
> > > @@ -898,7 +899,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
> > >  	bus->ops = bridge->ops;
> > >  	bus->number = bus->busn_res.start = bridge->busnr;
> > >  #ifdef CONFIG_PCI_DOMAINS_GENERIC
> > > -	bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
> > > +	if (bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
> > > +		bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
> > > +	else
> > > +		bus->domain_nr = bridge->domain_nr;
> > 
> > The domain_nr in struct pci_bus is really only used by
> > pci_domain_nr().  It seems like it really belongs in the struct
> > pci_host_bridge and probably doesn't need to be duplicated in the
> > struct pci_bus.  But that's probably a project for the future.
> 
> Agreed. Maybe we can define pci_domain_nr() as:
> 
> 	static inline int pci_domain_nr(struct pci_bus *bus)
> 	{
> 		struct device *bridge = bus->bridge;
> 		struct pci_host_bridge *b = container_of(bridge, struct pci_host_bridge, dev);
> 
> 		return b->domain_nr;
> 	}
> 
> but apart from correctness (e.g. should we use get_device() for
> bus->bridge?), it makes more sense if ->domain_nr of pci_host_bridge
> is used (as a way to set the domain number at probing time) for most
> drivers and archs. ;-)

If we're holding a struct pci_bus *, we must have a reference on the
bus, which in turn holds a reference on upstream devices, so there
should be no need for get_device() here.

But yes, I think something like this is where we should be heading.

> > >  #endif
> > >  
> > >  	b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);
> > > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > > index 540b377ca8f6..952bb7d46576 100644
> > > --- a/include/linux/pci.h
> > > +++ b/include/linux/pci.h
> > > @@ -526,6 +526,15 @@ static inline int pci_channel_offline(struct pci_dev *pdev)
> > >  	return (pdev->error_state != pci_channel_io_normal);
> > >  }
> > >  
> > > +/*
> > > + * PCI Conventional has at most 256 PCI bus segments and PCI Express has at
> > > + * most 65536 "PCI Segments Groups", therefore -1 is not a valid PCI domain
> > 
> > s/Segments/Segment/
> > 
> > Do you have a reference for these limits?  I don't think either
> > Conventional PCI or PCIe actually specifies a hardware limit on the
> > number of domains (I think PCI uses "segment group" to mean the same
> > thing).
> > 
> > "Segment" in the Conventional PCI spec, r3.0, means a bus segment,
> > which connects all the devices on a single bus.  Obviously there's a
> > limit of 256 buses under a single host bridge, but that's different
> > concept than a domain/segment group.
> > 
> > The PCI Firmware spec, r3.3, defines "Segment Group Number" as being
> > in the range 0..65535, but as far as I know, that's just a firmware
> > issue, and it applies equally to Conventional PCI and PCIe.
> > 
> > I think you're right that -1 is a reasonable sentinel; I just don't
> > want to claim a difference here unless there really is one.
> > 
> 
> I think you're right, I got confused about the concepts of "Segment"
> and "Segment Group".
> 
> After digging into the specs, I haven't found any difference in the
> limits between Conventional PCI and PCIe. The PCI Firmware spec, r3.2,
> refers to the ACPI (3.0 and later) spec for the details of "Segment
> Group", and in the ACPI spec v6.3, the description of the _SEG object
> says:
> 
> """
> The lower 16 bits of _SEG returned integer is the PCI Segment Group
> number. Other bits are reserved.
> """
> 
> So I'm thinking of replacing the comment with:
> 
> Currently in the ACPI spec, for each PCI host bridge, the PCI Segment Group
> number is limited to a 16-bit value, therefore (int)-1 is not a valid
> PCI domain number, and can be used as a sentinel value indicating
> ->domain_nr is not set by the driver (and CONFIG_PCI_DOMAINS_GENERIC=y
> archs will set it with pci_bus_find_domain_nr()).

Yes, I think that's a better description.

> > > + * number, and can be used as a sentinel value indicating ->domain_nr is not
> > > + * set by the driver (and CONFIG_PCI_DOMAINS_GENERIC=y can set it in generic
> > > + * code).


* Re: [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge
  2021-07-15 17:30     ` Boqun Feng
  2021-07-15 19:07       ` Bjorn Helgaas
@ 2021-07-15 19:11       ` Arnd Bergmann
  1 sibling, 0 replies; 17+ messages in thread
From: Arnd Bergmann @ 2021-07-15 19:11 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Bjorn Helgaas, Bjorn Helgaas, Arnd Bergmann, Marc Zyngier,
	Catalin Marinas, Will Deacon, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Dexuan Cui, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, Linux ARM,
	Linux Kernel Mailing List, Linux on Hyper-V List, linux-pci,
	Sunil Muthuswamy, Mike Rapoport

On Thu, Jul 15, 2021 at 7:30 PM Boqun Feng <boqun.feng@gmail.com> wrote:
> On Wed, Jul 14, 2021 at 02:33:19PM -0500, Bjorn Helgaas wrote:
> > On Wed, Jul 14, 2021 at 06:27:31PM +0800, Boqun Feng wrote:
> > >  #ifdef CONFIG_PCI_DOMAINS_GENERIC
> > > -   bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
> > > +   if (bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
> > > +           bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
> > > +   else
> > > +           bus->domain_nr = bridge->domain_nr;
> >
> > The domain_nr in struct pci_bus is really only used by
> > pci_domain_nr().  It seems like it really belongs in the struct
> > pci_host_bridge and probably doesn't need to be duplicated in the
> > struct pci_bus.  But that's probably a project for the future.
> >

+1

> Agreed. Maybe we can define pci_domain_nr() as:
>
>         static inline int pci_domain_nr(struct pci_bus *bus)
>         {
>                 struct device *bridge = bus->bridge;
>                 struct pci_host_bridge *b = container_of(bridge, struct pci_host_bridge, dev);
>
>                 return b->domain_nr;
>         }
>
> but apart from correctness (e.g. should we use get_device() for
> bus->bridge?), it makes more sense if ->domain_nr of pci_host_bridge
> is used (as a way to set the domain number at probing time) for most
> drivers and archs. ;-)

It needs to be "pci_find_host_bridge(bus)" instead of bus->bridge
and container_of().
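
Roughly (a sketch; note that pci_find_host_bridge() is currently
private to drivers/pci/pci.h, so it would have to be made available to
include/linux/pci.h first):

	static inline int pci_domain_nr(struct pci_bus *bus)
	{
		return pci_find_host_bridge(bus)->domain_nr;
	}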

Then again, if we get pci_domain_nr() to be completely generic, I'd
prefer to change most callers to just open-code the bridge->domain_nr
access, as most of them will already have a pointer to the pci_host_bridge
when calling this.

       Arnd


Thread overview: 17+ messages
2021-07-14 10:27 [RFC v4 0/7] PCI: hv: Support host bridge probing on ARM64 Boqun Feng
2021-07-14 10:27 ` [RFC v4 1/7] PCI: Introduce domain_nr in pci_host_bridge Boqun Feng
2021-07-14 19:33   ` Bjorn Helgaas
2021-07-15 17:30     ` Boqun Feng
2021-07-15 19:07       ` Bjorn Helgaas
2021-07-15 19:11       ` Arnd Bergmann
2021-07-14 10:27 ` [RFC v4 2/7] PCI: Allow msi domain set-up at host probing time Boqun Feng
2021-07-14 19:39   ` Bjorn Helgaas
2021-07-14 10:27 ` [RFC v4 3/7] arm64: PCI: Support root bridge preparation for Hyper-V PCI Boqun Feng
2021-07-14 19:46   ` Bjorn Helgaas
2021-07-14 10:27 ` [RFC v4 4/7] PCI: hv: Generify PCI probing Boqun Feng
2021-07-14 19:49   ` Bjorn Helgaas
2021-07-14 10:27 ` [RFC v4 5/7] PCI: hv: Use pci_host_bridge::domain_nr for PCI domain Boqun Feng
2021-07-14 17:04   ` [EXTERNAL] " Sunil Muthuswamy
2021-07-15  3:59     ` Boqun Feng
2021-07-14 10:27 ` [RFC v4 6/7] PCI: hv: Set up msi domain at bridge probing time Boqun Feng
2021-07-14 10:27 ` [RFC v4 7/7] PCI: hv: Turn on the host bridge probing on ARM64 Boqun Feng
