From: Ben Widawsky <ben.widawsky@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: "David Hildenbrand" <david@redhat.com>,
"Vishal Verma" <vishal.l.verma@intel.com>,
"John Groves (jgroves)" <jgroves@micron.com>,
"Chris Browy" <cbrowy@avery-design.com>,
qemu-devel@nongnu.org, linux-cxl@vger.kernel.org,
"Markus Armbruster" <armbru@redhat.com>,
"Jonathan Cameron" <Jonathan.Cameron@huawei.com>,
"Igor Mammedov" <imammedo@redhat.com>,
"Dan Williams" <dan.j.williams@intel.com>,
"Ira Weiny" <ira.weiny@intel.com>,
"Philippe Mathieu-Daudé" <f4bug@amsat.org>
Subject: Re: [RFC PATCH v3 16/31] hw/pci: Plumb _UID through host bridges
Date: Tue, 2 Feb 2021 07:42:57 -0800
Message-ID: <20210202154257.zepdz74logmi52wn@intel.com>
In-Reply-To: <20210202101504-mutt-send-email-mst@kernel.org>

Thanks for looking! Mixing replies to Jonathan and Michael below.
On 21-02-02 10:24:43, Michael S. Tsirkin wrote:
> On Tue, Feb 02, 2021 at 03:00:56PM +0000, Jonathan Cameron wrote:
> > On Mon, 1 Feb 2021 16:59:33 -0800
> > Ben Widawsky <ben.widawsky@intel.com> wrote:
> >
> > > Currently, QEMU makes _UID equivalent to the bus number (_BBN). While
> > > there is nothing wrong with doing it this way, CXL spec has a heavy
> > > reliance on _UID to identify host bridges and there is no link to the
> > > bus number. Having a distinct UID solves two problems. The first is it
> > > gets us around the limitation of 256 (current max bus number).
>
> Not sure I understand. You want more than 256 host bridges?
>
I don't want more than 256 host bridges, but I want the ability to disaggregate
_UID and bus number (_BBN). The reasoning is just to align with the spec, where
_UID is used to identify a CXL host bridge and is (perhaps) unrelated to the
bus number.
> > The
> > > second is it allows us to replicate hardware configurations where bus
> > > number and uid aren't equivalent.
>
> A bit more data on when this needs to be the case?
>
Doesn't *need* to be the case. I was making a concerted effort to allow full
spec flexibility, but I don't believe it to be necessary unless we want to
accurately emulate a real platform.
> > The latter has benefits for our
> > > development and debugging using QEMU.
> > >
> > > The other way to do this would be to implement the expanded bus
> > > numbering, but having an explicit uid makes more sense when trying to
> > > replicate real hardware configurations.
> > >
> > > The QEMU commandline to utilize this would be:
> > > -device pxb-cxl,id=cxl.0,bus="pcie.0",bus_nr=1,uid=x
> > >
> > > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
>
> However, if doing this how do we ensure UID is still unique?
> What do we do for cases where UID was not specified?
> One idea is to generate a string UID and just stick the bus #
> in there.
This is totally mishandled in the code currently. I like your idea though.
>
>
> > > --
> > >
> > > I'm guessing this patch will be somewhat controversial. For early CXL
> > > work, this can be dropped without too much heartache.
> >
> > Whilst I'm not personally against, this maybe best to drop for now as you
> > say.
> >
I think it'd be good to understand from the PCIe experts if CXL matches in this
regard. If PCIe generally allows (and does in practice) _UID not matching _BBN,
perhaps this is an overall improvement to the code.
> > > ---
> > > hw/i386/acpi-build.c | 3 ++-
> > > hw/pci-bridge/pci_expander_bridge.c | 19 +++++++++++++++++++
> > > hw/pci/pci.c | 11 +++++++++++
> > > include/hw/pci/pci.h | 1 +
> > > include/hw/pci/pci_bus.h | 1 +
> > > 5 files changed, 34 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> > > index cf6eb54c22..145a503e92 100644
> > > --- a/hw/i386/acpi-build.c
> > > +++ b/hw/i386/acpi-build.c
> > > @@ -1343,6 +1343,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
> > > QLIST_FOREACH(bus, &bus->child, sibling) {
> > > uint8_t bus_num = pci_bus_num(bus);
> > > uint8_t numa_node = pci_bus_numa_node(bus);
> > > + int32_t uid = pci_bus_uid(bus);
> > >
> > > /* look only for expander root buses */
> > > if (!pci_bus_is_root(bus)) {
> > > @@ -1356,7 +1357,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
> > > scope = aml_scope("\\_SB");
> > > dev = aml_device("PC%.02X", bus_num);
> > > aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
> > > - init_pci_acpi(dev, bus_num, pci_bus_is_express(bus) ? PCIE : PCI);
> > > + init_pci_acpi(dev, uid, pci_bus_is_express(bus) ? PCIE : PCI);
> > >
> > > if (numa_node != NUMA_NODE_UNASSIGNED) {
> > > aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
> > > diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
> > > index b42592e1ff..5021b60435 100644
> > > --- a/hw/pci-bridge/pci_expander_bridge.c
> > > +++ b/hw/pci-bridge/pci_expander_bridge.c
> > > @@ -67,6 +67,7 @@ struct PXBDev {
> > >
> > > uint8_t bus_nr;
> > > uint16_t numa_node;
> > > + int32_t uid;
> > > };
> > >
> > > static PXBDev *convert_to_pxb(PCIDevice *dev)
>
> As long as we are doing this, do we want to support a string uid too?
> How about a 64 bit uid? Why not?
If generally the idea of the patch is welcome, I am happy to change it.
>
>
> > > @@ -98,12 +99,20 @@ static uint16_t pxb_bus_numa_node(PCIBus *bus)
> > > return pxb->numa_node;
> > > }
> > >
> > > +static int32_t pxb_bus_uid(PCIBus *bus)
> > > +{
> > > + PXBDev *pxb = convert_to_pxb(bus->parent_dev);
> > > +
> > > + return pxb->uid;
> > > +}
> > > +
> > > static void pxb_bus_class_init(ObjectClass *class, void *data)
> > > {
> > > PCIBusClass *pbc = PCI_BUS_CLASS(class);
> > >
> > > pbc->bus_num = pxb_bus_num;
> > > pbc->numa_node = pxb_bus_numa_node;
> > > + pbc->uid = pxb_bus_uid;
> > > }
> > >
> > > static const TypeInfo pxb_bus_info = {
> > > @@ -329,6 +338,7 @@ static Property pxb_dev_properties[] = {
> > > /* Note: 0 is not a legal PXB bus number. */
> > > DEFINE_PROP_UINT8("bus_nr", PXBDev, bus_nr, 0),
> > > DEFINE_PROP_UINT16("numa_node", PXBDev, numa_node, NUMA_NODE_UNASSIGNED),
> > > + DEFINE_PROP_INT32("uid", PXBDev, uid, -1),
> > > DEFINE_PROP_END_OF_LIST(),
> > > };
> > >
> > > @@ -400,12 +410,21 @@ static const TypeInfo pxb_pcie_dev_info = {
> > >
> > > static void pxb_cxl_dev_realize(PCIDevice *dev, Error **errp)
> > > {
> > > + PXBDev *pxb = convert_to_pxb(dev);
> > > +
> > > /* A CXL PXB's parent bus is still PCIe */
> > > if (!pci_bus_is_express(pci_get_bus(dev))) {
> > > error_setg(errp, "pxb-cxl devices cannot reside on a PCI bus");
> > > return;
> > > }
> > >
> > > + if (pxb->uid < 0) {
> > > + error_setg(errp, "pxb-cxl devices must have a valid uid (0-2147483647)");
> > > + return;
> > > + }
> > > +
> > > + /* FIXME: Check that uid doesn't collide with UIDs of other host bridges */
> > > +
> > > pxb_dev_realize_common(dev, CXL, errp);
> > > }
> > >
> > > diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> > > index adbe8aa260..bf019d91a0 100644
> > > --- a/hw/pci/pci.c
> > > +++ b/hw/pci/pci.c
> > > @@ -170,6 +170,11 @@ static uint16_t pcibus_numa_node(PCIBus *bus)
> > > return NUMA_NODE_UNASSIGNED;
> > > }
> > >
> > > +static int32_t pcibus_uid(PCIBus *bus)
> > > +{
> > > + return -1;
> > > +}
> > > +
> > > static void pci_bus_class_init(ObjectClass *klass, void *data)
> > > {
> > > BusClass *k = BUS_CLASS(klass);
> > > @@ -184,6 +189,7 @@ static void pci_bus_class_init(ObjectClass *klass, void *data)
> > >
> > > pbc->bus_num = pcibus_num;
> > > pbc->numa_node = pcibus_numa_node;
> > > + pbc->uid = pcibus_uid;
> > > }
> > >
> > > static const TypeInfo pci_bus_info = {
> > > @@ -530,6 +536,11 @@ int pci_bus_numa_node(PCIBus *bus)
> > > return PCI_BUS_GET_CLASS(bus)->numa_node(bus);
> > > }
> > >
> > > +int pci_bus_uid(PCIBus *bus)
> > > +{
> > > + return PCI_BUS_GET_CLASS(bus)->uid(bus);
> > > +}
> > > +
> > > static int get_pci_config_device(QEMUFile *f, void *pv, size_t size,
> > > const VMStateField *field)
> > > {
> > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> > > index bde3697bee..a46de48ccd 100644
> > > --- a/include/hw/pci/pci.h
> > > +++ b/include/hw/pci/pci.h
> > > @@ -463,6 +463,7 @@ static inline int pci_dev_bus_num(const PCIDevice *dev)
> > > }
> > >
> > > int pci_bus_numa_node(PCIBus *bus);
> > > +int pci_bus_uid(PCIBus *bus);
> > > void pci_for_each_device(PCIBus *bus, int bus_num,
> > > void (*fn)(PCIBus *bus, PCIDevice *d, void *opaque),
> > > void *opaque);
> > > diff --git a/include/hw/pci/pci_bus.h b/include/hw/pci/pci_bus.h
> > > index eb94e7e85c..3c9fbc55bb 100644
> > > --- a/include/hw/pci/pci_bus.h
> > > +++ b/include/hw/pci/pci_bus.h
> > > @@ -17,6 +17,7 @@ struct PCIBusClass {
> > >
> > > int (*bus_num)(PCIBus *bus);
> > > uint16_t (*numa_node)(PCIBus *bus);
> > > + int32_t (*uid)(PCIBus *bus);
> > > };
> > >
> > > enum PCIBusFlags {
>