* [PATCH v7 00/46] CXL 2.0 emulation support
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Ideally I'd love it if we could start picking up the earlier
sections of this series, as I think those have been reasonably
well reviewed and should not be particularly controversial
(perhaps up to patch 15, in line with what Michael Tsirkin suggested
on v5).

There is one core memory handling related patch (34) marked as RFC.
Whilst its impact seems small to me, I'm not sure it is the best way
to meet our requirements wrt interleaving.

Changes since v6:

Thanks to all who have taken a look.
A small amount of reordering was necessary due to the LSA fix in
patch 17. The test moved forward to patch 22 and so all intermediate
patches move -1 in the series.

(New stuff)
- Switch support.  Needed to support more interesting topologies.
(Ben Widawsky)
- Patch 17: Fix a reversed condition on the presence of an LSA that meant
  these never got properly initialized. A related change ensures the test
  for cxl_type3 always supplies an LSA. We can relax this later when adding
  volatile memory support.
(Markus Armbruster)
- Patch 27: Change -cxl-fixed-memory-window option handling to use
  qobject_input_visitor_new_str().  This changed the required handling of
  the targets parameter to require an array index, hence test and docs updates.
  e.g. targets.1=cxl_hb0,targets.2=cxl_hb1
  (Patches 38,40,42,43)
- Missing structure element docs and version number (optimistic :)
(Alex Bennée)
- Added Reviewed-by tags.  Thanks!
- Series wise: Switch to compiler.h QEMU_BUILD_BUG_ON/MSG QEMU_PACKED
  and QEMU_ALIGNED as Alex suggested in patch 20.
- Patch 6: Dropped documentation for a non-existent lock.
           Added error code suitable for unimplemented commands.
	   Reordered code for better readability.
- Patch 9: Reorder as suggested to avoid a goto.
- Patch 16: Add LOG_UNIMP message where feature not yet implemented.
            Drop "Explain" comment that doesn't explain anything.
- Patch 18: Drop pointless void * cast.
            Add assertion as suggested (without divide)
- Patch 19: Use pstrcpy rather than snprintf for a fixed string.
            The compiler.h comment was in this patch but affects a
	    number of other patches as well.
- Patch 20: Move structure CXLType3Dev to header when originally
            introduced so changes are more obvious in this patch.
- Patch 21: Substantial refactor to resolve unclear use of sizeof
            on the LSA command header. Now uses a variable length
	    last element so we can use offsetof()
- Patch 22: Use g_autoptr() to avoid need for explicit free in tests
  	    Similar in later patches.
- Patch 29: Minor reorganization as suggested.
	    
(Tidy up from me)
- Trivial stuff like moving header includes to patch where first used.
- Patch 17: Drop ifndef protections from TYPE_CXL_TYPE3_DEV as there
            doesn't seem to be a reason.

The series is organized to allow it to be taken in stages if the
maintainers prefer that approach. Most sets end with the addition of
appropriate tests (TBD for the final set).

Patches 1-15 - CXL PXB
Patches 16-22 - Type 3 Device, Root Port
Patches 23-40 - ACPI, board elements and interleave decoding to enable x86 hosts
Patches 41-42 - arm64 support on virt.
Patch 43 - Initial documentation
Patches 44-46 - Switch support.
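
The pieces above can be exercised with a command line along these lines
(an illustrative sketch only: device and option names as introduced in
this series, with backing file paths, ids and sizes chosen arbitrarily;
see the documentation patch for the authoritative syntax):

```shell
qemu-system-x86_64 -M q35,cxl=on \
  -object memory-backend-file,id=cxl-mem0,mem-path=/tmp/cxl-mem0.raw,size=256M \
  -object memory-backend-file,id=cxl-lsa0,mem-path=/tmp/cxl-lsa0.raw,size=256M \
  -device pxb-cxl,id=cxl_hb0,bus=pcie.0,bus_nr=12 \
  -device cxl-rp,id=cxl_rp0,bus=cxl_hb0,chassis=0,slot=0 \
  -device cxl-type3,bus=cxl_rp0,memdev=cxl-mem0,lsa=cxl-lsa0,id=cxl-pmem0 \
  -cxl-fixed-memory-window targets.0=cxl_hb0,size=4G
```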

Getting a completely clean bill of health from GitLab CI is proving
challenging, as there seem to be some intermittent failures in common
with the main QEMU GitLab: in particular, an ASAN leak error that
appears in some upstream CI runs, and build-oss-fuzz timeouts.
Results are at http://gitlab.com/jic23/qemu branch cxl-v7-draft-2-for-test,
which also includes the DOE/CDAT patches and serial number support that
will form part of a future series.

Updated background info:

Looking in particular for:
* Review of the PCI interactions
* x86 and ARM machine interactions (particularly the memory maps)
* Review of the interleaving approach - is the basic idea
  acceptable?
* Review of the command line interface.
* CXL related review welcome but much of that got reviewed
  in earlier versions and hasn't changed substantially.

Big TODOs:

* Volatile memory devices (easy but it's more code so left for now).
* Hotplug?  May not need much but it's not tested yet!
* More tests and tighter verification that values written to hardware
  are actually valid - stuff that real hardware would check.
* Testing, testing and more testing.  I have been running a basic
  set of ARM and x86 tests on this, but there is always room for
  more tests and greater automation.
* CFMWS flags as requested by Ben.

Why do we want QEMU emulation of CXL?

As Ben stated in v3, QEMU support has been critical to getting OS
software written, given the lack of availability of hardware supporting
the latest CXL features (coupled with very high demand for support
being ready in a timely fashion). What has become clear since Ben's v3
is that this situation is an ongoing one. Whilst we can't talk about
them yet, CXL 3.0 features and OS support have been prototyped on
top of this support, and a lot of the ongoing kernel work is being
tested against these patches. The kernel CXL mocking code allows
some forms of testing, but QEMU provides a more versatile and
extensible platform.

Other features on the qemu-list that build on these include the PCI-DOE
/CDAT support from the Avery Design team, further showing how this
code is useful. Whilst not directly related, this is also the test
platform for work on PCI IDE/CMA and related DMTF SPDM, as CXL both
utilizes and extends those technologies and is likely to be an early
adopter.
Refs:
CMA Kernel: https://lore.kernel.org/all/20210804161839.3492053-1-Jonathan.Cameron@huawei.com/
CMA Qemu: https://lore.kernel.org/qemu-devel/1624665723-5169-1-git-send-email-cbrowy@avery-design.com/
DOE Qemu: https://lore.kernel.org/qemu-devel/1623329999-15662-1-git-send-email-cbrowy@avery-design.com/

As can be seen, there is non-trivial interaction with other areas of
QEMU, particularly PCI, and keeping this set up to date is proving
a burden we'd rather do without :)

Ben mentioned a few other good reasons in v3:
https://lore.kernel.org/qemu-devel/20210202005948.241655-1-ben.widawsky@intel.com/

What we have here is about what you need for it to be useful for
testing current kernel code.  Note the kernel code is moving fast, so
since v4 some features have been introduced that we don't yet support
in QEMU (e.g. use of the PCIe serial number extended capability).

All comments welcome.

Additional info that was here in v5 is now in the documentation patch.

Thanks,

Jonathan

Ben Widawsky (24):
  hw/pci/cxl: Add a CXL component type (interface)
  hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5)
  hw/cxl/device: Introduce a CXL device (8.2.8)
  hw/cxl/device: Implement the CAP array (8.2.8.1-2)
  hw/cxl/device: Implement basic mailbox (8.2.8.4)
  hw/cxl/device: Add memory device utilities
  hw/cxl/device: Add cheap EVENTS implementation (8.2.9.1)
  hw/cxl/device: Timestamp implementation (8.2.9.3)
  hw/cxl/device: Add log commands (8.2.9.4) + CEL
  hw/pxb: Use a type for realizing expanders
  hw/pci/cxl: Create a CXL bus type
  hw/pxb: Allow creation of a CXL PXB (host bridge)
  hw/cxl/rp: Add a root port
  hw/cxl/device: Add a memory device (8.2.8.5)
  hw/cxl/device: Implement MMIO HDM decoding (8.2.5.12)
  hw/cxl/device: Add some trivial commands
  hw/cxl/device: Plumb real Label Storage Area (LSA) sizing
  hw/cxl/device: Implement get/set Label Storage Area (LSA)
  hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142)
  acpi/cxl: Add _OSC implementation (9.14.2)
  acpi/cxl: Create the CEDT (9.14.1)
  acpi/cxl: Introduce CFMWS structures in CEDT
  hw/cxl/component Add a dumb HDM decoder handler
  qtest/cxl: Add more complex test cases with CFMWs

Jonathan Cameron (22):
  MAINTAINERS: Add entry for Compute Express Link Emulation
  cxl: Machine level control on whether CXL support is enabled
  qtest/cxl: Introduce initial test for pxb-cxl only.
  qtests/cxl: Add initial root port and CXL type3 tests
  hw/cxl/component: Add utils for interleave parameter encoding/decoding
  hw/cxl/host: Add support for CXL Fixed Memory Windows.
  hw/pci-host/gpex-acpi: Add support for dsdt construction for pxb-cxl
  pci/pcie_port: Add pci_find_port_by_pn()
  CXL/cxl_component: Add cxl_get_hb_cstate()
  mem/cxl_type3: Add read and write functions for associated hostmem.
  cxl/cxl-host: Add memops for CFMWS region.
  RFC: softmmu/memory: Add ops to memory_region_ram_init_from_file
  i386/pc: Enable CXL fixed memory windows
  tests/acpi: q35: Allow addition of a CXL test.
  qtests/bios-tables-test: Add a test for CXL emulation.
  tests/acpi: Add tables for CXL emulation.
  hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances
    pxb-cxl
  qtest/cxl: Add aarch64 virt test for CXL
  docs/cxl: Add initial Compute eXpress Link (CXL) documentation.
  pci-bridge/cxl_upstream: Add a CXL switch upstream port
  pci-bridge/cxl_downstream: Add a CXL switch downstream port
  cxl/cxl-host: Support interleave decoding with one level of switches.

 MAINTAINERS                         |   7 +
 docs/system/device-emulation.rst    |   1 +
 docs/system/devices/cxl.rst         | 302 +++++++++++++++++
 hw/Kconfig                          |   1 +
 hw/acpi/Kconfig                     |   5 +
 hw/acpi/cxl-stub.c                  |  12 +
 hw/acpi/cxl.c                       | 231 +++++++++++++
 hw/acpi/meson.build                 |   4 +-
 hw/arm/Kconfig                      |   1 +
 hw/arm/virt-acpi-build.c            |  33 ++
 hw/arm/virt.c                       |  40 ++-
 hw/core/machine.c                   |  28 ++
 hw/cxl/Kconfig                      |   3 +
 hw/cxl/cxl-component-utils.c        | 284 ++++++++++++++++
 hw/cxl/cxl-device-utils.c           | 265 +++++++++++++++
 hw/cxl/cxl-host-stubs.c             |  16 +
 hw/cxl/cxl-host.c                   | 262 +++++++++++++++
 hw/cxl/cxl-mailbox-utils.c          | 485 ++++++++++++++++++++++++++++
 hw/cxl/meson.build                  |  12 +
 hw/i386/acpi-build.c                |  57 +++-
 hw/i386/pc.c                        |  57 +++-
 hw/mem/Kconfig                      |   5 +
 hw/mem/cxl_type3.c                  | 352 ++++++++++++++++++++
 hw/mem/meson.build                  |   1 +
 hw/meson.build                      |   1 +
 hw/pci-bridge/Kconfig               |   5 +
 hw/pci-bridge/cxl_downstream.c      | 229 +++++++++++++
 hw/pci-bridge/cxl_root_port.c       | 231 +++++++++++++
 hw/pci-bridge/cxl_upstream.c        | 206 ++++++++++++
 hw/pci-bridge/meson.build           |   1 +
 hw/pci-bridge/pci_expander_bridge.c | 172 +++++++++-
 hw/pci-bridge/pcie_root_port.c      |   6 +-
 hw/pci-host/gpex-acpi.c             |  20 +-
 hw/pci/pci.c                        |  21 +-
 hw/pci/pcie_port.c                  |  25 ++
 include/hw/acpi/cxl.h               |  28 ++
 include/hw/arm/virt.h               |   1 +
 include/hw/boards.h                 |   2 +
 include/hw/cxl/cxl.h                |  54 ++++
 include/hw/cxl/cxl_component.h      | 207 ++++++++++++
 include/hw/cxl/cxl_device.h         | 270 ++++++++++++++++
 include/hw/cxl/cxl_pci.h            | 156 +++++++++
 include/hw/pci/pci.h                |  14 +
 include/hw/pci/pci_bridge.h         |  20 ++
 include/hw/pci/pci_bus.h            |   7 +
 include/hw/pci/pci_ids.h            |   1 +
 include/hw/pci/pcie_port.h          |   2 +
 qapi/machine.json                   |  18 ++
 qemu-options.hx                     |  38 +++
 scripts/device-crash-test           |   1 +
 softmmu/memory.c                    |   9 +
 softmmu/vl.c                        |  42 +++
 tests/data/acpi/q35/CEDT.cxl        | Bin 0 -> 184 bytes
 tests/data/acpi/q35/DSDT.cxl        | Bin 0 -> 9627 bytes
 tests/qtest/bios-tables-test.c      |  44 +++
 tests/qtest/cxl-test.c              | 181 +++++++++++
 tests/qtest/meson.build             |   5 +
 57 files changed, 4456 insertions(+), 25 deletions(-)
 create mode 100644 docs/system/devices/cxl.rst
 create mode 100644 hw/acpi/cxl-stub.c
 create mode 100644 hw/acpi/cxl.c
 create mode 100644 hw/cxl/Kconfig
 create mode 100644 hw/cxl/cxl-component-utils.c
 create mode 100644 hw/cxl/cxl-device-utils.c
 create mode 100644 hw/cxl/cxl-host-stubs.c
 create mode 100644 hw/cxl/cxl-host.c
 create mode 100644 hw/cxl/cxl-mailbox-utils.c
 create mode 100644 hw/cxl/meson.build
 create mode 100644 hw/mem/cxl_type3.c
 create mode 100644 hw/pci-bridge/cxl_downstream.c
 create mode 100644 hw/pci-bridge/cxl_root_port.c
 create mode 100644 hw/pci-bridge/cxl_upstream.c
 create mode 100644 include/hw/acpi/cxl.h
 create mode 100644 include/hw/cxl/cxl.h
 create mode 100644 include/hw/cxl/cxl_component.h
 create mode 100644 include/hw/cxl/cxl_device.h
 create mode 100644 include/hw/cxl/cxl_pci.h
 create mode 100644 tests/data/acpi/q35/CEDT.cxl
 create mode 100644 tests/data/acpi/q35/DSDT.cxl
 create mode 100644 tests/qtest/cxl-test.c

-- 
2.32.0


* [PATCH v7 01/46] hw/pci/cxl: Add a CXL component type (interface)
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A CXL component is a hardware entity that implements CXL component
registers from the CXL 2.0 spec (8.2.3). Currently these represent 3
general types.
1. Host Bridge
2. Ports (root, upstream, downstream)
3. Devices (memory, other)

A CXL component can be conceptually thought of as a PCIe device with
extra functionality when enumerated and enabled. For this reason, CXL
builds on existing PCI code paths here, and will continue to do so.

Host bridges will typically need to be handled specially and so they can
implement this newly introduced interface or not. All other components
should implement this interface. Implementing this interface allows the
core PCI code to treat these devices as special where appropriate.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/pci/pci.c         | 10 ++++++++++
 include/hw/pci/pci.h |  8 ++++++++
 2 files changed, 18 insertions(+)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 5d30f9ca60..474ea98c1d 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -201,6 +201,11 @@ static const TypeInfo pci_bus_info = {
     .class_init = pci_bus_class_init,
 };
 
+static const TypeInfo cxl_interface_info = {
+    .name          = INTERFACE_CXL_DEVICE,
+    .parent        = TYPE_INTERFACE,
+};
+
 static const TypeInfo pcie_interface_info = {
     .name          = INTERFACE_PCIE_DEVICE,
     .parent        = TYPE_INTERFACE,
@@ -2128,6 +2133,10 @@ static void pci_qdev_realize(DeviceState *qdev, Error **errp)
         pci_dev->cap_present |= QEMU_PCI_CAP_EXPRESS;
     }
 
+    if (object_class_dynamic_cast(klass, INTERFACE_CXL_DEVICE)) {
+        pci_dev->cap_present |= QEMU_PCIE_CAP_CXL;
+    }
+
     pci_dev = do_pci_register_device(pci_dev,
                                      object_get_typename(OBJECT(qdev)),
                                      pci_dev->devfn, errp);
@@ -2884,6 +2893,7 @@ static void pci_register_types(void)
     type_register_static(&pci_bus_info);
     type_register_static(&pcie_bus_info);
     type_register_static(&conventional_pci_interface_info);
+    type_register_static(&cxl_interface_info);
     type_register_static(&pcie_interface_info);
     type_register_static(&pci_device_type_info);
 }
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index c3f3c90473..305df7add6 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -196,6 +196,8 @@ enum {
     QEMU_PCIE_LNKSTA_DLLLA = (1 << QEMU_PCIE_LNKSTA_DLLLA_BITNR),
 #define QEMU_PCIE_EXTCAP_INIT_BITNR 9
     QEMU_PCIE_EXTCAP_INIT = (1 << QEMU_PCIE_EXTCAP_INIT_BITNR),
+#define QEMU_PCIE_CXL_BITNR 10
+    QEMU_PCIE_CAP_CXL = (1 << QEMU_PCIE_CXL_BITNR),
 };
 
 #define TYPE_PCI_DEVICE "pci-device"
@@ -203,6 +205,12 @@ typedef struct PCIDeviceClass PCIDeviceClass;
 DECLARE_OBJ_CHECKERS(PCIDevice, PCIDeviceClass,
                      PCI_DEVICE, TYPE_PCI_DEVICE)
 
+/*
+ * Implemented by devices that can be plugged on CXL buses. In the spec, this is
+ * actually a "CXL Component", but we name it a device to match the PCI naming.
+ */
+#define INTERFACE_CXL_DEVICE "cxl-device"
+
 /* Implemented by devices that can be plugged on PCI Express buses */
 #define INTERFACE_PCIE_DEVICE "pci-express-device"
 
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 01/46] hw/pci/cxl: Add a CXL component type (interface)
@ 2022-03-06 17:40   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A CXL component is a hardware entity that implements CXL component
registers from the CXL 2.0 spec (8.2.3). Currently these represent 3
general types.
1. Host Bridge
2. Ports (root, upstream, downstream)
3. Devices (memory, other)

A CXL component can be conceptually thought of as a PCIe device with
extra functionality when enumerated and enabled. For this reason, CXL
does here, and will continue to add on to existing PCI code paths.

Host bridges will typically need special handling, so they may or may
not implement this newly introduced interface. All other components
should implement it; doing so allows the core PCI code to treat these
devices as special where appropriate.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/pci/pci.c         | 10 ++++++++++
 include/hw/pci/pci.h |  8 ++++++++
 2 files changed, 18 insertions(+)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 5d30f9ca60..474ea98c1d 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -201,6 +201,11 @@ static const TypeInfo pci_bus_info = {
     .class_init = pci_bus_class_init,
 };
 
+static const TypeInfo cxl_interface_info = {
+    .name          = INTERFACE_CXL_DEVICE,
+    .parent        = TYPE_INTERFACE,
+};
+
 static const TypeInfo pcie_interface_info = {
     .name          = INTERFACE_PCIE_DEVICE,
     .parent        = TYPE_INTERFACE,
@@ -2128,6 +2133,10 @@ static void pci_qdev_realize(DeviceState *qdev, Error **errp)
         pci_dev->cap_present |= QEMU_PCI_CAP_EXPRESS;
     }
 
+    if (object_class_dynamic_cast(klass, INTERFACE_CXL_DEVICE)) {
+        pci_dev->cap_present |= QEMU_PCIE_CAP_CXL;
+    }
+
     pci_dev = do_pci_register_device(pci_dev,
                                      object_get_typename(OBJECT(qdev)),
                                      pci_dev->devfn, errp);
@@ -2884,6 +2893,7 @@ static void pci_register_types(void)
     type_register_static(&pci_bus_info);
     type_register_static(&pcie_bus_info);
     type_register_static(&conventional_pci_interface_info);
+    type_register_static(&cxl_interface_info);
     type_register_static(&pcie_interface_info);
     type_register_static(&pci_device_type_info);
 }
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index c3f3c90473..305df7add6 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -196,6 +196,8 @@ enum {
     QEMU_PCIE_LNKSTA_DLLLA = (1 << QEMU_PCIE_LNKSTA_DLLLA_BITNR),
 #define QEMU_PCIE_EXTCAP_INIT_BITNR 9
     QEMU_PCIE_EXTCAP_INIT = (1 << QEMU_PCIE_EXTCAP_INIT_BITNR),
+#define QEMU_PCIE_CXL_BITNR 10
+    QEMU_PCIE_CAP_CXL = (1 << QEMU_PCIE_CXL_BITNR),
 };
 
 #define TYPE_PCI_DEVICE "pci-device"
@@ -203,6 +205,12 @@ typedef struct PCIDeviceClass PCIDeviceClass;
 DECLARE_OBJ_CHECKERS(PCIDevice, PCIDeviceClass,
                      PCI_DEVICE, TYPE_PCI_DEVICE)
 
+/*
+ * Implemented by devices that can be plugged on CXL buses. In the spec, this is
+ * actually a "CXL Component", but we name it "device" to match the PCI naming.
+ */
+#define INTERFACE_CXL_DEVICE "cxl-device"
+
 /* Implemented by devices that can be plugged on PCI Express buses */
 #define INTERFACE_PCIE_DEVICE "pci-express-device"
 
-- 
2.32.0




* [PATCH v7 02/46] hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:40   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A CXL 2.0 component is any entity in the CXL topology. All components
have an analogous function in PCIe. Except for the CXL host bridge, all
have a PCIe config space that is accessible via the common PCIe
mechanisms. CXL components are enumerated via DVSEC fields in the
extended PCIe header space. CXL components will minimally implement some
subset of CXL.mem and CXL.cache registers defined in 8.2.5 of the CXL
2.0 specification. Two headers and a utility library are introduced to
support the minimum functionality needed to enumerate components.

The cxl_pci header manages bits associated with PCI, specifically the
DVSEC and related fields. The cxl_component.h variant has data
structures and APIs that are useful for drivers implementing any of the
CXL 2.0 components. The library takes care of making use of the DVSEC
bits and the CXL.[mem|cache] registers. Per spec, the registers are
little endian.

None of the mechanisms required to enumerate a CXL-capable host bridge
are introduced at this point.

Note that the CXL.mem and CXL.cache registers used are always 4B wide.
It's possible in the future that this constraint will not hold.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
v7:
 * Use QEMU_PACKED / QEMU_BUILD_BUG_ON() (Alex)
 
 hw/Kconfig                     |   1 +
 hw/cxl/Kconfig                 |   3 +
 hw/cxl/cxl-component-utils.c   | 219 +++++++++++++++++++++++++++++++++
 hw/cxl/meson.build             |   4 +
 hw/meson.build                 |   1 +
 include/hw/cxl/cxl.h           |  16 +++
 include/hw/cxl/cxl_component.h | 197 +++++++++++++++++++++++++++++
 include/hw/cxl/cxl_pci.h       | 135 ++++++++++++++++++++
 8 files changed, 576 insertions(+)

diff --git a/hw/Kconfig b/hw/Kconfig
index ad20cce0a9..50e0952889 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -6,6 +6,7 @@ source audio/Kconfig
 source block/Kconfig
 source char/Kconfig
 source core/Kconfig
+source cxl/Kconfig
 source display/Kconfig
 source dma/Kconfig
 source gpio/Kconfig
diff --git a/hw/cxl/Kconfig b/hw/cxl/Kconfig
new file mode 100644
index 0000000000..8e67519b16
--- /dev/null
+++ b/hw/cxl/Kconfig
@@ -0,0 +1,3 @@
+config CXL
+    bool
+    default y if PCI_EXPRESS
diff --git a/hw/cxl/cxl-component-utils.c b/hw/cxl/cxl-component-utils.c
new file mode 100644
index 0000000000..410f8ef328
--- /dev/null
+++ b/hw/cxl/cxl-component-utils.c
@@ -0,0 +1,219 @@
+/*
+ * CXL Utility library for components
+ *
+ * Copyright(C) 2020 Intel Corporation.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "hw/pci/pci.h"
+#include "hw/cxl/cxl.h"
+
+static uint64_t cxl_cache_mem_read_reg(void *opaque, hwaddr offset,
+                                       unsigned size)
+{
+    CXLComponentState *cxl_cstate = opaque;
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+
+    if (size == 8) {
+        qemu_log_mask(LOG_UNIMP,
+                      "CXL 8 byte cache mem registers not implemented\n");
+        return 0;
+    }
+
+    if (cregs->special_ops && cregs->special_ops->read) {
+        return cregs->special_ops->read(cxl_cstate, offset, size);
+    } else {
+        return cregs->cache_mem_registers[offset / 4];
+    }
+}
+
+static void cxl_cache_mem_write_reg(void *opaque, hwaddr offset, uint64_t value,
+                                    unsigned size)
+{
+    CXLComponentState *cxl_cstate = opaque;
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+
+    if (size == 8) {
+        qemu_log_mask(LOG_UNIMP,
+                      "CXL 8 byte cache mem registers not implemented\n");
+        return;
+    }
+    if (cregs->special_ops && cregs->special_ops->write) {
+        cregs->special_ops->write(cxl_cstate, offset, value, size);
+    } else {
+        cregs->cache_mem_registers[offset / 4] = value;
+    }
+}
+
+/*
+ * 8.2.3
+ *   The access restrictions specified in Section 8.2.2 also apply to CXL 2.0
+ *   Component Registers.
+ *
+ * 8.2.2
+ *   • A 32 bit register shall be accessed as a 4 Bytes quantity. Partial
+ *   reads are not permitted.
+ *   • A 64 bit register shall be accessed as a 8 Bytes quantity. Partial
+ *   reads are not permitted.
+ *
+ * As of the spec defined today, only 4 byte registers exist.
+ */
+static const MemoryRegionOps cache_mem_ops = {
+    .read = cxl_cache_mem_read_reg,
+    .write = cxl_cache_mem_write_reg,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 8,
+        .unaligned = false,
+    },
+    .impl = {
+        .min_access_size = 4,
+        .max_access_size = 8,
+    },
+};
+
+void cxl_component_register_block_init(Object *obj,
+                                       CXLComponentState *cxl_cstate,
+                                       const char *type)
+{
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+
+    memory_region_init(&cregs->component_registers, obj, type,
+                       CXL2_COMPONENT_BLOCK_SIZE);
+
+    /* The io registers control the link, which we don't care about in QEMU */
+    memory_region_init_io(&cregs->io, obj, NULL, cregs, ".io",
+                          CXL2_COMPONENT_IO_REGION_SIZE);
+    memory_region_init_io(&cregs->cache_mem, obj, &cache_mem_ops, cregs,
+                          ".cache_mem", CXL2_COMPONENT_CM_REGION_SIZE);
+
+    memory_region_add_subregion(&cregs->component_registers, 0, &cregs->io);
+    memory_region_add_subregion(&cregs->component_registers,
+                                CXL2_COMPONENT_IO_REGION_SIZE,
+                                &cregs->cache_mem);
+}
+
+static void ras_init_common(uint32_t *reg_state)
+{
+    reg_state[R_CXL_RAS_UNC_ERR_STATUS] = 0;
+    reg_state[R_CXL_RAS_UNC_ERR_MASK] = 0x1cfff;
+    reg_state[R_CXL_RAS_UNC_ERR_SEVERITY] = 0x1cfff;
+    reg_state[R_CXL_RAS_COR_ERR_STATUS] = 0;
+    reg_state[R_CXL_RAS_COR_ERR_MASK] = 0x3f;
+
+    /* CXL switches and devices must set */
+    reg_state[R_CXL_RAS_ERR_CAP_CTRL] = 0;
+}
+
+static void hdm_init_common(uint32_t *reg_state)
+{
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_CAPABILITY, DECODER_COUNT, 0);
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_CAPABILITY, TARGET_COUNT, 1);
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_GLOBAL_CONTROL,
+                     HDM_DECODER_ENABLE, 0);
+}
+
+void cxl_component_register_init_common(uint32_t *reg_state, enum reg_type type)
+{
+    int caps = 0;
+    switch (type) {
+    case CXL2_DOWNSTREAM_PORT:
+    case CXL2_DEVICE:
+        /* CAP, RAS, Link */
+        caps = 2;
+        break;
+    case CXL2_UPSTREAM_PORT:
+    case CXL2_TYPE3_DEVICE:
+    case CXL2_LOGICAL_DEVICE:
+        /* + HDM */
+        caps = 3;
+        break;
+    case CXL2_ROOT_PORT:
+        /* + Extended Security, + Snoop */
+        caps = 5;
+        break;
+    default:
+        abort();
+    }
+
+    memset(reg_state, 0, CXL2_COMPONENT_CM_REGION_SIZE);
+
+    /* CXL Capability Header Register */
+    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, ID, 1);
+    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, VERSION, 1);
+    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, CACHE_MEM_VERSION, 1);
+    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, ARRAY_SIZE, caps);
+
+
+#define init_cap_reg(reg, id, version)                                        \
+    QEMU_BUILD_BUG_ON(CXL_##reg##_REGISTERS_OFFSET == 0);                     \
+    do {                                                                      \
+        int which = R_CXL_##reg##_CAPABILITY_HEADER;                          \
+        reg_state[which] = FIELD_DP32(reg_state[which],                       \
+                                      CXL_##reg##_CAPABILITY_HEADER, ID, id); \
+        reg_state[which] =                                                    \
+            FIELD_DP32(reg_state[which], CXL_##reg##_CAPABILITY_HEADER,       \
+                       VERSION, version);                                     \
+        reg_state[which] =                                                    \
+            FIELD_DP32(reg_state[which], CXL_##reg##_CAPABILITY_HEADER, PTR,  \
+                       CXL_##reg##_REGISTERS_OFFSET);                         \
+    } while (0)
+
+    init_cap_reg(RAS, 2, 1);
+    ras_init_common(reg_state);
+
+    init_cap_reg(LINK, 4, 2);
+
+    if (caps < 3) {
+        return;
+    }
+
+    init_cap_reg(HDM, 5, 1);
+    hdm_init_common(reg_state);
+
+    if (caps < 5) {
+        return;
+    }
+
+    init_cap_reg(EXTSEC, 6, 1);
+    init_cap_reg(SNOOP, 8, 1);
+
+#undef init_cap_reg
+}
+
+/*
+ * Helper to create a DVSEC header for a CXL entity. The caller is responsible
+ * for tracking the valid offset.
+ *
+ * This function will build the DVSEC header on behalf of the caller and then
+ * copy in the remaining data for the vendor specific bits.
+ */
+void cxl_component_create_dvsec(CXLComponentState *cxl, uint16_t length,
+                                uint16_t type, uint8_t rev, uint8_t *body)
+{
+    PCIDevice *pdev = cxl->pdev;
+    uint16_t offset = cxl->dvsec_offset;
+
+    assert(offset >= PCI_CFG_SPACE_SIZE &&
+           ((offset + length) < PCI_CFG_SPACE_EXP_SIZE));
+    assert((length & 0xf000) == 0);
+    assert((rev & ~0xf) == 0);
+
+    /* Create the DVSEC in the MCFG space */
+    pcie_add_capability(pdev, PCI_EXT_CAP_ID_DVSEC, 1, offset, length);
+    pci_set_long(pdev->config + offset + PCIE_DVSEC_HEADER1_OFFSET,
+                 (length << 20) | (rev << 16) | CXL_VENDOR_ID);
+    pci_set_word(pdev->config + offset + PCIE_DVSEC_ID_OFFSET, type);
+    memcpy(pdev->config + offset + sizeof(struct dvsec_header),
+           body + sizeof(struct dvsec_header),
+           length - sizeof(struct dvsec_header));
+
+    /* Update state for future DVSEC additions */
+    range_init_nofail(&cxl->dvsecs[type], cxl->dvsec_offset, length);
+    cxl->dvsec_offset += length;
+}
diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build
new file mode 100644
index 0000000000..3231b5de1e
--- /dev/null
+++ b/hw/cxl/meson.build
@@ -0,0 +1,4 @@
+softmmu_ss.add(when: 'CONFIG_CXL',
+               if_true: files(
+                   'cxl-component-utils.c',
+               ))
diff --git a/hw/meson.build b/hw/meson.build
index b3366c888e..9992c5101e 100644
--- a/hw/meson.build
+++ b/hw/meson.build
@@ -6,6 +6,7 @@ subdir('block')
 subdir('char')
 subdir('core')
 subdir('cpu')
+subdir('cxl')
 subdir('display')
 subdir('dma')
 subdir('gpio')
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
new file mode 100644
index 0000000000..8c738c7a2b
--- /dev/null
+++ b/include/hw/cxl/cxl.h
@@ -0,0 +1,16 @@
+/*
+ * QEMU CXL Support
+ *
+ * Copyright (c) 2020 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_H
+#define CXL_H
+
+#include "cxl_pci.h"
+#include "cxl_component.h"
+
+#endif
diff --git a/include/hw/cxl/cxl_component.h b/include/hw/cxl/cxl_component.h
new file mode 100644
index 0000000000..74e9bfe1ff
--- /dev/null
+++ b/include/hw/cxl/cxl_component.h
@@ -0,0 +1,197 @@
+/*
+ * QEMU CXL Component
+ *
+ * Copyright (c) 2020 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_COMPONENT_H
+#define CXL_COMPONENT_H
+
+/* CXL 2.0 - 8.2.4 */
+#define CXL2_COMPONENT_IO_REGION_SIZE 0x1000
+#define CXL2_COMPONENT_CM_REGION_SIZE 0x1000
+#define CXL2_COMPONENT_BLOCK_SIZE 0x10000
+
+#include "qemu/compiler.h"
+#include "qemu/range.h"
+#include "qemu/typedefs.h"
+#include "hw/register.h"
+
+enum reg_type {
+    CXL2_DEVICE,
+    CXL2_TYPE3_DEVICE,
+    CXL2_LOGICAL_DEVICE,
+    CXL2_ROOT_PORT,
+    CXL2_UPSTREAM_PORT,
+    CXL2_DOWNSTREAM_PORT
+};
+
+/*
+ * Capability registers are defined at the top of the CXL.cache/mem region and
+ * are packed. For our purposes we will always define the caps in the same
+ * order.
+ * CXL 2.0 - 8.2.5 Table 142 for details.
+ */
+
+/* CXL 2.0 - 8.2.5.1 */
+REG32(CXL_CAPABILITY_HEADER, 0)
+    FIELD(CXL_CAPABILITY_HEADER, ID, 0, 16)
+    FIELD(CXL_CAPABILITY_HEADER, VERSION, 16, 4)
+    FIELD(CXL_CAPABILITY_HEADER, CACHE_MEM_VERSION, 20, 4)
+    FIELD(CXL_CAPABILITY_HEADER, ARRAY_SIZE, 24, 8)
+
+#define CXLx_CAPABILITY_HEADER(type, offset)                  \
+    REG32(CXL_##type##_CAPABILITY_HEADER, offset)             \
+        FIELD(CXL_##type##_CAPABILITY_HEADER, ID, 0, 16)      \
+        FIELD(CXL_##type##_CAPABILITY_HEADER, VERSION, 16, 4) \
+        FIELD(CXL_##type##_CAPABILITY_HEADER, PTR, 20, 12)
+CXLx_CAPABILITY_HEADER(RAS, 0x4)
+CXLx_CAPABILITY_HEADER(LINK, 0x8)
+CXLx_CAPABILITY_HEADER(HDM, 0xc)
+CXLx_CAPABILITY_HEADER(EXTSEC, 0x10)
+CXLx_CAPABILITY_HEADER(SNOOP, 0x14)
+
+/*
+ * Capability structures contain the actual registers that the CXL component
+ * implements. Some of these are specific to certain types of components, but
+ * this implementation leaves enough space regardless.
+ */
+/* 8.2.5.9 - CXL RAS Capability Structure */
+
+/* Give ample space for caps before this */
+#define CXL_RAS_REGISTERS_OFFSET 0x80
+#define CXL_RAS_REGISTERS_SIZE   0x58
+REG32(CXL_RAS_UNC_ERR_STATUS, CXL_RAS_REGISTERS_OFFSET)
+REG32(CXL_RAS_UNC_ERR_MASK, CXL_RAS_REGISTERS_OFFSET + 0x4)
+REG32(CXL_RAS_UNC_ERR_SEVERITY, CXL_RAS_REGISTERS_OFFSET + 0x8)
+REG32(CXL_RAS_COR_ERR_STATUS, CXL_RAS_REGISTERS_OFFSET + 0xc)
+REG32(CXL_RAS_COR_ERR_MASK, CXL_RAS_REGISTERS_OFFSET + 0x10)
+REG32(CXL_RAS_ERR_CAP_CTRL, CXL_RAS_REGISTERS_OFFSET + 0x14)
+/* Offset 0x18 - 0x58 reserved for RAS logs */
+
+/* 8.2.5.10 - CXL Security Capability Structure */
+#define CXL_SEC_REGISTERS_OFFSET \
+    (CXL_RAS_REGISTERS_OFFSET + CXL_RAS_REGISTERS_SIZE)
+#define CXL_SEC_REGISTERS_SIZE   0 /* We don't implement 1.1 downstream ports */
+
+/* 8.2.5.11 - CXL Link Capability Structure */
+#define CXL_LINK_REGISTERS_OFFSET \
+    (CXL_SEC_REGISTERS_OFFSET + CXL_SEC_REGISTERS_SIZE)
+#define CXL_LINK_REGISTERS_SIZE   0x38
+
+/* 8.2.5.12 - CXL HDM Decoder Capability Structure */
+#define HDM_DECODE_MAX 10 /* 8.2.5.12.1 */
+#define CXL_HDM_REGISTERS_OFFSET \
+    (CXL_LINK_REGISTERS_OFFSET + CXL_LINK_REGISTERS_SIZE)
+#define CXL_HDM_REGISTERS_SIZE (0x20 + HDM_DECODE_MAX + 10)
+#define HDM_DECODER_INIT(n)                                                    \
+  REG32(CXL_HDM_DECODER##n##_BASE_LO,                                          \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x10)                          \
+            FIELD(CXL_HDM_DECODER##n##_BASE_LO, L, 28, 4)                      \
+  REG32(CXL_HDM_DECODER##n##_BASE_HI,                                          \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x14)                          \
+  REG32(CXL_HDM_DECODER##n##_SIZE_LO,                                          \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x18)                          \
+  REG32(CXL_HDM_DECODER##n##_SIZE_HI,                                          \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x1C)                          \
+  REG32(CXL_HDM_DECODER##n##_CTRL,                                             \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x20)                          \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, IG, 0, 4)                         \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, IW, 4, 4)                         \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, LOCK_ON_COMMIT, 8, 1)             \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, COMMIT, 9, 1)                     \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, COMMITTED, 10, 1)                 \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, ERR, 11, 1)                       \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, TYPE, 12, 1)                      \
+  REG32(CXL_HDM_DECODER##n##_TARGET_LIST_LO,                                   \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x24)                          \
+  REG32(CXL_HDM_DECODER##n##_TARGET_LIST_HI,                                   \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x28)
+
+REG32(CXL_HDM_DECODER_CAPABILITY, CXL_HDM_REGISTERS_OFFSET)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, DECODER_COUNT, 0, 4)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, TARGET_COUNT, 4, 4)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, INTERLEAVE_256B, 8, 1)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, INTELEAVE_4K, 9, 1)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, POISON_ON_ERR_CAP, 10, 1)
+REG32(CXL_HDM_DECODER_GLOBAL_CONTROL, CXL_HDM_REGISTERS_OFFSET + 4)
+    FIELD(CXL_HDM_DECODER_GLOBAL_CONTROL, POISON_ON_ERR_EN, 0, 1)
+    FIELD(CXL_HDM_DECODER_GLOBAL_CONTROL, HDM_DECODER_ENABLE, 1, 1)
+
+HDM_DECODER_INIT(0);
+
+/* 8.2.5.13 - CXL Extended Security Capability Structure (Root complex only) */
+#define EXTSEC_ENTRY_MAX        256
+#define CXL_EXTSEC_REGISTERS_OFFSET \
+    (CXL_HDM_REGISTERS_OFFSET + CXL_HDM_REGISTERS_SIZE)
+#define CXL_EXTSEC_REGISTERS_SIZE   (8 * EXTSEC_ENTRY_MAX + 4)
+
+/* 8.2.5.14 - CXL IDE Capability Structure */
+#define CXL_IDE_REGISTERS_OFFSET \
+    (CXL_EXTSEC_REGISTERS_OFFSET + CXL_EXTSEC_REGISTERS_SIZE)
+#define CXL_IDE_REGISTERS_SIZE   0x20
+
+/* 8.2.5.15 - CXL Snoop Filter Capability Structure */
+#define CXL_SNOOP_REGISTERS_OFFSET \
+    (CXL_IDE_REGISTERS_OFFSET + CXL_IDE_REGISTERS_SIZE)
+#define CXL_SNOOP_REGISTERS_SIZE   0x8
+
+QEMU_BUILD_BUG_MSG((CXL_SNOOP_REGISTERS_OFFSET + CXL_SNOOP_REGISTERS_SIZE) >= 0x1000,
+                   "No space for registers");
+
+typedef struct component_registers {
+    /*
+     * Main memory region to be registered with QEMU core.
+     */
+    MemoryRegion component_registers;
+
+    /*
+     * 8.2.4 Table 141:
+     *   0x0000 - 0x0fff CXL.io registers
+     *   0x1000 - 0x1fff CXL.cache and CXL.mem
+     *   0x2000 - 0xdfff Implementation specific
+     *   0xe000 - 0xe3ff CXL ARB/MUX registers
+     *   0xe400 - 0xffff RSVD
+     */
+    uint32_t io_registers[CXL2_COMPONENT_IO_REGION_SIZE >> 2];
+    MemoryRegion io;
+
+    uint32_t cache_mem_registers[CXL2_COMPONENT_CM_REGION_SIZE >> 2];
+    MemoryRegion cache_mem;
+
+    MemoryRegion impl_specific;
+    MemoryRegion arb_mux;
+    MemoryRegion rsvd;
+
+    /* special_ops is used for any component that needs any specific handling */
+    MemoryRegionOps *special_ops;
+} ComponentRegisters;
+
+/*
+ * A CXL component represents all entities in a CXL hierarchy. This includes,
+ * host bridges, root ports, upstream/downstream switch ports, and devices
+ */
+typedef struct cxl_component {
+    ComponentRegisters crb;
+    union {
+        struct {
+            Range dvsecs[CXL20_MAX_DVSEC];
+            uint16_t dvsec_offset;
+            struct PCIDevice *pdev;
+        };
+    };
+} CXLComponentState;
+
+void cxl_component_register_block_init(Object *obj,
+                                       CXLComponentState *cxl_cstate,
+                                       const char *type);
+void cxl_component_register_init_common(uint32_t *reg_state,
+                                        enum reg_type type);
+
+void cxl_component_create_dvsec(CXLComponentState *cxl_cstate, uint16_t length,
+                                uint16_t type, uint8_t rev, uint8_t *body);
+
+#endif
diff --git a/include/hw/cxl/cxl_pci.h b/include/hw/cxl/cxl_pci.h
new file mode 100644
index 0000000000..810a244fab
--- /dev/null
+++ b/include/hw/cxl/cxl_pci.h
@@ -0,0 +1,135 @@
+/*
+ * QEMU CXL PCI interfaces
+ *
+ * Copyright (c) 2020 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_PCI_H
+#define CXL_PCI_H
+
+#include "qemu/compiler.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/pcie.h"
+
+#define CXL_VENDOR_ID 0x1e98
+
+#define PCIE_DVSEC_HEADER1_OFFSET 0x4 /* Offset from start of extend cap */
+#define PCIE_DVSEC_ID_OFFSET 0x8
+
+#define PCIE_CXL_DEVICE_DVSEC_LENGTH 0x38
+#define PCIE_CXL1_DEVICE_DVSEC_REVID 0
+#define PCIE_CXL2_DEVICE_DVSEC_REVID 1
+
+#define EXTENSIONS_PORT_DVSEC_LENGTH 0x28
+#define EXTENSIONS_PORT_DVSEC_REVID 0
+
+#define GPF_PORT_DVSEC_LENGTH 0x10
+#define GPF_PORT_DVSEC_REVID  0
+
+#define PCIE_FLEXBUS_PORT_DVSEC_LENGTH_2_0 0x14
+#define PCIE_FLEXBUS_PORT_DVSEC_REVID_2_0  1
+
+#define REG_LOC_DVSEC_LENGTH 0x24
+#define REG_LOC_DVSEC_REVID  0
+
+enum {
+    PCIE_CXL_DEVICE_DVSEC      = 0,
+    NON_CXL_FUNCTION_MAP_DVSEC = 2,
+    EXTENSIONS_PORT_DVSEC      = 3,
+    GPF_PORT_DVSEC             = 4,
+    GPF_DEVICE_DVSEC           = 5,
+    PCIE_FLEXBUS_PORT_DVSEC    = 7,
+    REG_LOC_DVSEC              = 8,
+    MLD_DVSEC                  = 9,
+    CXL20_MAX_DVSEC
+};
+
+struct dvsec_header {
+    uint32_t cap_hdr;
+    uint32_t dv_hdr1;
+    uint16_t dv_hdr2;
+} QEMU_PACKED;
+QEMU_BUILD_BUG_ON(sizeof(struct dvsec_header) != 10);
+
+/*
+ * CXL 2.0 devices must implement certain DVSEC IDs, and can [optionally]
+ * implement others.
+ *
+ * CXL 2.0 Device: 0, [2], 5, 8
+ * CXL 2.0 RP: 3, 4, 7, 8
+ * CXL 2.0 Upstream Port: [2], 7, 8
+ * CXL 2.0 Downstream Port: 3, 4, 7, 8
+ */
+
+/* CXL 2.0 - 8.1.5 (ID 0003) */
+struct cxl_dvsec_port_extensions {
+    struct dvsec_header hdr;
+    uint16_t status;
+    uint16_t control;
+    uint8_t alt_bus_base;
+    uint8_t alt_bus_limit;
+    uint16_t alt_memory_base;
+    uint16_t alt_memory_limit;
+    uint16_t alt_prefetch_base;
+    uint16_t alt_prefetch_limit;
+    uint32_t alt_prefetch_base_high;
+    uint32_t alt_prefetch_base_low;
+    uint32_t rcrb_base;
+    uint32_t rcrb_base_high;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_port_extensions) != 0x28);
+
+#define PORT_CONTROL_OFFSET          0xc
+#define PORT_CONTROL_UNMASK_SBR      1
+#define PORT_CONTROL_ALT_MEMID_EN    4
+
+/* CXL 2.0 - 8.1.6 GPF DVSEC (ID 0004) */
+struct cxl_dvsec_port_gpf {
+    struct dvsec_header hdr;
+    uint16_t rsvd;
+    uint16_t phase1_ctrl;
+    uint16_t phase2_ctrl;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_port_gpf) != 0x10);
+
+/* CXL 2.0 - 8.1.8/8.2.1.3 Flexbus DVSEC (ID 0007) */
+struct cxl_dvsec_port_flexbus {
+    struct dvsec_header hdr;
+    uint16_t cap;
+    uint16_t ctrl;
+    uint16_t status;
+    uint32_t rcvd_mod_ts_data_phase1;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_port_flexbus) != 0x14);
+
+/* CXL 2.0 - 8.1.9 Register Locator DVSEC (ID 0008) */
+struct cxl_dvsec_register_locator {
+    struct dvsec_header hdr;
+    uint16_t rsvd;
+    uint32_t reg0_base_lo;
+    uint32_t reg0_base_hi;
+    uint32_t reg1_base_lo;
+    uint32_t reg1_base_hi;
+    uint32_t reg2_base_lo;
+    uint32_t reg2_base_hi;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_register_locator) != 0x24);
+
+/* BAR Equivalence Indicator */
+#define BEI_BAR_10H 0
+#define BEI_BAR_14H 1
+#define BEI_BAR_18H 2
+#define BEI_BAR_1cH 3
+#define BEI_BAR_20H 4
+#define BEI_BAR_24H 5
+
+/* Register Block Identifier */
+#define RBI_EMPTY          0
+#define RBI_COMPONENT_REG  (1 << 8)
+#define RBI_BAR_VIRT_ACL   (2 << 8)
+#define RBI_CXL_DEVICE_REG (3 << 8)
+
+#endif
-- 
2.32.0



* [PATCH v7 02/46] hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5)
@ 2022-03-06 17:40   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A CXL 2.0 component is any entity in the CXL topology. All components
have a analogous function in PCIe. Except for the CXL host bridge, all
have a PCIe config space that is accessible via the common PCIe
mechanisms. CXL components are enumerated via DVSEC fields in the
extended PCIe header space. CXL components will minimally implement some
subset of CXL.mem and CXL.cache registers defined in 8.2.5 of the CXL
2.0 specification. Two headers and a utility library are introduced to
support the minimum functionality needed to enumerate components.

The cxl_pci header manages bits associated with PCI, specifically the
DVSEC and related fields. The cxl_component.h variant has data
structures and APIs that are useful for drivers implementing any of the
CXL 2.0 components. The library takes care of making use of the DVSEC
bits and the CXL.[mem|cache] registers. Per spec, the registers are
little endian.

None of the mechanisms required to enumerate a CXL capable hostbridge
are introduced at this point.

Note that the CXL.mem and CXL.cache registers used are always 4B wide.
It's possible in the future that this constraint will not hold.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
v7:
 * Use QEMU_PACKED / QEMU_BUILD_BUG_ON() (Alex)
 
 hw/Kconfig                     |   1 +
 hw/cxl/Kconfig                 |   3 +
 hw/cxl/cxl-component-utils.c   | 219 +++++++++++++++++++++++++++++++++
 hw/cxl/meson.build             |   4 +
 hw/meson.build                 |   1 +
 include/hw/cxl/cxl.h           |  16 +++
 include/hw/cxl/cxl_component.h | 197 +++++++++++++++++++++++++++++
 include/hw/cxl/cxl_pci.h       | 135 ++++++++++++++++++++
 8 files changed, 576 insertions(+)

diff --git a/hw/Kconfig b/hw/Kconfig
index ad20cce0a9..50e0952889 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -6,6 +6,7 @@ source audio/Kconfig
 source block/Kconfig
 source char/Kconfig
 source core/Kconfig
+source cxl/Kconfig
 source display/Kconfig
 source dma/Kconfig
 source gpio/Kconfig
diff --git a/hw/cxl/Kconfig b/hw/cxl/Kconfig
new file mode 100644
index 0000000000..8e67519b16
--- /dev/null
+++ b/hw/cxl/Kconfig
@@ -0,0 +1,3 @@
+config CXL
+    bool
+    default y if PCI_EXPRESS
diff --git a/hw/cxl/cxl-component-utils.c b/hw/cxl/cxl-component-utils.c
new file mode 100644
index 0000000000..410f8ef328
--- /dev/null
+++ b/hw/cxl/cxl-component-utils.c
@@ -0,0 +1,219 @@
+/*
+ * CXL Utility library for components
+ *
+ * Copyright(C) 2020 Intel Corporation.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "hw/pci/pci.h"
+#include "hw/cxl/cxl.h"
+
+static uint64_t cxl_cache_mem_read_reg(void *opaque, hwaddr offset,
+                                       unsigned size)
+{
+    CXLComponentState *cxl_cstate = opaque;
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+
+    if (size == 8) {
+        qemu_log_mask(LOG_UNIMP,
+                      "CXL 8 byte cache mem registers not implemented\n");
+        return 0;
+    }
+
+    if (cregs->special_ops && cregs->special_ops->read) {
+        return cregs->special_ops->read(cxl_cstate, offset, size);
+    } else {
+        return cregs->cache_mem_registers[offset / 4];
+    }
+}
+
+static void cxl_cache_mem_write_reg(void *opaque, hwaddr offset, uint64_t value,
+                                    unsigned size)
+{
+    CXLComponentState *cxl_cstate = opaque;
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+
+    if (size == 8) {
+        qemu_log_mask(LOG_UNIMP,
+                      "CXL 8 byte cache mem registers not implemented\n");
+        return;
+    }
+    if (cregs->special_ops && cregs->special_ops->write) {
+        cregs->special_ops->write(cxl_cstate, offset, value, size);
+    } else {
+        cregs->cache_mem_registers[offset / 4] = value;
+    }
+}
+
+/*
+ * 8.2.3
+ *   The access restrictions specified in Section 8.2.2 also apply to CXL 2.0
+ *   Component Registers.
+ *
+ * 8.2.2
+ *   • A 32 bit register shall be accessed as a 4 Bytes quantity. Partial
+ *   reads are not permitted.
+ *   • A 64 bit register shall be accessed as a 8 Bytes quantity. Partial
+ *   reads are not permitted.
+ *
+ * As of the spec defined today, only 4 byte registers exist.
+ */
+static const MemoryRegionOps cache_mem_ops = {
+    .read = cxl_cache_mem_read_reg,
+    .write = cxl_cache_mem_write_reg,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 8,
+        .unaligned = false,
+    },
+    .impl = {
+        .min_access_size = 4,
+        .max_access_size = 8,
+    },
+};
+
+void cxl_component_register_block_init(Object *obj,
+                                       CXLComponentState *cxl_cstate,
+                                       const char *type)
+{
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+
+    memory_region_init(&cregs->component_registers, obj, type,
+                       CXL2_COMPONENT_BLOCK_SIZE);
+
+    /* io registers control the link, which we don't care about in QEMU */
+    memory_region_init_io(&cregs->io, obj, NULL, cregs, ".io",
+                          CXL2_COMPONENT_IO_REGION_SIZE);
+    memory_region_init_io(&cregs->cache_mem, obj, &cache_mem_ops, cregs,
+                          ".cache_mem", CXL2_COMPONENT_CM_REGION_SIZE);
+
+    memory_region_add_subregion(&cregs->component_registers, 0, &cregs->io);
+    memory_region_add_subregion(&cregs->component_registers,
+                                CXL2_COMPONENT_IO_REGION_SIZE,
+                                &cregs->cache_mem);
+}
+
+static void ras_init_common(uint32_t *reg_state)
+{
+    reg_state[R_CXL_RAS_UNC_ERR_STATUS] = 0;
+    reg_state[R_CXL_RAS_UNC_ERR_MASK] = 0x1cfff;
+    reg_state[R_CXL_RAS_UNC_ERR_SEVERITY] = 0x1cfff;
+    reg_state[R_CXL_RAS_COR_ERR_STATUS] = 0;
+    reg_state[R_CXL_RAS_COR_ERR_MASK] = 0x3f;
+
+    /* CXL switches and devices must set this */
+    reg_state[R_CXL_RAS_ERR_CAP_CTRL] = 0;
+}
+
+static void hdm_init_common(uint32_t *reg_state)
+{
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_CAPABILITY, DECODER_COUNT, 0);
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_CAPABILITY, TARGET_COUNT, 1);
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_GLOBAL_CONTROL,
+                     HDM_DECODER_ENABLE, 0);
+}
+
+void cxl_component_register_init_common(uint32_t *reg_state, enum reg_type type)
+{
+    int caps = 0;
+    switch (type) {
+    case CXL2_DOWNSTREAM_PORT:
+    case CXL2_DEVICE:
+        /* CAP, RAS, Link */
+        caps = 2;
+        break;
+    case CXL2_UPSTREAM_PORT:
+    case CXL2_TYPE3_DEVICE:
+    case CXL2_LOGICAL_DEVICE:
+        /* + HDM */
+        caps = 3;
+        break;
+    case CXL2_ROOT_PORT:
+        /* + Extended Security, + Snoop */
+        caps = 5;
+        break;
+    default:
+        abort();
+    }
+
+    memset(reg_state, 0, CXL2_COMPONENT_CM_REGION_SIZE);
+
+    /* CXL Capability Header Register */
+    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, ID, 1);
+    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, VERSION, 1);
+    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, CACHE_MEM_VERSION, 1);
+    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, ARRAY_SIZE, caps);
+
+#define init_cap_reg(reg, id, version)                                        \
+    QEMU_BUILD_BUG_ON(CXL_##reg##_REGISTERS_OFFSET == 0);                     \
+    do {                                                                      \
+        int which = R_CXL_##reg##_CAPABILITY_HEADER;                          \
+        reg_state[which] = FIELD_DP32(reg_state[which],                       \
+                                      CXL_##reg##_CAPABILITY_HEADER, ID, id); \
+        reg_state[which] =                                                    \
+            FIELD_DP32(reg_state[which], CXL_##reg##_CAPABILITY_HEADER,       \
+                       VERSION, version);                                     \
+        reg_state[which] =                                                    \
+            FIELD_DP32(reg_state[which], CXL_##reg##_CAPABILITY_HEADER, PTR,  \
+                       CXL_##reg##_REGISTERS_OFFSET);                         \
+    } while (0)
+
+    init_cap_reg(RAS, 2, 1);
+    ras_init_common(reg_state);
+
+    init_cap_reg(LINK, 4, 2);
+
+    if (caps < 3) {
+        return;
+    }
+
+    init_cap_reg(HDM, 5, 1);
+    hdm_init_common(reg_state);
+
+    if (caps < 5) {
+        return;
+    }
+
+    init_cap_reg(EXTSEC, 6, 1);
+    init_cap_reg(SNOOP, 8, 1);
+
+#undef init_cap_reg
+}
+
+/*
+ * Helper to create a DVSEC header for a CXL entity. The caller is responsible
+ * for tracking the valid offset.
+ *
+ * This function will build the DVSEC header on behalf of the caller and then
+ * copy in the remaining data for the vendor specific bits.
+ */
+void cxl_component_create_dvsec(CXLComponentState *cxl, uint16_t length,
+                                uint16_t type, uint8_t rev, uint8_t *body)
+{
+    PCIDevice *pdev = cxl->pdev;
+    uint16_t offset = cxl->dvsec_offset;
+
+    assert(offset >= PCI_CFG_SPACE_SIZE &&
+           ((offset + length) < PCI_CFG_SPACE_EXP_SIZE));
+    assert((length & 0xf000) == 0);
+    assert((rev & ~0xf) == 0);
+
+    /* Create the DVSEC in the MCFG space */
+    pcie_add_capability(pdev, PCI_EXT_CAP_ID_DVSEC, 1, offset, length);
+    pci_set_long(pdev->config + offset + PCIE_DVSEC_HEADER1_OFFSET,
+                 (length << 20) | (rev << 16) | CXL_VENDOR_ID);
+    pci_set_word(pdev->config + offset + PCIE_DVSEC_ID_OFFSET, type);
+    memcpy(pdev->config + offset + sizeof(struct dvsec_header),
+           body + sizeof(struct dvsec_header),
+           length - sizeof(struct dvsec_header));
+
+    /* Update state for future DVSEC additions */
+    range_init_nofail(&cxl->dvsecs[type], cxl->dvsec_offset, length);
+    cxl->dvsec_offset += length;
+}
diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build
new file mode 100644
index 0000000000..3231b5de1e
--- /dev/null
+++ b/hw/cxl/meson.build
@@ -0,0 +1,4 @@
+softmmu_ss.add(when: 'CONFIG_CXL',
+               if_true: files(
+                   'cxl-component-utils.c',
+               ))
diff --git a/hw/meson.build b/hw/meson.build
index b3366c888e..9992c5101e 100644
--- a/hw/meson.build
+++ b/hw/meson.build
@@ -6,6 +6,7 @@ subdir('block')
 subdir('char')
 subdir('core')
 subdir('cpu')
+subdir('cxl')
 subdir('display')
 subdir('dma')
 subdir('gpio')
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
new file mode 100644
index 0000000000..8c738c7a2b
--- /dev/null
+++ b/include/hw/cxl/cxl.h
@@ -0,0 +1,16 @@
+/*
+ * QEMU CXL Support
+ *
+ * Copyright (c) 2020 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_H
+#define CXL_H
+
+#include "cxl_pci.h"
+#include "cxl_component.h"
+
+#endif
diff --git a/include/hw/cxl/cxl_component.h b/include/hw/cxl/cxl_component.h
new file mode 100644
index 0000000000..74e9bfe1ff
--- /dev/null
+++ b/include/hw/cxl/cxl_component.h
@@ -0,0 +1,197 @@
+/*
+ * QEMU CXL Component
+ *
+ * Copyright (c) 2020 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_COMPONENT_H
+#define CXL_COMPONENT_H
+
+/* CXL 2.0 - 8.2.4 */
+#define CXL2_COMPONENT_IO_REGION_SIZE 0x1000
+#define CXL2_COMPONENT_CM_REGION_SIZE 0x1000
+#define CXL2_COMPONENT_BLOCK_SIZE 0x10000
+
+#include "qemu/compiler.h"
+#include "qemu/range.h"
+#include "qemu/typedefs.h"
+#include "hw/register.h"
+
+enum reg_type {
+    CXL2_DEVICE,
+    CXL2_TYPE3_DEVICE,
+    CXL2_LOGICAL_DEVICE,
+    CXL2_ROOT_PORT,
+    CXL2_UPSTREAM_PORT,
+    CXL2_DOWNSTREAM_PORT
+};
+
+/*
+ * Capability registers are defined at the top of the CXL.cache/mem region and
+ * are packed. For our purposes we will always define the caps in the same
+ * order.
+ * CXL 2.0 - 8.2.5 Table 142 for details.
+ */
+
+/* CXL 2.0 - 8.2.5.1 */
+REG32(CXL_CAPABILITY_HEADER, 0)
+    FIELD(CXL_CAPABILITY_HEADER, ID, 0, 16)
+    FIELD(CXL_CAPABILITY_HEADER, VERSION, 16, 4)
+    FIELD(CXL_CAPABILITY_HEADER, CACHE_MEM_VERSION, 20, 4)
+    FIELD(CXL_CAPABILITY_HEADER, ARRAY_SIZE, 24, 8)
+
+#define CXLx_CAPABILITY_HEADER(type, offset)                  \
+    REG32(CXL_##type##_CAPABILITY_HEADER, offset)             \
+        FIELD(CXL_##type##_CAPABILITY_HEADER, ID, 0, 16)      \
+        FIELD(CXL_##type##_CAPABILITY_HEADER, VERSION, 16, 4) \
+        FIELD(CXL_##type##_CAPABILITY_HEADER, PTR, 20, 12)
+CXLx_CAPABILITY_HEADER(RAS, 0x4)
+CXLx_CAPABILITY_HEADER(LINK, 0x8)
+CXLx_CAPABILITY_HEADER(HDM, 0xc)
+CXLx_CAPABILITY_HEADER(EXTSEC, 0x10)
+CXLx_CAPABILITY_HEADER(SNOOP, 0x14)
+
+/*
+ * Capability structures contain the actual registers that the CXL component
+ * implements. Some of these are specific to certain types of components, but
+ * this implementation leaves enough space regardless.
+ */
+/* 8.2.5.9 - CXL RAS Capability Structure */
+
+/* Give ample space for caps before this */
+#define CXL_RAS_REGISTERS_OFFSET 0x80
+#define CXL_RAS_REGISTERS_SIZE   0x58
+REG32(CXL_RAS_UNC_ERR_STATUS, CXL_RAS_REGISTERS_OFFSET)
+REG32(CXL_RAS_UNC_ERR_MASK, CXL_RAS_REGISTERS_OFFSET + 0x4)
+REG32(CXL_RAS_UNC_ERR_SEVERITY, CXL_RAS_REGISTERS_OFFSET + 0x8)
+REG32(CXL_RAS_COR_ERR_STATUS, CXL_RAS_REGISTERS_OFFSET + 0xc)
+REG32(CXL_RAS_COR_ERR_MASK, CXL_RAS_REGISTERS_OFFSET + 0x10)
+REG32(CXL_RAS_ERR_CAP_CTRL, CXL_RAS_REGISTERS_OFFSET + 0x14)
+/* Offset 0x18 - 0x58 reserved for RAS logs */
+
+/* 8.2.5.10 - CXL Security Capability Structure */
+#define CXL_SEC_REGISTERS_OFFSET \
+    (CXL_RAS_REGISTERS_OFFSET + CXL_RAS_REGISTERS_SIZE)
+#define CXL_SEC_REGISTERS_SIZE   0 /* We don't implement 1.1 downstream ports */
+
+/* 8.2.5.11 - CXL Link Capability Structure */
+#define CXL_LINK_REGISTERS_OFFSET \
+    (CXL_SEC_REGISTERS_OFFSET + CXL_SEC_REGISTERS_SIZE)
+#define CXL_LINK_REGISTERS_SIZE   0x38
+
+/* 8.2.5.12 - CXL HDM Decoder Capability Structure */
+#define HDM_DECODE_MAX 10 /* 8.2.5.12.1 */
+#define CXL_HDM_REGISTERS_OFFSET \
+    (CXL_LINK_REGISTERS_OFFSET + CXL_LINK_REGISTERS_SIZE)
+#define CXL_HDM_REGISTERS_SIZE (0x20 + HDM_DECODE_MAX + 10)
+#define HDM_DECODER_INIT(n)                                                    \
+  REG32(CXL_HDM_DECODER##n##_BASE_LO,                                          \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x10)                          \
+            FIELD(CXL_HDM_DECODER##n##_BASE_LO, L, 28, 4)                      \
+  REG32(CXL_HDM_DECODER##n##_BASE_HI,                                          \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x14)                          \
+  REG32(CXL_HDM_DECODER##n##_SIZE_LO,                                          \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x18)                          \
+  REG32(CXL_HDM_DECODER##n##_SIZE_HI,                                          \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x1C)                          \
+  REG32(CXL_HDM_DECODER##n##_CTRL,                                             \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x20)                          \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, IG, 0, 4)                         \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, IW, 4, 4)                         \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, LOCK_ON_COMMIT, 8, 1)             \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, COMMIT, 9, 1)                     \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, COMMITTED, 10, 1)                 \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, ERR, 11, 1)                       \
+            FIELD(CXL_HDM_DECODER##n##_CTRL, TYPE, 12, 1)                      \
+  REG32(CXL_HDM_DECODER##n##_TARGET_LIST_LO,                                   \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x24)                          \
+  REG32(CXL_HDM_DECODER##n##_TARGET_LIST_HI,                                   \
+        CXL_HDM_REGISTERS_OFFSET + (0x20 * n) + 0x28)
+
+REG32(CXL_HDM_DECODER_CAPABILITY, CXL_HDM_REGISTERS_OFFSET)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, DECODER_COUNT, 0, 4)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, TARGET_COUNT, 4, 4)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, INTERLEAVE_256B, 8, 1)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, INTERLEAVE_4K, 9, 1)
+    FIELD(CXL_HDM_DECODER_CAPABILITY, POISON_ON_ERR_CAP, 10, 1)
+REG32(CXL_HDM_DECODER_GLOBAL_CONTROL, CXL_HDM_REGISTERS_OFFSET + 4)
+    FIELD(CXL_HDM_DECODER_GLOBAL_CONTROL, POISON_ON_ERR_EN, 0, 1)
+    FIELD(CXL_HDM_DECODER_GLOBAL_CONTROL, HDM_DECODER_ENABLE, 1, 1)
+
+HDM_DECODER_INIT(0);
+
+/* 8.2.5.13 - CXL Extended Security Capability Structure (Root complex only) */
+#define EXTSEC_ENTRY_MAX        256
+#define CXL_EXTSEC_REGISTERS_OFFSET \
+    (CXL_HDM_REGISTERS_OFFSET + CXL_HDM_REGISTERS_SIZE)
+#define CXL_EXTSEC_REGISTERS_SIZE   (8 * EXTSEC_ENTRY_MAX + 4)
+
+/* 8.2.5.14 - CXL IDE Capability Structure */
+#define CXL_IDE_REGISTERS_OFFSET \
+    (CXL_EXTSEC_REGISTERS_OFFSET + CXL_EXTSEC_REGISTERS_SIZE)
+#define CXL_IDE_REGISTERS_SIZE   0x20
+
+/* 8.2.5.15 - CXL Snoop Filter Capability Structure */
+#define CXL_SNOOP_REGISTERS_OFFSET \
+    (CXL_IDE_REGISTERS_OFFSET + CXL_IDE_REGISTERS_SIZE)
+#define CXL_SNOOP_REGISTERS_SIZE   0x8
+
+QEMU_BUILD_BUG_MSG((CXL_SNOOP_REGISTERS_OFFSET + CXL_SNOOP_REGISTERS_SIZE) >= 0x1000,
+                   "No space for registers");
+
+typedef struct component_registers {
+    /*
+     * Main memory region to be registered with QEMU core.
+     */
+    MemoryRegion component_registers;
+
+    /*
+     * 8.2.4 Table 141:
+     *   0x0000 - 0x0fff CXL.io registers
+     *   0x1000 - 0x1fff CXL.cache and CXL.mem
+     *   0x2000 - 0xdfff Implementation specific
+     *   0xe000 - 0xe3ff CXL ARB/MUX registers
+     *   0xe400 - 0xffff RSVD
+     */
+    uint32_t io_registers[CXL2_COMPONENT_IO_REGION_SIZE >> 2];
+    MemoryRegion io;
+
+    uint32_t cache_mem_registers[CXL2_COMPONENT_CM_REGION_SIZE >> 2];
+    MemoryRegion cache_mem;
+
+    MemoryRegion impl_specific;
+    MemoryRegion arb_mux;
+    MemoryRegion rsvd;
+
+    /* special_ops is used for any component that needs any specific handling */
+    MemoryRegionOps *special_ops;
+} ComponentRegisters;
+
+/*
+ * A CXL component represents all entities in a CXL hierarchy. This includes
+ * host bridges, root ports, upstream/downstream switch ports, and devices.
+ */
+typedef struct cxl_component {
+    ComponentRegisters crb;
+    union {
+        struct {
+            Range dvsecs[CXL20_MAX_DVSEC];
+            uint16_t dvsec_offset;
+            struct PCIDevice *pdev;
+        };
+    };
+} CXLComponentState;
+
+void cxl_component_register_block_init(Object *obj,
+                                       CXLComponentState *cxl_cstate,
+                                       const char *type);
+void cxl_component_register_init_common(uint32_t *reg_state,
+                                        enum reg_type type);
+
+void cxl_component_create_dvsec(CXLComponentState *cxl_cstate, uint16_t length,
+                                uint16_t type, uint8_t rev, uint8_t *body);
+
+#endif
diff --git a/include/hw/cxl/cxl_pci.h b/include/hw/cxl/cxl_pci.h
new file mode 100644
index 0000000000..810a244fab
--- /dev/null
+++ b/include/hw/cxl/cxl_pci.h
@@ -0,0 +1,135 @@
+/*
+ * QEMU CXL PCI interfaces
+ *
+ * Copyright (c) 2020 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_PCI_H
+#define CXL_PCI_H
+
+#include "qemu/compiler.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/pcie.h"
+
+#define CXL_VENDOR_ID 0x1e98
+
+#define PCIE_DVSEC_HEADER1_OFFSET 0x4 /* Offset from start of extended cap */
+#define PCIE_DVSEC_ID_OFFSET 0x8
+
+#define PCIE_CXL_DEVICE_DVSEC_LENGTH 0x38
+#define PCIE_CXL1_DEVICE_DVSEC_REVID 0
+#define PCIE_CXL2_DEVICE_DVSEC_REVID 1
+
+#define EXTENSIONS_PORT_DVSEC_LENGTH 0x28
+#define EXTENSIONS_PORT_DVSEC_REVID 0
+
+#define GPF_PORT_DVSEC_LENGTH 0x10
+#define GPF_PORT_DVSEC_REVID  0
+
+#define PCIE_FLEXBUS_PORT_DVSEC_LENGTH_2_0 0x14
+#define PCIE_FLEXBUS_PORT_DVSEC_REVID_2_0  1
+
+#define REG_LOC_DVSEC_LENGTH 0x24
+#define REG_LOC_DVSEC_REVID  0
+
+enum {
+    PCIE_CXL_DEVICE_DVSEC      = 0,
+    NON_CXL_FUNCTION_MAP_DVSEC = 2,
+    EXTENSIONS_PORT_DVSEC      = 3,
+    GPF_PORT_DVSEC             = 4,
+    GPF_DEVICE_DVSEC           = 5,
+    PCIE_FLEXBUS_PORT_DVSEC    = 7,
+    REG_LOC_DVSEC              = 8,
+    MLD_DVSEC                  = 9,
+    CXL20_MAX_DVSEC
+};
+
+struct dvsec_header {
+    uint32_t cap_hdr;
+    uint32_t dv_hdr1;
+    uint16_t dv_hdr2;
+} QEMU_PACKED;
+QEMU_BUILD_BUG_ON(sizeof(struct dvsec_header) != 10);
+
+/*
+ * CXL 2.0 devices must implement certain DVSEC IDs, and can [optionally]
+ * implement others.
+ *
+ * CXL 2.0 Device: 0, [2], 5, 8
+ * CXL 2.0 RP: 3, 4, 7, 8
+ * CXL 2.0 Upstream Port: [2], 7, 8
+ * CXL 2.0 Downstream Port: 3, 4, 7, 8
+ */
+
+/* CXL 2.0 - 8.1.5 (ID 0003) */
+struct cxl_dvsec_port_extensions {
+    struct dvsec_header hdr;
+    uint16_t status;
+    uint16_t control;
+    uint8_t alt_bus_base;
+    uint8_t alt_bus_limit;
+    uint16_t alt_memory_base;
+    uint16_t alt_memory_limit;
+    uint16_t alt_prefetch_base;
+    uint16_t alt_prefetch_limit;
+    uint32_t alt_prefetch_base_high;
+    uint32_t alt_prefetch_base_low;
+    uint32_t rcrb_base;
+    uint32_t rcrb_base_high;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_port_extensions) != 0x28);
+
+#define PORT_CONTROL_OFFSET          0xc
+#define PORT_CONTROL_UNMASK_SBR      1
+#define PORT_CONTROL_ALT_MEMID_EN    4
+
+/* CXL 2.0 - 8.1.6 GPF DVSEC (ID 0004) */
+struct cxl_dvsec_port_gpf {
+    struct dvsec_header hdr;
+    uint16_t rsvd;
+    uint16_t phase1_ctrl;
+    uint16_t phase2_ctrl;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_port_gpf) != 0x10);
+
+/* CXL 2.0 - 8.1.8/8.2.1.3 Flexbus DVSEC (ID 0007) */
+struct cxl_dvsec_port_flexbus {
+    struct dvsec_header hdr;
+    uint16_t cap;
+    uint16_t ctrl;
+    uint16_t status;
+    uint32_t rcvd_mod_ts_data_phase1;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_port_flexbus) != 0x14);
+
+/* CXL 2.0 - 8.1.9 Register Locator DVSEC (ID 0008) */
+struct cxl_dvsec_register_locator {
+    struct dvsec_header hdr;
+    uint16_t rsvd;
+    uint32_t reg0_base_lo;
+    uint32_t reg0_base_hi;
+    uint32_t reg1_base_lo;
+    uint32_t reg1_base_hi;
+    uint32_t reg2_base_lo;
+    uint32_t reg2_base_hi;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_register_locator) != 0x24);
+
+/* BAR Equivalence Indicator */
+#define BEI_BAR_10H 0
+#define BEI_BAR_14H 1
+#define BEI_BAR_18H 2
+#define BEI_BAR_1cH 3
+#define BEI_BAR_20H 4
+#define BEI_BAR_24H 5
+
+/* Register Block Identifier */
+#define RBI_EMPTY          0
+#define RBI_COMPONENT_REG  (1 << 8)
+#define RBI_BAR_VIRT_ACL   (2 << 8)
+#define RBI_CXL_DEVICE_REG (3 << 8)
+
+#endif
-- 
2.32.0



^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 03/46] MAINTAINERS: Add entry for Compute Express Link Emulation
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:40   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

The CXL emulation will be jointly maintained by Ben Widawsky
and Jonathan Cameron.  Broken out as a separate patch
to improve visibility.

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 68adaac373..4fc39c02b0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2535,6 +2535,13 @@ F: qapi/block*.json
 F: qapi/transaction.json
 T: git https://repo.or.cz/qemu/armbru.git block-next
 
+Compute Express Link
+M: Ben Widawsky <ben.widawsky@intel.com>
+M: Jonathan Cameron <jonathan.cameron@huawei.com>
+S: Supported
+F: hw/cxl/
+F: include/hw/cxl/
+
 Dirty Bitmaps
 M: Eric Blake <eblake@redhat.com>
 M: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 04/46] hw/cxl/device: Introduce a CXL device (8.2.8)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:40   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A CXL device is a type of CXL component. Conceptually, a CXL device
would be a leaf node in a CXL topology. From an emulation perspective,
CXL devices are the most complex and so the actual implementation is
reserved for discrete commits.

This new device type is specifically catered towards the eventual
implementation of a Type3 CXL.mem device, 8.2.8.5 in the CXL 2.0
specification.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/hw/cxl/cxl.h        |   1 +
 include/hw/cxl/cxl_device.h | 165 ++++++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+)

diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 8c738c7a2b..b9d1ac3fad 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -12,5 +12,6 @@
 
 #include "cxl_pci.h"
 #include "cxl_component.h"
+#include "cxl_device.h"
 
 #endif
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
new file mode 100644
index 0000000000..b2416e45bf
--- /dev/null
+++ b/include/hw/cxl/cxl_device.h
@@ -0,0 +1,165 @@
+/*
+ * QEMU CXL Devices
+ *
+ * Copyright (c) 2020 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_DEVICE_H
+#define CXL_DEVICE_H
+
+#include "hw/register.h"
+
+/*
+ * The following is how a CXL device's MMIO space is laid out. The only
+ * requirement from the spec is that the capabilities array and the capability
+ * headers start at offset 0 and are contiguously packed. The headers themselves
+ * provide offsets to the register fields. For this emulation, registers will
+ * start at offset 0x80 (m == 0x80). No secondary mailbox is implemented which
+ * means that n = m + sizeof(mailbox registers) + sizeof(device registers).
+ *
+ * This is roughly described in 8.2.8 Figure 138 of the CXL 2.0 spec.
+ *
+ *                       +---------------------------------+
+ *                       |                                 |
+ *                       |    Memory Device Registers      |
+ *                       |                                 |
+ * n + PAYLOAD_SIZE_MAX  -----------------------------------
+ *                  ^    |                                 |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |         Mailbox Payload         |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    -----------------------------------
+ *                  |    |       Mailbox Registers         |
+ *                  |    |                                 |
+ *                  n    -----------------------------------
+ *                  ^    |                                 |
+ *                  |    |        Device Registers         |
+ *                  |    |                                 |
+ *                  m    ---------------------------------->
+ *                  ^    |  Memory Device Capability Header|
+ *                  |    -----------------------------------
+ *                  |    |     Mailbox Capability Header   |
+ *                  |    -----------------------------------
+ *                  |    |     Device Capability Header    |
+ *                  |    -----------------------------------
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |      Device Cap Array[0..n]     |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                       |                                 |
+ *                  0    +---------------------------------+
+ *
+ */
+
+#define CXL_DEVICE_CAP_HDR1_OFFSET 0x10 /* Figure 138 */
+#define CXL_DEVICE_CAP_REG_SIZE 0x10 /* 8.2.8.2 */
+#define CXL_DEVICE_CAPS_MAX 4 /* 8.2.8.2.1 + 8.2.8.5 */
+
+#define CXL_DEVICE_REGISTERS_OFFSET 0x80 /* Read comment above */
+#define CXL_DEVICE_REGISTERS_LENGTH 0x8 /* 8.2.8.3.1 */
+
+#define CXL_MAILBOX_REGISTERS_OFFSET \
+    (CXL_DEVICE_REGISTERS_OFFSET + CXL_DEVICE_REGISTERS_LENGTH)
+#define CXL_MAILBOX_REGISTERS_SIZE 0x20 /* 8.2.8.4, Figure 139 */
+#define CXL_MAILBOX_PAYLOAD_SHIFT 11
+#define CXL_MAILBOX_MAX_PAYLOAD_SIZE (1 << CXL_MAILBOX_PAYLOAD_SHIFT)
+#define CXL_MAILBOX_REGISTERS_LENGTH \
+    (CXL_MAILBOX_REGISTERS_SIZE + CXL_MAILBOX_MAX_PAYLOAD_SIZE)
+
+typedef struct cxl_device_state {
+    MemoryRegion device_registers;
+
+    /* mmio for device capabilities array - 8.2.8.2 */
+    MemoryRegion device;
+    MemoryRegion caps;
+
+    /* mmio for the mailbox registers 8.2.8.4 */
+    MemoryRegion mailbox;
+
+    /* memory region for persistent memory, HDM */
+    uint64_t pmem_size;
+} CXLDeviceState;
+
+/* Initialize the register block for a device */
+void cxl_device_register_block_init(Object *obj, CXLDeviceState *dev);
+
+/* Set up default values for the register block */
+void cxl_device_register_init_common(CXLDeviceState *dev);
+
+/*
+ * CXL 2.0 - 8.2.8.1 including errata F4
+ * Documented as a 128 bit register, but 64 bit accesses and the second
+ * 64 bits are currently reserved.
+ */
+REG64(CXL_DEV_CAP_ARRAY, 0) /* Documented as 128 bit register but 64 bit accesses */
+    FIELD(CXL_DEV_CAP_ARRAY, CAP_ID, 0, 16)
+    FIELD(CXL_DEV_CAP_ARRAY, CAP_VERSION, 16, 8)
+    FIELD(CXL_DEV_CAP_ARRAY, CAP_COUNT, 32, 16)
+
+/*
+ * Helper macro to initialize capability headers for CXL devices.
+ *
+ * In 8.2.8.2, this is listed as a 128b register, but 8.2.8 says:
+ * > No registers defined in Section 8.2.8 are larger than 64-bits wide so that
+ * > is the maximum access size allowed for these registers. If this rule is not
+ * > followed, the behavior is undefined
+ *
+ * CXL 2.0 Errata F4 states further that the layouts in the specification are
+ * shown as greater than 128 bits, but implementations are expected to
+ * use any size of access up to 64 bits.
+ *
+ * Here we've chosen to make it 4 dwords. The spec allows any power-of-2
+ * multiple access size up to 64 bits to be used for these registers.
+ */
+#define CXL_DEVICE_CAPABILITY_HEADER_REGISTER(n, offset)  \
+    REG32(CXL_DEV_##n##_CAP_HDR0, offset)                 \
+        FIELD(CXL_DEV_##n##_CAP_HDR0, CAP_ID, 0, 16)      \
+        FIELD(CXL_DEV_##n##_CAP_HDR0, CAP_VERSION, 16, 8) \
+    REG32(CXL_DEV_##n##_CAP_HDR1, offset + 4)             \
+        FIELD(CXL_DEV_##n##_CAP_HDR1, CAP_OFFSET, 0, 32)  \
+    REG32(CXL_DEV_##n##_CAP_HDR2, offset + 8)             \
+        FIELD(CXL_DEV_##n##_CAP_HDR2, CAP_LENGTH, 0, 32)
+
+CXL_DEVICE_CAPABILITY_HEADER_REGISTER(DEVICE, CXL_DEVICE_CAP_HDR1_OFFSET)
+CXL_DEVICE_CAPABILITY_HEADER_REGISTER(MAILBOX, CXL_DEVICE_CAP_HDR1_OFFSET + \
+                                               CXL_DEVICE_CAP_REG_SIZE)
+
+REG32(CXL_DEV_MAILBOX_CAP, 0)
+    FIELD(CXL_DEV_MAILBOX_CAP, PAYLOAD_SIZE, 0, 5)
+    FIELD(CXL_DEV_MAILBOX_CAP, INT_CAP, 5, 1)
+    FIELD(CXL_DEV_MAILBOX_CAP, BG_INT_CAP, 6, 1)
+    FIELD(CXL_DEV_MAILBOX_CAP, MSI_N, 7, 4)
+
+REG32(CXL_DEV_MAILBOX_CTRL, 4)
+    FIELD(CXL_DEV_MAILBOX_CTRL, DOORBELL, 0, 1)
+    FIELD(CXL_DEV_MAILBOX_CTRL, INT_EN, 1, 1)
+    FIELD(CXL_DEV_MAILBOX_CTRL, BG_INT_EN, 2, 1)
+
+REG64(CXL_DEV_MAILBOX_CMD, 8)
+    FIELD(CXL_DEV_MAILBOX_CMD, COMMAND, 0, 8)
+    FIELD(CXL_DEV_MAILBOX_CMD, COMMAND_SET, 8, 8)
+    FIELD(CXL_DEV_MAILBOX_CMD, LENGTH, 16, 20)
+
+REG64(CXL_DEV_MAILBOX_STS, 0x10)
+    FIELD(CXL_DEV_MAILBOX_STS, BG_OP, 0, 1)
+    FIELD(CXL_DEV_MAILBOX_STS, ERRNO, 32, 16)
+    FIELD(CXL_DEV_MAILBOX_STS, VENDOR_ERRNO, 48, 16)
+
+REG64(CXL_DEV_BG_CMD_STS, 0x18)
+    FIELD(CXL_DEV_BG_CMD_STS, BG, 0, 16)
+    FIELD(CXL_DEV_BG_CMD_STS, DONE, 16, 7)
+    FIELD(CXL_DEV_BG_CMD_STS, ERRNO, 32, 16)
+    FIELD(CXL_DEV_BG_CMD_STS, VENDOR_ERRNO, 48, 16)
+
+REG32(CXL_DEV_CMD_PAYLOAD, 0x20)
+
+#endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 04/46] hw/cxl/device: Introduce a CXL device (8.2.8)
@ 2022-03-06 17:40   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A CXL device is a type of CXL component. Conceptually, a CXL device
would be a leaf node in a CXL topology. From an emulation perspective,
CXL devices are the most complex and so the actual implementation is
reserved for discrete commits.

This new device type is specifically catered towards the eventual
implementation of a Type3 CXL.mem device, 8.2.8.5 in the CXL 2.0
specification.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/hw/cxl/cxl.h        |   1 +
 include/hw/cxl/cxl_device.h | 165 ++++++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+)

diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 8c738c7a2b..b9d1ac3fad 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -12,5 +12,6 @@
 
 #include "cxl_pci.h"
 #include "cxl_component.h"
+#include "cxl_device.h"
 
 #endif
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
new file mode 100644
index 0000000000..b2416e45bf
--- /dev/null
+++ b/include/hw/cxl/cxl_device.h
@@ -0,0 +1,165 @@
+/*
+ * QEMU CXL Devices
+ *
+ * Copyright (c) 2020 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_DEVICE_H
+#define CXL_DEVICE_H
+
+#include "hw/register.h"
+
+/*
+ * The following is how a CXL device's MMIO space is laid out. The only
+ * requirement from the spec is that the capabilities array and the capability
+ * headers start at offset 0 and are contiguously packed. The headers themselves
+ * provide offsets to the register fields. For this emulation, registers will
+ * start at offset 0x80 (m == 0x80). No secondary mailbox is implemented which
+ * means that n = m + sizeof(device registers); the mailbox registers follow.
+ *
+ * This is roughly described in 8.2.8 Figure 138 of the CXL 2.0 spec.
+ *
+ *                       +---------------------------------+
+ *                       |                                 |
+ *                       |    Memory Device Registers      |
+ *                       |                                 |
+ * n + PAYLOAD_SIZE_MAX  -----------------------------------
+ *                  ^    |                                 |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |         Mailbox Payload         |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    -----------------------------------
+ *                  |    |       Mailbox Registers         |
+ *                  |    |                                 |
+ *                  n    -----------------------------------
+ *                  ^    |                                 |
+ *                  |    |        Device Registers         |
+ *                  |    |                                 |
+ *                  m    -----------------------------------
+ *                  ^    |  Memory Device Capability Header|
+ *                  |    -----------------------------------
+ *                  |    |     Mailbox Capability Header   |
+ *                  |    -----------------------------------
+ *                  |    |     Device Capability Header    |
+ *                  |    -----------------------------------
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                  |    |      Device Cap Array[0..n]     |
+ *                  |    |                                 |
+ *                  |    |                                 |
+ *                       |                                 |
+ *                  0    +---------------------------------+
+ *
+ */
+
+#define CXL_DEVICE_CAP_HDR1_OFFSET 0x10 /* Figure 138 */
+#define CXL_DEVICE_CAP_REG_SIZE 0x10 /* 8.2.8.2 */
+#define CXL_DEVICE_CAPS_MAX 4 /* 8.2.8.2.1 + 8.2.8.5 */
+
+#define CXL_DEVICE_REGISTERS_OFFSET 0x80 /* Read comment above */
+#define CXL_DEVICE_REGISTERS_LENGTH 0x8 /* 8.2.8.3.1 */
+
+#define CXL_MAILBOX_REGISTERS_OFFSET \
+    (CXL_DEVICE_REGISTERS_OFFSET + CXL_DEVICE_REGISTERS_LENGTH)
+#define CXL_MAILBOX_REGISTERS_SIZE 0x20 /* 8.2.8.4, Figure 139 */
+#define CXL_MAILBOX_PAYLOAD_SHIFT 11
+#define CXL_MAILBOX_MAX_PAYLOAD_SIZE (1 << CXL_MAILBOX_PAYLOAD_SHIFT)
+#define CXL_MAILBOX_REGISTERS_LENGTH \
+    (CXL_MAILBOX_REGISTERS_SIZE + CXL_MAILBOX_MAX_PAYLOAD_SIZE)
+
+typedef struct cxl_device_state {
+    MemoryRegion device_registers;
+
+    /* mmio for device capabilities array - 8.2.8.2 */
+    MemoryRegion device;
+    MemoryRegion caps;
+
+    /* mmio for the mailbox registers 8.2.8.4 */
+    MemoryRegion mailbox;
+
+    /* memory region for persistent memory, HDM */
+    uint64_t pmem_size;
+} CXLDeviceState;
+
+/* Initialize the register block for a device */
+void cxl_device_register_block_init(Object *obj, CXLDeviceState *dev);
+
+/* Set up default values for the register block */
+void cxl_device_register_init_common(CXLDeviceState *dev);
+
+/*
+ * CXL 2.0 - 8.2.8.1 including errata F4
+ * Documented as a 128 bit register, but only 64 bit accesses are used
+ * and the second 64 bits are currently reserved.
+ */
+REG64(CXL_DEV_CAP_ARRAY, 0) /* Documented as 128 bit register but 64 bit accesses */
+    FIELD(CXL_DEV_CAP_ARRAY, CAP_ID, 0, 16)
+    FIELD(CXL_DEV_CAP_ARRAY, CAP_VERSION, 16, 8)
+    FIELD(CXL_DEV_CAP_ARRAY, CAP_COUNT, 32, 16)
+
+/*
+ * Helper macro to initialize capability headers for CXL devices.
+ *
+ * In section 8.2.8.2, this is listed as a 128 bit register, but in 8.2.8, it says:
+ * > No registers defined in Section 8.2.8 are larger than 64-bits wide so that
+ * > is the maximum access size allowed for these registers. If this rule is not
+ * > followed, the behavior is undefined
+ *
+ * CXL 2.0 Errata F4 states further that the layouts in the specification are
+ * shown as greater than 128 bits, but implementations are expected to
+ * use any size of access up to 64 bits.
+ *
+ * Here we've chosen to make it 4 dwords. The spec allows any power-of-2
+ * access size, up to 64 bits, to be used for these registers.
+ */
+#define CXL_DEVICE_CAPABILITY_HEADER_REGISTER(n, offset)  \
+    REG32(CXL_DEV_##n##_CAP_HDR0, offset)                 \
+        FIELD(CXL_DEV_##n##_CAP_HDR0, CAP_ID, 0, 16)      \
+        FIELD(CXL_DEV_##n##_CAP_HDR0, CAP_VERSION, 16, 8) \
+    REG32(CXL_DEV_##n##_CAP_HDR1, offset + 4)             \
+        FIELD(CXL_DEV_##n##_CAP_HDR1, CAP_OFFSET, 0, 32)  \
+    REG32(CXL_DEV_##n##_CAP_HDR2, offset + 8)             \
+        FIELD(CXL_DEV_##n##_CAP_HDR2, CAP_LENGTH, 0, 32)
+
+CXL_DEVICE_CAPABILITY_HEADER_REGISTER(DEVICE, CXL_DEVICE_CAP_HDR1_OFFSET)
+CXL_DEVICE_CAPABILITY_HEADER_REGISTER(MAILBOX, CXL_DEVICE_CAP_HDR1_OFFSET + \
+                                               CXL_DEVICE_CAP_REG_SIZE)
+
+REG32(CXL_DEV_MAILBOX_CAP, 0)
+    FIELD(CXL_DEV_MAILBOX_CAP, PAYLOAD_SIZE, 0, 5)
+    FIELD(CXL_DEV_MAILBOX_CAP, INT_CAP, 5, 1)
+    FIELD(CXL_DEV_MAILBOX_CAP, BG_INT_CAP, 6, 1)
+    FIELD(CXL_DEV_MAILBOX_CAP, MSI_N, 7, 4)
+
+REG32(CXL_DEV_MAILBOX_CTRL, 4)
+    FIELD(CXL_DEV_MAILBOX_CTRL, DOORBELL, 0, 1)
+    FIELD(CXL_DEV_MAILBOX_CTRL, INT_EN, 1, 1)
+    FIELD(CXL_DEV_MAILBOX_CTRL, BG_INT_EN, 2, 1)
+
+REG64(CXL_DEV_MAILBOX_CMD, 8)
+    FIELD(CXL_DEV_MAILBOX_CMD, COMMAND, 0, 8)
+    FIELD(CXL_DEV_MAILBOX_CMD, COMMAND_SET, 8, 8)
+    FIELD(CXL_DEV_MAILBOX_CMD, LENGTH, 16, 20)
+
+REG64(CXL_DEV_MAILBOX_STS, 0x10)
+    FIELD(CXL_DEV_MAILBOX_STS, BG_OP, 0, 1)
+    FIELD(CXL_DEV_MAILBOX_STS, ERRNO, 32, 16)
+    FIELD(CXL_DEV_MAILBOX_STS, VENDOR_ERRNO, 48, 16)
+
+REG64(CXL_DEV_BG_CMD_STS, 0x18)
+    FIELD(CXL_DEV_BG_CMD_STS, BG, 0, 16)
+    FIELD(CXL_DEV_BG_CMD_STS, DONE, 16, 7)
+    FIELD(CXL_DEV_BG_CMD_STS, ERRNO, 32, 16)
+    FIELD(CXL_DEV_BG_CMD_STS, VENDOR_ERRNO, 48, 16)
+
+REG32(CXL_DEV_CMD_PAYLOAD, 0x20)
+
+#endif
-- 
2.32.0




* [PATCH v7 05/46] hw/cxl/device: Implement the CAP array (8.2.8.1-2)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:40   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

This implements all device MMIO up to the first capability. That
includes the CXL Device Capabilities Array Register, as well as all of
the CXL Device Capability Header Registers. The latter are filled in as
they are implemented in the following patches.

Endianness and alignment are managed by the softmmu memory core.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/cxl/cxl-device-utils.c   | 109 ++++++++++++++++++++++++++++++++++++
 hw/cxl/meson.build          |   1 +
 include/hw/cxl/cxl_device.h |  31 +++++++++-
 3 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/hw/cxl/cxl-device-utils.c b/hw/cxl/cxl-device-utils.c
new file mode 100644
index 0000000000..0895b9d78b
--- /dev/null
+++ b/hw/cxl/cxl-device-utils.c
@@ -0,0 +1,109 @@
+/*
+ * CXL Utility library for devices
+ *
+ * Copyright(C) 2020 Intel Corporation.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "hw/cxl/cxl.h"
+
+/*
+ * Device registers have no restrictions per the spec, and so fall back to the
+ * default memory mapped register rules in 8.2:
+ *   Software shall use CXL.io Memory Read and Write to access memory mapped
+ *   register defined in this section. Unless otherwise specified, software
+ *   shall restrict the accesses width based on the following:
+ *   • A 32 bit register shall be accessed as a 1 Byte, 2 Bytes or 4 Bytes
+ *     quantity.
+ *   • A 64 bit register shall be accessed as a 1 Byte, 2 Bytes, 4 Bytes or 8
+ *     Bytes
+ *   • The address shall be a multiple of the access width, e.g. when
+ *     accessing a register as a 4 Byte quantity, the address shall be
+ *     multiple of 4.
+ *   • The accesses shall map to contiguous bytes. If these rules are not
+ *     followed, the behavior is undefined
+ */
+
+static uint64_t caps_reg_read(void *opaque, hwaddr offset, unsigned size)
+{
+    CXLDeviceState *cxl_dstate = opaque;
+
+    if (size == 4) {
+        return cxl_dstate->caps_reg_state32[offset / 4];
+    } else {
+        return cxl_dstate->caps_reg_state64[offset / 8];
+    }
+}
+
+static uint64_t dev_reg_read(void *opaque, hwaddr offset, unsigned size)
+{
+    return 0;
+}
+
+static const MemoryRegionOps dev_ops = {
+    .read = dev_reg_read,
+    .write = NULL, /* status register is read only */
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = false,
+    },
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+    },
+};
+
+static const MemoryRegionOps caps_ops = {
+    .read = caps_reg_read,
+    .write = NULL, /* caps registers are read only */
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = false,
+    },
+    .impl = {
+        .min_access_size = 4,
+        .max_access_size = 8,
+    },
+};
+
+void cxl_device_register_block_init(Object *obj, CXLDeviceState *cxl_dstate)
+{
+    /* This will be a BAR, so needs to be rounded up to pow2 for PCI spec */
+    memory_region_init(&cxl_dstate->device_registers, obj, "device-registers",
+                       pow2ceil(CXL_MMIO_SIZE));
+
+    memory_region_init_io(&cxl_dstate->caps, obj, &caps_ops, cxl_dstate,
+                          "cap-array", CXL_CAPS_SIZE);
+    memory_region_init_io(&cxl_dstate->device, obj, &dev_ops, cxl_dstate,
+                          "device-status", CXL_DEVICE_REGISTERS_LENGTH);
+
+    memory_region_add_subregion(&cxl_dstate->device_registers, 0,
+                                &cxl_dstate->caps);
+    memory_region_add_subregion(&cxl_dstate->device_registers,
+                                CXL_DEVICE_REGISTERS_OFFSET,
+                                &cxl_dstate->device);
+}
+
+static void device_reg_init_common(CXLDeviceState *cxl_dstate) { }
+
+void cxl_device_register_init_common(CXLDeviceState *cxl_dstate)
+{
+    uint64_t *cap_hdrs = cxl_dstate->caps_reg_state64;
+    const int cap_count = 1;
+
+    /* CXL Device Capabilities Array Register */
+    ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_ID, 0);
+    ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_VERSION, 1);
+    ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_COUNT, cap_count);
+
+    cxl_device_cap_init(cxl_dstate, DEVICE, 1);
+    device_reg_init_common(cxl_dstate);
+}
diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build
index 3231b5de1e..dd7c6f8e5a 100644
--- a/hw/cxl/meson.build
+++ b/hw/cxl/meson.build
@@ -1,4 +1,5 @@
 softmmu_ss.add(when: 'CONFIG_CXL',
                if_true: files(
                    'cxl-component-utils.c',
+                   'cxl-device-utils.c',
                ))
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index b2416e45bf..1ac0dcd97e 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -63,6 +63,8 @@
 #define CXL_DEVICE_CAP_HDR1_OFFSET 0x10 /* Figure 138 */
 #define CXL_DEVICE_CAP_REG_SIZE 0x10 /* 8.2.8.2 */
 #define CXL_DEVICE_CAPS_MAX 4 /* 8.2.8.2.1 + 8.2.8.5 */
+#define CXL_CAPS_SIZE \
+    (CXL_DEVICE_CAP_REG_SIZE * (CXL_DEVICE_CAPS_MAX + 1)) /* +1 for header */
 
 #define CXL_DEVICE_REGISTERS_OFFSET 0x80 /* Read comment above */
 #define CXL_DEVICE_REGISTERS_LENGTH 0x8 /* 8.2.8.3.1 */
@@ -75,12 +77,22 @@
 #define CXL_MAILBOX_REGISTERS_LENGTH \
     (CXL_MAILBOX_REGISTERS_SIZE + CXL_MAILBOX_MAX_PAYLOAD_SIZE)
 
+#define CXL_MMIO_SIZE                                           \
+    (CXL_DEVICE_CAP_REG_SIZE + CXL_DEVICE_REGISTERS_LENGTH +    \
+     CXL_MAILBOX_REGISTERS_LENGTH)
+
 typedef struct cxl_device_state {
     MemoryRegion device_registers;
 
     /* mmio for device capabilities array - 8.2.8.2 */
     MemoryRegion device;
-    MemoryRegion caps;
+    struct {
+        MemoryRegion caps;
+        union {
+            uint32_t caps_reg_state32[CXL_CAPS_SIZE / 4];
+            uint64_t caps_reg_state64[CXL_CAPS_SIZE / 8];
+        };
+    };
 
     /* mmio for the mailbox registers 8.2.8.4 */
     MemoryRegion mailbox;
@@ -133,6 +145,23 @@ CXL_DEVICE_CAPABILITY_HEADER_REGISTER(DEVICE, CXL_DEVICE_CAP_HDR1_OFFSET)
 CXL_DEVICE_CAPABILITY_HEADER_REGISTER(MAILBOX, CXL_DEVICE_CAP_HDR1_OFFSET + \
                                                CXL_DEVICE_CAP_REG_SIZE)
 
+#define cxl_device_cap_init(dstate, reg, cap_id)                           \
+    do {                                                                   \
+        uint32_t *cap_hdrs = dstate->caps_reg_state32;                     \
+        int which = R_CXL_DEV_##reg##_CAP_HDR0;                            \
+        cap_hdrs[which] =                                                  \
+            FIELD_DP32(cap_hdrs[which], CXL_DEV_##reg##_CAP_HDR0,          \
+                       CAP_ID, cap_id);                                    \
+        cap_hdrs[which] = FIELD_DP32(                                      \
+            cap_hdrs[which], CXL_DEV_##reg##_CAP_HDR0, CAP_VERSION, 1);    \
+        cap_hdrs[which + 1] =                                              \
+            FIELD_DP32(cap_hdrs[which + 1], CXL_DEV_##reg##_CAP_HDR1,      \
+                       CAP_OFFSET, CXL_##reg##_REGISTERS_OFFSET);          \
+        cap_hdrs[which + 2] =                                              \
+            FIELD_DP32(cap_hdrs[which + 2], CXL_DEV_##reg##_CAP_HDR2,      \
+                       CAP_LENGTH, CXL_##reg##_REGISTERS_LENGTH);          \
+    } while (0)
+
 REG32(CXL_DEV_MAILBOX_CAP, 0)
     FIELD(CXL_DEV_MAILBOX_CAP, PAYLOAD_SIZE, 0, 5)
     FIELD(CXL_DEV_MAILBOX_CAP, INT_CAP, 5, 1)
-- 
2.32.0



* [PATCH v7 06/46] hw/cxl/device: Implement basic mailbox (8.2.8.4)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:40   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

This is the beginning of implementing mailbox support for CXL 2.0
devices. The implementation recognizes when the doorbell is rung,
handles the command and payload, and clears the doorbell while
returning error codes and data.

Generally the mailbox mechanism is designed to permit communication
between the host OS and the firmware running on the device. For our
purposes, we emulate both the firmware, implemented primarily in
cxl-mailbox-utils.c, and the hardware.

No commands are implemented yet.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---

v7:
* Fixed 
* Dropped documentation for a non-existent lock. (Alex)
* Added error code suitable for unimplemented commands. (Alex)
* Reordered code for better readability. (Alex)

 hw/cxl/cxl-device-utils.c   | 122 ++++++++++++++++++++++++-
 hw/cxl/cxl-mailbox-utils.c  | 171 ++++++++++++++++++++++++++++++++++++
 hw/cxl/meson.build          |   1 +
 include/hw/cxl/cxl.h        |   3 +
 include/hw/cxl/cxl_device.h |  19 +++-
 5 files changed, 314 insertions(+), 2 deletions(-)

diff --git a/hw/cxl/cxl-device-utils.c b/hw/cxl/cxl-device-utils.c
index 0895b9d78b..4b995beba7 100644
--- a/hw/cxl/cxl-device-utils.c
+++ b/hw/cxl/cxl-device-utils.c
@@ -44,6 +44,108 @@ static uint64_t dev_reg_read(void *opaque, hwaddr offset, unsigned size)
     return 0;
 }
 
+static uint64_t mailbox_reg_read(void *opaque, hwaddr offset, unsigned size)
+{
+    CXLDeviceState *cxl_dstate = opaque;
+
+    switch (size) {
+    case 1:
+        return cxl_dstate->mbox_reg_state[offset];
+    case 2:
+        return cxl_dstate->mbox_reg_state16[offset / 2];
+    case 4:
+        return cxl_dstate->mbox_reg_state32[offset / 4];
+    case 8:
+        return cxl_dstate->mbox_reg_state64[offset / 8];
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void mailbox_mem_writel(uint32_t *reg_state, hwaddr offset,
+                               uint64_t value)
+{
+    switch (offset) {
+    case A_CXL_DEV_MAILBOX_CTRL:
+        /* fallthrough */
+    case A_CXL_DEV_MAILBOX_CAP:
+        /* RO register */
+        break;
+    default:
+        qemu_log_mask(LOG_UNIMP,
+                      "%s Unexpected 32-bit access to 0x%" PRIx64 " (WI)\n",
+                      __func__, offset);
+        return;
+    }
+
+    reg_state[offset / 4] = value;
+}
+
+static void mailbox_mem_writeq(uint64_t *reg_state, hwaddr offset,
+                               uint64_t value)
+{
+    switch (offset) {
+    case A_CXL_DEV_MAILBOX_CMD:
+        break;
+    case A_CXL_DEV_BG_CMD_STS:
+        /* BG not supported */
+        /* fallthrough */
+    case A_CXL_DEV_MAILBOX_STS:
+        /* Read only register, will get updated by the state machine */
+        return;
+    default:
+        qemu_log_mask(LOG_UNIMP,
+                      "%s Unexpected 64-bit access to 0x%" PRIx64 " (WI)\n",
+                      __func__, offset);
+        return;
+    }
+
+
+    reg_state[offset / 8] = value;
+}
+
+static void mailbox_reg_write(void *opaque, hwaddr offset, uint64_t value,
+                              unsigned size)
+{
+    CXLDeviceState *cxl_dstate = opaque;
+
+    if (offset >= A_CXL_DEV_CMD_PAYLOAD) {
+        memcpy(cxl_dstate->mbox_reg_state + offset, &value, size);
+        return;
+    }
+
+    switch (size) {
+    case 4:
+        mailbox_mem_writel(cxl_dstate->mbox_reg_state32, offset, value);
+        break;
+    case 8:
+        mailbox_mem_writeq(cxl_dstate->mbox_reg_state64, offset, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    if (ARRAY_FIELD_EX32(cxl_dstate->mbox_reg_state32, CXL_DEV_MAILBOX_CTRL,
+                         DOORBELL)) {
+        cxl_process_mailbox(cxl_dstate);
+    }
+}
+
+static const MemoryRegionOps mailbox_ops = {
+    .read = mailbox_reg_read,
+    .write = mailbox_reg_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = false,
+    },
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+    },
+};
+
 static const MemoryRegionOps dev_ops = {
     .read = dev_reg_read,
     .write = NULL, /* status register is read only */
@@ -84,20 +186,33 @@ void cxl_device_register_block_init(Object *obj, CXLDeviceState *cxl_dstate)
                           "cap-array", CXL_CAPS_SIZE);
     memory_region_init_io(&cxl_dstate->device, obj, &dev_ops, cxl_dstate,
                           "device-status", CXL_DEVICE_REGISTERS_LENGTH);
+    memory_region_init_io(&cxl_dstate->mailbox, obj, &mailbox_ops, cxl_dstate,
+                          "mailbox", CXL_MAILBOX_REGISTERS_LENGTH);
 
     memory_region_add_subregion(&cxl_dstate->device_registers, 0,
                                 &cxl_dstate->caps);
     memory_region_add_subregion(&cxl_dstate->device_registers,
                                 CXL_DEVICE_REGISTERS_OFFSET,
                                 &cxl_dstate->device);
+    memory_region_add_subregion(&cxl_dstate->device_registers,
+                                CXL_MAILBOX_REGISTERS_OFFSET,
+                                &cxl_dstate->mailbox);
 }
 
 static void device_reg_init_common(CXLDeviceState *cxl_dstate) { }
 
+static void mailbox_reg_init_common(CXLDeviceState *cxl_dstate)
+{
+    /* 2048 payload size, with no interrupt or background support */
+    ARRAY_FIELD_DP32(cxl_dstate->mbox_reg_state32, CXL_DEV_MAILBOX_CAP,
+                     PAYLOAD_SIZE, CXL_MAILBOX_PAYLOAD_SHIFT);
+    cxl_dstate->payload_size = CXL_MAILBOX_MAX_PAYLOAD_SIZE;
+}
+
 void cxl_device_register_init_common(CXLDeviceState *cxl_dstate)
 {
     uint64_t *cap_hdrs = cxl_dstate->caps_reg_state64;
-    const int cap_count = 1;
+    const int cap_count = 2;
 
     /* CXL Device Capabilities Array Register */
     ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_ID, 0);
@@ -106,4 +221,9 @@ void cxl_device_register_init_common(CXLDeviceState *cxl_dstate)
 
     cxl_device_cap_init(cxl_dstate, DEVICE, 1);
     device_reg_init_common(cxl_dstate);
+
+    cxl_device_cap_init(cxl_dstate, MAILBOX, 2);
+    mailbox_reg_init_common(cxl_dstate);
+
+    assert(cxl_initialize_mailbox(cxl_dstate) == 0);
 }
diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
new file mode 100644
index 0000000000..7e03dc224a
--- /dev/null
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -0,0 +1,171 @@
+/*
+ * CXL Utility library for mailbox interface
+ *
+ * Copyright(C) 2020 Intel Corporation.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/cxl/cxl.h"
+#include "hw/pci/pci.h"
+#include "qemu/log.h"
+#include "qemu/uuid.h"
+
+/*
+ * How to add a new command, example. The command set FOO, with cmd BAR.
+ *  1. Add the command set and cmd to the enum.
+ *     FOO    = 0x7f,
+ *          #define BAR 0
+ *  2. Implement the handler
+ *    static ret_code cmd_foo_bar(struct cxl_cmd *cmd,
+ *                                  CXLDeviceState *cxl_dstate, uint16_t *len)
+ *  3. Add the command to the cxl_cmd_set[][]
+ *    [FOO][BAR] = { "FOO_BAR", cmd_foo_bar, x, y },
+ *  4. Implement your handler
+ *     define_mailbox_handler(FOO_BAR) { ... return CXL_MBOX_SUCCESS; }
+ *
+ *
+ *  Writing the handler:
+ *    The handler will provide the &struct cxl_cmd, the &CXLDeviceState, and the
+ *    in/out length of the payload. The handler is responsible for consuming the
+ *    payload from cmd->payload and operating upon it as necessary. It must then
+ *    fill the output data into cmd->payload (overwriting what was there),
+ *    setting the length, and returning a valid return code.
+ *
+ *  XXX: The handler need not worry about endianness. The payload is read out of
+ *  a register interface that already deals with it.
+ */
+
+/* 8.2.8.4.5.1 Command Return Codes */
+typedef enum {
+    CXL_MBOX_SUCCESS = 0x0,
+    CXL_MBOX_BG_STARTED = 0x1,
+    CXL_MBOX_INVALID_INPUT = 0x2,
+    CXL_MBOX_UNSUPPORTED = 0x3,
+    CXL_MBOX_INTERNAL_ERROR = 0x4,
+    CXL_MBOX_RETRY_REQUIRED = 0x5,
+    CXL_MBOX_BUSY = 0x6,
+    CXL_MBOX_MEDIA_DISABLED = 0x7,
+    CXL_MBOX_FW_XFER_IN_PROGRESS = 0x8,
+    CXL_MBOX_FW_XFER_OUT_OF_ORDER = 0x9,
+    CXL_MBOX_FW_AUTH_FAILED = 0xa,
+    CXL_MBOX_FW_INVALID_SLOT = 0xb,
+    CXL_MBOX_FW_ROLLEDBACK = 0xc,
+    CXL_MBOX_FW_REST_REQD = 0xd,
+    CXL_MBOX_INVALID_HANDLE = 0xe,
+    CXL_MBOX_INVALID_PA = 0xf,
+    CXL_MBOX_INJECT_POISON_LIMIT = 0x10,
+    CXL_MBOX_PERMANENT_MEDIA_FAILURE = 0x11,
+    CXL_MBOX_ABORTED = 0x12,
+    CXL_MBOX_INVALID_SECURITY_STATE = 0x13,
+    CXL_MBOX_INCORRECT_PASSPHRASE = 0x14,
+    CXL_MBOX_UNSUPPORTED_MAILBOX = 0x15,
+    CXL_MBOX_INVALID_PAYLOAD_LENGTH = 0x16,
+    CXL_MBOX_MAX = 0x17
+} ret_code;
+
+struct cxl_cmd;
+typedef ret_code (*opcode_handler)(struct cxl_cmd *cmd,
+                                   CXLDeviceState *cxl_dstate, uint16_t *len);
+struct cxl_cmd {
+    const char *name;
+    opcode_handler handler;
+    ssize_t in;
+    uint16_t effect; /* Reported in CEL */
+    uint8_t *payload;
+};
+
+#define DEFINE_MAILBOX_HANDLER_ZEROED(name, size)                         \
+    uint16_t __zero##name = size;                                         \
+    static ret_code cmd_##name(struct cxl_cmd *cmd,                       \
+                               CXLDeviceState *cxl_dstate, uint16_t *len) \
+    {                                                                     \
+        *len = __zero##name;                                              \
+        memset(cmd->payload, 0, *len);                                    \
+        return CXL_MBOX_SUCCESS;                                          \
+    }
+#define DEFINE_MAILBOX_HANDLER_NOP(name)                                  \
+    static ret_code cmd_##name(struct cxl_cmd *cmd,                       \
+                               CXLDeviceState *cxl_dstate, uint16_t *len) \
+    {                                                                     \
+        return CXL_MBOX_SUCCESS;                                          \
+    }
+
+static QemuUUID cel_uuid;
+
+static struct cxl_cmd cxl_cmd_set[256][256] = {};
+
+void cxl_process_mailbox(CXLDeviceState *cxl_dstate)
+{
+    uint16_t ret = CXL_MBOX_SUCCESS;
+    struct cxl_cmd *cxl_cmd;
+    uint64_t status_reg;
+    opcode_handler h;
+
+    /*
+     * current state of mailbox interface
+     *  mbox_cap_reg = cxl_dstate->reg_state32[R_CXL_DEV_MAILBOX_CAP];
+     *  mbox_ctrl_reg = cxl_dstate->reg_state32[R_CXL_DEV_MAILBOX_CTRL];
+     *  status_reg = *(uint64_t *)&cxl_dstate->reg_state[A_CXL_DEV_MAILBOX_STS];
+     */
+    uint64_t command_reg = cxl_dstate->mbox_reg_state64[R_CXL_DEV_MAILBOX_CMD];
+
+    uint8_t set = FIELD_EX64(command_reg, CXL_DEV_MAILBOX_CMD, COMMAND_SET);
+    uint8_t cmd = FIELD_EX64(command_reg, CXL_DEV_MAILBOX_CMD, COMMAND);
+    uint16_t len = FIELD_EX64(command_reg, CXL_DEV_MAILBOX_CMD, LENGTH);
+    cxl_cmd = &cxl_cmd_set[set][cmd];
+    h = cxl_cmd->handler;
+    if (h) {
+        if (len == cxl_cmd->in) {
+            cxl_cmd->payload = cxl_dstate->mbox_reg_state +
+                A_CXL_DEV_CMD_PAYLOAD;
+            ret = (*h)(cxl_cmd, cxl_dstate, &len);
+            assert(len <= cxl_dstate->payload_size);
+        } else {
+            ret = CXL_MBOX_INVALID_PAYLOAD_LENGTH;
+        }
+    } else {
+        qemu_log_mask(LOG_UNIMP, "Command %04xh not implemented\n",
+                      set << 8 | cmd);
+        ret = CXL_MBOX_UNSUPPORTED;
+    }
+
+    /* Set the return code */
+    status_reg = FIELD_DP64(0, CXL_DEV_MAILBOX_STS, ERRNO, ret);
+
+    /* Set the return length */
+    command_reg = FIELD_DP64(command_reg, CXL_DEV_MAILBOX_CMD, COMMAND_SET, 0);
+    command_reg = FIELD_DP64(command_reg, CXL_DEV_MAILBOX_CMD, COMMAND, 0);
+    command_reg = FIELD_DP64(command_reg, CXL_DEV_MAILBOX_CMD, LENGTH, len);
+
+    cxl_dstate->mbox_reg_state64[R_CXL_DEV_MAILBOX_CMD] = command_reg;
+    cxl_dstate->mbox_reg_state64[R_CXL_DEV_MAILBOX_STS] = status_reg;
+
+    /* Tell the host we're done */
+    ARRAY_FIELD_DP32(cxl_dstate->mbox_reg_state32, CXL_DEV_MAILBOX_CTRL,
+                     DOORBELL, 0);
+}
+
+int cxl_initialize_mailbox(CXLDeviceState *cxl_dstate)
+{
+    /* CXL 2.0: Table 169 Get Supported Logs Log Entry */
+    const char *cel_uuidstr = "0da9c0b5-bf41-4b78-8f79-96b1623b3f17";
+
+    for (int set = 0; set < 256; set++) {
+        for (int cmd = 0; cmd < 256; cmd++) {
+            if (cxl_cmd_set[set][cmd].handler) {
+                struct cxl_cmd *c = &cxl_cmd_set[set][cmd];
+                struct cel_log *log =
+                    &cxl_dstate->cel_log[cxl_dstate->cel_size];
+
+                log->opcode = (set << 8) | cmd;
+                log->effect = c->effect;
+                cxl_dstate->cel_size++;
+            }
+        }
+    }
+
+    return qemu_uuid_parse(cel_uuidstr, &cel_uuid);
+}
diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build
index dd7c6f8e5a..e68eea2358 100644
--- a/hw/cxl/meson.build
+++ b/hw/cxl/meson.build
@@ -2,4 +2,5 @@ softmmu_ss.add(when: 'CONFIG_CXL',
                if_true: files(
                    'cxl-component-utils.c',
                    'cxl-device-utils.c',
+                   'cxl-mailbox-utils.c',
                ))
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index b9d1ac3fad..554ad93b6b 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -14,4 +14,7 @@
 #include "cxl_component.h"
 #include "cxl_device.h"
 
+#define CXL_COMPONENT_REG_BAR_IDX 0
+#define CXL_DEVICE_REG_BAR_IDX 2
+
 #endif
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 1ac0dcd97e..49dcca7e44 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -95,7 +95,21 @@ typedef struct cxl_device_state {
     };
 
     /* mmio for the mailbox registers 8.2.8.4 */
-    MemoryRegion mailbox;
+    struct {
+        MemoryRegion mailbox;
+        uint16_t payload_size;
+        union {
+            uint8_t mbox_reg_state[CXL_MAILBOX_REGISTERS_LENGTH];
+            uint16_t mbox_reg_state16[CXL_MAILBOX_REGISTERS_LENGTH / 2];
+            uint32_t mbox_reg_state32[CXL_MAILBOX_REGISTERS_LENGTH / 4];
+            uint64_t mbox_reg_state64[CXL_MAILBOX_REGISTERS_LENGTH / 8];
+        };
+        struct cel_log {
+            uint16_t opcode;
+            uint16_t effect;
+        } cel_log[1 << 16];
+        size_t cel_size;
+    };
 
     /* memory region for persistent memory, HDM */
     uint64_t pmem_size;
@@ -145,6 +159,9 @@ CXL_DEVICE_CAPABILITY_HEADER_REGISTER(DEVICE, CXL_DEVICE_CAP_HDR1_OFFSET)
 CXL_DEVICE_CAPABILITY_HEADER_REGISTER(MAILBOX, CXL_DEVICE_CAP_HDR1_OFFSET + \
                                                CXL_DEVICE_CAP_REG_SIZE)
 
+int cxl_initialize_mailbox(CXLDeviceState *cxl_dstate);
+void cxl_process_mailbox(CXLDeviceState *cxl_dstate);
+
 #define cxl_device_cap_init(dstate, reg, cap_id)                           \
     do {                                                                   \
         uint32_t *cap_hdrs = dstate->caps_reg_state32;                     \
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 07/46] hw/cxl/device: Add memory device utilities
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:40   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

Memory devices implement extra capabilities on top of CXL devices. This
adds support for that.

A large part of memory devices is the mailbox/command interface. All of
the mailbox handling is done in the mailbox-utils library. Longer term,
new CXL devices that are being emulated may want to handle commands
differently, and therefore would need a mechanism to opt in/out of the
specific generic handlers. As such, this is considered sufficient for
now, but may need more depth in the future.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/cxl/cxl-device-utils.c   | 38 ++++++++++++++++++++++++++++++++++++-
 include/hw/cxl/cxl_device.h | 22 ++++++++++++++++++---
 2 files changed, 56 insertions(+), 4 deletions(-)

diff --git a/hw/cxl/cxl-device-utils.c b/hw/cxl/cxl-device-utils.c
index 4b995beba7..1f5587ffcd 100644
--- a/hw/cxl/cxl-device-utils.c
+++ b/hw/cxl/cxl-device-utils.c
@@ -131,6 +131,31 @@ static void mailbox_reg_write(void *opaque, hwaddr offset, uint64_t value,
     }
 }
 
+static uint64_t mdev_reg_read(void *opaque, hwaddr offset, unsigned size)
+{
+    uint64_t retval = 0;
+
+    retval = FIELD_DP64(retval, CXL_MEM_DEV_STS, MEDIA_STATUS, 1);
+    retval = FIELD_DP64(retval, CXL_MEM_DEV_STS, MBOX_READY, 1);
+
+    return retval;
+}
+
+static const MemoryRegionOps mdev_ops = {
+    .read = mdev_reg_read,
+    .write = NULL, /* memory device register is read only */
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = false,
+    },
+    .impl = {
+        .min_access_size = 8,
+        .max_access_size = 8,
+    },
+};
+
 static const MemoryRegionOps mailbox_ops = {
     .read = mailbox_reg_read,
     .write = mailbox_reg_write,
@@ -188,6 +213,9 @@ void cxl_device_register_block_init(Object *obj, CXLDeviceState *cxl_dstate)
                           "device-status", CXL_DEVICE_REGISTERS_LENGTH);
     memory_region_init_io(&cxl_dstate->mailbox, obj, &mailbox_ops, cxl_dstate,
                           "mailbox", CXL_MAILBOX_REGISTERS_LENGTH);
+    memory_region_init_io(&cxl_dstate->memory_device, obj, &mdev_ops,
+                          cxl_dstate, "memory device caps",
+                          CXL_MEMORY_DEVICE_REGISTERS_LENGTH);
 
     memory_region_add_subregion(&cxl_dstate->device_registers, 0,
                                 &cxl_dstate->caps);
@@ -197,6 +225,9 @@ void cxl_device_register_block_init(Object *obj, CXLDeviceState *cxl_dstate)
     memory_region_add_subregion(&cxl_dstate->device_registers,
                                 CXL_MAILBOX_REGISTERS_OFFSET,
                                 &cxl_dstate->mailbox);
+    memory_region_add_subregion(&cxl_dstate->device_registers,
+                                CXL_MEMORY_DEVICE_REGISTERS_OFFSET,
+                                &cxl_dstate->memory_device);
 }
 
 static void device_reg_init_common(CXLDeviceState *cxl_dstate) { }
@@ -209,10 +240,12 @@ static void mailbox_reg_init_common(CXLDeviceState *cxl_dstate)
     cxl_dstate->payload_size = CXL_MAILBOX_MAX_PAYLOAD_SIZE;
 }
 
+static void memdev_reg_init_common(CXLDeviceState *cxl_dstate) { }
+
 void cxl_device_register_init_common(CXLDeviceState *cxl_dstate)
 {
     uint64_t *cap_hdrs = cxl_dstate->caps_reg_state64;
-    const int cap_count = 2;
+    const int cap_count = 3;
 
     /* CXL Device Capabilities Array Register */
     ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_ID, 0);
@@ -225,5 +258,8 @@ void cxl_device_register_init_common(CXLDeviceState *cxl_dstate)
     cxl_device_cap_init(cxl_dstate, MAILBOX, 2);
     mailbox_reg_init_common(cxl_dstate);
 
+    cxl_device_cap_init(cxl_dstate, MEMORY_DEVICE, 0x4000);
+    memdev_reg_init_common(cxl_dstate);
+
     assert(cxl_initialize_mailbox(cxl_dstate) == 0);
 }
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 49dcca7e44..7fd8d0f616 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -77,15 +77,21 @@
 #define CXL_MAILBOX_REGISTERS_LENGTH \
     (CXL_MAILBOX_REGISTERS_SIZE + CXL_MAILBOX_MAX_PAYLOAD_SIZE)
 
-#define CXL_MMIO_SIZE                                           \
-    (CXL_DEVICE_CAP_REG_SIZE + CXL_DEVICE_REGISTERS_LENGTH +    \
-     CXL_MAILBOX_REGISTERS_LENGTH)
+
+#define CXL_MEMORY_DEVICE_REGISTERS_OFFSET \
+    (CXL_MAILBOX_REGISTERS_OFFSET + CXL_MAILBOX_REGISTERS_LENGTH)
+#define CXL_MEMORY_DEVICE_REGISTERS_LENGTH 0x8
+
+#define CXL_MMIO_SIZE                                                   \
+    (CXL_DEVICE_CAP_REG_SIZE + CXL_DEVICE_REGISTERS_LENGTH +            \
+     CXL_MAILBOX_REGISTERS_LENGTH + CXL_MEMORY_DEVICE_REGISTERS_LENGTH)
 
 typedef struct cxl_device_state {
     MemoryRegion device_registers;
 
     /* mmio for device capabilities array - 8.2.8.2 */
     MemoryRegion device;
+    MemoryRegion memory_device;
     struct {
         MemoryRegion caps;
         union {
@@ -158,6 +164,9 @@ REG64(CXL_DEV_CAP_ARRAY, 0) /* Documented as 128 bit register but 64 byte access
 CXL_DEVICE_CAPABILITY_HEADER_REGISTER(DEVICE, CXL_DEVICE_CAP_HDR1_OFFSET)
 CXL_DEVICE_CAPABILITY_HEADER_REGISTER(MAILBOX, CXL_DEVICE_CAP_HDR1_OFFSET + \
                                                CXL_DEVICE_CAP_REG_SIZE)
+CXL_DEVICE_CAPABILITY_HEADER_REGISTER(MEMORY_DEVICE,
+                                      CXL_DEVICE_CAP_HDR1_OFFSET +
+                                          CXL_DEVICE_CAP_REG_SIZE * 2)
 
 int cxl_initialize_mailbox(CXLDeviceState *cxl_dstate);
 void cxl_process_mailbox(CXLDeviceState *cxl_dstate);
@@ -208,4 +217,11 @@ REG64(CXL_DEV_BG_CMD_STS, 0x18)
 
 REG32(CXL_DEV_CMD_PAYLOAD, 0x20)
 
+REG64(CXL_MEM_DEV_STS, 0)
+    FIELD(CXL_MEM_DEV_STS, FATAL, 0, 1)
+    FIELD(CXL_MEM_DEV_STS, FW_HALT, 1, 1)
+    FIELD(CXL_MEM_DEV_STS, MEDIA_STATUS, 2, 2)
+    FIELD(CXL_MEM_DEV_STS, MBOX_READY, 4, 1)
+    FIELD(CXL_MEM_DEV_STS, RESET_NEEDED, 5, 3)
+
 #endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 08/46] hw/cxl/device: Add cheap EVENTS implementation (8.2.9.1)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:40   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:40 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

Using the previously implemented stub helpers, it is now possible to
add the missing required commands to the implementation.
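
Conceptually, the two-level command table introduced below dispatches on
(command set, command) opcode pairs. A minimal standalone sketch of that
pattern follows; the names and handler signature here are illustrative,
not the QEMU code itself:

```c
#include <stdint.h>
#include <stddef.h>

typedef int (*cmd_handler)(void *payload, uint16_t *len);

/* Example handler: report a zeroed 0x20-byte payload, like GET_RECORDS. */
int get_records(void *payload, uint16_t *len)
{
    *len = 0x20;
    return 0;
}

/* Sparse 2D dispatch table indexed by [command set][command opcode]. */
cmd_handler cmd_table[256][256] = {
    [0x01][0x0] = get_records,   /* EVENTS / GET_RECORDS */
};

/* Look up the handler for an opcode pair; -1 means unsupported. */
int dispatch(uint8_t set, uint8_t cmd, void *payload, uint16_t *len)
{
    cmd_handler h = cmd_table[set][cmd];
    return h ? h(payload, len) : -1;
}
```

Unset table entries stay NULL, so unsupported opcodes are rejected
without any per-command bookkeeping, mirroring how the real
`cxl_cmd_set` table is populated with designated initializers.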

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/cxl/cxl-mailbox-utils.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 7e03dc224a..51af9282e2 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -38,6 +38,14 @@
  *  a register interface that already deals with it.
  */
 
+enum {
+    EVENTS      = 0x01,
+        #define GET_RECORDS   0x0
+        #define CLEAR_RECORDS   0x1
+        #define GET_INTERRUPT_POLICY   0x2
+        #define SET_INTERRUPT_POLICY   0x3
+};
+
 /* 8.2.8.4.5.1 Command Return Codes */
 typedef enum {
     CXL_MBOX_SUCCESS = 0x0,
@@ -93,9 +101,26 @@ struct cxl_cmd {
         return CXL_MBOX_SUCCESS;                                          \
     }
 
+DEFINE_MAILBOX_HANDLER_ZEROED(events_get_records, 0x20);
+DEFINE_MAILBOX_HANDLER_NOP(events_clear_records);
+DEFINE_MAILBOX_HANDLER_ZEROED(events_get_interrupt_policy, 4);
+DEFINE_MAILBOX_HANDLER_NOP(events_set_interrupt_policy);
+
 static QemuUUID cel_uuid;
 
-static struct cxl_cmd cxl_cmd_set[256][256] = {};
+#define IMMEDIATE_CONFIG_CHANGE (1 << 1)
+#define IMMEDIATE_LOG_CHANGE (1 << 4)
+
+static struct cxl_cmd cxl_cmd_set[256][256] = {
+    [EVENTS][GET_RECORDS] = { "EVENTS_GET_RECORDS",
+        cmd_events_get_records, 1, 0 },
+    [EVENTS][CLEAR_RECORDS] = { "EVENTS_CLEAR_RECORDS",
+        cmd_events_clear_records, ~0, IMMEDIATE_LOG_CHANGE },
+    [EVENTS][GET_INTERRUPT_POLICY] = { "EVENTS_GET_INTERRUPT_POLICY",
+        cmd_events_get_interrupt_policy, 0, 0 },
+    [EVENTS][SET_INTERRUPT_POLICY] = { "EVENTS_SET_INTERRUPT_POLICY",
+        cmd_events_set_interrupt_policy, 4, IMMEDIATE_CONFIG_CHANGE },
+};
 
 void cxl_process_mailbox(CXLDeviceState *cxl_dstate)
 {
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 09/46] hw/cxl/device: Timestamp implementation (8.2.9.3)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

Errata F4 to CXL 2.0 clarified the meaning of the timestamp as the
sum of the value set with the Timestamp Set command and the number
of nanoseconds since it was last set.
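
The errata semantics can be sketched outside QEMU as follows; the
`ts_state` struct and the `now_ns` parameter stand in for the
`CXLDeviceState` timestamp fields and `qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL)`
and are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal model of the timestamp state kept per device. */
struct ts_state {
    bool set;          /* has the host ever set the timestamp? */
    uint64_t last_set; /* device clock (ns) when the host last set it */
    uint64_t host_set; /* value the host wrote */
};

/* Timestamp Set: record the host value and the current device clock. */
void ts_set(struct ts_state *ts, uint64_t host_value, uint64_t now_ns)
{
    ts->set = true;
    ts->last_set = now_ns;
    ts->host_set = host_value;
}

/* Timestamp Get: host value plus nanoseconds elapsed since it was set,
 * or 0 if the host never set it, per the errata F4 clarification. */
uint64_t ts_get(const struct ts_state *ts, uint64_t now_ns)
{
    if (!ts->set) {
        return 0;
    }
    return ts->host_set + (now_ns - ts->last_set);
}
```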

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
v7:
* Code reorder to avoid goto (Alex)

 hw/cxl/cxl-mailbox-utils.c  | 42 +++++++++++++++++++++++++++++++++++++
 include/hw/cxl/cxl_device.h |  6 ++++++
 2 files changed, 48 insertions(+)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 51af9282e2..05e6bbdd6f 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -44,6 +44,9 @@ enum {
         #define CLEAR_RECORDS   0x1
         #define GET_INTERRUPT_POLICY   0x2
         #define SET_INTERRUPT_POLICY   0x3
+    TIMESTAMP   = 0x03,
+        #define GET           0x0
+        #define SET           0x1
 };
 
 /* 8.2.8.4.5.1 Command Return Codes */
@@ -106,9 +109,46 @@ DEFINE_MAILBOX_HANDLER_NOP(events_clear_records);
 DEFINE_MAILBOX_HANDLER_ZEROED(events_get_interrupt_policy, 4);
 DEFINE_MAILBOX_HANDLER_NOP(events_set_interrupt_policy);
 
+/* 8.2.9.3.1 */
+static ret_code cmd_timestamp_get(struct cxl_cmd *cmd,
+                                  CXLDeviceState *cxl_dstate,
+                                  uint16_t *len)
+{
+    uint64_t time, delta;
+    uint64_t final_time = 0;
+
+    if (cxl_dstate->timestamp.set) {
+        /* First find the delta from the last time the host set the time. */
+        time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+        delta = time - cxl_dstate->timestamp.last_set;
+        final_time = cxl_dstate->timestamp.host_set + delta;
+    }
+
+    /* Then adjust the actual time */
+    stq_le_p(cmd->payload, final_time);
+    *len = 8;
+
+    return CXL_MBOX_SUCCESS;
+}
+
+/* 8.2.9.3.2 */
+static ret_code cmd_timestamp_set(struct cxl_cmd *cmd,
+                                  CXLDeviceState *cxl_dstate,
+                                  uint16_t *len)
+{
+    cxl_dstate->timestamp.set = true;
+    cxl_dstate->timestamp.last_set = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+
+    cxl_dstate->timestamp.host_set = le64_to_cpu(*(uint64_t *)cmd->payload);
+
+    *len = 0;
+    return CXL_MBOX_SUCCESS;
+}
+
 static QemuUUID cel_uuid;
 
 #define IMMEDIATE_CONFIG_CHANGE (1 << 1)
+#define IMMEDIATE_POLICY_CHANGE (1 << 3)
 #define IMMEDIATE_LOG_CHANGE (1 << 4)
 
 static struct cxl_cmd cxl_cmd_set[256][256] = {
@@ -120,6 +160,8 @@ static struct cxl_cmd cxl_cmd_set[256][256] = {
         cmd_events_get_interrupt_policy, 0, 0 },
     [EVENTS][SET_INTERRUPT_POLICY] = { "EVENTS_SET_INTERRUPT_POLICY",
         cmd_events_set_interrupt_policy, 4, IMMEDIATE_CONFIG_CHANGE },
+    [TIMESTAMP][GET] = { "TIMESTAMP_GET", cmd_timestamp_get, 0, 0 },
+    [TIMESTAMP][SET] = { "TIMESTAMP_SET", cmd_timestamp_set, 8, IMMEDIATE_POLICY_CHANGE },
 };
 
 void cxl_process_mailbox(CXLDeviceState *cxl_dstate)
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 7fd8d0f616..8102d2a813 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -117,6 +117,12 @@ typedef struct cxl_device_state {
         size_t cel_size;
     };
 
+    struct {
+        bool set;
+        uint64_t last_set;
+        uint64_t host_set;
+    } timestamp;
+
     /* memory region for persistent memory, HDM */
     uint64_t pmem_size;
 } CXLDeviceState;
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 10/46] hw/cxl/device: Add log commands (8.2.9.4) + CEL
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

The CXL specification provides the ability to obtain logs from the
device. Logs are either spec defined, like the "Command Effects Log"
(CEL), or vendor specific. UUIDs are defined for all log types.

The CEL is a mechanism for providing information to the host about
which commands are supported. It is useful both to determine which
optional commands defined in the spec are supported and to provide a
list of vendor-specific commands that might be used. The CEL is
already created as part of mailbox initialization, but here it is
exposed to hosts via these log commands.
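
As a host-side illustration, the Get Supported Logs output payload used
below can be modelled as a packed struct; the size check mirrors the
QEMU_BUILD_BUG_ON in the patch. The struct and function names here are
illustrative, not QEMU API:

```c
#include <stddef.h>
#include <stdint.h>

/* Get Supported Logs output payload (CXL 2.0 8.2.9.4.1), one entry. */
struct supported_logs_out {
    uint16_t entries;            /* number of log entries that follow */
    uint8_t rsvd[6];
    struct {
        uint8_t uuid[16];        /* log identifier, e.g. the CEL UUID */
        uint32_t size;           /* log size in bytes */
    } __attribute__((packed)) log[1];
} __attribute__((packed));

/* For a single-entry payload the spec-mandated size is 0x1c bytes. */
size_t supported_logs_out_size(void)
{
    return sizeof(struct supported_logs_out);
}
```

Without the packed attributes the compiler would pad the inner struct
to an 8-byte boundary and the wire layout would no longer match the
spec, which is exactly what the build-time assertion guards against.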

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
v7:
* Use QEMU_PACKED / QEMU_BUILD_BUG_ON (Alex)

 hw/cxl/cxl-mailbox-utils.c | 69 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 05e6bbdd6f..4e9cb2bccd 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -47,6 +47,9 @@ enum {
     TIMESTAMP   = 0x03,
         #define GET           0x0
         #define SET           0x1
+    LOGS        = 0x04,
+        #define GET_SUPPORTED 0x0
+        #define GET_LOG       0x1
 };
 
 /* 8.2.8.4.5.1 Command Return Codes */
@@ -147,6 +150,70 @@ static ret_code cmd_timestamp_set(struct cxl_cmd *cmd,
 
 static QemuUUID cel_uuid;
 
+/* 8.2.9.4.1 */
+static ret_code cmd_logs_get_supported(struct cxl_cmd *cmd,
+                                       CXLDeviceState *cxl_dstate,
+                                       uint16_t *len)
+{
+    struct {
+        uint16_t entries;
+        uint8_t rsvd[6];
+        struct {
+            QemuUUID uuid;
+            uint32_t size;
+        } log_entries[1];
+    } QEMU_PACKED *supported_logs = (void *)cmd->payload;
+    QEMU_BUILD_BUG_ON(sizeof(*supported_logs) != 0x1c);
+
+    supported_logs->entries = 1;
+    supported_logs->log_entries[0].uuid = cel_uuid;
+    supported_logs->log_entries[0].size = 4 * cxl_dstate->cel_size;
+
+    *len = sizeof(*supported_logs);
+    return CXL_MBOX_SUCCESS;
+}
+
+/* 8.2.9.4.2 */
+static ret_code cmd_logs_get_log(struct cxl_cmd *cmd,
+                                 CXLDeviceState *cxl_dstate,
+                                 uint16_t *len)
+{
+    struct {
+        QemuUUID uuid;
+        uint32_t offset;
+        uint32_t length;
+    } QEMU_PACKED QEMU_ALIGNED(16) *get_log = (void *)cmd->payload;
+
+    /*
+     * 8.2.9.4.2
+     *   The device shall return Invalid Parameter if the Offset or Length
+     *   fields attempt to access beyond the size of the log as reported by Get
+     *   Supported Logs.
+     *
+     * XXX: Spec is wrong, "Invalid Parameter" isn't a thing.
+     * XXX: Spec doesn't address handling of an incorrect UUID.
+     *
+     * The CEL buffer is large enough to fit all commands in the emulation, so
+     * the only possible failure would be if the mailbox itself isn't big
+     * enough.
+     */
+    if (get_log->offset + get_log->length > cxl_dstate->payload_size) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    if (!qemu_uuid_is_equal(&get_log->uuid, &cel_uuid)) {
+        return CXL_MBOX_UNSUPPORTED;
+    }
+
+    /* Store off everything to local variables so we can wipe out the payload */
+    *len = get_log->length;
+
+    memmove(cmd->payload, cxl_dstate->cel_log + get_log->offset,
+           get_log->length);
+
+    return CXL_MBOX_SUCCESS;
+}
+
 #define IMMEDIATE_CONFIG_CHANGE (1 << 1)
 #define IMMEDIATE_POLICY_CHANGE (1 << 3)
 #define IMMEDIATE_LOG_CHANGE (1 << 4)
@@ -162,6 +229,8 @@ static struct cxl_cmd cxl_cmd_set[256][256] = {
         cmd_events_set_interrupt_policy, 4, IMMEDIATE_CONFIG_CHANGE },
     [TIMESTAMP][GET] = { "TIMESTAMP_GET", cmd_timestamp_get, 0, 0 },
     [TIMESTAMP][SET] = { "TIMESTAMP_SET", cmd_timestamp_set, 8, IMMEDIATE_POLICY_CHANGE },
+    [LOGS][GET_SUPPORTED] = { "LOGS_GET_SUPPORTED", cmd_logs_get_supported, 0, 0 },
+    [LOGS][GET_LOG] = { "LOGS_GET_LOG", cmd_logs_get_log, 0x18, 0 },
 };
 
 void cxl_process_mailbox(CXLDeviceState *cxl_dstate)
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 11/46] hw/pxb: Use a type for realizing expanders
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

This opens up the possibility for more types of expanders (other than
PCI and PCIe). We'll need this to create a CXL expander.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/pci-bridge/pci_expander_bridge.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index de932286b5..d4514227a8 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -24,6 +24,8 @@
 #include "hw/boards.h"
 #include "qom/object.h"
 
+enum BusType { PCI, PCIE };
+
 #define TYPE_PXB_BUS "pxb-bus"
 typedef struct PXBBus PXBBus;
 DECLARE_INSTANCE_CHECKER(PXBBus, PXB_BUS,
@@ -221,7 +223,8 @@ static gint pxb_compare(gconstpointer a, gconstpointer b)
            0;
 }
 
-static void pxb_dev_realize_common(PCIDevice *dev, bool pcie, Error **errp)
+static void pxb_dev_realize_common(PCIDevice *dev, enum BusType type,
+                                   Error **errp)
 {
     PXBDev *pxb = convert_to_pxb(dev);
     DeviceState *ds, *bds = NULL;
@@ -246,7 +249,7 @@ static void pxb_dev_realize_common(PCIDevice *dev, bool pcie, Error **errp)
     }
 
     ds = qdev_new(TYPE_PXB_HOST);
-    if (pcie) {
+    if (type == PCIE) {
         bus = pci_root_bus_new(ds, dev_name, NULL, NULL, 0, TYPE_PXB_PCIE_BUS);
     } else {
         bus = pci_root_bus_new(ds, "pxb-internal", NULL, NULL, 0, TYPE_PXB_BUS);
@@ -295,7 +298,7 @@ static void pxb_dev_realize(PCIDevice *dev, Error **errp)
         return;
     }
 
-    pxb_dev_realize_common(dev, false, errp);
+    pxb_dev_realize_common(dev, PCI, errp);
 }
 
 static void pxb_dev_exitfn(PCIDevice *pci_dev)
@@ -348,7 +351,7 @@ static void pxb_pcie_dev_realize(PCIDevice *dev, Error **errp)
         return;
     }
 
-    pxb_dev_realize_common(dev, true, errp);
+    pxb_dev_realize_common(dev, PCIE, errp);
 }
 
 static void pxb_pcie_dev_class_init(ObjectClass *klass, void *data)
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 12/46] hw/pci/cxl: Create a CXL bus type
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

The easiest way to differentiate a CXL bus from a PCIe bus is with a
flag. In hardware, a CXL bus is backward compatible with PCIe, so the
code tries hard to keep the two in sync wherever possible.

The alternative would be to cast the bus to the correct type. The flag
approach is less code and aids debugging, since the bus type can be
determined by simply inspecting the flags.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/pci-bridge/pci_expander_bridge.c | 9 ++++++++-
 include/hw/pci/pci_bus.h            | 7 +++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index d4514227a8..a6caa1e7b5 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -24,7 +24,7 @@
 #include "hw/boards.h"
 #include "qom/object.h"
 
-enum BusType { PCI, PCIE };
+enum BusType { PCI, PCIE, CXL };
 
 #define TYPE_PXB_BUS "pxb-bus"
 typedef struct PXBBus PXBBus;
@@ -35,6 +35,10 @@ DECLARE_INSTANCE_CHECKER(PXBBus, PXB_BUS,
 DECLARE_INSTANCE_CHECKER(PXBBus, PXB_PCIE_BUS,
                          TYPE_PXB_PCIE_BUS)
 
+#define TYPE_PXB_CXL_BUS "pxb-cxl-bus"
+DECLARE_INSTANCE_CHECKER(PXBBus, PXB_CXL_BUS,
+                         TYPE_PXB_CXL_BUS)
+
 struct PXBBus {
     /*< private >*/
     PCIBus parent_obj;
@@ -251,6 +255,9 @@ static void pxb_dev_realize_common(PCIDevice *dev, enum BusType type,
     ds = qdev_new(TYPE_PXB_HOST);
     if (type == PCIE) {
         bus = pci_root_bus_new(ds, dev_name, NULL, NULL, 0, TYPE_PXB_PCIE_BUS);
+    } else if (type == CXL) {
+        bus = pci_root_bus_new(ds, dev_name, NULL, NULL, 0, TYPE_PXB_CXL_BUS);
+        bus->flags |= PCI_BUS_CXL;
     } else {
         bus = pci_root_bus_new(ds, "pxb-internal", NULL, NULL, 0, TYPE_PXB_BUS);
         bds = qdev_new("pci-bridge");
diff --git a/include/hw/pci/pci_bus.h b/include/hw/pci/pci_bus.h
index 347440d42c..eb94e7e85c 100644
--- a/include/hw/pci/pci_bus.h
+++ b/include/hw/pci/pci_bus.h
@@ -24,6 +24,8 @@ enum PCIBusFlags {
     PCI_BUS_IS_ROOT                                         = 0x0001,
     /* PCIe extended configuration space is accessible on this bus */
     PCI_BUS_EXTENDED_CONFIG_SPACE                           = 0x0002,
+    /* This is a CXL Type BUS */
+    PCI_BUS_CXL                                             = 0x0004,
 };
 
 struct PCIBus {
@@ -53,6 +55,11 @@ struct PCIBus {
     Notifier machine_done;
 };
 
+static inline bool pci_bus_is_cxl(PCIBus *bus)
+{
+    return !!(bus->flags & PCI_BUS_CXL);
+}
+
 static inline bool pci_bus_is_root(PCIBus *bus)
 {
     return !!(bus->flags & PCI_BUS_IS_ROOT);
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread
* [PATCH v7 13/46] cxl: Machine level control on whether CXL support is enabled
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

CXL enablement brings some potential overheads, for example the host
bridge region reserved in the memory maps. Add a machine-level control
so that CXL is disabled by default.

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/core/machine.c    | 28 ++++++++++++++++++++++++++++
 hw/i386/pc.c         |  1 +
 include/hw/boards.h  |  2 ++
 include/hw/cxl/cxl.h |  4 ++++
 4 files changed, 35 insertions(+)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index d856485cb4..6ff5dba64e 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -31,6 +31,7 @@
 #include "sysemu/qtest.h"
 #include "hw/pci/pci.h"
 #include "hw/mem/nvdimm.h"
+#include "hw/cxl/cxl.h"
 #include "migration/global_state.h"
 #include "migration/vmstate.h"
 #include "exec/confidential-guest-support.h"
@@ -545,6 +546,20 @@ static void machine_set_nvdimm_persistence(Object *obj, const char *value,
     nvdimms_state->persistence_string = g_strdup(value);
 }
 
+static bool machine_get_cxl(Object *obj, Error **errp)
+{
+    MachineState *ms = MACHINE(obj);
+
+    return ms->cxl_devices_state->is_enabled;
+}
+
+static void machine_set_cxl(Object *obj, bool value, Error **errp)
+{
+    MachineState *ms = MACHINE(obj);
+
+    ms->cxl_devices_state->is_enabled = value;
+}
+
 void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type)
 {
     QAPI_LIST_PREPEND(mc->allowed_dynamic_sysbus_devices, g_strdup(type));
@@ -777,6 +792,8 @@ static void machine_class_init(ObjectClass *oc, void *data)
     mc->default_ram_size = 128 * MiB;
     mc->rom_file_has_mr = true;
 
+    /* Few machines support CXL, so default to off */
+    mc->cxl_supported = false;
     /* numa node memory size aligned on 8MB by default.
      * On Linux, each node's border has to be 8MB aligned
      */
@@ -922,6 +939,16 @@ static void machine_initfn(Object *obj)
                                         "Valid values are cpu, mem-ctrl");
     }
 
+    if (mc->cxl_supported) {
+        Object *obj = OBJECT(ms);
+
+        ms->cxl_devices_state = g_new0(CXLState, 1);
+        object_property_add_bool(obj, "cxl", machine_get_cxl, machine_set_cxl);
+        object_property_set_description(obj, "cxl",
+                                        "Set on/off to enable/disable "
+                                        "CXL instantiation");
+    }
+
     if (mc->cpu_index_to_instance_props && mc->get_default_cpu_node_id) {
         ms->numa_state = g_new0(NumaState, 1);
         object_property_add_bool(obj, "hmat",
@@ -956,6 +983,7 @@ static void machine_finalize(Object *obj)
     g_free(ms->device_memory);
     g_free(ms->nvdimms_state);
     g_free(ms->numa_state);
+    g_free(ms->cxl_devices_state);
 }
 
 bool machine_usb(MachineState *machine)
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index c8696ac01e..b6800a511a 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1739,6 +1739,7 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
     mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
     mc->nvdimm_supported = true;
     mc->smp_props.dies_supported = true;
+    mc->cxl_supported = true;
     mc->default_ram_id = "pc.ram";
 
     object_class_property_add(oc, PC_MACHINE_MAX_RAM_BELOW_4G, "size",
diff --git a/include/hw/boards.h b/include/hw/boards.h
index c92ac8815c..680718dafc 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -269,6 +269,7 @@ struct MachineClass {
     bool ignore_boot_device_suffixes;
     bool smbus_no_migration_support;
     bool nvdimm_supported;
+    bool cxl_supported;
     bool numa_mem_supported;
     bool auto_enable_numa;
     SMPCompatProps smp_props;
@@ -360,6 +361,7 @@ struct MachineState {
     CPUArchIdList *possible_cpus;
     CpuTopology smp;
     struct NVDIMMState *nvdimms_state;
+    struct CXLState *cxl_devices_state;
     struct NumaState *numa_state;
 };
 
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 554ad93b6b..31af92fd5e 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -17,4 +17,8 @@
 #define CXL_COMPONENT_REG_BAR_IDX 0
 #define CXL_DEVICE_REG_BAR_IDX 2
 
+typedef struct CXLState {
+    bool is_enabled;
+} CXLState;
+
 #endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 14/46] hw/pxb: Allow creation of a CXL PXB (host bridge)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

This works like adding a typical pxb device, except the name is
'pxb-cxl' instead of 'pxb-pcie'. An example command line would be as
follows:
  -device pxb-cxl,id=cxl.0,bus="pcie.0",bus_nr=1

A CXL PXB is backward compatible with PCIe. What this means in practice
is that an operating system that is unaware of CXL should still be able
to enumerate this topology as if it were PCIe.

One can create multiple CXL PXB host bridges, but a host bridge can only
be connected to the main root bus. Host bridges cannot appear elsewhere
in the topology.

Note that as of this patch, the ACPI tables needed for the host bridge
(specifically, an ACPI object in _SB named ACPI0016 and the CEDT) aren't
created. So while this patch internally creates it, it cannot be
properly used by an operating system or other system software.

It is also necessary to add an exception to scripts/device-crash-test,
similar to the one for the existing pxb, as both must be created on a
PCI Express host bus.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/pci-bridge/pci_expander_bridge.c | 86 ++++++++++++++++++++++++++++-
 hw/pci/pci.c                        |  7 +++
 include/hw/pci/pci.h                |  6 ++
 scripts/device-crash-test           |  1 +
 4 files changed, 98 insertions(+), 2 deletions(-)

diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index a6caa1e7b5..f762eb4a6e 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -17,6 +17,7 @@
 #include "hw/pci/pci_host.h"
 #include "hw/qdev-properties.h"
 #include "hw/pci/pci_bridge.h"
+#include "hw/cxl/cxl.h"
 #include "qemu/range.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
@@ -56,6 +57,16 @@ DECLARE_INSTANCE_CHECKER(PXBDev, PXB_DEV,
 DECLARE_INSTANCE_CHECKER(PXBDev, PXB_PCIE_DEV,
                          TYPE_PXB_PCIE_DEVICE)
 
+#define TYPE_PXB_CXL_DEVICE "pxb-cxl"
+DECLARE_INSTANCE_CHECKER(PXBDev, PXB_CXL_DEV,
+                         TYPE_PXB_CXL_DEVICE)
+
+typedef struct CXLHost {
+    PCIHostState parent_obj;
+
+    CXLComponentState cxl_cstate;
+} CXLHost;
+
 struct PXBDev {
     /*< private >*/
     PCIDevice parent_obj;
@@ -68,6 +79,11 @@ struct PXBDev {
 
 static PXBDev *convert_to_pxb(PCIDevice *dev)
 {
+    /* A CXL PXB's parent bus is PCIe, so the normal check won't work */
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PXB_CXL_DEVICE)) {
+        return PXB_CXL_DEV(dev);
+    }
+
     return pci_bus_is_express(pci_get_bus(dev))
         ? PXB_PCIE_DEV(dev) : PXB_DEV(dev);
 }
@@ -112,11 +128,20 @@ static const TypeInfo pxb_pcie_bus_info = {
     .class_init    = pxb_bus_class_init,
 };
 
+static const TypeInfo pxb_cxl_bus_info = {
+    .name          = TYPE_PXB_CXL_BUS,
+    .parent        = TYPE_CXL_BUS,
+    .instance_size = sizeof(PXBBus),
+    .class_init    = pxb_bus_class_init,
+};
+
 static const char *pxb_host_root_bus_path(PCIHostState *host_bridge,
                                           PCIBus *rootbus)
 {
-    PXBBus *bus = pci_bus_is_express(rootbus) ?
-                  PXB_PCIE_BUS(rootbus) : PXB_BUS(rootbus);
+    PXBBus *bus = pci_bus_is_cxl(rootbus) ?
+                      PXB_CXL_BUS(rootbus) :
+                      pci_bus_is_express(rootbus) ? PXB_PCIE_BUS(rootbus) :
+                                                    PXB_BUS(rootbus);
 
     snprintf(bus->bus_path, 8, "0000:%02x", pxb_bus_num(rootbus));
     return bus->bus_path;
@@ -218,6 +243,10 @@ static int pxb_map_irq_fn(PCIDevice *pci_dev, int pin)
     return pin - PCI_SLOT(pxb->devfn);
 }
 
+static void pxb_dev_reset(DeviceState *dev)
+{
+}
+
 static gint pxb_compare(gconstpointer a, gconstpointer b)
 {
     const PXBDev *pxb_a = a, *pxb_b = b;
@@ -389,13 +418,66 @@ static const TypeInfo pxb_pcie_dev_info = {
     },
 };
 
+static void pxb_cxl_dev_realize(PCIDevice *dev, Error **errp)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+
+    /* A CXL PXB's parent bus is still PCIe */
+    if (!pci_bus_is_express(pci_get_bus(dev))) {
+        error_setg(errp, "pxb-cxl devices cannot reside on a PCI bus");
+        return;
+    }
+    if (!ms->cxl_devices_state->is_enabled) {
+        error_setg(errp, "Machine does not have cxl=on");
+        return;
+    }
+
+    pxb_dev_realize_common(dev, CXL, errp);
+    pxb_dev_reset(DEVICE(dev));
+}
+
+static void pxb_cxl_dev_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc   = DEVICE_CLASS(klass);
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+
+    k->realize             = pxb_cxl_dev_realize;
+    k->exit                = pxb_dev_exitfn;
+    /*
+     * XXX: These types of bridges don't actually show up in the hierarchy so
+     * vendor, device, class, etc. ids are intentionally left out.
+     */
+
+    dc->desc = "CXL Host Bridge";
+    device_class_set_props(dc, pxb_dev_properties);
+    set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
+
+    /* Host bridges aren't hotpluggable. FIXME: spec reference */
+    dc->hotpluggable = false;
+    dc->reset = pxb_dev_reset;
+}
+
+static const TypeInfo pxb_cxl_dev_info = {
+    .name          = TYPE_PXB_CXL_DEVICE,
+    .parent        = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(PXBDev),
+    .class_init    = pxb_cxl_dev_class_init,
+    .interfaces =
+        (InterfaceInfo[]){
+            { INTERFACE_CONVENTIONAL_PCI_DEVICE },
+            {},
+        },
+};
+
 static void pxb_register_types(void)
 {
     type_register_static(&pxb_bus_info);
     type_register_static(&pxb_pcie_bus_info);
+    type_register_static(&pxb_cxl_bus_info);
     type_register_static(&pxb_host_info);
     type_register_static(&pxb_dev_info);
     type_register_static(&pxb_pcie_dev_info);
+    type_register_static(&pxb_cxl_dev_info);
 }
 
 type_init(pxb_register_types)
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 474ea98c1d..cafebf6f59 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -229,6 +229,12 @@ static const TypeInfo pcie_bus_info = {
     .class_init = pcie_bus_class_init,
 };
 
+static const TypeInfo cxl_bus_info = {
+    .name       = TYPE_CXL_BUS,
+    .parent     = TYPE_PCIE_BUS,
+    .class_init = pcie_bus_class_init,
+};
+
 static PCIBus *pci_find_bus_nr(PCIBus *bus, int bus_num);
 static void pci_update_mappings(PCIDevice *d);
 static void pci_irq_handler(void *opaque, int irq_num, int level);
@@ -2892,6 +2898,7 @@ static void pci_register_types(void)
 {
     type_register_static(&pci_bus_info);
     type_register_static(&pcie_bus_info);
+    type_register_static(&cxl_bus_info);
     type_register_static(&conventional_pci_interface_info);
     type_register_static(&cxl_interface_info);
     type_register_static(&pcie_interface_info);
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 305df7add6..f4d09ec582 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -410,6 +410,7 @@ typedef PCIINTxRoute (*pci_route_irq_fn)(void *opaque, int pin);
 #define TYPE_PCI_BUS "PCI"
 OBJECT_DECLARE_TYPE(PCIBus, PCIBusClass, PCI_BUS)
 #define TYPE_PCIE_BUS "PCIE"
+#define TYPE_CXL_BUS "CXL"
 
 typedef void (*pci_bus_dev_fn)(PCIBus *b, PCIDevice *d, void *opaque);
 typedef void (*pci_bus_fn)(PCIBus *b, void *opaque);
@@ -769,6 +770,11 @@ static inline void pci_irq_pulse(PCIDevice *pci_dev)
     pci_irq_deassert(pci_dev);
 }
 
+static inline int pci_is_cxl(const PCIDevice *d)
+{
+    return d->cap_present & QEMU_PCIE_CAP_CXL;
+}
+
 static inline int pci_is_express(const PCIDevice *d)
 {
     return d->cap_present & QEMU_PCI_CAP_EXPRESS;
diff --git a/scripts/device-crash-test b/scripts/device-crash-test
index 7fbd99158b..52bd3d8f71 100755
--- a/scripts/device-crash-test
+++ b/scripts/device-crash-test
@@ -93,6 +93,7 @@ ERROR_RULE_LIST = [
     {'device':'pci-bridge', 'expected':True},              # Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
     {'device':'pci-bridge-seat', 'expected':True},         # Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
     {'device':'pxb', 'expected':True},                     # Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
+    {'device':'pxb-cxl', 'expected':True},                 # pxb-cxl devices cannot reside on a PCI bus.
     {'device':'scsi-block', 'expected':True},              # drive property not set
     {'device':'scsi-generic', 'expected':True},            # drive property not set
     {'device':'scsi-hd', 'expected':True},                 # drive property not set
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

+    .class_init    = pxb_cxl_dev_class_init,
+    .interfaces =
+        (InterfaceInfo[]){
+            { INTERFACE_CONVENTIONAL_PCI_DEVICE },
+            {},
+        },
+};
+
 static void pxb_register_types(void)
 {
     type_register_static(&pxb_bus_info);
     type_register_static(&pxb_pcie_bus_info);
+    type_register_static(&pxb_cxl_bus_info);
     type_register_static(&pxb_host_info);
     type_register_static(&pxb_dev_info);
     type_register_static(&pxb_pcie_dev_info);
+    type_register_static(&pxb_cxl_dev_info);
 }
 
 type_init(pxb_register_types)
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 474ea98c1d..cafebf6f59 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -229,6 +229,12 @@ static const TypeInfo pcie_bus_info = {
     .class_init = pcie_bus_class_init,
 };
 
+static const TypeInfo cxl_bus_info = {
+    .name       = TYPE_CXL_BUS,
+    .parent     = TYPE_PCIE_BUS,
+    .class_init = pcie_bus_class_init,
+};
+
 static PCIBus *pci_find_bus_nr(PCIBus *bus, int bus_num);
 static void pci_update_mappings(PCIDevice *d);
 static void pci_irq_handler(void *opaque, int irq_num, int level);
@@ -2892,6 +2898,7 @@ static void pci_register_types(void)
 {
     type_register_static(&pci_bus_info);
     type_register_static(&pcie_bus_info);
+    type_register_static(&cxl_bus_info);
     type_register_static(&conventional_pci_interface_info);
     type_register_static(&cxl_interface_info);
     type_register_static(&pcie_interface_info);
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 305df7add6..f4d09ec582 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -410,6 +410,7 @@ typedef PCIINTxRoute (*pci_route_irq_fn)(void *opaque, int pin);
 #define TYPE_PCI_BUS "PCI"
 OBJECT_DECLARE_TYPE(PCIBus, PCIBusClass, PCI_BUS)
 #define TYPE_PCIE_BUS "PCIE"
+#define TYPE_CXL_BUS "CXL"
 
 typedef void (*pci_bus_dev_fn)(PCIBus *b, PCIDevice *d, void *opaque);
 typedef void (*pci_bus_fn)(PCIBus *b, void *opaque);
@@ -769,6 +770,11 @@ static inline void pci_irq_pulse(PCIDevice *pci_dev)
     pci_irq_deassert(pci_dev);
 }
 
+static inline int pci_is_cxl(const PCIDevice *d)
+{
+    return d->cap_present & QEMU_PCIE_CAP_CXL;
+}
+
 static inline int pci_is_express(const PCIDevice *d)
 {
     return d->cap_present & QEMU_PCI_CAP_EXPRESS;
diff --git a/scripts/device-crash-test b/scripts/device-crash-test
index 7fbd99158b..52bd3d8f71 100755
--- a/scripts/device-crash-test
+++ b/scripts/device-crash-test
@@ -93,6 +93,7 @@ ERROR_RULE_LIST = [
     {'device':'pci-bridge', 'expected':True},              # Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
     {'device':'pci-bridge-seat', 'expected':True},         # Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
     {'device':'pxb', 'expected':True},                     # Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
+    {'device':'pxb-cxl', 'expected':True},                 # pxb-cxl devices cannot reside on a PCI bus.
     {'device':'scsi-block', 'expected':True},              # drive property not set
     {'device':'scsi-generic', 'expected':True},            # drive property not set
     {'device':'scsi-hd', 'expected':True},                 # drive property not set
-- 
2.32.0



^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 15/46] qtest/cxl: Introduce initial test for pxb-cxl only.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Initial test with just the pxb-cxl host bridge.  Further tests will be
added alongside the functionality they exercise.
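The new qtest can be run directly against a built QEMU binary using the
standard qtest environment variable. A minimal sketch (the binary path and
the in-tree "build" directory are assumptions, not part of this patch):

```shell
# Run the CXL qtest against a locally built x86-64 QEMU.
# Paths below assume an in-tree "build" directory; adjust as needed.
cd build
QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/cxl-test
```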

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
---
 tests/qtest/cxl-test.c  | 23 +++++++++++++++++++++++
 tests/qtest/meson.build |  4 ++++
 2 files changed, 27 insertions(+)

diff --git a/tests/qtest/cxl-test.c b/tests/qtest/cxl-test.c
new file mode 100644
index 0000000000..1006c8ae4e
--- /dev/null
+++ b/tests/qtest/cxl-test.c
@@ -0,0 +1,23 @@
+/*
+ * QTest testcase for CXL
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest-single.h"
+
+
+static void cxl_basic_pxb(void)
+{
+    qtest_start("-machine q35,cxl=on -device pxb-cxl,bus=pcie.0");
+    qtest_end();
+}
+
+int main(int argc, char **argv)
+{
+    g_test_init(&argc, &argv, NULL);
+    qtest_add_func("/pci/cxl/basic_pxb", cxl_basic_pxb);
+    return g_test_run();
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 721eafad12..7e072d9a84 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -41,6 +41,9 @@ qtests_pci = \
   (config_all_devices.has_key('CONFIG_VGA') ? ['display-vga-test'] : []) +                  \
   (config_all_devices.has_key('CONFIG_IVSHMEM_DEVICE') ? ['ivshmem-test'] : [])
 
+qtests_cxl = \
+  (config_all_devices.has_key('CONFIG_CXL') ? ['cxl-test'] : [])
+
 qtests_i386 = \
   (slirp.found() ? ['pxe-test', 'test-netfilter'] : []) +             \
   (config_host.has_key('CONFIG_POSIX') ? ['test-filter-mirror'] : []) +                     \
@@ -75,6 +78,7 @@ qtests_i386 = \
    slirp.found() ? ['virtio-net-failover'] : []) +                                          \
   (unpack_edk2_blobs ? ['bios-tables-test'] : []) +                                         \
   qtests_pci +                                                                              \
+  qtests_cxl +                                                                              \
   ['fdc-test',
    'ide-test',
    'hd-geo-test',
-- 
2.32.0




* [PATCH v7 16/46] hw/cxl/rp: Add a root port
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

This adds just enough of a root port implementation to be able to
enumerate root ports (creating the required DVSEC entries). What's not
here yet is the MMIO support or the ability to write some of the DVSEC
entries.

A root port can be added on the QEMU command line by attaching it to a
specific CXL host bridge. For example:
  -device cxl-rp,id=rp0,bus="cxl.0",addr=0.0,chassis=4

As with the host bridge patch, the ACPI tables aren't generated at this
point, so system software cannot yet use the device.
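Putting this together with the earlier host bridge patch, a topology can
be sketched on the command line. The ids and chassis number below are
illustrative assumptions, and the pxb-cxl device may require additional
properties (such as a unique bus number) depending on configuration:

```shell
# Hypothetical invocation: a CXL host bridge with one CXL root port.
qemu-system-x86_64 -machine q35,cxl=on \
    -device pxb-cxl,id=cxl.0,bus=pcie.0 \
    -device cxl-rp,id=rp0,bus=cxl.0,addr=0.0,chassis=4
```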

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
v7:
* Add LOG_UNIMP message where feature not yet implemented.
* Drop "Explain" comment that doesn't explain anything.

 hw/pci-bridge/Kconfig          |   5 +
 hw/pci-bridge/cxl_root_port.c  | 231 +++++++++++++++++++++++++++++++++
 hw/pci-bridge/meson.build      |   1 +
 hw/pci-bridge/pcie_root_port.c |   6 +-
 hw/pci/pci.c                   |   4 +-
 5 files changed, 245 insertions(+), 2 deletions(-)

diff --git a/hw/pci-bridge/Kconfig b/hw/pci-bridge/Kconfig
index f8df4315ba..02614f49aa 100644
--- a/hw/pci-bridge/Kconfig
+++ b/hw/pci-bridge/Kconfig
@@ -27,3 +27,8 @@ config DEC_PCI
 
 config SIMBA
     bool
+
+config CXL
+    bool
+    default y if PCI_EXPRESS && PXB
+    depends on PCI_EXPRESS && MSI_NONBROKEN && PXB
diff --git a/hw/pci-bridge/cxl_root_port.c b/hw/pci-bridge/cxl_root_port.c
new file mode 100644
index 0000000000..ccfa816ee6
--- /dev/null
+++ b/hw/pci-bridge/cxl_root_port.c
@@ -0,0 +1,231 @@
+/*
+ * CXL 2.0 Root Port Implementation
+ *
+ * Copyright(C) 2020 Intel Corporation.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "qemu/range.h"
+#include "hw/pci/pci_bridge.h"
+#include "hw/pci/pcie_port.h"
+#include "hw/qdev-properties.h"
+#include "hw/sysbus.h"
+#include "qapi/error.h"
+#include "hw/cxl/cxl.h"
+
+#define CXL_ROOT_PORT_DID 0x7075
+
+/* Copied from the gen root port which we derive */
+#define GEN_PCIE_ROOT_PORT_AER_OFFSET 0x100
+#define GEN_PCIE_ROOT_PORT_ACS_OFFSET \
+    (GEN_PCIE_ROOT_PORT_AER_OFFSET + PCI_ERR_SIZEOF)
+#define CXL_ROOT_PORT_DVSEC_OFFSET \
+    (GEN_PCIE_ROOT_PORT_ACS_OFFSET + PCI_ACS_SIZEOF)
+
+typedef struct CXLRootPort {
+    /*< private >*/
+    PCIESlot parent_obj;
+
+    CXLComponentState cxl_cstate;
+    PCIResReserve res_reserve;
+} CXLRootPort;
+
+#define TYPE_CXL_ROOT_PORT "cxl-rp"
+DECLARE_INSTANCE_CHECKER(CXLRootPort, CXL_ROOT_PORT, TYPE_CXL_ROOT_PORT)
+
+static void latch_registers(CXLRootPort *crp)
+{
+    uint32_t *reg_state = crp->cxl_cstate.crb.cache_mem_registers;
+
+    cxl_component_register_init_common(reg_state, CXL2_ROOT_PORT);
+}
+
+static void build_dvsecs(CXLComponentState *cxl)
+{
+    uint8_t *dvsec;
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_port_extensions){ 0 };
+    cxl_component_create_dvsec(cxl, EXTENSIONS_PORT_DVSEC_LENGTH,
+                               EXTENSIONS_PORT_DVSEC,
+                               EXTENSIONS_PORT_DVSEC_REVID, dvsec);
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_port_gpf){
+        .rsvd        = 0,
+        .phase1_ctrl = 1, /* 1μs timeout */
+        .phase2_ctrl = 1, /* 1μs timeout */
+    };
+    cxl_component_create_dvsec(cxl, GPF_PORT_DVSEC_LENGTH, GPF_PORT_DVSEC,
+                               GPF_PORT_DVSEC_REVID, dvsec);
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_port_flexbus){
+        .cap                     = 0x26, /* IO, Mem, non-MLD */
+        .ctrl                    = 0,
+        .status                  = 0x26, /* same */
+        .rcvd_mod_ts_data_phase1 = 0xef, /* WTF? */
+    };
+    cxl_component_create_dvsec(cxl, PCIE_FLEXBUS_PORT_DVSEC_LENGTH_2_0,
+                               PCIE_FLEXBUS_PORT_DVSEC,
+                               PCIE_FLEXBUS_PORT_DVSEC_REVID_2_0, dvsec);
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_register_locator){
+        .rsvd         = 0,
+        .reg0_base_lo = RBI_COMPONENT_REG | CXL_COMPONENT_REG_BAR_IDX,
+        .reg0_base_hi = 0,
+    };
+    cxl_component_create_dvsec(cxl, REG_LOC_DVSEC_LENGTH, REG_LOC_DVSEC,
+                               REG_LOC_DVSEC_REVID, dvsec);
+}
+
+static void cxl_rp_realize(DeviceState *dev, Error **errp)
+{
+    PCIDevice *pci_dev     = PCI_DEVICE(dev);
+    PCIERootPortClass *rpc = PCIE_ROOT_PORT_GET_CLASS(dev);
+    CXLRootPort *crp       = CXL_ROOT_PORT(dev);
+    CXLComponentState *cxl_cstate = &crp->cxl_cstate;
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+    MemoryRegion *component_bar = &cregs->component_registers;
+    Error *local_err = NULL;
+
+    rpc->parent_realize(dev, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    int rc =
+        pci_bridge_qemu_reserve_cap_init(pci_dev, 0, crp->res_reserve, errp);
+    if (rc < 0) {
+        rpc->parent_class.exit(pci_dev);
+        return;
+    }
+
+    if (!crp->res_reserve.io || crp->res_reserve.io == -1) {
+        pci_word_test_and_clear_mask(pci_dev->wmask + PCI_COMMAND,
+                                     PCI_COMMAND_IO);
+        pci_dev->wmask[PCI_IO_BASE]  = 0;
+        pci_dev->wmask[PCI_IO_LIMIT] = 0;
+    }
+
+    cxl_cstate->dvsec_offset = CXL_ROOT_PORT_DVSEC_OFFSET;
+    cxl_cstate->pdev = pci_dev;
+    build_dvsecs(&crp->cxl_cstate);
+
+    cxl_component_register_block_init(OBJECT(pci_dev), cxl_cstate,
+                                      TYPE_CXL_ROOT_PORT);
+
+    pci_register_bar(pci_dev, CXL_COMPONENT_REG_BAR_IDX,
+                     PCI_BASE_ADDRESS_SPACE_MEMORY |
+                         PCI_BASE_ADDRESS_MEM_TYPE_64,
+                     component_bar);
+}
+
+static void cxl_rp_reset(DeviceState *dev)
+{
+    PCIERootPortClass *rpc = PCIE_ROOT_PORT_GET_CLASS(dev);
+    CXLRootPort *crp = CXL_ROOT_PORT(dev);
+
+    rpc->parent_reset(dev);
+
+    latch_registers(crp);
+}
+
+static Property gen_rp_props[] = {
+    DEFINE_PROP_UINT32("bus-reserve", CXLRootPort, res_reserve.bus, -1),
+    DEFINE_PROP_SIZE("io-reserve", CXLRootPort, res_reserve.io, -1),
+    DEFINE_PROP_SIZE("mem-reserve", CXLRootPort, res_reserve.mem_non_pref, -1),
+    DEFINE_PROP_SIZE("pref32-reserve", CXLRootPort, res_reserve.mem_pref_32,
+                     -1),
+    DEFINE_PROP_SIZE("pref64-reserve", CXLRootPort, res_reserve.mem_pref_64,
+                     -1),
+    DEFINE_PROP_END_OF_LIST()
+};
+
+static void cxl_rp_dvsec_write_config(PCIDevice *dev, uint32_t addr,
+                                      uint32_t val, int len)
+{
+    CXLRootPort *crp = CXL_ROOT_PORT(dev);
+
+    if (range_contains(&crp->cxl_cstate.dvsecs[EXTENSIONS_PORT_DVSEC], addr)) {
+        uint8_t *reg = &dev->config[addr];
+        addr -= crp->cxl_cstate.dvsecs[EXTENSIONS_PORT_DVSEC].lob;
+        if (addr == PORT_CONTROL_OFFSET) {
+            if (pci_get_word(reg) & PORT_CONTROL_UNMASK_SBR) {
+                /* unmask SBR */
+                qemu_log_mask(LOG_UNIMP, "SBR mask control is not supported\n");
+            }
+            if (pci_get_word(reg) & PORT_CONTROL_ALT_MEMID_EN) {
+                /* Alt Memory & ID Space Enable */
+                qemu_log_mask(LOG_UNIMP,
+                              "Alt Memory & ID space is not supported\n");
+            }
+        }
+    }
+}
+
+static void cxl_rp_write_config(PCIDevice *d, uint32_t address, uint32_t val,
+                                int len)
+{
+    uint16_t slt_ctl, slt_sta;
+
+    pcie_cap_slot_get(d, &slt_ctl, &slt_sta);
+    pci_bridge_write_config(d, address, val, len);
+    pcie_cap_flr_write_config(d, address, val, len);
+    pcie_cap_slot_write_config(d, slt_ctl, slt_sta, address, val, len);
+    pcie_aer_write_config(d, address, val, len);
+
+    cxl_rp_dvsec_write_config(d, address, val, len);
+}
+
+static void cxl_root_port_class_init(ObjectClass *oc, void *data)
+{
+    DeviceClass *dc        = DEVICE_CLASS(oc);
+    PCIDeviceClass *k      = PCI_DEVICE_CLASS(oc);
+    PCIERootPortClass *rpc = PCIE_ROOT_PORT_CLASS(oc);
+
+    k->vendor_id = PCI_VENDOR_ID_INTEL;
+    k->device_id = CXL_ROOT_PORT_DID;
+    dc->desc     = "CXL Root Port";
+    k->revision  = 0;
+    device_class_set_props(dc, gen_rp_props);
+    k->config_write = cxl_rp_write_config;
+
+    device_class_set_parent_realize(dc, cxl_rp_realize, &rpc->parent_realize);
+    device_class_set_parent_reset(dc, cxl_rp_reset, &rpc->parent_reset);
+
+    rpc->aer_offset = GEN_PCIE_ROOT_PORT_AER_OFFSET;
+    rpc->acs_offset = GEN_PCIE_ROOT_PORT_ACS_OFFSET;
+
+    dc->hotpluggable = false;
+}
+
+static const TypeInfo cxl_root_port_info = {
+    .name = TYPE_CXL_ROOT_PORT,
+    .parent = TYPE_PCIE_ROOT_PORT,
+    .instance_size = sizeof(CXLRootPort),
+    .class_init = cxl_root_port_class_init,
+    .interfaces = (InterfaceInfo[]) {
+        { INTERFACE_CXL_DEVICE },
+        { }
+    },
+};
+
+static void cxl_register(void)
+{
+    type_register_static(&cxl_root_port_info);
+}
+
+type_init(cxl_register);
diff --git a/hw/pci-bridge/meson.build b/hw/pci-bridge/meson.build
index daab8acf2a..b6d26a03d5 100644
--- a/hw/pci-bridge/meson.build
+++ b/hw/pci-bridge/meson.build
@@ -5,6 +5,7 @@ pci_ss.add(when: 'CONFIG_IOH3420', if_true: files('ioh3420.c'))
 pci_ss.add(when: 'CONFIG_PCIE_PORT', if_true: files('pcie_root_port.c', 'gen_pcie_root_port.c', 'pcie_pci_bridge.c'))
 pci_ss.add(when: 'CONFIG_PXB', if_true: files('pci_expander_bridge.c'))
 pci_ss.add(when: 'CONFIG_XIO3130', if_true: files('xio3130_upstream.c', 'xio3130_downstream.c'))
+pci_ss.add(when: 'CONFIG_CXL', if_true: files('cxl_root_port.c'))
 
 # NewWorld PowerMac
 pci_ss.add(when: 'CONFIG_DEC_PCI', if_true: files('dec.c'))
diff --git a/hw/pci-bridge/pcie_root_port.c b/hw/pci-bridge/pcie_root_port.c
index f1cfe9d14a..460e48269d 100644
--- a/hw/pci-bridge/pcie_root_port.c
+++ b/hw/pci-bridge/pcie_root_port.c
@@ -67,7 +67,11 @@ static void rp_realize(PCIDevice *d, Error **errp)
     int rc;
 
     pci_config_set_interrupt_pin(d->config, 1);
-    pci_bridge_initfn(d, TYPE_PCIE_BUS);
+    if (d->cap_present & QEMU_PCIE_CAP_CXL) {
+        pci_bridge_initfn(d, TYPE_CXL_BUS);
+    } else {
+        pci_bridge_initfn(d, TYPE_PCIE_BUS);
+    }
     pcie_port_init_reg(d);
 
     rc = pci_bridge_ssvid_init(d, rpc->ssvid_offset, dc->vendor_id,
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index cafebf6f59..cc4f06937d 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -2708,7 +2708,9 @@ static void pci_device_class_base_init(ObjectClass *klass, void *data)
             object_class_dynamic_cast(klass, INTERFACE_CONVENTIONAL_PCI_DEVICE);
         ObjectClass *pcie =
             object_class_dynamic_cast(klass, INTERFACE_PCIE_DEVICE);
-        assert(conventional || pcie);
+        ObjectClass *cxl =
+            object_class_dynamic_cast(klass, INTERFACE_CXL_DEVICE);
+        assert(conventional || pcie || cxl);
     }
 }
 
-- 
2.32.0



diff --git a/hw/pci-bridge/pcie_root_port.c b/hw/pci-bridge/pcie_root_port.c
index f1cfe9d14a..460e48269d 100644
--- a/hw/pci-bridge/pcie_root_port.c
+++ b/hw/pci-bridge/pcie_root_port.c
@@ -67,7 +67,11 @@ static void rp_realize(PCIDevice *d, Error **errp)
     int rc;
 
     pci_config_set_interrupt_pin(d->config, 1);
-    pci_bridge_initfn(d, TYPE_PCIE_BUS);
+    if (d->cap_present & QEMU_PCIE_CAP_CXL) {
+        pci_bridge_initfn(d, TYPE_CXL_BUS);
+    } else {
+        pci_bridge_initfn(d, TYPE_PCIE_BUS);
+    }
     pcie_port_init_reg(d);
 
     rc = pci_bridge_ssvid_init(d, rpc->ssvid_offset, dc->vendor_id,
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index cafebf6f59..cc4f06937d 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -2708,7 +2708,9 @@ static void pci_device_class_base_init(ObjectClass *klass, void *data)
             object_class_dynamic_cast(klass, INTERFACE_CONVENTIONAL_PCI_DEVICE);
         ObjectClass *pcie =
             object_class_dynamic_cast(klass, INTERFACE_PCIE_DEVICE);
-        assert(conventional || pcie);
+        ObjectClass *cxl =
+            object_class_dynamic_cast(klass, INTERFACE_CXL_DEVICE);
+        assert(conventional || pcie || cxl);
     }
 }
 
-- 
2.32.0



^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 17/46] hw/cxl/device: Add a memory device (8.2.8.5)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A CXL memory device (AKA Type 3) is a CXL component that contains some
combination of volatile and persistent memory. It also implements the
previously defined mailbox interface as well as the memory device
firmware interface.

Although the memory device is configured like a normal PCIe device, the
memory traffic is on an entirely separate bus conceptually (using the
same physical wires as PCIe, but different protocol).

Once the CXL topology is fully configured and the address decoders
committed, the guest physical addresses for the memory device form part
of a larger window owned by the platform.  These windows are created
later in this series.

The following example will create a 256M device in a 512M window:
-object "memory-backend-file,id=cxl-mem1,share,mem-path=cxl-type3,size=512M"
-device "cxl-type3,bus=rp0,memdev=cxl-mem1,id=cxl-pmem0"

Note: Dropped PCDIMM info interfaces for now.  They can be added if
appropriate at a later date.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
v7:
* Fixed reversed condition on LSA being specified (Ben)
* Put struct cxl_type3_dev definition directly in cxl_device.h
  rather than moving it in patch 20 (Alex)
* Use QEMU_PACKED / QEMU_BUILD_BUG_ON() etc from compiler.h (Alex)
 
 hw/cxl/cxl-mailbox-utils.c  |  46 +++++++++++
 hw/mem/Kconfig              |   5 ++
 hw/mem/cxl_type3.c          | 153 ++++++++++++++++++++++++++++++++++++
 hw/mem/meson.build          |   1 +
 include/hw/cxl/cxl_device.h |  17 ++++
 include/hw/cxl/cxl_pci.h    |  21 +++++
 include/hw/pci/pci_ids.h    |   1 +
 7 files changed, 244 insertions(+)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 4e9cb2bccd..7ed9606c6b 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -50,6 +50,8 @@ enum {
     LOGS        = 0x04,
         #define GET_SUPPORTED 0x0
         #define GET_LOG       0x1
+    IDENTIFY    = 0x40,
+        #define MEMORY_DEVICE 0x0
 };
 
 /* 8.2.8.4.5.1 Command Return Codes */
@@ -214,6 +216,48 @@ static ret_code cmd_logs_get_log(struct cxl_cmd *cmd,
     return CXL_MBOX_SUCCESS;
 }
 
+/* 8.2.9.5.1.1 */
+static ret_code cmd_identify_memory_device(struct cxl_cmd *cmd,
+                                           CXLDeviceState *cxl_dstate,
+                                           uint16_t *len)
+{
+    struct {
+        char fw_revision[0x10];
+        uint64_t total_capacity;
+        uint64_t volatile_capacity;
+        uint64_t persistent_capacity;
+        uint64_t partition_align;
+        uint16_t info_event_log_size;
+        uint16_t warning_event_log_size;
+        uint16_t failure_event_log_size;
+        uint16_t fatal_event_log_size;
+        uint32_t lsa_size;
+        uint8_t poison_list_max_mer[3];
+        uint16_t inject_poison_limit;
+        uint8_t poison_caps;
+        uint8_t qos_telemetry_caps;
+    } QEMU_PACKED *id;
+    QEMU_BUILD_BUG_ON(sizeof(*id) != 0x43);
+
+    uint64_t size = cxl_dstate->pmem_size;
+
+    if (!QEMU_IS_ALIGNED(size, 256 << 20)) {
+        return CXL_MBOX_INTERNAL_ERROR;
+    }
+
+    id = (void *)cmd->payload;
+    memset(id, 0, sizeof(*id));
+
+    /* PMEM only */
+    snprintf(id->fw_revision, 0x10, "BWFW VERSION %02d", 0);
+
+    id->total_capacity = size / (256 << 20);
+    id->persistent_capacity = size / (256 << 20);
+
+    *len = sizeof(*id);
+    return CXL_MBOX_SUCCESS;
+}
+
 #define IMMEDIATE_CONFIG_CHANGE (1 << 1)
 #define IMMEDIATE_POLICY_CHANGE (1 << 3)
 #define IMMEDIATE_LOG_CHANGE (1 << 4)
@@ -231,6 +275,8 @@ static struct cxl_cmd cxl_cmd_set[256][256] = {
     [TIMESTAMP][SET] = { "TIMESTAMP_SET", cmd_timestamp_set, 8, IMMEDIATE_POLICY_CHANGE },
     [LOGS][GET_SUPPORTED] = { "LOGS_GET_SUPPORTED", cmd_logs_get_supported, 0, 0 },
     [LOGS][GET_LOG] = { "LOGS_GET_LOG", cmd_logs_get_log, 0x18, 0 },
+    [IDENTIFY][MEMORY_DEVICE] = { "IDENTIFY_MEMORY_DEVICE",
+        cmd_identify_memory_device, 0, 0 },
 };
 
 void cxl_process_mailbox(CXLDeviceState *cxl_dstate)
diff --git a/hw/mem/Kconfig b/hw/mem/Kconfig
index 03dbb3c7df..73c5ae8ad9 100644
--- a/hw/mem/Kconfig
+++ b/hw/mem/Kconfig
@@ -11,3 +11,8 @@ config NVDIMM
 
 config SPARSE_MEM
     bool
+
+config CXL_MEM_DEVICE
+    bool
+    default y if CXL
+    select MEM_DEVICE
diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
new file mode 100644
index 0000000000..a8d7cfcc81
--- /dev/null
+++ b/hw/mem/cxl_type3.c
@@ -0,0 +1,153 @@
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qemu/error-report.h"
+#include "hw/mem/memory-device.h"
+#include "hw/mem/pc-dimm.h"
+#include "hw/pci/pci.h"
+#include "hw/qdev-properties.h"
+#include "qapi/error.h"
+#include "qemu/log.h"
+#include "qemu/module.h"
+#include "qemu/range.h"
+#include "qemu/rcu.h"
+#include "sysemu/hostmem.h"
+#include "hw/cxl/cxl.h"
+
+static void build_dvsecs(CXLType3Dev *ct3d)
+{
+    CXLComponentState *cxl_cstate = &ct3d->cxl_cstate;
+    uint8_t *dvsec;
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_device){
+        .cap = 0x1e,
+        .ctrl = 0x6,
+        .status2 = 0x2,
+        .range1_size_hi = 0,
+#ifdef SET_PMEM_PADDR
+        .range1_size_lo = (2 << 5) | (2 << 2) | 0x3 | ct3d->size,
+#else
+        .range1_size_lo = 0x3,
+#endif
+        .range1_base_hi = 0,
+        .range1_base_lo = 0,
+    };
+    cxl_component_create_dvsec(cxl_cstate, PCIE_CXL_DEVICE_DVSEC_LENGTH,
+                               PCIE_CXL_DEVICE_DVSEC,
+                               PCIE_CXL2_DEVICE_DVSEC_REVID, dvsec);
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_register_locator){
+        .rsvd         = 0,
+        .reg0_base_lo = RBI_COMPONENT_REG | CXL_COMPONENT_REG_BAR_IDX,
+        .reg0_base_hi = 0,
+        .reg1_base_lo = RBI_CXL_DEVICE_REG | CXL_DEVICE_REG_BAR_IDX,
+        .reg1_base_hi = 0,
+    };
+    cxl_component_create_dvsec(cxl_cstate, REG_LOC_DVSEC_LENGTH, REG_LOC_DVSEC,
+                               REG_LOC_DVSEC_REVID, dvsec);
+}
+
+static void cxl_setup_memory(CXLType3Dev *ct3d, Error **errp)
+{
+    MemoryRegion *mr;
+
+    if (!ct3d->hostmem) {
+        error_setg(errp, "memdev property must be set");
+        return;
+    }
+
+    mr = host_memory_backend_get_memory(ct3d->hostmem);
+    if (!mr) {
+        error_setg(errp, "memdev property must be set");
+        return;
+    }
+    memory_region_set_nonvolatile(mr, true);
+    memory_region_set_enabled(mr, true);
+    host_memory_backend_set_mapped(ct3d->hostmem, true);
+    ct3d->cxl_dstate.pmem_size = ct3d->hostmem->size;
+}
+
+
+static void ct3_realize(PCIDevice *pci_dev, Error **errp)
+{
+    CXLType3Dev *ct3d = CT3(pci_dev);
+    CXLComponentState *cxl_cstate = &ct3d->cxl_cstate;
+    ComponentRegisters *regs = &cxl_cstate->crb;
+    MemoryRegion *mr = &regs->component_registers;
+    uint8_t *pci_conf = pci_dev->config;
+
+    cxl_setup_memory(ct3d, errp);
+
+    pci_config_set_prog_interface(pci_conf, 0x10);
+    pci_config_set_class(pci_conf, PCI_CLASS_MEMORY_CXL);
+
+    pcie_endpoint_cap_init(pci_dev, 0x80);
+    cxl_cstate->dvsec_offset = 0x100;
+
+    ct3d->cxl_cstate.pdev = pci_dev;
+    build_dvsecs(ct3d);
+
+    cxl_component_register_block_init(OBJECT(pci_dev), cxl_cstate,
+                                      TYPE_CXL_TYPE3_DEV);
+
+    pci_register_bar(
+        pci_dev, CXL_COMPONENT_REG_BAR_IDX,
+        PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, mr);
+
+    cxl_device_register_block_init(OBJECT(pci_dev), &ct3d->cxl_dstate);
+    pci_register_bar(pci_dev, CXL_DEVICE_REG_BAR_IDX,
+                     PCI_BASE_ADDRESS_SPACE_MEMORY |
+                         PCI_BASE_ADDRESS_MEM_TYPE_64,
+                     &ct3d->cxl_dstate.device_registers);
+}
+
+static void ct3d_reset(DeviceState *dev)
+{
+    CXLType3Dev *ct3d = CT3(dev);
+    uint32_t *reg_state = ct3d->cxl_cstate.crb.cache_mem_registers;
+
+    cxl_component_register_init_common(reg_state, CXL2_TYPE3_DEVICE);
+    cxl_device_register_init_common(&ct3d->cxl_dstate);
+}
+
+static Property ct3_props[] = {
+    DEFINE_PROP_SIZE("size", CXLType3Dev, size, -1),
+    DEFINE_PROP_LINK("memdev", CXLType3Dev, hostmem, TYPE_MEMORY_BACKEND,
+                     HostMemoryBackend *),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void ct3_class_init(ObjectClass *oc, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(oc);
+    PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc);
+
+    pc->realize = ct3_realize;
+    pc->class_id = PCI_CLASS_STORAGE_EXPRESS;
+    pc->vendor_id = PCI_VENDOR_ID_INTEL;
+    pc->device_id = 0xd93; /* LVF for now */
+    pc->revision = 1;
+
+    set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
+    dc->desc = "CXL PMEM Device (Type 3)";
+    dc->reset = ct3d_reset;
+    device_class_set_props(dc, ct3_props);
+}
+
+static const TypeInfo ct3d_info = {
+    .name = TYPE_CXL_TYPE3_DEV,
+    .parent = TYPE_PCI_DEVICE,
+    .class_init = ct3_class_init,
+    .instance_size = sizeof(CXLType3Dev),
+    .interfaces = (InterfaceInfo[]) {
+        { INTERFACE_CXL_DEVICE },
+        { INTERFACE_PCIE_DEVICE },
+        {}
+    },
+};
+
+static void ct3d_registers(void)
+{
+    type_register_static(&ct3d_info);
+}
+
+type_init(ct3d_registers);
diff --git a/hw/mem/meson.build b/hw/mem/meson.build
index 82f86d117e..609b2b36fc 100644
--- a/hw/mem/meson.build
+++ b/hw/mem/meson.build
@@ -3,6 +3,7 @@ mem_ss.add(files('memory-device.c'))
 mem_ss.add(when: 'CONFIG_DIMM', if_true: files('pc-dimm.c'))
 mem_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_mc.c'))
 mem_ss.add(when: 'CONFIG_NVDIMM', if_true: files('nvdimm.c'))
+mem_ss.add(when: 'CONFIG_CXL_MEM_DEVICE', if_true: files('cxl_type3.c'))
 
 softmmu_ss.add_all(when: 'CONFIG_MEM_DEVICE', if_true: mem_ss)
 
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 8102d2a813..72da811c52 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -230,4 +230,21 @@ REG64(CXL_MEM_DEV_STS, 0)
     FIELD(CXL_MEM_DEV_STS, MBOX_READY, 4, 1)
     FIELD(CXL_MEM_DEV_STS, RESET_NEEDED, 5, 3)
 
+typedef struct cxl_type3_dev {
+    /* Private */
+    PCIDevice parent_obj;
+
+    /* Properties */
+    uint64_t size;
+    HostMemoryBackend *hostmem;
+
+    /* State */
+    CXLComponentState cxl_cstate;
+    CXLDeviceState cxl_dstate;
+} CXLType3Dev;
+
+#define TYPE_CXL_TYPE3_DEV "cxl-type3"
+
+#define CT3(obj) OBJECT_CHECK(CXLType3Dev, (obj), TYPE_CXL_TYPE3_DEV)
+
 #endif
diff --git a/include/hw/cxl/cxl_pci.h b/include/hw/cxl/cxl_pci.h
index 810a244fab..cf53fe5425 100644
--- a/include/hw/cxl/cxl_pci.h
+++ b/include/hw/cxl/cxl_pci.h
@@ -64,6 +64,27 @@ QEMU_BUILD_BUG_ON(sizeof(struct dvsec_header) != 10);
  * CXL 2.0 Downstream Port: 3, 4, 7, 8
  */
 
+/* CXL 2.0 - 8.1.3 (ID 0001) */
+struct cxl_dvsec_device {
+    struct dvsec_header hdr;
+    uint16_t cap;
+    uint16_t ctrl;
+    uint16_t status;
+    uint16_t ctrl2;
+    uint16_t status2;
+    uint16_t lock;
+    uint16_t cap2;
+    uint32_t range1_size_hi;
+    uint32_t range1_size_lo;
+    uint32_t range1_base_hi;
+    uint32_t range1_base_lo;
+    uint32_t range2_size_hi;
+    uint32_t range2_size_lo;
+    uint32_t range2_base_hi;
+    uint32_t range2_base_lo;
+};
+QEMU_BUILD_BUG_ON(sizeof(struct cxl_dvsec_device) != 0x38);
+
 /* CXL 2.0 - 8.1.5 (ID 0003) */
 struct cxl_dvsec_port_extensions {
     struct dvsec_header hdr;
diff --git a/include/hw/pci/pci_ids.h b/include/hw/pci/pci_ids.h
index 11abe22d46..898083b86f 100644
--- a/include/hw/pci/pci_ids.h
+++ b/include/hw/pci/pci_ids.h
@@ -53,6 +53,7 @@
 #define PCI_BASE_CLASS_MEMORY            0x05
 #define PCI_CLASS_MEMORY_RAM             0x0500
 #define PCI_CLASS_MEMORY_FLASH           0x0501
+#define PCI_CLASS_MEMORY_CXL             0x0502
 #define PCI_CLASS_MEMORY_OTHER           0x0580
 
 #define PCI_BASE_CLASS_BRIDGE            0x06
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 18/46] hw/cxl/device: Implement MMIO HDM decoding (8.2.5.12)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A device's volatile and persistent memory are known as Host-managed
Device Memory (HDM) regions. The mechanism by which the device is
programmed to claim the addresses associated with those regions is
dedicated logic known as the HDM decoder. In order to allow the OS to
properly program the HDMs, the HDM decoders must be modeled.

There are two ways the HDM decoders can be implemented, the legacy
mechanism is through the PCIe DVSEC programming from CXL 1.1 (8.1.3.8),
and MMIO is found in 8.2.5.12 of the spec. For now, 8.1.3.8 is not
implemented.

Much of the CXL device logic is implemented in cxl-utils. The HDM
decoder, however, is implemented directly by the device implementation.
Whilst the implementation currently performs no validity checks on the
decoder set-up, future work will add sanity checking specific to
the type of CXL component.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Co-developed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
v7:
* Drop pointless void * cast. (Alex)
* Add assertion but without the suggested >> 2 as unit is bytes here (Alex)

 hw/mem/cxl_type3.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index a8d7cfcc81..16b113d5ed 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -46,6 +46,57 @@ static void build_dvsecs(CXLType3Dev *ct3d)
                                REG_LOC_DVSEC_REVID, dvsec);
 }
 
+static void hdm_decoder_commit(CXLType3Dev *ct3d, int which)
+{
+    ComponentRegisters *cregs = &ct3d->cxl_cstate.crb;
+    uint32_t *cache_mem = cregs->cache_mem_registers;
+
+    assert(which == 0);
+
+    /* TODO: Sanity checks that the decoder is possible */
+    ARRAY_FIELD_DP32(cache_mem, CXL_HDM_DECODER0_CTRL, COMMIT, 0);
+    ARRAY_FIELD_DP32(cache_mem, CXL_HDM_DECODER0_CTRL, ERR, 0);
+
+    ARRAY_FIELD_DP32(cache_mem, CXL_HDM_DECODER0_CTRL, COMMITTED, 1);
+}
+
+static void ct3d_reg_write(void *opaque, hwaddr offset, uint64_t value,
+                           unsigned size)
+{
+    CXLComponentState *cxl_cstate = opaque;
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+    CXLType3Dev *ct3d = container_of(cxl_cstate, CXLType3Dev, cxl_cstate);
+    uint32_t *cache_mem = cregs->cache_mem_registers;
+    bool should_commit = false;
+    int which_hdm = -1;
+
+    assert(size == 4);
+    g_assert(offset <= CXL2_COMPONENT_CM_REGION_SIZE);
+
+    switch (offset) {
+    case A_CXL_HDM_DECODER0_CTRL:
+        should_commit = FIELD_EX32(value, CXL_HDM_DECODER0_CTRL, COMMIT);
+        which_hdm = 0;
+        break;
+    default:
+        break;
+    }
+
+    stl_le_p((uint8_t *)cache_mem + offset, value);
+    if (should_commit) {
+        hdm_decoder_commit(ct3d, which_hdm);
+    }
+}
+
+static void ct3_finalize(Object *obj)
+{
+    CXLType3Dev *ct3d = CT3(obj);
+    CXLComponentState *cxl_cstate = &ct3d->cxl_cstate;
+    ComponentRegisters *regs = &cxl_cstate->crb;
+
+    g_free(regs->special_ops);
+}
+
 static void cxl_setup_memory(CXLType3Dev *ct3d, Error **errp)
 {
     MemoryRegion *mr;
@@ -86,6 +137,9 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp)
     ct3d->cxl_cstate.pdev = pci_dev;
     build_dvsecs(ct3d);
 
+    regs->special_ops = g_new0(MemoryRegionOps, 1);
+    regs->special_ops->write = ct3d_reg_write;
+
     cxl_component_register_block_init(OBJECT(pci_dev), cxl_cstate,
                                       TYPE_CXL_TYPE3_DEV);
 
@@ -138,6 +192,7 @@ static const TypeInfo ct3d_info = {
     .parent = TYPE_PCI_DEVICE,
     .class_init = ct3_class_init,
     .instance_size = sizeof(CXLType3Dev),
+    .instance_finalize = ct3_finalize,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_CXL_DEVICE },
         { INTERFACE_PCIE_DEVICE },
-- 
2.32.0



* [PATCH v7 18/46] hw/cxl/device: Implement MMIO HDM decoding (8.2.5.12)
@ 2022-03-06 17:41   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

A device's volatile and persistent memory are known as Host-managed Device
Memory (HDM) regions. The device is programmed to claim the addresses
associated with those regions through dedicated logic known as the HDM
decoder. In order to allow the OS to properly program the HDM ranges,
the HDM decoders must be modeled.

There are two ways the HDM decoders can be programmed: the legacy
mechanism uses the PCIe DVSEC defined in CXL 1.1 (8.1.3.8), and the MMIO
mechanism is described in section 8.2.5.12 of the spec. For now, 8.1.3.8
is not implemented.

Much of the CXL device logic is implemented in cxl-utils. The HDM decoder,
however, is implemented directly by the device implementation.
Whilst the implementation currently does no validity checks on the
decoder set-up, future work will add sanity checking specific to
the type of CXL component.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Co-developed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

* [PATCH v7 19/46] hw/cxl/device: Add some trivial commands
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

For this emulation, GET_FW_INFO and GET_PARTITION_INFO are equivalent to
information already returned by the IDENTIFY command. To have a more
robust implementation, add them.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
v7:
* Use QEMU_PACKED etc from compiler.h (Alex)
  Note this applies in other patches, not all called out explicitly.
* use pstrcpy() rather than snprintf() for a fixed string

 hw/cxl/cxl-mailbox-utils.c | 69 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 7ed9606c6b..fcd41d9a9d 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -10,6 +10,7 @@
 #include "qemu/osdep.h"
 #include "hw/cxl/cxl.h"
 #include "hw/pci/pci.h"
+#include "qemu/cutils.h"
 #include "qemu/log.h"
 #include "qemu/uuid.h"
 
@@ -44,6 +45,8 @@ enum {
         #define CLEAR_RECORDS   0x1
         #define GET_INTERRUPT_POLICY   0x2
         #define SET_INTERRUPT_POLICY   0x3
+    FIRMWARE_UPDATE = 0x02,
+        #define GET_INFO      0x0
     TIMESTAMP   = 0x03,
         #define GET           0x0
         #define SET           0x1
@@ -52,6 +55,8 @@ enum {
         #define GET_LOG       0x1
     IDENTIFY    = 0x40,
         #define MEMORY_DEVICE 0x0
+    CCLS        = 0x41,
+        #define GET_PARTITION_INFO     0x0
 };
 
 /* 8.2.8.4.5.1 Command Return Codes */
@@ -114,6 +119,39 @@ DEFINE_MAILBOX_HANDLER_NOP(events_clear_records);
 DEFINE_MAILBOX_HANDLER_ZEROED(events_get_interrupt_policy, 4);
 DEFINE_MAILBOX_HANDLER_NOP(events_set_interrupt_policy);
 
+/* 8.2.9.2.1 */
+static ret_code cmd_firmware_update_get_info(struct cxl_cmd *cmd,
+                                             CXLDeviceState *cxl_dstate,
+                                             uint16_t *len)
+{
+    struct {
+        uint8_t slots_supported;
+        uint8_t slot_info;
+        uint8_t caps;
+        uint8_t rsvd[0xd];
+        char fw_rev1[0x10];
+        char fw_rev2[0x10];
+        char fw_rev3[0x10];
+        char fw_rev4[0x10];
+    } QEMU_PACKED *fw_info;
+    QEMU_BUILD_BUG_ON(sizeof(*fw_info) != 0x50);
+
+    if (cxl_dstate->pmem_size < (256 << 20)) {
+        return CXL_MBOX_INTERNAL_ERROR;
+    }
+
+    fw_info = (void *)cmd->payload;
+    memset(fw_info, 0, sizeof(*fw_info));
+
+    fw_info->slots_supported = 2;
+    fw_info->slot_info = BIT(0) | BIT(3);
+    fw_info->caps = 0;
+    pstrcpy(fw_info->fw_rev1, sizeof(fw_info->fw_rev1), "BWFW VERSION 0");
+
+    *len = sizeof(*fw_info);
+    return CXL_MBOX_SUCCESS;
+}
+
 /* 8.2.9.3.1 */
 static ret_code cmd_timestamp_get(struct cxl_cmd *cmd,
                                   CXLDeviceState *cxl_dstate,
@@ -258,6 +296,33 @@ static ret_code cmd_identify_memory_device(struct cxl_cmd *cmd,
     return CXL_MBOX_SUCCESS;
 }
 
+static ret_code cmd_ccls_get_partition_info(struct cxl_cmd *cmd,
+                                           CXLDeviceState *cxl_dstate,
+                                           uint16_t *len)
+{
+    struct {
+        uint64_t active_vmem;
+        uint64_t active_pmem;
+        uint64_t next_vmem;
+        uint64_t next_pmem;
+    } QEMU_PACKED *part_info = (void *)cmd->payload;
+    QEMU_BUILD_BUG_ON(sizeof(*part_info) != 0x20);
+    uint64_t size = cxl_dstate->pmem_size;
+
+    if (!QEMU_IS_ALIGNED(size, 256 << 20)) {
+        return CXL_MBOX_INTERNAL_ERROR;
+    }
+
+    /* PMEM only */
+    part_info->active_vmem = 0;
+    part_info->next_vmem = 0;
+    part_info->active_pmem = size / (256 << 20);
+    part_info->next_pmem = part_info->active_pmem;
+
+    *len = sizeof(*part_info);
+    return CXL_MBOX_SUCCESS;
+}
+
 #define IMMEDIATE_CONFIG_CHANGE (1 << 1)
 #define IMMEDIATE_POLICY_CHANGE (1 << 3)
 #define IMMEDIATE_LOG_CHANGE (1 << 4)
@@ -271,12 +336,16 @@ static struct cxl_cmd cxl_cmd_set[256][256] = {
         cmd_events_get_interrupt_policy, 0, 0 },
     [EVENTS][SET_INTERRUPT_POLICY] = { "EVENTS_SET_INTERRUPT_POLICY",
         cmd_events_set_interrupt_policy, 4, IMMEDIATE_CONFIG_CHANGE },
+    [FIRMWARE_UPDATE][GET_INFO] = { "FIRMWARE_UPDATE_GET_INFO",
+        cmd_firmware_update_get_info, 0, 0 },
     [TIMESTAMP][GET] = { "TIMESTAMP_GET", cmd_timestamp_get, 0, 0 },
     [TIMESTAMP][SET] = { "TIMESTAMP_SET", cmd_timestamp_set, 8, IMMEDIATE_POLICY_CHANGE },
     [LOGS][GET_SUPPORTED] = { "LOGS_GET_SUPPORTED", cmd_logs_get_supported, 0, 0 },
     [LOGS][GET_LOG] = { "LOGS_GET_LOG", cmd_logs_get_log, 0x18, 0 },
     [IDENTIFY][MEMORY_DEVICE] = { "IDENTIFY_MEMORY_DEVICE",
         cmd_identify_memory_device, 0, 0 },
+    [CCLS][GET_PARTITION_INFO] = { "CCLS_GET_PARTITION_INFO",
+        cmd_ccls_get_partition_info, 0, 0 },
 };
 
 void cxl_process_mailbox(CXLDeviceState *cxl_dstate)
-- 
2.32.0




* [PATCH v7 20/46] hw/cxl/device: Plumb real Label Storage Area (LSA) sizing
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

This should introduce no functional change. Subsequent work will make use
of this new class member.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
v7:
* Moved struct cxl_type3_dev to final location in earlier patch (17)

 hw/cxl/cxl-mailbox-utils.c  |  3 +++
 hw/mem/cxl_type3.c          |  9 +++++++++
 include/hw/cxl/cxl_device.h | 10 ++++++++++
 3 files changed, 22 insertions(+)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index fcd41d9a9d..771b1cfe90 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -277,6 +277,8 @@ static ret_code cmd_identify_memory_device(struct cxl_cmd *cmd,
     } QEMU_PACKED *id;
     QEMU_BUILD_BUG_ON(sizeof(*id) != 0x43);
 
+    CXLType3Dev *ct3d = container_of(cxl_dstate, CXLType3Dev, cxl_dstate);
+    CXLType3Class *cvc = CXL_TYPE3_DEV_GET_CLASS(ct3d);
     uint64_t size = cxl_dstate->pmem_size;
 
     if (!QEMU_IS_ALIGNED(size, 256 << 20)) {
@@ -291,6 +293,7 @@ static ret_code cmd_identify_memory_device(struct cxl_cmd *cmd,
 
     id->total_capacity = size / (256 << 20);
     id->persistent_capacity = size / (256 << 20);
+    id->lsa_size = cvc->get_lsa_size(ct3d);
 
     *len = sizeof(*id);
     return CXL_MBOX_SUCCESS;
diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index 16b113d5ed..7cd3041eb3 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -170,10 +170,16 @@ static Property ct3_props[] = {
     DEFINE_PROP_END_OF_LIST(),
 };
 
+static uint64_t get_lsa_size(CXLType3Dev *ct3d)
+{
+    return 0;
+}
+
 static void ct3_class_init(ObjectClass *oc, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(oc);
     PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc);
+    CXLType3Class *cvc = CXL_TYPE3_DEV_CLASS(oc);
 
     pc->realize = ct3_realize;
     pc->class_id = PCI_CLASS_STORAGE_EXPRESS;
@@ -185,11 +191,14 @@ static void ct3_class_init(ObjectClass *oc, void *data)
     dc->desc = "CXL PMEM Device (Type 3)";
     dc->reset = ct3d_reset;
     device_class_set_props(dc, ct3_props);
+
+    cvc->get_lsa_size = get_lsa_size;
 }
 
 static const TypeInfo ct3d_info = {
     .name = TYPE_CXL_TYPE3_DEV,
     .parent = TYPE_PCI_DEVICE,
+    .class_size = sizeof(struct CXLType3Class),
     .class_init = ct3_class_init,
     .instance_size = sizeof(CXLType3Dev),
     .instance_finalize = ct3_finalize,
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 72da811c52..cf4c110f7e 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -237,6 +237,7 @@ typedef struct cxl_type3_dev {
     /* Properties */
     uint64_t size;
     HostMemoryBackend *hostmem;
+    HostMemoryBackend *lsa;
 
     /* State */
     CXLComponentState cxl_cstate;
@@ -246,5 +247,14 @@ typedef struct cxl_type3_dev {
 #define TYPE_CXL_TYPE3_DEV "cxl-type3"
 
 #define CT3(obj) OBJECT_CHECK(CXLType3Dev, (obj), TYPE_CXL_TYPE3_DEV)
+OBJECT_DECLARE_TYPE(CXLType3Device, CXLType3Class, CXL_TYPE3_DEV)
+
+struct CXLType3Class {
+    /* Private */
+    PCIDeviceClass parent_class;
+
+    /* public */
+    uint64_t (*get_lsa_size)(CXLType3Dev *ct3d);
+};
 
 #endif
-- 
2.32.0




* [PATCH v7 21/46] hw/cxl/device: Implement get/set Label Storage Area (LSA)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

Implement get and set handlers for the Label Storage Area (LSA), which
holds data describing the persistent memory configuration, ensuring the
device is seen in the same configuration after reboot.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
v7:
* Move a value only used in error path into that path (Alex)
* Refactor code for the set_lsa() command to make it easier to follow.
  Note the solution is different from the option Alex suggested but
  hopefully achieves the same aim. (Alex)
* Use QEMU_PACKED (Alex)

 hw/cxl/cxl-mailbox-utils.c  | 60 +++++++++++++++++++++++++++++++++++++
 hw/mem/cxl_type3.c          | 56 +++++++++++++++++++++++++++++++++-
 include/hw/cxl/cxl_device.h |  5 ++++
 3 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 771b1cfe90..acb73c7a68 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -57,6 +57,8 @@ enum {
         #define MEMORY_DEVICE 0x0
     CCLS        = 0x41,
         #define GET_PARTITION_INFO     0x0
+        #define GET_LSA       0x2
+        #define SET_LSA       0x3
 };
 
 /* 8.2.8.4.5.1 Command Return Codes */
@@ -326,7 +328,62 @@ static ret_code cmd_ccls_get_partition_info(struct cxl_cmd *cmd,
     return CXL_MBOX_SUCCESS;
 }
 
+static ret_code cmd_ccls_get_lsa(struct cxl_cmd *cmd,
+                                 CXLDeviceState *cxl_dstate,
+                                 uint16_t *len)
+{
+    struct {
+        uint32_t offset;
+        uint32_t length;
+    } QEMU_PACKED *get_lsa;
+    CXLType3Dev *ct3d = container_of(cxl_dstate, CXLType3Dev, cxl_dstate);
+    CXLType3Class *cvc = CXL_TYPE3_DEV_GET_CLASS(ct3d);
+    uint32_t offset, length;
+
+    get_lsa = (void *)cmd->payload;
+    offset = get_lsa->offset;
+    length = get_lsa->length;
+
+    if (offset + length > cvc->get_lsa_size(ct3d)) {
+        *len = 0;
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    *len = cvc->get_lsa(ct3d, get_lsa, length, offset);
+    return CXL_MBOX_SUCCESS;
+}
+
+static ret_code cmd_ccls_set_lsa(struct cxl_cmd *cmd,
+                                 CXLDeviceState *cxl_dstate,
+                                 uint16_t *len)
+{
+    struct set_lsa_pl {
+        uint32_t offset;
+        uint32_t rsvd;
+        uint8_t data[];
+    } QEMU_PACKED;
+    struct set_lsa_pl *set_lsa_payload = (void *)cmd->payload;
+    CXLType3Dev *ct3d = container_of(cxl_dstate, CXLType3Dev, cxl_dstate);
+    CXLType3Class *cvc = CXL_TYPE3_DEV_GET_CLASS(ct3d);
+    const size_t hdr_len = offsetof(struct set_lsa_pl, data);
+    uint16_t plen = *len;
+
+    *len = 0;
+    if (!plen) {
+        return CXL_MBOX_SUCCESS;
+    }
+
+    if (set_lsa_payload->offset + plen > cvc->get_lsa_size(ct3d) + hdr_len) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+    plen -= hdr_len;
+
+    cvc->set_lsa(ct3d, set_lsa_payload->data, plen, set_lsa_payload->offset);
+    return CXL_MBOX_SUCCESS;
+}
+
 #define IMMEDIATE_CONFIG_CHANGE (1 << 1)
+#define IMMEDIATE_DATA_CHANGE (1 << 2)
 #define IMMEDIATE_POLICY_CHANGE (1 << 3)
 #define IMMEDIATE_LOG_CHANGE (1 << 4)
 
@@ -349,6 +406,9 @@ static struct cxl_cmd cxl_cmd_set[256][256] = {
         cmd_identify_memory_device, 0, 0 },
     [CCLS][GET_PARTITION_INFO] = { "CCLS_GET_PARTITION_INFO",
         cmd_ccls_get_partition_info, 0, 0 },
+    [CCLS][GET_LSA] = { "CCLS_GET_LSA", cmd_ccls_get_lsa, 0, 0 },
+    [CCLS][SET_LSA] = { "CCLS_SET_LSA", cmd_ccls_set_lsa,
+        ~0, IMMEDIATE_CONFIG_CHANGE | IMMEDIATE_DATA_CHANGE },
 };
 
 void cxl_process_mailbox(CXLDeviceState *cxl_dstate)
diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index 7cd3041eb3..244eb5dc91 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -8,6 +8,7 @@
 #include "qapi/error.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
+#include "qemu/pmem.h"
 #include "qemu/range.h"
 #include "qemu/rcu.h"
 #include "sysemu/hostmem.h"
@@ -115,6 +116,11 @@ static void cxl_setup_memory(CXLType3Dev *ct3d, Error **errp)
     memory_region_set_enabled(mr, true);
     host_memory_backend_set_mapped(ct3d->hostmem, true);
     ct3d->cxl_dstate.pmem_size = ct3d->hostmem->size;
+
+    if (!ct3d->lsa) {
+        error_setg(errp, "lsa property must be set");
+        return;
+    }
 }
 
 
@@ -167,12 +173,58 @@ static Property ct3_props[] = {
     DEFINE_PROP_SIZE("size", CXLType3Dev, size, -1),
     DEFINE_PROP_LINK("memdev", CXLType3Dev, hostmem, TYPE_MEMORY_BACKEND,
                      HostMemoryBackend *),
+    DEFINE_PROP_LINK("lsa", CXLType3Dev, lsa, TYPE_MEMORY_BACKEND,
+                     HostMemoryBackend *),
     DEFINE_PROP_END_OF_LIST(),
 };
 
 static uint64_t get_lsa_size(CXLType3Dev *ct3d)
 {
-    return 0;
+    MemoryRegion *mr;
+
+    mr = host_memory_backend_get_memory(ct3d->lsa);
+    return memory_region_size(mr);
+}
+
+static void validate_lsa_access(MemoryRegion *mr, uint64_t size,
+                                uint64_t offset)
+{
+    assert(offset + size <= memory_region_size(mr));
+    assert(offset + size > offset);
+}
+
+static uint64_t get_lsa(CXLType3Dev *ct3d, void *buf, uint64_t size,
+                    uint64_t offset)
+{
+    MemoryRegion *mr;
+    void *lsa;
+
+    mr = host_memory_backend_get_memory(ct3d->lsa);
+    validate_lsa_access(mr, size, offset);
+
+    lsa = memory_region_get_ram_ptr(mr) + offset;
+    memcpy(buf, lsa, size);
+
+    return size;
+}
+
+static void set_lsa(CXLType3Dev *ct3d, const void *buf, uint64_t size,
+                    uint64_t offset)
+{
+    MemoryRegion *mr;
+    void *lsa;
+
+    mr = host_memory_backend_get_memory(ct3d->lsa);
+    validate_lsa_access(mr, size, offset);
+
+    lsa = memory_region_get_ram_ptr(mr) + offset;
+    memcpy(lsa, buf, size);
+    memory_region_set_dirty(mr, offset, size);
+
+    /*
+     * Just like the PMEM, if the guest is not allowed to exit gracefully, label
+     * updates will get lost.
+     */
 }
 
 static void ct3_class_init(ObjectClass *oc, void *data)
@@ -193,6 +245,8 @@ static void ct3_class_init(ObjectClass *oc, void *data)
     device_class_set_props(dc, ct3_props);
 
     cvc->get_lsa_size = get_lsa_size;
+    cvc->get_lsa = get_lsa;
+    cvc->set_lsa = set_lsa;
 }
 
 static const TypeInfo ct3d_info = {
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index cf4c110f7e..288cc11772 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -255,6 +255,11 @@ struct CXLType3Class {
 
     /* public */
     uint64_t (*get_lsa_size)(CXLType3Dev *ct3d);
+
+    uint64_t (*get_lsa)(CXLType3Dev *ct3d, void *buf, uint64_t size,
+                        uint64_t offset);
+    void (*set_lsa)(CXLType3Dev *ct3d, const void *buf, uint64_t size,
+                    uint64_t offset);
 };
 
 #endif
-- 
2.32.0



* [PATCH v7 21/46] hw/cxl/device: Implement get/set Label Storage Area (LSA)
@ 2022-03-06 17:41   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

Implement get and set handlers for the Label Storage Area
used to hold data describing the persistent memory configuration,
so that the device can be restored to the same configuration
after a reboot.
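The payload layout and bounds check used by these handlers can be sketched as
below. This is a minimal standalone sketch: the struct mirrors the QEMU_PACKED
Get LSA input payload from the patch, while the helper names and the 64-bit
widening of the range check are illustrative, not taken verbatim from the code.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Minimal sketch of the Get LSA mailbox input payload: a byte offset and
 * length into the Label Storage Area.  Mirrors the QEMU_PACKED struct in
 * cmd_ccls_get_lsa(); the standalone names are illustrative.
 */
struct get_lsa_in {
    uint32_t offset;
    uint32_t length;
} __attribute__((packed));

/*
 * Range check for an LSA access.  Widening to 64 bits before the add
 * keeps offset + length from wrapping when both are near UINT32_MAX.
 */
static int lsa_access_in_bounds(uint32_t offset, uint32_t length,
                                uint64_t lsa_size)
{
    return (uint64_t)offset + length <= lsa_size;
}
```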

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
v7:
* Move a value only used in error path into that path (Alex)
* Refactor code for the set_lsa() command to make it easier to follow.
  Note the solution differs from the option Alex suggested, but hopefully
  achieves the same aim. (Alex)
* Use QEMU_PACKED (Alex)

 hw/cxl/cxl-mailbox-utils.c  | 60 +++++++++++++++++++++++++++++++++++++
 hw/mem/cxl_type3.c          | 56 +++++++++++++++++++++++++++++++++-
 include/hw/cxl/cxl_device.h |  5 ++++
 3 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 771b1cfe90..acb73c7a68 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -57,6 +57,8 @@ enum {
         #define MEMORY_DEVICE 0x0
     CCLS        = 0x41,
         #define GET_PARTITION_INFO     0x0
+        #define GET_LSA       0x2
+        #define SET_LSA       0x3
 };
 
 /* 8.2.8.4.5.1 Command Return Codes */
@@ -326,7 +328,62 @@ static ret_code cmd_ccls_get_partition_info(struct cxl_cmd *cmd,
     return CXL_MBOX_SUCCESS;
 }
 
+static ret_code cmd_ccls_get_lsa(struct cxl_cmd *cmd,
+                                 CXLDeviceState *cxl_dstate,
+                                 uint16_t *len)
+{
+    struct {
+        uint32_t offset;
+        uint32_t length;
+    } QEMU_PACKED *get_lsa;
+    CXLType3Dev *ct3d = container_of(cxl_dstate, CXLType3Dev, cxl_dstate);
+    CXLType3Class *cvc = CXL_TYPE3_DEV_GET_CLASS(ct3d);
+    uint32_t offset, length;
+
+    get_lsa = (void *)cmd->payload;
+    offset = get_lsa->offset;
+    length = get_lsa->length;
+
+    if (offset + length > cvc->get_lsa_size(ct3d)) {
+        *len = 0;
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    *len = cvc->get_lsa(ct3d, get_lsa, length, offset);
+    return CXL_MBOX_SUCCESS;
+}
+
+static ret_code cmd_ccls_set_lsa(struct cxl_cmd *cmd,
+                                 CXLDeviceState *cxl_dstate,
+                                 uint16_t *len)
+{
+    struct set_lsa_pl {
+        uint32_t offset;
+        uint32_t rsvd;
+        uint8_t data[];
+    } QEMU_PACKED;
+    struct set_lsa_pl *set_lsa_payload = (void *)cmd->payload;
+    CXLType3Dev *ct3d = container_of(cxl_dstate, CXLType3Dev, cxl_dstate);
+    CXLType3Class *cvc = CXL_TYPE3_DEV_GET_CLASS(ct3d);
+    const size_t hdr_len = offsetof(struct set_lsa_pl, data);
+    uint16_t plen = *len;
+
+    *len = 0;
+    if (!plen) {
+        return CXL_MBOX_SUCCESS;
+    }
+
+    if (set_lsa_payload->offset + plen > cvc->get_lsa_size(ct3d) + hdr_len) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+    plen -= hdr_len;
+
+    cvc->set_lsa(ct3d, set_lsa_payload->data, plen, set_lsa_payload->offset);
+    return CXL_MBOX_SUCCESS;
+}
+
 #define IMMEDIATE_CONFIG_CHANGE (1 << 1)
+#define IMMEDIATE_DATA_CHANGE (1 << 2)
 #define IMMEDIATE_POLICY_CHANGE (1 << 3)
 #define IMMEDIATE_LOG_CHANGE (1 << 4)
 
@@ -349,6 +406,9 @@ static struct cxl_cmd cxl_cmd_set[256][256] = {
         cmd_identify_memory_device, 0, 0 },
     [CCLS][GET_PARTITION_INFO] = { "CCLS_GET_PARTITION_INFO",
         cmd_ccls_get_partition_info, 0, 0 },
+    [CCLS][GET_LSA] = { "CCLS_GET_LSA", cmd_ccls_get_lsa, 0, 0 },
+    [CCLS][SET_LSA] = { "CCLS_SET_LSA", cmd_ccls_set_lsa,
+        ~0, IMMEDIATE_CONFIG_CHANGE | IMMEDIATE_DATA_CHANGE },
 };
 
 void cxl_process_mailbox(CXLDeviceState *cxl_dstate)
diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index 7cd3041eb3..244eb5dc91 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -8,6 +8,7 @@
 #include "qapi/error.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
+#include "qemu/pmem.h"
 #include "qemu/range.h"
 #include "qemu/rcu.h"
 #include "sysemu/hostmem.h"
@@ -115,6 +116,11 @@ static void cxl_setup_memory(CXLType3Dev *ct3d, Error **errp)
     memory_region_set_enabled(mr, true);
     host_memory_backend_set_mapped(ct3d->hostmem, true);
     ct3d->cxl_dstate.pmem_size = ct3d->hostmem->size;
+
+    if (!ct3d->lsa) {
+        error_setg(errp, "lsa property must be set");
+        return;
+    }
 }
 
 
@@ -167,12 +173,58 @@ static Property ct3_props[] = {
     DEFINE_PROP_SIZE("size", CXLType3Dev, size, -1),
     DEFINE_PROP_LINK("memdev", CXLType3Dev, hostmem, TYPE_MEMORY_BACKEND,
                      HostMemoryBackend *),
+    DEFINE_PROP_LINK("lsa", CXLType3Dev, lsa, TYPE_MEMORY_BACKEND,
+                     HostMemoryBackend *),
     DEFINE_PROP_END_OF_LIST(),
 };
 
 static uint64_t get_lsa_size(CXLType3Dev *ct3d)
 {
-    return 0;
+    MemoryRegion *mr;
+
+    mr = host_memory_backend_get_memory(ct3d->lsa);
+    return memory_region_size(mr);
+}
+
+static void validate_lsa_access(MemoryRegion *mr, uint64_t size,
+                                uint64_t offset)
+{
+    assert(offset + size <= memory_region_size(mr));
+    assert(offset + size > offset);
+}
+
+static uint64_t get_lsa(CXLType3Dev *ct3d, void *buf, uint64_t size,
+                    uint64_t offset)
+{
+    MemoryRegion *mr;
+    void *lsa;
+
+    mr = host_memory_backend_get_memory(ct3d->lsa);
+    validate_lsa_access(mr, size, offset);
+
+    lsa = memory_region_get_ram_ptr(mr) + offset;
+    memcpy(buf, lsa, size);
+
+    return size;
+}
+
+static void set_lsa(CXLType3Dev *ct3d, const void *buf, uint64_t size,
+                    uint64_t offset)
+{
+    MemoryRegion *mr;
+    void *lsa;
+
+    mr = host_memory_backend_get_memory(ct3d->lsa);
+    validate_lsa_access(mr, size, offset);
+
+    lsa = memory_region_get_ram_ptr(mr) + offset;
+    memcpy(lsa, buf, size);
+    memory_region_set_dirty(mr, offset, size);
+
+    /*
+     * Just like the PMEM, if the guest is not allowed to exit gracefully, label
+     * updates will get lost.
+     */
 }
 
 static void ct3_class_init(ObjectClass *oc, void *data)
@@ -193,6 +245,8 @@ static void ct3_class_init(ObjectClass *oc, void *data)
     device_class_set_props(dc, ct3_props);
 
     cvc->get_lsa_size = get_lsa_size;
+    cvc->get_lsa = get_lsa;
+    cvc->set_lsa = set_lsa;
 }
 
 static const TypeInfo ct3d_info = {
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index cf4c110f7e..288cc11772 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -255,6 +255,11 @@ struct CXLType3Class {
 
     /* public */
     uint64_t (*get_lsa_size)(CXLType3Dev *ct3d);
+
+    uint64_t (*get_lsa)(CXLType3Dev *ct3d, void *buf, uint64_t size,
+                        uint64_t offset);
+    void (*set_lsa)(CXLType3Dev *ct3d, const void *buf, uint64_t size,
+                    uint64_t offset);
 };
 
 #endif
-- 
2.32.0



^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 22/46] qtests/cxl: Add initial root port and CXL type3 tests
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

At this stage we can boot configurations with host bridges,
root ports and type 3 memory devices, so add appropriate
tests.
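The command lines in these tests are assembled by string-literal concatenation
of the macros, with one `%s` per backing file filled in from a shared temp
directory. A stripped-down illustration (the format string here is a shortened
stand-in, not the full QEMU_T3D macro):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Adjacent string literals concatenate, so chaining macros like
 * QEMU_PXB_CMD QEMU_RP QEMU_T3D yields one long format string.  This
 * shortened stand-in keeps just two of the %s slots (memdev path and
 * LSA path), both filled from the same temp directory as in the tests.
 */
#define T3D_FMT \
    "-object memory-backend-file,id=cxl-mem0,mem-path=%s,size=256M " \
    "-object memory-backend-file,id=lsa0,mem-path=%s,size=256M "

static int build_t3d_args(char *buf, size_t len, const char *tmpdir)
{
    return snprintf(buf, len, T3D_FMT, tmpdir, tmpdir);
}
```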

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
v7: Patch moved from 18 to 22 as we need LSA support in place to avoid
    introducing backwards compatibility issues.
* Use g_autoptr() to avoid need for explicit free in tests (Alex)

 tests/qtest/cxl-test.c | 126 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 126 insertions(+)

diff --git a/tests/qtest/cxl-test.c b/tests/qtest/cxl-test.c
index 1006c8ae4e..148bc94340 100644
--- a/tests/qtest/cxl-test.c
+++ b/tests/qtest/cxl-test.c
@@ -8,6 +8,54 @@
 #include "qemu/osdep.h"
 #include "libqtest-single.h"
 
+#define QEMU_PXB_CMD "-machine q35,cxl=on " \
+                     "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "
+
+#define QEMU_2PXB_CMD "-machine q35,cxl=on " \
+                      "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "  \
+                      "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 "
+
+#define QEMU_RP "-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 "
+
+/* Dual ports on first pxb */
+#define QEMU_2RP "-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 " \
+                 "-device cxl-rp,id=rp1,bus=cxl.0,chassis=0,slot=1 "
+
+/* Dual ports on each of the pxb instances */
+#define QEMU_4RP "-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 " \
+                 "-device cxl-rp,id=rp1,bus=cxl.0,chassis=0,slot=1 " \
+                 "-device cxl-rp,id=rp2,bus=cxl.1,chassis=0,slot=2 " \
+                 "-device cxl-rp,id=rp3,bus=cxl.1,chassis=0,slot=3 "
+
+#define QEMU_T3D "-object memory-backend-file,id=cxl-mem0,mem-path=%s,size=256M " \
+                 "-object memory-backend-file,id=lsa0,mem-path=%s,size=256M "    \
+                 "-device cxl-type3,bus=rp0,memdev=cxl-mem0,lsa=lsa0,id=cxl-pmem0,size=256M "
+
+#define QEMU_2T3D "-object memory-backend-file,id=cxl-mem0,mem-path=%s,size=256M "    \
+                  "-object memory-backend-file,id=lsa0,mem-path=%s,size=256M "    \
+                  "-device cxl-type3,bus=rp0,memdev=cxl-mem0,lsa=lsa0,id=cxl-pmem0,size=256M " \
+                  "-object memory-backend-file,id=cxl-mem1,mem-path=%s,size=256M "    \
+                  "-object memory-backend-file,id=lsa1,mem-path=%s,size=256M "    \
+                  "-device cxl-type3,bus=rp1,memdev=cxl-mem1,lsa=lsa1,id=cxl-pmem1,size=256M "
+
+#define QEMU_4T3D "-object memory-backend-file,id=cxl-mem0,mem-path=%s,size=256M " \
+                  "-object memory-backend-file,id=lsa0,mem-path=%s,size=256M "    \
+                  "-device cxl-type3,bus=rp0,memdev=cxl-mem0,lsa=lsa0,id=cxl-pmem0,size=256M " \
+                  "-object memory-backend-file,id=cxl-mem1,mem-path=%s,size=256M "    \
+                  "-object memory-backend-file,id=lsa1,mem-path=%s,size=256M "    \
+                  "-device cxl-type3,bus=rp1,memdev=cxl-mem1,lsa=lsa1,id=cxl-pmem1,size=256M " \
+                  "-object memory-backend-file,id=cxl-mem2,mem-path=%s,size=256M "    \
+                  "-object memory-backend-file,id=lsa2,mem-path=%s,size=256M "    \
+                  "-device cxl-type3,bus=rp2,memdev=cxl-mem2,lsa=lsa2,id=cxl-pmem2,size=256M " \
+                  "-object memory-backend-file,id=cxl-mem3,mem-path=%s,size=256M "    \
+                  "-object memory-backend-file,id=lsa3,mem-path=%s,size=256M "    \
+                  "-device cxl-type3,bus=rp3,memdev=cxl-mem3,lsa=lsa3,id=cxl-pmem3,size=256M "
+
+static void cxl_basic_hb(void)
+{
+    qtest_start("-machine q35,cxl=on");
+    qtest_end();
+}
 
 static void cxl_basic_pxb(void)
 {
@@ -15,9 +63,87 @@ static void cxl_basic_pxb(void)
     qtest_end();
 }
 
+static void cxl_pxb_with_window(void)
+{
+    qtest_start(QEMU_PXB_CMD);
+    qtest_end();
+}
+
+static void cxl_2pxb_with_window(void)
+{
+    qtest_start(QEMU_2PXB_CMD);
+    qtest_end();
+}
+
+static void cxl_root_port(void)
+{
+    qtest_start(QEMU_PXB_CMD QEMU_RP);
+    qtest_end();
+}
+
+static void cxl_2root_port(void)
+{
+    qtest_start(QEMU_PXB_CMD QEMU_2RP);
+    qtest_end();
+}
+
+static void cxl_t3d(void)
+{
+    g_autoptr(GString) cmdline = g_string_new(NULL);
+    char template[] = "/tmp/cxl-test-XXXXXX";
+    const char *tmpfs;
+
+    tmpfs = mkdtemp(template);
+
+    g_string_printf(cmdline, QEMU_PXB_CMD QEMU_RP QEMU_T3D, tmpfs, tmpfs);
+
+    qtest_start(cmdline->str);
+    qtest_end();
+}
+
+static void cxl_1pxb_2rp_2t3d(void)
+{
+    g_autoptr(GString) cmdline = g_string_new(NULL);
+    char template[] = "/tmp/cxl-test-XXXXXX";
+    const char *tmpfs;
+
+    tmpfs = mkdtemp(template);
+
+    g_string_printf(cmdline, QEMU_PXB_CMD QEMU_2RP QEMU_2T3D,
+                    tmpfs, tmpfs, tmpfs, tmpfs);
+
+    qtest_start(cmdline->str);
+    qtest_end();
+}
+
+static void cxl_2pxb_4rp_4t3d(void)
+{
+    g_autoptr(GString) cmdline = g_string_new(NULL);
+    char template[] = "/tmp/cxl-test-XXXXXX";
+    const char *tmpfs;
+
+    tmpfs = mkdtemp(template);
+
+    g_string_printf(cmdline, QEMU_2PXB_CMD QEMU_4RP QEMU_4T3D,
+                    tmpfs, tmpfs, tmpfs, tmpfs, tmpfs, tmpfs,
+                    tmpfs, tmpfs);
+
+    qtest_start(cmdline->str);
+    qtest_end();
+}
+
 int main(int argc, char **argv)
 {
     g_test_init(&argc, &argv, NULL);
+
+    qtest_add_func("/pci/cxl/basic_hostbridge", cxl_basic_hb);
     qtest_add_func("/pci/cxl/basic_pxb", cxl_basic_pxb);
+    qtest_add_func("/pci/cxl/pxb_with_window", cxl_pxb_with_window);
+    qtest_add_func("/pci/cxl/pxb_x2_with_window", cxl_2pxb_with_window);
+    qtest_add_func("/pci/cxl/rp", cxl_root_port);
+    qtest_add_func("/pci/cxl/rp_x2", cxl_2root_port);
+    qtest_add_func("/pci/cxl/type3_device", cxl_t3d);
+    qtest_add_func("/pci/cxl/rp_x2_type3_x2", cxl_1pxb_2rp_2t3d);
+    qtest_add_func("/pci/cxl/pxb_x2_root_port_x4_type3_x4", cxl_2pxb_4rp_4t3d);
     return g_test_run();
 }
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 23/46] hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

CXL host bridges themselves may have MMIO. Since host bridges don't have
a BAR, their MMIO is treated specially.  This patch includes
i386/pc support.
Also hook up the device reset now that we have the MMIO
space in which the results are visible.

Note that we duplicate the PCI Express case for the AML build, but
the implementations will diverge when the CXL-specific _OSC is
introduced.
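The carving of the shared host register window can be sketched as follows.
Names and the 64 KiB block size are illustrative; pxb_cxl_realize() in the
patch does the equivalent with MemoryRegions and next_mr_idx.

```c
#include <assert.h>
#include <stdint.h>

/* 1 MiB "cxl_host_reg" window reserved in pc_memory_init() (pc.c). */
#define CXL_HOST_REG_WINDOW (1ULL << 20)

/*
 * Each pxb-cxl host bridge claims the next fixed-size component register
 * block inside the shared window, i.e. offset = block_size * index.
 * Returns -1 when the block would run past the window, mirroring the
 * "Insufficient space" error path in pxb_cxl_realize().
 */
static int64_t cxl_host_reg_offset(uint64_t block_size, unsigned int idx)
{
    uint64_t offset = block_size * idx;

    if (offset + block_size > CXL_HOST_REG_WINDOW) {
        return -1;
    }
    return (int64_t)offset;
}
```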

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Co-developed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/i386/acpi-build.c                | 25 +++++++++++-
 hw/i386/pc.c                        | 27 ++++++++++++-
 hw/pci-bridge/pci_expander_bridge.c | 62 ++++++++++++++++++++++++++++-
 include/hw/cxl/cxl.h                |  4 ++
 4 files changed, 114 insertions(+), 4 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index ebd47aa26f..0a28dd6d4e 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -28,6 +28,7 @@
 #include "qemu/bitmap.h"
 #include "qemu/error-report.h"
 #include "hw/pci/pci.h"
+#include "hw/cxl/cxl.h"
 #include "hw/core/cpu.h"
 #include "target/i386/cpu.h"
 #include "hw/misc/pvpanic.h"
@@ -1564,10 +1565,21 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             }
 
             scope = aml_scope("\\_SB");
-            dev = aml_device("PC%.02X", bus_num);
+
+            if (pci_bus_is_cxl(bus)) {
+                dev = aml_device("CL%.02X", bus_num);
+            } else {
+                dev = aml_device("PC%.02X", bus_num);
+            }
             aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
             aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
-            if (pci_bus_is_express(bus)) {
+            if (pci_bus_is_cxl(bus)) {
+                aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
+                aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
+
+                /* Expander bridges do not have ACPI PCI Hot-plug enabled */
+                aml_append(dev, build_q35_osc_method(true));
+            } else if (pci_bus_is_express(bus)) {
                 aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
                 aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
 
@@ -1587,6 +1599,15 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             aml_append(dev, aml_name_decl("_CRS", crs));
             aml_append(scope, dev);
             aml_append(dsdt, scope);
+
+            /* Handle the ranges for the PXB expanders */
+            if (pci_bus_is_cxl(bus)) {
+                MemoryRegion *mr = &machine->cxl_devices_state->host_mr;
+                uint64_t base = mr->addr;
+
+                crs_range_insert(crs_range_set.mem_ranges, base,
+                                 base + memory_region_size(mr) - 1);
+            }
         }
     }
 
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index b6800a511a..7a18dce529 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -75,6 +75,7 @@
 #include "acpi-build.h"
 #include "hw/mem/pc-dimm.h"
 #include "hw/mem/nvdimm.h"
+#include "hw/cxl/cxl.h"
 #include "qapi/error.h"
 #include "qapi/qapi-visit-common.h"
 #include "qapi/qapi-visit-machine.h"
@@ -815,6 +816,7 @@ void pc_memory_init(PCMachineState *pcms,
     MachineClass *mc = MACHINE_GET_CLASS(machine);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
     X86MachineState *x86ms = X86_MACHINE(pcms);
+    hwaddr cxl_base;
 
     assert(machine->ram_size == x86ms->below_4g_mem_size +
                                 x86ms->above_4g_mem_size);
@@ -904,6 +906,26 @@ void pc_memory_init(PCMachineState *pcms,
                                     &machine->device_memory->mr);
     }
 
+    if (machine->cxl_devices_state->is_enabled) {
+        MemoryRegion *mr = &machine->cxl_devices_state->host_mr;
+        hwaddr cxl_size = MiB;
+
+        if (pcmc->has_reserved_memory && machine->device_memory->base) {
+            cxl_base = machine->device_memory->base;
+            if (!pcmc->broken_reserved_end) {
+                cxl_base += memory_region_size(&machine->device_memory->mr);
+            }
+        } else if (pcms->sgx_epc.size != 0) {
+            cxl_base = sgx_epc_above_4g_end(&pcms->sgx_epc);
+        } else {
+            cxl_base = 0x100000000ULL + x86ms->above_4g_mem_size;
+        }
+
+        e820_add_entry(cxl_base, cxl_size, E820_RESERVED);
+        memory_region_init(mr, OBJECT(machine), "cxl_host_reg", cxl_size);
+        memory_region_add_subregion(system_memory, cxl_base, mr);
+    }
+
     /* Initialize PC system firmware */
     pc_system_firmware_init(pcms, rom_memory);
 
@@ -964,7 +986,10 @@ uint64_t pc_pci_hole64_start(void)
     X86MachineState *x86ms = X86_MACHINE(pcms);
     uint64_t hole64_start = 0;
 
-    if (pcmc->has_reserved_memory && ms->device_memory->base) {
+    if (ms->cxl_devices_state->host_mr.addr) {
+        hole64_start = ms->cxl_devices_state->host_mr.addr +
+            memory_region_size(&ms->cxl_devices_state->host_mr);
+    } else if (pcmc->has_reserved_memory && ms->device_memory->base) {
         hole64_start = ms->device_memory->base;
         if (!pcmc->broken_reserved_end) {
             hole64_start += memory_region_size(&ms->device_memory->mr);
diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index f762eb4a6e..b3b5f93650 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -75,6 +75,9 @@ struct PXBDev {
     uint8_t bus_nr;
     uint16_t numa_node;
     bool bypass_iommu;
+    struct cxl_dev {
+        CXLHost *cxl_host_bridge;
+    } cxl;
 };
 
 static PXBDev *convert_to_pxb(PCIDevice *dev)
@@ -92,6 +95,9 @@ static GList *pxb_dev_list;
 
 #define TYPE_PXB_HOST "pxb-host"
 
+#define TYPE_PXB_CXL_HOST "pxb-cxl-host"
+#define PXB_CXL_HOST(obj) OBJECT_CHECK(CXLHost, (obj), TYPE_PXB_CXL_HOST)
+
 static int pxb_bus_num(PCIBus *bus)
 {
     PXBDev *pxb = convert_to_pxb(bus->parent_dev);
@@ -197,6 +203,52 @@ static const TypeInfo pxb_host_info = {
     .class_init    = pxb_host_class_init,
 };
 
+static void pxb_cxl_realize(DeviceState *dev, Error **errp)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+    SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
+    CXLHost *cxl = PXB_CXL_HOST(dev);
+    CXLComponentState *cxl_cstate = &cxl->cxl_cstate;
+    struct MemoryRegion *mr = &cxl_cstate->crb.component_registers;
+    hwaddr offset;
+
+    cxl_component_register_block_init(OBJECT(dev), cxl_cstate,
+                                      TYPE_PXB_CXL_HOST);
+    sysbus_init_mmio(sbd, mr);
+
+    offset = memory_region_size(mr) * ms->cxl_devices_state->next_mr_idx;
+    if (offset > memory_region_size(&ms->cxl_devices_state->host_mr)) {
+        error_setg(errp, "Insufficient space for pxb cxl host register space");
+        return;
+    }
+
+    memory_region_add_subregion(&ms->cxl_devices_state->host_mr, offset, mr);
+    ms->cxl_devices_state->next_mr_idx++;
+}
+
+static void pxb_cxl_host_class_init(ObjectClass *class, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(class);
+    PCIHostBridgeClass *hc = PCI_HOST_BRIDGE_CLASS(class);
+
+    hc->root_bus_path = pxb_host_root_bus_path;
+    dc->fw_name = "cxl";
+    dc->realize = pxb_cxl_realize;
+    /* Reason: Internal part of the pxb/pxb-pcie device, not usable by itself */
+    dc->user_creatable = false;
+}
+
+/*
+ * This is a device to handle the MMIO for a CXL host bridge. It does nothing
+ * else.
+ */
+static const TypeInfo cxl_host_info = {
+    .name          = TYPE_PXB_CXL_HOST,
+    .parent        = TYPE_PCI_HOST_BRIDGE,
+    .instance_size = sizeof(CXLHost),
+    .class_init    = pxb_cxl_host_class_init,
+};
+
 /*
  * Registers the PXB bus as a child of pci host root bus.
  */
@@ -245,6 +297,12 @@ static int pxb_map_irq_fn(PCIDevice *pci_dev, int pin)
 
 static void pxb_dev_reset(DeviceState *dev)
 {
+    CXLHost *cxl = PXB_CXL_DEV(dev)->cxl.cxl_host_bridge;
+    CXLComponentState *cxl_cstate = &cxl->cxl_cstate;
+    uint32_t *reg_state = cxl_cstate->crb.cache_mem_registers;
+
+    cxl_component_register_init_common(reg_state, CXL2_ROOT_PORT);
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_CAPABILITY, TARGET_COUNT, 8);
 }
 
 static gint pxb_compare(gconstpointer a, gconstpointer b)
@@ -281,12 +339,13 @@ static void pxb_dev_realize_common(PCIDevice *dev, enum BusType type,
         dev_name = dev->qdev.id;
     }
 
-    ds = qdev_new(TYPE_PXB_HOST);
+    ds = qdev_new(type == CXL ? TYPE_PXB_CXL_HOST : TYPE_PXB_HOST);
     if (type == PCIE) {
         bus = pci_root_bus_new(ds, dev_name, NULL, NULL, 0, TYPE_PXB_PCIE_BUS);
     } else if (type == CXL) {
         bus = pci_root_bus_new(ds, dev_name, NULL, NULL, 0, TYPE_PXB_CXL_BUS);
         bus->flags |= PCI_BUS_CXL;
+        PXB_CXL_DEV(dev)->cxl.cxl_host_bridge = PXB_CXL_HOST(ds);
     } else {
         bus = pci_root_bus_new(ds, "pxb-internal", NULL, NULL, 0, TYPE_PXB_BUS);
         bds = qdev_new("pci-bridge");
@@ -475,6 +534,7 @@ static void pxb_register_types(void)
     type_register_static(&pxb_pcie_bus_info);
     type_register_static(&pxb_cxl_bus_info);
     type_register_static(&pxb_host_info);
+    type_register_static(&cxl_host_info);
     type_register_static(&pxb_dev_info);
     type_register_static(&pxb_pcie_dev_info);
     type_register_static(&pxb_cxl_dev_info);
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 31af92fd5e..75e5bf71e1 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -17,8 +17,12 @@
 #define CXL_COMPONENT_REG_BAR_IDX 0
 #define CXL_DEVICE_REG_BAR_IDX 2
 
+#define CXL_WINDOW_MAX 10
+
 typedef struct CXLState {
     bool is_enabled;
+    MemoryRegion host_mr;
+    unsigned int next_mr_idx;
 } CXLState;
 
 #endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

+                uint64_t base = mr->addr;
+
+                crs_range_insert(crs_range_set.mem_ranges, base,
+                                 base + memory_region_size(mr) - 1);
+            }
         }
     }
 
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index b6800a511a..7a18dce529 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -75,6 +75,7 @@
 #include "acpi-build.h"
 #include "hw/mem/pc-dimm.h"
 #include "hw/mem/nvdimm.h"
+#include "hw/cxl/cxl.h"
 #include "qapi/error.h"
 #include "qapi/qapi-visit-common.h"
 #include "qapi/qapi-visit-machine.h"
@@ -815,6 +816,7 @@ void pc_memory_init(PCMachineState *pcms,
     MachineClass *mc = MACHINE_GET_CLASS(machine);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
     X86MachineState *x86ms = X86_MACHINE(pcms);
+    hwaddr cxl_base;
 
     assert(machine->ram_size == x86ms->below_4g_mem_size +
                                 x86ms->above_4g_mem_size);
@@ -904,6 +906,26 @@ void pc_memory_init(PCMachineState *pcms,
                                     &machine->device_memory->mr);
     }
 
+    if (machine->cxl_devices_state->is_enabled) {
+        MemoryRegion *mr = &machine->cxl_devices_state->host_mr;
+        hwaddr cxl_size = MiB;
+
+        if (pcmc->has_reserved_memory && machine->device_memory->base) {
+            cxl_base = machine->device_memory->base;
+            if (!pcmc->broken_reserved_end) {
+                cxl_base += memory_region_size(&machine->device_memory->mr);
+            }
+        } else if (pcms->sgx_epc.size != 0) {
+            cxl_base = sgx_epc_above_4g_end(&pcms->sgx_epc);
+        } else {
+            cxl_base = 0x100000000ULL + x86ms->above_4g_mem_size;
+        }
+
+        e820_add_entry(cxl_base, cxl_size, E820_RESERVED);
+        memory_region_init(mr, OBJECT(machine), "cxl_host_reg", cxl_size);
+        memory_region_add_subregion(system_memory, cxl_base, mr);
+    }
+
     /* Initialize PC system firmware */
     pc_system_firmware_init(pcms, rom_memory);
 
@@ -964,7 +986,10 @@ uint64_t pc_pci_hole64_start(void)
     X86MachineState *x86ms = X86_MACHINE(pcms);
     uint64_t hole64_start = 0;
 
-    if (pcmc->has_reserved_memory && ms->device_memory->base) {
+    if (ms->cxl_devices_state->host_mr.addr) {
+        hole64_start = ms->cxl_devices_state->host_mr.addr +
+            memory_region_size(&ms->cxl_devices_state->host_mr);
+    } else if (pcmc->has_reserved_memory && ms->device_memory->base) {
         hole64_start = ms->device_memory->base;
         if (!pcmc->broken_reserved_end) {
             hole64_start += memory_region_size(&ms->device_memory->mr);
diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index f762eb4a6e..b3b5f93650 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -75,6 +75,9 @@ struct PXBDev {
     uint8_t bus_nr;
     uint16_t numa_node;
     bool bypass_iommu;
+    struct cxl_dev {
+        CXLHost *cxl_host_bridge;
+    } cxl;
 };
 
 static PXBDev *convert_to_pxb(PCIDevice *dev)
@@ -92,6 +95,9 @@ static GList *pxb_dev_list;
 
 #define TYPE_PXB_HOST "pxb-host"
 
+#define TYPE_PXB_CXL_HOST "pxb-cxl-host"
+#define PXB_CXL_HOST(obj) OBJECT_CHECK(CXLHost, (obj), TYPE_PXB_CXL_HOST)
+
 static int pxb_bus_num(PCIBus *bus)
 {
     PXBDev *pxb = convert_to_pxb(bus->parent_dev);
@@ -197,6 +203,52 @@ static const TypeInfo pxb_host_info = {
     .class_init    = pxb_host_class_init,
 };
 
+static void pxb_cxl_realize(DeviceState *dev, Error **errp)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+    SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
+    CXLHost *cxl = PXB_CXL_HOST(dev);
+    CXLComponentState *cxl_cstate = &cxl->cxl_cstate;
+    struct MemoryRegion *mr = &cxl_cstate->crb.component_registers;
+    hwaddr offset;
+
+    cxl_component_register_block_init(OBJECT(dev), cxl_cstate,
+                                      TYPE_PXB_CXL_HOST);
+    sysbus_init_mmio(sbd, mr);
+
+    offset = memory_region_size(mr) * ms->cxl_devices_state->next_mr_idx;
+    if (offset > memory_region_size(&ms->cxl_devices_state->host_mr)) {
+        error_setg(errp, "Insufficient space for pxb cxl host register space");
+        return;
+    }
+
+    memory_region_add_subregion(&ms->cxl_devices_state->host_mr, offset, mr);
+    ms->cxl_devices_state->next_mr_idx++;
+}
+
+static void pxb_cxl_host_class_init(ObjectClass *class, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(class);
+    PCIHostBridgeClass *hc = PCI_HOST_BRIDGE_CLASS(class);
+
+    hc->root_bus_path = pxb_host_root_bus_path;
+    dc->fw_name = "cxl";
+    dc->realize = pxb_cxl_realize;
+    /* Reason: Internal part of the pxb/pxb-pcie device, not usable by itself */
+    dc->user_creatable = false;
+}
+
+/*
+ * This is a device to handle the MMIO for a CXL host bridge. It does nothing
+ * else.
+ */
+static const TypeInfo cxl_host_info = {
+    .name          = TYPE_PXB_CXL_HOST,
+    .parent        = TYPE_PCI_HOST_BRIDGE,
+    .instance_size = sizeof(CXLHost),
+    .class_init    = pxb_cxl_host_class_init,
+};
+
 /*
  * Registers the PXB bus as a child of pci host root bus.
  */
@@ -245,6 +297,12 @@ static int pxb_map_irq_fn(PCIDevice *pci_dev, int pin)
 
 static void pxb_dev_reset(DeviceState *dev)
 {
+    CXLHost *cxl = PXB_CXL_DEV(dev)->cxl.cxl_host_bridge;
+    CXLComponentState *cxl_cstate = &cxl->cxl_cstate;
+    uint32_t *reg_state = cxl_cstate->crb.cache_mem_registers;
+
+    cxl_component_register_init_common(reg_state, CXL2_ROOT_PORT);
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_CAPABILITY, TARGET_COUNT, 8);
 }
 
 static gint pxb_compare(gconstpointer a, gconstpointer b)
@@ -281,12 +339,13 @@ static void pxb_dev_realize_common(PCIDevice *dev, enum BusType type,
         dev_name = dev->qdev.id;
     }
 
-    ds = qdev_new(TYPE_PXB_HOST);
+    ds = qdev_new(type == CXL ? TYPE_PXB_CXL_HOST : TYPE_PXB_HOST);
     if (type == PCIE) {
         bus = pci_root_bus_new(ds, dev_name, NULL, NULL, 0, TYPE_PXB_PCIE_BUS);
     } else if (type == CXL) {
         bus = pci_root_bus_new(ds, dev_name, NULL, NULL, 0, TYPE_PXB_CXL_BUS);
         bus->flags |= PCI_BUS_CXL;
+        PXB_CXL_DEV(dev)->cxl.cxl_host_bridge = PXB_CXL_HOST(ds);
     } else {
         bus = pci_root_bus_new(ds, "pxb-internal", NULL, NULL, 0, TYPE_PXB_BUS);
         bds = qdev_new("pci-bridge");
@@ -475,6 +534,7 @@ static void pxb_register_types(void)
     type_register_static(&pxb_pcie_bus_info);
     type_register_static(&pxb_cxl_bus_info);
     type_register_static(&pxb_host_info);
+    type_register_static(&cxl_host_info);
     type_register_static(&pxb_dev_info);
     type_register_static(&pxb_pcie_dev_info);
     type_register_static(&pxb_cxl_dev_info);
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 31af92fd5e..75e5bf71e1 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -17,8 +17,12 @@
 #define CXL_COMPONENT_REG_BAR_IDX 0
 #define CXL_DEVICE_REG_BAR_IDX 2
 
+#define CXL_WINDOW_MAX 10
+
 typedef struct CXLState {
     bool is_enabled;
+    MemoryRegion host_mr;
+    unsigned int next_mr_idx;
 } CXLState;
 
 #endif
-- 
2.32.0




* [PATCH v7 24/46] acpi/cxl: Add _OSC implementation (9.14.2)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

The CXL 2.0 specification adds 2 new dwords to the existing _OSC
definition from PCIe. The new dwords are accessed via a new UUID. This
implementation supports what is in the specification.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/acpi/Kconfig       |   5 ++
 hw/acpi/cxl-stub.c    |  12 +++++
 hw/acpi/cxl.c         | 104 ++++++++++++++++++++++++++++++++++++++++++
 hw/acpi/meson.build   |   4 +-
 hw/i386/acpi-build.c  |  15 ++++--
 include/hw/acpi/cxl.h |  23 ++++++++++
 6 files changed, 157 insertions(+), 6 deletions(-)

diff --git a/hw/acpi/Kconfig b/hw/acpi/Kconfig
index 19caebde6c..3703aca212 100644
--- a/hw/acpi/Kconfig
+++ b/hw/acpi/Kconfig
@@ -5,6 +5,7 @@ config ACPI_X86
     bool
     select ACPI
     select ACPI_NVDIMM
+    select ACPI_CXL
     select ACPI_CPU_HOTPLUG
     select ACPI_MEMORY_HOTPLUG
     select ACPI_HMAT
@@ -66,3 +67,7 @@ config ACPI_ERST
     bool
     default y
     depends on ACPI && PCI
+
+config ACPI_CXL
+    bool
+    depends on ACPI
diff --git a/hw/acpi/cxl-stub.c b/hw/acpi/cxl-stub.c
new file mode 100644
index 0000000000..15bc21076b
--- /dev/null
+++ b/hw/acpi/cxl-stub.c
@@ -0,0 +1,12 @@
+
+/*
+ * Stubs for ACPI platforms that don't support CXL
+ */
+#include "qemu/osdep.h"
+#include "hw/acpi/aml-build.h"
+#include "hw/acpi/cxl.h"
+
+void build_cxl_osc_method(Aml *dev)
+{
+    g_assert_not_reached();
+}
diff --git a/hw/acpi/cxl.c b/hw/acpi/cxl.c
new file mode 100644
index 0000000000..7124d5a1a3
--- /dev/null
+++ b/hw/acpi/cxl.c
@@ -0,0 +1,104 @@
+/*
+ * CXL ACPI Implementation
+ *
+ * Copyright(C) 2020 Intel Corporation.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
+ */
+
+#include "qemu/osdep.h"
+#include "hw/cxl/cxl.h"
+#include "hw/acpi/acpi.h"
+#include "hw/acpi/aml-build.h"
+#include "hw/acpi/bios-linker-loader.h"
+#include "hw/acpi/cxl.h"
+#include "qapi/error.h"
+#include "qemu/uuid.h"
+
+static Aml *__build_cxl_osc_method(void)
+{
+    Aml *method, *if_uuid, *else_uuid, *if_arg1_not_1, *if_cxl, *if_caps_masked;
+    Aml *a_ctrl = aml_local(0);
+    Aml *a_cdw1 = aml_name("CDW1");
+
+    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
+    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
+
+    /* 9.14.2.1.4 */
+    if_uuid = aml_if(
+        aml_lor(aml_equal(aml_arg(0),
+                          aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")),
+                aml_equal(aml_arg(0),
+                          aml_touuid("68F2D50B-C469-4D8A-BD3D-941A103FD3FC"))));
+    aml_append(if_uuid, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
+    aml_append(if_uuid, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
+
+    aml_append(if_uuid, aml_store(aml_name("CDW3"), a_ctrl));
+
+    /* This is all the same as what's used for PCIe */
+    aml_append(if_uuid,
+               aml_and(aml_name("CTRL"), aml_int(0x1F), aml_name("CTRL")));
+
+    if_arg1_not_1 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(0x1))));
+    /* Unknown revision */
+    aml_append(if_arg1_not_1, aml_or(a_cdw1, aml_int(0x08), a_cdw1));
+    aml_append(if_uuid, if_arg1_not_1);
+
+    if_caps_masked = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
+    /* Capability bits were masked */
+    aml_append(if_caps_masked, aml_or(a_cdw1, aml_int(0x10), a_cdw1));
+    aml_append(if_uuid, if_caps_masked);
+
+    aml_append(if_uuid, aml_store(aml_name("CDW2"), aml_name("SUPP")));
+    aml_append(if_uuid, aml_store(aml_name("CDW3"), aml_name("CTRL")));
+
+    if_cxl = aml_if(aml_equal(
+        aml_arg(0), aml_touuid("68F2D50B-C469-4D8A-BD3D-941A103FD3FC")));
+    /* CXL support field */
+    aml_append(if_cxl, aml_create_dword_field(aml_arg(3), aml_int(12), "CDW4"));
+    /* CXL capabilities */
+    aml_append(if_cxl, aml_create_dword_field(aml_arg(3), aml_int(16), "CDW5"));
+    aml_append(if_cxl, aml_store(aml_name("CDW4"), aml_name("SUPC")));
+    aml_append(if_cxl, aml_store(aml_name("CDW5"), aml_name("CTRC")));
+
+    /* CXL 2.0 Port/Device Register access */
+    aml_append(if_cxl,
+               aml_or(aml_name("CDW5"), aml_int(0x1), aml_name("CDW5")));
+    aml_append(if_uuid, if_cxl);
+
+    /* Update DWORD3 (the return value) */
+    aml_append(if_uuid, aml_store(a_ctrl, aml_name("CDW3")));
+
+    aml_append(if_uuid, aml_return(aml_arg(3)));
+    aml_append(method, if_uuid);
+
+    else_uuid = aml_else();
+
+    /* unrecognized uuid */
+    aml_append(else_uuid,
+               aml_or(aml_name("CDW1"), aml_int(0x4), aml_name("CDW1")));
+    aml_append(else_uuid, aml_return(aml_arg(3)));
+    aml_append(method, else_uuid);
+
+    return method;
+}
+
+void build_cxl_osc_method(Aml *dev)
+{
+    aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
+    aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
+    aml_append(dev, aml_name_decl("SUPC", aml_int(0)));
+    aml_append(dev, aml_name_decl("CTRC", aml_int(0)));
+    aml_append(dev, __build_cxl_osc_method());
+}
diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
index 8bea2e6933..cea2f5f93a 100644
--- a/hw/acpi/meson.build
+++ b/hw/acpi/meson.build
@@ -13,6 +13,7 @@ acpi_ss.add(when: 'CONFIG_ACPI_MEMORY_HOTPLUG', if_false: files('acpi-mem-hotplu
 acpi_ss.add(when: 'CONFIG_ACPI_NVDIMM', if_true: files('nvdimm.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_NVDIMM', if_false: files('acpi-nvdimm-stub.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_PCI', if_true: files('pci.c'))
+acpi_ss.add(when: 'CONFIG_ACPI_CXL', if_true: files('cxl.c'), if_false: files('cxl-stub.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_VMGENID', if_true: files('vmgenid.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_HW_REDUCED', if_true: files('generic_event_device.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_HMAT', if_true: files('hmat.c'))
@@ -33,4 +34,5 @@ softmmu_ss.add_all(when: 'CONFIG_ACPI', if_true: acpi_ss)
 softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('acpi-stub.c', 'aml-build-stub.c',
                                                   'acpi-x86-stub.c', 'ipmi-stub.c', 'ghes-stub.c',
                                                   'acpi-mem-hotplug-stub.c', 'acpi-cpu-hotplug-stub.c',
-                                                  'acpi-pci-hotplug-stub.c', 'acpi-nvdimm-stub.c'))
+                                                  'acpi-pci-hotplug-stub.c', 'acpi-nvdimm-stub.c',
+                                                  'cxl-stub.c'))
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 0a28dd6d4e..b5a4b663f2 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -66,6 +66,7 @@
 #include "hw/acpi/aml-build.h"
 #include "hw/acpi/utils.h"
 #include "hw/acpi/pci.h"
+#include "hw/acpi/cxl.h"
 
 #include "qom/qom-qobject.h"
 #include "hw/i386/amd_iommu.h"
@@ -1574,11 +1575,15 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
             aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
             if (pci_bus_is_cxl(bus)) {
-                aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
-                aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
-
-                /* Expander bridges do not have ACPI PCI Hot-plug enabled */
-                aml_append(dev, build_q35_osc_method(true));
+                struct Aml *pkg = aml_package(2);
+
+                aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0016")));
+                aml_append(pkg, aml_eisaid("PNP0A08"));
+                aml_append(pkg, aml_eisaid("PNP0A03"));
+                aml_append(dev, aml_name_decl("_CID", pkg));
+                aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
+                aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
+                build_cxl_osc_method(dev);
             } else if (pci_bus_is_express(bus)) {
                 aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
                 aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
diff --git a/include/hw/acpi/cxl.h b/include/hw/acpi/cxl.h
new file mode 100644
index 0000000000..7b8f3b8a2e
--- /dev/null
+++ b/include/hw/acpi/cxl.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2020 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef HW_ACPI_CXL_H
+#define HW_ACPI_CXL_H
+
+void build_cxl_osc_method(Aml *dev);
+
+#endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 25/46] acpi/cxl: Create the CEDT (9.14.1)
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

The CXL Early Discovery Table (CEDT) is defined in the CXL 2.0
specification as a way for the OS to get CXL specific information from
the system firmware.

The CXL 2.0 specification adds an _HID, ACPI0016, for CXL capable host
bridges, with a _CID of PNP0A08 (PCIe host bridge). CXL aware software
can use this to initiate the proper _OSC method and get the _UID,
which is referenced by the CEDT. The existence of an ACPI0016 device
therefore allows a CXL aware driver to perform the necessary actions.
This works for both CXL capable and CXL unaware OSes.

CEDT awareness requires more. The motivation for ACPI0017 is to make it
possible to have a Linux CXL module that can work on a legacy Linux
kernel. Linux core PCI/ACPI, which isn't built as a module, will see
the _CID of PNP0A08 and bind a driver to it. If we later loaded a
driver for ACPI0016, Linux wouldn't be able to bind it to the hardware
because the PNP0A08 driver is already bound. The ACPI0017 device
provides an object to bind a driver to; that driver can then walk the
CXL topology and do everything we would have preferred to do with
ACPI0016.

There is another motivation for an ACPI0017 device which isn't
implemented here. An operating system needs an attach point for a
non-volatile region provider that understands cross-hostbridge
interleaving. Since QEMU emulation doesn't support interleaving yet,
this is more important on the OS side, for now.

As of the CXL 2.0 spec, only one substructure is defined: the CXL Host
Bridge Structure (CHBS), which is primarily useful for telling the OS
exactly where the MMIO for the host bridge is.

Link: https://lore.kernel.org/linux-cxl/20210115034911.nkgpzc756d6qmjpl@intel.com/T/#t
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/acpi/cxl.c                       | 68 +++++++++++++++++++++++++++++
 hw/i386/acpi-build.c                | 27 ++++++++++++
 hw/pci-bridge/pci_expander_bridge.c | 17 --------
 include/hw/acpi/cxl.h               |  5 +++
 include/hw/pci/pci_bridge.h         | 20 +++++++++
 5 files changed, 120 insertions(+), 17 deletions(-)

diff --git a/hw/acpi/cxl.c b/hw/acpi/cxl.c
index 7124d5a1a3..442f836a3e 100644
--- a/hw/acpi/cxl.c
+++ b/hw/acpi/cxl.c
@@ -18,7 +18,11 @@
  */
 
 #include "qemu/osdep.h"
+#include "hw/sysbus.h"
+#include "hw/pci/pci_bridge.h"
+#include "hw/pci/pci_host.h"
 #include "hw/cxl/cxl.h"
+#include "hw/mem/memory-device.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/aml-build.h"
 #include "hw/acpi/bios-linker-loader.h"
@@ -26,6 +30,70 @@
 #include "qapi/error.h"
 #include "qemu/uuid.h"
 
+static void cedt_build_chbs(GArray *table_data, PXBDev *cxl)
+{
+    SysBusDevice *sbd = SYS_BUS_DEVICE(cxl->cxl.cxl_host_bridge);
+    struct MemoryRegion *mr = sbd->mmio[0].memory;
+
+    /* Type */
+    build_append_int_noprefix(table_data, 0, 1);
+
+    /* Reserved */
+    build_append_int_noprefix(table_data, 0, 1);
+
+    /* Record Length */
+    build_append_int_noprefix(table_data, 32, 2);
+
+    /* UID - currently equal to bus number */
+    build_append_int_noprefix(table_data, cxl->bus_nr, 4);
+
+    /* Version */
+    build_append_int_noprefix(table_data, 1, 4);
+
+    /* Reserved */
+    build_append_int_noprefix(table_data, 0, 4);
+
+    /* Base - subregion within a container that is in PA space */
+    build_append_int_noprefix(table_data, mr->container->addr + mr->addr, 8);
+
+    /* Length */
+    build_append_int_noprefix(table_data, memory_region_size(mr), 8);
+}
+
+static int cxl_foreach_pxb_hb(Object *obj, void *opaque)
+{
+    Aml *cedt = opaque;
+
+    if (object_dynamic_cast(obj, TYPE_PXB_CXL_DEVICE)) {
+        cedt_build_chbs(cedt->buf, PXB_CXL_DEV(obj));
+    }
+
+    return 0;
+}
+
+void cxl_build_cedt(MachineState *ms, GArray *table_offsets, GArray *table_data,
+                    BIOSLinker *linker, const char *oem_id,
+                    const char *oem_table_id)
+{
+    Aml *cedt;
+    AcpiTable table = { .sig = "CEDT", .rev = 1, .oem_id = oem_id,
+                        .oem_table_id = oem_table_id };
+
+    acpi_add_table(table_offsets, table_data);
+    acpi_table_begin(&table, table_data);
+    cedt = init_aml_allocator();
+
+    /* reserve space for CEDT header */
+
+    object_child_foreach_recursive(object_get_root(), cxl_foreach_pxb_hb, cedt);
+
+    /* copy AML table into ACPI tables blob and patch header there */
+    g_array_append_vals(table_data, cedt->buf->data, cedt->buf->len);
+    free_aml_allocator();
+
+    acpi_table_end(linker, &table);
+}
+
 static Aml *__build_cxl_osc_method(void)
 {
     Aml *method, *if_uuid, *else_uuid, *if_arg1_not_1, *if_cxl, *if_caps_masked;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index b5a4b663f2..a68e8bf166 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -76,6 +76,7 @@
 #include "hw/acpi/ipmi.h"
 #include "hw/acpi/hmat.h"
 #include "hw/acpi/viot.h"
+#include "hw/acpi/cxl.h"
 
 #include CONFIG_DEVICES
 
@@ -1403,6 +1404,22 @@ static void build_smb0(Aml *table, I2CBus *smbus, int devnr, int func)
     aml_append(table, scope);
 }
 
+static void build_acpi0017(Aml *table)
+{
+    Aml *dev, *scope, *method;
+
+    scope =  aml_scope("_SB");
+    dev = aml_device("CXLM");
+    aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0017")));
+
+    method = aml_method("_STA", 0, AML_NOTSERIALIZED);
+    aml_append(method, aml_return(aml_int(0x01)));
+    aml_append(dev, method);
+
+    aml_append(scope, dev);
+    aml_append(table, scope);
+}
+
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
@@ -1422,6 +1439,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
 #ifdef CONFIG_TPM
     TPMIf *tpm = tpm_find();
 #endif
+    bool cxl_present = false;
     int i;
     VMBusBridge *vmbus_bridge = vmbus_bridge_find();
     AcpiTable table = { .sig = "DSDT", .rev = 1, .oem_id = x86ms->oem_id,
@@ -1610,12 +1628,17 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
                 MemoryRegion *mr = &machine->cxl_devices_state->host_mr;
                 uint64_t base = mr->addr;
 
+                cxl_present = true;
                 crs_range_insert(crs_range_set.mem_ranges, base,
                                  base + memory_region_size(mr) - 1);
             }
         }
     }
 
+    if (cxl_present) {
+        build_acpi0017(dsdt);
+    }
+
     /*
      * At this point crs_range_set has all the ranges used by pci
      * busses *other* than PCI0.  These ranges will be excluded from
@@ -2680,6 +2703,10 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
                           machine->nvdimms_state, machine->ram_slots,
                           x86ms->oem_id, x86ms->oem_table_id);
     }
+    if (machine->cxl_devices_state->is_enabled) {
+        cxl_build_cedt(machine, table_offsets, tables_blob, tables->linker,
+                       x86ms->oem_id, x86ms->oem_table_id);
+    }
 
     acpi_add_table(table_offsets, tables_blob);
     build_waet(tables_blob, tables->linker, x86ms->oem_id, x86ms->oem_table_id);
diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index b3b5f93650..e11a967916 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -57,29 +57,12 @@ DECLARE_INSTANCE_CHECKER(PXBDev, PXB_DEV,
 DECLARE_INSTANCE_CHECKER(PXBDev, PXB_PCIE_DEV,
                          TYPE_PXB_PCIE_DEVICE)
 
-#define TYPE_PXB_CXL_DEVICE "pxb-cxl"
-DECLARE_INSTANCE_CHECKER(PXBDev, PXB_CXL_DEV,
-                         TYPE_PXB_CXL_DEVICE)
-
 typedef struct CXLHost {
     PCIHostState parent_obj;
 
     CXLComponentState cxl_cstate;
 } CXLHost;
 
-struct PXBDev {
-    /*< private >*/
-    PCIDevice parent_obj;
-    /*< public >*/
-
-    uint8_t bus_nr;
-    uint16_t numa_node;
-    bool bypass_iommu;
-    struct cxl_dev {
-        CXLHost *cxl_host_bridge;
-    } cxl;
-};
-
 static PXBDev *convert_to_pxb(PCIDevice *dev)
 {
     /* A CXL PXB's parent bus is PCIe, so the normal check won't work */
diff --git a/include/hw/acpi/cxl.h b/include/hw/acpi/cxl.h
index 7b8f3b8a2e..0c496538c0 100644
--- a/include/hw/acpi/cxl.h
+++ b/include/hw/acpi/cxl.h
@@ -18,6 +18,11 @@
 #ifndef HW_ACPI_CXL_H
 #define HW_ACPI_CXL_H
 
+#include "hw/acpi/bios-linker-loader.h"
+
+void cxl_build_cedt(MachineState *ms, GArray *table_offsets, GArray *table_data,
+                    BIOSLinker *linker, const char *oem_id,
+                    const char *oem_table_id);
 void build_cxl_osc_method(Aml *dev);
 
 #endif
diff --git a/include/hw/pci/pci_bridge.h b/include/hw/pci/pci_bridge.h
index 30691a6e57..ba4bafac7c 100644
--- a/include/hw/pci/pci_bridge.h
+++ b/include/hw/pci/pci_bridge.h
@@ -28,6 +28,7 @@
 
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_bus.h"
+#include "hw/cxl/cxl.h"
 #include "qom/object.h"
 
 typedef struct PCIBridgeWindows PCIBridgeWindows;
@@ -80,6 +81,25 @@ struct PCIBridge {
 #define PCI_BRIDGE_DEV_PROP_CHASSIS_NR "chassis_nr"
 #define PCI_BRIDGE_DEV_PROP_MSI        "msi"
 #define PCI_BRIDGE_DEV_PROP_SHPC       "shpc"
+typedef struct CXLHost CXLHost;
+
+struct PXBDev {
+    /*< private >*/
+    PCIDevice parent_obj;
+    /*< public >*/
+
+    uint8_t bus_nr;
+    uint16_t numa_node;
+    bool bypass_iommu;
+    struct cxl_dev {
+        CXLHost *cxl_host_bridge; /* Pointer to a CXLHost */
+    } cxl;
+};
+
+typedef struct PXBDev PXBDev;
+#define TYPE_PXB_CXL_DEVICE "pxb-cxl"
+DECLARE_INSTANCE_CHECKER(PXBDev, PXB_CXL_DEV,
+                         TYPE_PXB_CXL_DEVICE)
 
 int pci_bridge_ssvid_init(PCIDevice *dev, uint8_t offset,
                           uint16_t svid, uint16_t ssid,
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 25/46] acpi/cxl: Create the CEDT (9.14.1)
@ 2022-03-06 17:41   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

The CXL Early Discovery Table is defined in the CXL 2.0 specification as
a way for the OS to get CXL specific information from the system
firmware.

The CXL 2.0 specification adds an _HID, ACPI0016, for CXL capable host
bridges, with a _CID of PNP0A08 (PCIe host bridge). CXL aware software
can use this to initiate the proper _OSC method and get the _UID which
is referenced by the CEDT. Therefore the existence of an ACPI0016
device allows a CXL aware driver to perform the necessary actions. For a
CXL capable OS, this works. For a CXL unaware OS, this also works.

CEDT awareness requires more. The motivation for ACPI0017 is to provide
the possibility of having a Linux CXL module that can work on a legacy
Linux kernel. Linux core PCI/ACPI, which won't be built as a module,
will see the _CID of PNP0A08 and bind a driver to it. If we later load
a driver for ACPI0016, Linux won't be able to bind it to the hardware
because the PNP0A08 driver is already bound. The ACPI0017 device
provides an object to which a driver can bind; that driver will
walk the CXL topology and do everything that we would
have preferred to do with ACPI0016.

There is another motivation for an ACPI0017 device which isn't
implemented here. An operating system needs an attach point for a
non-volatile region provider that understands cross-hostbridge
interleaving. Since QEMU emulation doesn't support interleaving yet,
this is more important on the OS side, for now.

As of the CXL 2.0 spec, only one substructure is defined: the CXL Host
Bridge Structure (CHBS), which is primarily useful for telling the OS
exactly where the MMIO for the host bridge is.
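
The CHBS record that cedt_build_chbs() emits in the diff below is a fixed
32-byte little-endian structure. A standalone sketch of that packing (plain
C for illustration, not QEMU code; the UID and MMIO values used here are
made up):

```c
/* Illustrative CHBS packing, mirroring cedt_build_chbs() field by field. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Append 'bytes' bytes of 'val' little-endian (low byte first). */
static size_t put_le(uint8_t *buf, uint64_t val, size_t bytes)
{
    for (size_t i = 0; i < bytes; i++) {
        buf[i] = (uint8_t)(val >> (8 * i));
    }
    return bytes;
}

/* Build one CHBS record; returns the record length (always 32). */
static size_t build_chbs(uint8_t *buf, uint32_t uid,
                         uint64_t base, uint64_t len)
{
    uint8_t *p = buf;
    p += put_le(p, 0, 1);     /* Type: 0 = CHBS */
    p += put_le(p, 0, 1);     /* Reserved */
    p += put_le(p, 32, 2);    /* Record Length */
    p += put_le(p, uid, 4);   /* UID (the bus number in this patch) */
    p += put_le(p, 1, 4);     /* Version */
    p += put_le(p, 0, 4);     /* Reserved */
    p += put_le(p, base, 8);  /* Base of host bridge MMIO in PA space */
    p += put_le(p, len, 8);   /* Length of the MMIO region */
    return (size_t)(p - buf);
}
```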

Link: https://lore.kernel.org/linux-cxl/20210115034911.nkgpzc756d6qmjpl@intel.com/T/#t
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/acpi/cxl.c                       | 68 +++++++++++++++++++++++++++++
 hw/i386/acpi-build.c                | 27 ++++++++++++
 hw/pci-bridge/pci_expander_bridge.c | 17 --------
 include/hw/acpi/cxl.h               |  5 +++
 include/hw/pci/pci_bridge.h         | 20 +++++++++
 5 files changed, 120 insertions(+), 17 deletions(-)

diff --git a/hw/acpi/cxl.c b/hw/acpi/cxl.c
index 7124d5a1a3..442f836a3e 100644
--- a/hw/acpi/cxl.c
+++ b/hw/acpi/cxl.c
@@ -18,7 +18,11 @@
  */
 
 #include "qemu/osdep.h"
+#include "hw/sysbus.h"
+#include "hw/pci/pci_bridge.h"
+#include "hw/pci/pci_host.h"
 #include "hw/cxl/cxl.h"
+#include "hw/mem/memory-device.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/aml-build.h"
 #include "hw/acpi/bios-linker-loader.h"
@@ -26,6 +30,70 @@
 #include "qapi/error.h"
 #include "qemu/uuid.h"
 
+static void cedt_build_chbs(GArray *table_data, PXBDev *cxl)
+{
+    SysBusDevice *sbd = SYS_BUS_DEVICE(cxl->cxl.cxl_host_bridge);
+    struct MemoryRegion *mr = sbd->mmio[0].memory;
+
+    /* Type */
+    build_append_int_noprefix(table_data, 0, 1);
+
+    /* Reserved */
+    build_append_int_noprefix(table_data, 0, 1);
+
+    /* Record Length */
+    build_append_int_noprefix(table_data, 32, 2);
+
+    /* UID - currently equal to bus number */
+    build_append_int_noprefix(table_data, cxl->bus_nr, 4);
+
+    /* Version */
+    build_append_int_noprefix(table_data, 1, 4);
+
+    /* Reserved */
+    build_append_int_noprefix(table_data, 0, 4);
+
+    /* Base - subregion within a container that is in PA space */
+    build_append_int_noprefix(table_data, mr->container->addr + mr->addr, 8);
+
+    /* Length */
+    build_append_int_noprefix(table_data, memory_region_size(mr), 8);
+}
+
+static int cxl_foreach_pxb_hb(Object *obj, void *opaque)
+{
+    Aml *cedt = opaque;
+
+    if (object_dynamic_cast(obj, TYPE_PXB_CXL_DEVICE)) {
+        cedt_build_chbs(cedt->buf, PXB_CXL_DEV(obj));
+    }
+
+    return 0;
+}
+
+void cxl_build_cedt(MachineState *ms, GArray *table_offsets, GArray *table_data,
+                    BIOSLinker *linker, const char *oem_id,
+                    const char *oem_table_id)
+{
+    Aml *cedt;
+    AcpiTable table = { .sig = "CEDT", .rev = 1, .oem_id = oem_id,
+                        .oem_table_id = oem_table_id };
+
+    acpi_add_table(table_offsets, table_data);
+    acpi_table_begin(&table, table_data);
+    cedt = init_aml_allocator();
+
+    /* reserve space for CEDT header */
+
+    object_child_foreach_recursive(object_get_root(), cxl_foreach_pxb_hb, cedt);
+
+    /* copy AML table into ACPI tables blob and patch header there */
+    g_array_append_vals(table_data, cedt->buf->data, cedt->buf->len);
+    free_aml_allocator();
+
+    acpi_table_end(linker, &table);
+}
+
 static Aml *__build_cxl_osc_method(void)
 {
     Aml *method, *if_uuid, *else_uuid, *if_arg1_not_1, *if_cxl, *if_caps_masked;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index b5a4b663f2..a68e8bf166 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -76,6 +76,7 @@
 #include "hw/acpi/ipmi.h"
 #include "hw/acpi/hmat.h"
 #include "hw/acpi/viot.h"
+#include "hw/acpi/cxl.h"
 
 #include CONFIG_DEVICES
 
@@ -1403,6 +1404,22 @@ static void build_smb0(Aml *table, I2CBus *smbus, int devnr, int func)
     aml_append(table, scope);
 }
 
+static void build_acpi0017(Aml *table)
+{
+    Aml *dev, *scope, *method;
+
+    scope =  aml_scope("_SB");
+    dev = aml_device("CXLM");
+    aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0017")));
+
+    method = aml_method("_STA", 0, AML_NOTSERIALIZED);
+    aml_append(method, aml_return(aml_int(0x01)));
+    aml_append(dev, method);
+
+    aml_append(scope, dev);
+    aml_append(table, scope);
+}
+
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
            AcpiPmInfo *pm, AcpiMiscInfo *misc,
@@ -1422,6 +1439,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
 #ifdef CONFIG_TPM
     TPMIf *tpm = tpm_find();
 #endif
+    bool cxl_present = false;
     int i;
     VMBusBridge *vmbus_bridge = vmbus_bridge_find();
     AcpiTable table = { .sig = "DSDT", .rev = 1, .oem_id = x86ms->oem_id,
@@ -1610,12 +1628,17 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
                 MemoryRegion *mr = &machine->cxl_devices_state->host_mr;
                 uint64_t base = mr->addr;
 
+                cxl_present = true;
                 crs_range_insert(crs_range_set.mem_ranges, base,
                                  base + memory_region_size(mr) - 1);
             }
         }
     }
 
+    if (cxl_present) {
+        build_acpi0017(dsdt);
+    }
+
     /*
      * At this point crs_range_set has all the ranges used by pci
      * busses *other* than PCI0.  These ranges will be excluded from
@@ -2680,6 +2703,10 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
                           machine->nvdimms_state, machine->ram_slots,
                           x86ms->oem_id, x86ms->oem_table_id);
     }
+    if (machine->cxl_devices_state->is_enabled) {
+        cxl_build_cedt(machine, table_offsets, tables_blob, tables->linker,
+                       x86ms->oem_id, x86ms->oem_table_id);
+    }
 
     acpi_add_table(table_offsets, tables_blob);
     build_waet(tables_blob, tables->linker, x86ms->oem_id, x86ms->oem_table_id);
diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index b3b5f93650..e11a967916 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -57,29 +57,12 @@ DECLARE_INSTANCE_CHECKER(PXBDev, PXB_DEV,
 DECLARE_INSTANCE_CHECKER(PXBDev, PXB_PCIE_DEV,
                          TYPE_PXB_PCIE_DEVICE)
 
-#define TYPE_PXB_CXL_DEVICE "pxb-cxl"
-DECLARE_INSTANCE_CHECKER(PXBDev, PXB_CXL_DEV,
-                         TYPE_PXB_CXL_DEVICE)
-
 typedef struct CXLHost {
     PCIHostState parent_obj;
 
     CXLComponentState cxl_cstate;
 } CXLHost;
 
-struct PXBDev {
-    /*< private >*/
-    PCIDevice parent_obj;
-    /*< public >*/
-
-    uint8_t bus_nr;
-    uint16_t numa_node;
-    bool bypass_iommu;
-    struct cxl_dev {
-        CXLHost *cxl_host_bridge;
-    } cxl;
-};
-
 static PXBDev *convert_to_pxb(PCIDevice *dev)
 {
     /* A CXL PXB's parent bus is PCIe, so the normal check won't work */
diff --git a/include/hw/acpi/cxl.h b/include/hw/acpi/cxl.h
index 7b8f3b8a2e..0c496538c0 100644
--- a/include/hw/acpi/cxl.h
+++ b/include/hw/acpi/cxl.h
@@ -18,6 +18,11 @@
 #ifndef HW_ACPI_CXL_H
 #define HW_ACPI_CXL_H
 
+#include "hw/acpi/bios-linker-loader.h"
+
+void cxl_build_cedt(MachineState *ms, GArray *table_offsets, GArray *table_data,
+                    BIOSLinker *linker, const char *oem_id,
+                    const char *oem_table_id);
 void build_cxl_osc_method(Aml *dev);
 
 #endif
diff --git a/include/hw/pci/pci_bridge.h b/include/hw/pci/pci_bridge.h
index 30691a6e57..ba4bafac7c 100644
--- a/include/hw/pci/pci_bridge.h
+++ b/include/hw/pci/pci_bridge.h
@@ -28,6 +28,7 @@
 
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_bus.h"
+#include "hw/cxl/cxl.h"
 #include "qom/object.h"
 
 typedef struct PCIBridgeWindows PCIBridgeWindows;
@@ -80,6 +81,25 @@ struct PCIBridge {
 #define PCI_BRIDGE_DEV_PROP_CHASSIS_NR "chassis_nr"
 #define PCI_BRIDGE_DEV_PROP_MSI        "msi"
 #define PCI_BRIDGE_DEV_PROP_SHPC       "shpc"
+typedef struct CXLHost CXLHost;
+
+struct PXBDev {
+    /*< private >*/
+    PCIDevice parent_obj;
+    /*< public >*/
+
+    uint8_t bus_nr;
+    uint16_t numa_node;
+    bool bypass_iommu;
+    struct cxl_dev {
+        CXLHost *cxl_host_bridge; /* Pointer to a CXLHost */
+    } cxl;
+};
+
+typedef struct PXBDev PXBDev;
+#define TYPE_PXB_CXL_DEVICE "pxb-cxl"
+DECLARE_INSTANCE_CHECKER(PXBDev, PXB_CXL_DEV,
+                         TYPE_PXB_CXL_DEVICE)
 
 int pci_bridge_ssvid_init(PCIDevice *dev, uint8_t offset,
                           uint16_t svid, uint16_t ssid,
-- 
2.32.0



^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 26/46] hw/cxl/component: Add utils for interleave parameter encoding/decoding
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

Both registers and the CFMWS entries in the CEDT use simple encodings
for the number of interleave ways and the interleave granularity.
Introduce simple conversion functions to/from the unencoded
number / size.  So far the interleave ways decode has not been needed,
so it is not implemented.
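
The granularity encoding is simply log2(size) - 8, so the cxl_decode_ig()
helper in the diff below (1 << (ig + 8)) is its exact inverse. A standalone
round-trip sketch (re-implemented here for illustration, not the patch's
code):

```c
/* Illustrative interleave-granularity encode/decode round trip. */
#include <assert.h>
#include <stdint.h>

/* Encode a power-of-two granularity in [256, 16384]: 256 -> 0, 512 -> 1, ... */
static int ig_encode(uint64_t gran)
{
    int enc = 0;
    while ((256ULL << enc) < gran) {
        enc++;
    }
    return enc;
}

/* Same computation as cxl_decode_ig(): recover the size from the encoding. */
static uint64_t ig_decode(int enc)
{
    return 1ULL << (enc + 8);
}
```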

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/cxl/cxl-component-utils.c   | 34 ++++++++++++++++++++++++++++++++++
 include/hw/cxl/cxl_component.h |  8 ++++++++
 2 files changed, 42 insertions(+)

diff --git a/hw/cxl/cxl-component-utils.c b/hw/cxl/cxl-component-utils.c
index 410f8ef328..443a11c837 100644
--- a/hw/cxl/cxl-component-utils.c
+++ b/hw/cxl/cxl-component-utils.c
@@ -9,6 +9,7 @@
 
 #include "qemu/osdep.h"
 #include "qemu/log.h"
+#include "qapi/error.h"
 #include "hw/pci/pci.h"
 #include "hw/cxl/cxl.h"
 
@@ -217,3 +218,36 @@ void cxl_component_create_dvsec(CXLComponentState *cxl, uint16_t length,
     range_init_nofail(&cxl->dvsecs[type], cxl->dvsec_offset, length);
     cxl->dvsec_offset += length;
 }
+
+uint8_t cxl_interleave_ways_enc(int iw, Error **errp)
+{
+    switch (iw) {
+    case 1: return 0x0;
+    case 2: return 0x1;
+    case 4: return 0x2;
+    case 8: return 0x3;
+    case 16: return 0x4;
+    case 3: return 0x8;
+    case 6: return 0x9;
+    case 12: return 0xa;
+    default:
+        error_setg(errp, "Interleave ways: %d not supported", iw);
+        return 0;
+    }
+}
+
+uint8_t cxl_interleave_granularity_enc(uint64_t gran, Error **errp)
+{
+    switch (gran) {
+    case 256: return 0;
+    case 512: return 1;
+    case 1024: return 2;
+    case 2048: return 3;
+    case 4096: return 4;
+    case 8192: return 5;
+    case 16384: return 6;
+    default:
+        error_setg(errp, "Interleave granularity: %" PRIu64 " invalid", gran);
+        return 0;
+    }
+}
diff --git a/include/hw/cxl/cxl_component.h b/include/hw/cxl/cxl_component.h
index 74e9bfe1ff..8dae21cfc6 100644
--- a/include/hw/cxl/cxl_component.h
+++ b/include/hw/cxl/cxl_component.h
@@ -194,4 +194,12 @@ void cxl_component_register_init_common(uint32_t *reg_state,
 void cxl_component_create_dvsec(CXLComponentState *cxl_cstate, uint16_t length,
                                 uint16_t type, uint8_t rev, uint8_t *body);
 
+uint8_t cxl_interleave_ways_enc(int iw, Error **errp);
+uint8_t cxl_interleave_granularity_enc(uint64_t gran, Error **errp);
+
+static inline hwaddr cxl_decode_ig(int ig)
+{
+    return 1 << (ig + 8);
+}
+
 #endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 27/46] hw/cxl/host: Add support for CXL Fixed Memory Windows.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

The concept of these is introduced in [1] in terms of the
description of the CEDT ACPI table. The principle is more general.
Unlike the routing once traffic hits the CXL root bridges, the host
system memory address routing up to that point is implementation
defined and effectively static once observable by standard / generic
system software. Each CXL Fixed Memory Window (CFMW) is a region of PA
space which has fixed system dependent routing configured so that
accesses can be routed to the CXL devices below a set of target
root bridges. The accesses may be interleaved across multiple
root bridges.

For QEMU we could have fully specified these regions in terms
of a base PA + size, but as the absolute address does not matter
it is simpler to let individual platforms place the memory regions.

Examples:
-cxl-fixed-memory-window targets.0=cxl.0,size=128G
-cxl-fixed-memory-window targets.0=cxl.1,size=128G
-cxl-fixed-memory-window targets.0=cxl.0,targets.1=cxl.1,size=256G,interleave-granularity=2k

Specifies
* 2x 128G regions not interleaved across root bridges, one for each of
  the root bridges with ids cxl.0 and cxl.1
* 256G region interleaved across root bridges with ids cxl.0 and cxl.1
with a 2k interleave granularity.
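
For the interleaved window above, consecutive granularity-sized chunks
round-robin across the target root bridges, so the target servicing a given
offset into the window can be sketched as follows (illustrative only; the
series' actual interleave decode support lands in later patches):

```c
/* Which target root bridge services a byte at 'offset' into a CFMW? */
#include <assert.h>
#include <stdint.h>

static unsigned cfmw_target(uint64_t offset, unsigned ways, uint64_t gran)
{
    /* Chunk index within the window, modulo the number of targets. */
    return (unsigned)((offset / gran) % ways);
}
```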

When system software enumerates the devices below a given root bridge
it can then decide which CFMW to use. If non-interleaved operation is
desired (or all that is possible) it can use the appropriate CFMW for
the root bridge in question.  If there are suitable devices to
interleave across the two root bridges then it may use the 3rd CFMW.

A number of other designs were considered but the following constraints
made it hard to adapt existing QEMU approaches to this particular problem.
1) The size must be known before a specific architecture / board brings
   up its PA memory map.  We need to set up an appropriate region.
2) Using links to the host bridges provides a clean command line interface
   but these links cannot be established until command line devices have
   been added.

Hence the two step process used here: first establish the size,
interleave-ways and granularity and cache the ids of the host bridges;
then, once they are available, find the actual host bridges so they can
be used later to support interleave decoding.

[1] CXL 2.0 ECN: CEDT CFMWS & QTG DSM (computeexpresslink.org / specifications)

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
---
v7:
* Change -cxl-fixed-memory-window option handling to use
  qobject_input_visitor_new_str().  This changed the required handling of
  targets parameter to require an array index and hence test and docs updates.
  e.g. targets.1=cxl_hb0,targets.2=cxl_hb1
  (Patches 38,40,42,43) (Markus Armbruster)
* Docs fixes (Markus)
 hw/cxl/cxl-host-stubs.c | 14 ++++++
 hw/cxl/cxl-host.c       | 94 +++++++++++++++++++++++++++++++++++++++++
 hw/cxl/meson.build      |  6 +++
 include/hw/cxl/cxl.h    | 20 +++++++++
 qapi/machine.json       | 18 ++++++++
 qemu-options.hx         | 38 +++++++++++++++++
 softmmu/vl.c            | 42 ++++++++++++++++++
 7 files changed, 232 insertions(+)

diff --git a/hw/cxl/cxl-host-stubs.c b/hw/cxl/cxl-host-stubs.c
new file mode 100644
index 0000000000..d24282ec1c
--- /dev/null
+++ b/hw/cxl/cxl-host-stubs.c
@@ -0,0 +1,14 @@
+/*
+ * CXL host parameter parsing routine stubs
+ *
+ * Copyright (c) 2022 Huawei
+ */
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "hw/cxl/cxl.h"
+
+void cxl_fixed_memory_window_options_set(MachineState *ms,
+                                         CXLFixedMemoryWindowOptions *object,
+                                         Error **errp) {};
+
+void cxl_fixed_memory_window_link_targets(Error **errp) {};
diff --git a/hw/cxl/cxl-host.c b/hw/cxl/cxl-host.c
new file mode 100644
index 0000000000..f25713236d
--- /dev/null
+++ b/hw/cxl/cxl-host.c
@@ -0,0 +1,94 @@
+/*
+ * CXL host parameter parsing routines
+ *
+ * Copyright (c) 2022 Huawei
+ * Modeled loosely on the NUMA options handling in hw/core/numa.c
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qemu/bitmap.h"
+#include "qemu/error-report.h"
+#include "qapi/error.h"
+#include "sysemu/qtest.h"
+#include "hw/boards.h"
+
+#include "qapi/qapi-visit-machine.h"
+#include "hw/cxl/cxl.h"
+
+void cxl_fixed_memory_window_options_set(MachineState *ms,
+                                         CXLFixedMemoryWindowOptions *object,
+                                         Error **errp)
+{
+    CXLFixedWindow *fw = g_malloc0(sizeof(*fw));
+    strList *target;
+    int i;
+
+    for (target = object->targets; target; target = target->next) {
+        fw->num_targets++;
+    }
+
+    fw->enc_int_ways = cxl_interleave_ways_enc(fw->num_targets, errp);
+    if (*errp) {
+        return;
+    }
+
+    fw->targets = g_malloc0_n(fw->num_targets, sizeof(*fw->targets));
+    for (i = 0, target = object->targets; target; i++, target = target->next) {
+        /* This link cannot be resolved yet, so stash the name for now */
+        fw->targets[i] = g_strdup(target->value);
+    }
+
+    if (object->size % (256 * MiB)) {
+        error_setg(errp,
+                   "Size of a CXL fixed memory window must be a multiple of 256MiB");
+        return;
+    }
+    fw->size = object->size;
+
+    if (object->has_interleave_granularity) {
+        fw->enc_int_gran =
+            cxl_interleave_granularity_enc(object->interleave_granularity,
+                                           errp);
+        if (*errp) {
+            return;
+        }
+    } else {
+        /* Default to 256 byte interleave */
+        fw->enc_int_gran = 0;
+    }
+
+    ms->cxl_devices_state->fixed_windows =
+        g_list_append(ms->cxl_devices_state->fixed_windows, fw);
+
+    return;
+}
+
+void cxl_fixed_memory_window_link_targets(Error **errp)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+
+    if (ms->cxl_devices_state && ms->cxl_devices_state->fixed_windows) {
+        GList *it;
+
+        for (it = ms->cxl_devices_state->fixed_windows; it; it = it->next) {
+            CXLFixedWindow *fw = it->data;
+            int i;
+
+            for (i = 0; i < fw->num_targets; i++) {
+                Object *o;
+                bool ambig;
+
+                o = object_resolve_path_type(fw->targets[i],
+                                             TYPE_PXB_CXL_DEVICE,
+                                             &ambig);
+                if (!o) {
+                    error_setg(errp, "Could not resolve CXLFM target %s",
+                               fw->targets[i]);
+                    return;
+                }
+                fw->target_hbs[i] = PXB_CXL_DEV(o);
+            }
+        }
+    }
+}
diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build
index e68eea2358..f117b99949 100644
--- a/hw/cxl/meson.build
+++ b/hw/cxl/meson.build
@@ -3,4 +3,10 @@ softmmu_ss.add(when: 'CONFIG_CXL',
                    'cxl-component-utils.c',
                    'cxl-device-utils.c',
                    'cxl-mailbox-utils.c',
+                   'cxl-host.c',
+               ),
+               if_false: files(
+                   'cxl-host-stubs.c',
                ))
+
+softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('cxl-host-stubs.c'))
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 75e5bf71e1..5abc307ef4 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -10,6 +10,8 @@
 #ifndef CXL_H
 #define CXL_H
 
+#include "qapi/qapi-types-machine.h"
+#include "hw/pci/pci_bridge.h"
 #include "cxl_pci.h"
 #include "cxl_component.h"
 #include "cxl_device.h"
@@ -19,10 +21,28 @@
 
 #define CXL_WINDOW_MAX 10
 
+typedef struct CXLFixedWindow {
+    uint64_t size;
+    char **targets;
+    struct PXBDev *target_hbs[8];
+    uint8_t num_targets;
+    uint8_t enc_int_ways;
+    uint8_t enc_int_gran;
+    /* Todo: XOR based interleaving */
+    MemoryRegion mr;
+    hwaddr base;
+} CXLFixedWindow;
+
 typedef struct CXLState {
     bool is_enabled;
     MemoryRegion host_mr;
     unsigned int next_mr_idx;
+    GList *fixed_windows;
 } CXLState;
 
+void cxl_fixed_memory_window_options_set(MachineState *ms,
+                                         CXLFixedMemoryWindowOptions *object,
+                                         Error **errp);
+void cxl_fixed_memory_window_link_targets(Error **errp);
+
 #endif
diff --git a/qapi/machine.json b/qapi/machine.json
index 42fc68403d..e4e64096ca 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -504,6 +504,24 @@
    'dst': 'uint16',
    'val': 'uint8' }}
 
+##
+# @CXLFixedMemoryWindowOptions:
+#
+# Create a CXL Fixed Memory Window (for OptsVisitor)
+#
+# @size: Size in bytes of the Fixed Memory Window
+# @interleave-granularity: Number of contiguous bytes for which
+#                          accesses will go to a given interleave target.
+# @targets: Target root bridge IDs
+#
+# Since 6.3
+##
+{ 'struct': 'CXLFixedMemoryWindowOptions',
+  'data': {
+      'size': 'size',
+      '*interleave-granularity': 'size',
+      'targets': ['str'] }}
+
 ##
 # @X86CPURegister32:
 #
diff --git a/qemu-options.hx b/qemu-options.hx
index 094a6c1d7c..9f9b7626de 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -467,6 +467,44 @@ SRST
         -numa hmat-cache,node-id=1,size=10K,level=1,associativity=direct,policy=write-back,line=8
 ERST
 
+DEF("cxl-fixed-memory-window", HAS_ARG, QEMU_OPTION_cxl_fixed_memory_window,
+    "-cxl-fixed-memory-window targets.0=firsttarget,targets.1=secondtarget,size=size[,interleave-granularity=granularity]\n",
+    QEMU_ARCH_ALL)
+SRST
+``-cxl-fixed-memory-window targets.0=firsttarget,targets.1=secondtarget,size=size[,interleave-granularity=granularity]``
+    Define a CXL Fixed Memory Window (CFMW).
+
+    Described in the CXL 2.0 ECN: CEDT CFMWS & QTG _DSM.
+
+    They are regions of Host Physical Addresses (HPA) on a system which
+    may be interleaved across one or more CXL host bridges.  The system
+    software will assign particular devices into these windows and
+    configure the downstream Host-managed Device Memory (HDM) decoders
+    in root ports, switch ports and devices appropriately to meet the
+    interleave requirements before enabling the memory devices.
+
+    ``targets.X=firsttarget`` provides the mapping to CXL host bridges
+    which may be identified by the id provied in the -device entry.
+    Multiple entries are needed to specify all the targets when
+    the fixed memory window represents interleaved memory. X is the
+    target index from 0.
+
+    ``size=size`` sets the size of the CFMW. This must be a multiple of
+    256MiB. The region will be aligned to 256MiB but the location is
+    platform and configuration dependent.
+
+    ``interleave-granularity=granularity`` sets the granularity of
+    interleave. Default 256KiB. Only 256KiB, 512KiB, 1024KiB, 2048KiB
+    4096KiB, 8192KiB and 16384KiB granularities supported.
+
+    Example:
+
+    ::
+
+        -cxl-fixed-memory-window -targets.0=cxl.0,-targets.1=cxl.1,size=128G,interleave-granularity=512k
+
+ERST
+
 DEF("add-fd", HAS_ARG, QEMU_OPTION_add_fd,
     "-add-fd fd=fd,set=set[,opaque=opaque]\n"
     "                Add 'fd' to fd 'set'\n", QEMU_ARCH_ALL)
diff --git a/softmmu/vl.c b/softmmu/vl.c
index 1fe028800f..d0196191ac 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -92,6 +92,7 @@
 #include "qemu/config-file.h"
 #include "qemu/qemu-options.h"
 #include "qemu/main-loop.h"
+#include "hw/cxl/cxl.h"
 #ifdef CONFIG_VIRTFS
 #include "fsdev/qemu-fsdev.h"
 #endif
@@ -117,6 +118,7 @@
 #include "qapi/qapi-events-run-state.h"
 #include "qapi/qapi-visit-block-core.h"
 #include "qapi/qapi-visit-compat.h"
+#include "qapi/qapi-visit-machine.h"
 #include "qapi/qapi-visit-ui.h"
 #include "qapi/qapi-commands-block-core.h"
 #include "qapi/qapi-commands-migration.h"
@@ -140,6 +142,11 @@ typedef struct BlockdevOptionsQueueEntry {
 
 typedef QSIMPLEQ_HEAD(, BlockdevOptionsQueueEntry) BlockdevOptionsQueue;
 
+typedef struct CXLFMWOptionQueueEntry {
+    CXLFixedMemoryWindowOptions *opts;
+    QSIMPLEQ_ENTRY(CXLFMWOptionQueueEntry) entry;
+} CXLFMWOptionQueueEntry;
+
 typedef struct ObjectOption {
     ObjectOptions *opts;
     QTAILQ_ENTRY(ObjectOption) next;
@@ -166,6 +173,7 @@ static int snapshot;
 static bool preconfig_requested;
 static QemuPluginList plugin_list = QTAILQ_HEAD_INITIALIZER(plugin_list);
 static BlockdevOptionsQueue bdo_queue = QSIMPLEQ_HEAD_INITIALIZER(bdo_queue);
+static QSIMPLEQ_HEAD(, CXLFMWOptionQueueEntry) CXLFMW_opts = QSIMPLEQ_HEAD_INITIALIZER(CXLFMW_opts);
 static bool nographic = false;
 static int mem_prealloc; /* force preallocation of physical target memory */
 static ram_addr_t ram_size;
@@ -1149,6 +1157,22 @@ static void parse_display(const char *p)
     }
 }
 
+static void parse_cxl_fixed_memory_window(const char *optarg)
+{
+    CXLFMWOptionQueueEntry *cfmws_entry;
+    Visitor *v;
+
+    v = qobject_input_visitor_new_str(optarg, "cxl-fixed-memory-window",
+                                      &error_fatal);
+    cfmws_entry = g_new(CXLFMWOptionQueueEntry, 1);
+    visit_type_CXLFixedMemoryWindowOptions(v, NULL, &cfmws_entry->opts, &error_fatal);
+    if (!cfmws_entry->opts) {
+        exit(1);
+    }
+    visit_free(v);
+    QSIMPLEQ_INSERT_TAIL(&CXLFMW_opts, cfmws_entry, entry);
+}
+
 static inline bool nonempty_str(const char *str)
 {
     return str && *str;
@@ -2020,6 +2044,19 @@ static void qemu_create_late_backends(void)
     qemu_semihosting_console_init();
 }
 
+static void cxl_set_opts(void)
+{
+    while (!QSIMPLEQ_EMPTY(&CXLFMW_opts)) {
+        CXLFMWOptionQueueEntry *cfmws_entry = QSIMPLEQ_FIRST(&CXLFMW_opts);
+
+        QSIMPLEQ_REMOVE_HEAD(&CXLFMW_opts, entry);
+        cxl_fixed_memory_window_options_set(current_machine, cfmws_entry->opts,
+                                            &error_fatal);
+        qapi_free_CXLFixedMemoryWindowOptions(cfmws_entry->opts);
+        g_free(cfmws_entry);
+    }
+}
+
 static bool have_custom_ram_size(void)
 {
     QemuOpts *opts = qemu_find_opts_singleton("memory");
@@ -2745,6 +2782,7 @@ void qmp_x_exit_preconfig(Error **errp)
 
     qemu_init_board();
     qemu_create_cli_devices();
+    cxl_fixed_memory_window_link_targets(errp);
     qemu_machine_creation_done();
 
     if (loadvm) {
@@ -2928,6 +2966,9 @@ void qemu_init(int argc, char **argv, char **envp)
                     exit(1);
                 }
                 break;
+            case QEMU_OPTION_cxl_fixed_memory_window:
+                parse_cxl_fixed_memory_window(optarg);
+                break;
             case QEMU_OPTION_display:
                 parse_display(optarg);
                 break;
@@ -3765,6 +3806,7 @@ void qemu_init(int argc, char **argv, char **envp)
 
     qemu_resolve_machine_memdev();
     parse_numa_opts(current_machine);
+    cxl_set_opts();
 
     if (vmstate_dump_file) {
         /* dump and exit */
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 27/46] hw/cxl/host: Add support for CXL Fixed Memory Windows.
@ 2022-03-06 17:41   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

The concept of these is introduced in [1] in terms of the
description of the CEDT ACPI table, but the principle is more
general. Unlike the routing that applies once traffic reaches the
CXL root bridges, the host system memory address routing is
implementation defined and effectively static once observable by
standard / generic system software. Each CXL Fixed Memory Window
(CFMW) is a region of PA space which has fixed, system-dependent
routing configured so that accesses can be routed to the CXL
devices below a set of target root bridges. The accesses may be
interleaved across multiple root bridges.

For QEMU we could have fully specified these regions in terms
of a base PA + size, but as the absolute address does not matter
it is simpler to let individual platforms place the memory regions.

Examples:
-cxl-fixed-memory-window targets.0=cxl.0,size=128G
-cxl-fixed-memory-window targets.0=cxl.1,size=128G
-cxl-fixed-memory-window targets.0=cxl.0,targets.1=cxl.1,size=256G,interleave-granularity=2k

Specifies
* 2x 128G regions not interleaved across root bridges, one for each of
  the root bridges with ids cxl.0 and cxl.1
* 256G region interleaved across root bridges with ids cxl.0 and cxl.1
with a 2k interleave granularity.

When system software enumerates the devices below a given root bridge
it can then decide which CFMW to use. If non-interleaved operation is
desired (or is all that is possible) it can use the appropriate CFMW
for the root bridge in question.  If there are suitable devices to
interleave across the two root bridges then it may use the third CFMW.

A number of other designs were considered but the following constraints
made it hard to adapt existing QEMU approaches to this particular problem.
1) The size must be known before a specific architecture / board brings
   up its PA memory map.  We need to set up an appropriate region.
2) Using links to the host bridges provides a clean command line interface
   but these links cannot be established until command line devices have
   been added.

Hence the two-step process used here: first establish the size,
interleave ways and granularity, and cache the ids of the host
bridges; then, once they are available, find the actual host bridges
so they can be used later to support interleave decoding.

[1] CXL 2.0 ECN: CEDT CFMWS & QTG DSM (computeexpresslink.org / specifications)

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
---
v7:
* Change -cxl-fixed-memory-window option handling to use
  qobject_input_visitor_new_str().  This changed the required handling of
  targets parameter to require an array index and hence test and docs updates.
  e.g. targets.1=cxl_hb0,targets.2=cxl_hb1
  (Patches 38,40,42,43) (Markus Armbruster)
* Docs fixes (Markus)
 hw/cxl/cxl-host-stubs.c | 14 ++++++
 hw/cxl/cxl-host.c       | 94 +++++++++++++++++++++++++++++++++++++++++
 hw/cxl/meson.build      |  6 +++
 include/hw/cxl/cxl.h    | 20 +++++++++
 qapi/machine.json       | 18 ++++++++
 qemu-options.hx         | 38 +++++++++++++++++
 softmmu/vl.c            | 42 ++++++++++++++++++
 7 files changed, 232 insertions(+)

diff --git a/hw/cxl/cxl-host-stubs.c b/hw/cxl/cxl-host-stubs.c
new file mode 100644
index 0000000000..d24282ec1c
--- /dev/null
+++ b/hw/cxl/cxl-host-stubs.c
@@ -0,0 +1,14 @@
+/*
+ * CXL host parameter parsing routine stubs
+ *
+ * Copyright (c) 2022 Huawei
+ */
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "hw/cxl/cxl.h"
+
+void cxl_fixed_memory_window_options_set(MachineState *ms,
+                                         CXLFixedMemoryWindowOptions *object,
+                                         Error **errp) {}
+
+void cxl_fixed_memory_window_link_targets(Error **errp) {}
diff --git a/hw/cxl/cxl-host.c b/hw/cxl/cxl-host.c
new file mode 100644
index 0000000000..f25713236d
--- /dev/null
+++ b/hw/cxl/cxl-host.c
@@ -0,0 +1,94 @@
+/*
+ * CXL host parameter parsing routines
+ *
+ * Copyright (c) 2022 Huawei
+ * Modeled loosely on the NUMA options handling in hw/core/numa.c
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qemu/bitmap.h"
+#include "qemu/error-report.h"
+#include "qapi/error.h"
+#include "sysemu/qtest.h"
+#include "hw/boards.h"
+
+#include "qapi/qapi-visit-machine.h"
+#include "hw/cxl/cxl.h"
+
+void cxl_fixed_memory_window_options_set(MachineState *ms,
+                                         CXLFixedMemoryWindowOptions *object,
+                                         Error **errp)
+{
+    CXLFixedWindow *fw = g_malloc0(sizeof(*fw));
+    strList *target;
+    int i;
+
+    for (target = object->targets; target; target = target->next) {
+        fw->num_targets++;
+    }
+
+    fw->enc_int_ways = cxl_interleave_ways_enc(fw->num_targets, errp);
+    if (*errp) {
+        return;
+    }
+
+    fw->targets = g_malloc0_n(fw->num_targets, sizeof(*fw->targets));
+    for (i = 0, target = object->targets; target; i++, target = target->next) {
+        /* This link cannot be resolved yet, so stash the name for now */
+        fw->targets[i] = g_strdup(target->value);
+    }
+
+    if (object->size % (256 * MiB)) {
+        error_setg(errp,
+                   "Size of a CXL fixed memory window must be a multiple of 256MiB");
+        return;
+    }
+    fw->size = object->size;
+
+    if (object->has_interleave_granularity) {
+        fw->enc_int_gran =
+            cxl_interleave_granularity_enc(object->interleave_granularity,
+                                           errp);
+        if (*errp) {
+            return;
+        }
+    } else {
+        /* Default to 256 byte interleave */
+        fw->enc_int_gran = 0;
+    }
+
+    ms->cxl_devices_state->fixed_windows =
+        g_list_append(ms->cxl_devices_state->fixed_windows, fw);
+
+    return;
+}
+
+void cxl_fixed_memory_window_link_targets(Error **errp)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+
+    if (ms->cxl_devices_state && ms->cxl_devices_state->fixed_windows) {
+        GList *it;
+
+        for (it = ms->cxl_devices_state->fixed_windows; it; it = it->next) {
+            CXLFixedWindow *fw = it->data;
+            int i;
+
+            for (i = 0; i < fw->num_targets; i++) {
+                Object *o;
+                bool ambig;
+
+                o = object_resolve_path_type(fw->targets[i],
+                                             TYPE_PXB_CXL_DEVICE,
+                                             &ambig);
+                if (!o) {
+                    error_setg(errp, "Could not resolve CXLFM target %s",
+                               fw->targets[i]);
+                    return;
+                }
+                fw->target_hbs[i] = PXB_CXL_DEV(o);
+            }
+        }
+    }
+}
diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build
index e68eea2358..f117b99949 100644
--- a/hw/cxl/meson.build
+++ b/hw/cxl/meson.build
@@ -3,4 +3,10 @@ softmmu_ss.add(when: 'CONFIG_CXL',
                    'cxl-component-utils.c',
                    'cxl-device-utils.c',
                    'cxl-mailbox-utils.c',
+                   'cxl-host.c',
+               ),
+               if_false: files(
+                   'cxl-host-stubs.c',
                ))
+
+softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('cxl-host-stubs.c'))
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 75e5bf71e1..5abc307ef4 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -10,6 +10,8 @@
 #ifndef CXL_H
 #define CXL_H
 
+#include "qapi/qapi-types-machine.h"
+#include "hw/pci/pci_bridge.h"
 #include "cxl_pci.h"
 #include "cxl_component.h"
 #include "cxl_device.h"
@@ -19,10 +21,28 @@
 
 #define CXL_WINDOW_MAX 10
 
+typedef struct CXLFixedWindow {
+    uint64_t size;
+    char **targets;
+    struct PXBDev *target_hbs[8];
+    uint8_t num_targets;
+    uint8_t enc_int_ways;
+    uint8_t enc_int_gran;
+    /* Todo: XOR based interleaving */
+    MemoryRegion mr;
+    hwaddr base;
+} CXLFixedWindow;
+
 typedef struct CXLState {
     bool is_enabled;
     MemoryRegion host_mr;
     unsigned int next_mr_idx;
+    GList *fixed_windows;
 } CXLState;
 
+void cxl_fixed_memory_window_options_set(MachineState *ms,
+                                         CXLFixedMemoryWindowOptions *object,
+                                         Error **errp);
+void cxl_fixed_memory_window_link_targets(Error **errp);
+
 #endif
diff --git a/qapi/machine.json b/qapi/machine.json
index 42fc68403d..e4e64096ca 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -504,6 +504,24 @@
    'dst': 'uint16',
    'val': 'uint8' }}
 
+##
+# @CXLFixedMemoryWindowOptions:
+#
+# Create a CXL Fixed Memory Window (for OptsVisitor)
+#
+# @size: Size in bytes of the Fixed Memory Window
+# @interleave-granularity: Number of contiguous bytes for which
+#                          accesses will go to a given interleave target.
+# @targets: Target root bridge IDs
+#
+# Since: 6.3
+##
+{ 'struct': 'CXLFixedMemoryWindowOptions',
+  'data': {
+      'size': 'size',
+      '*interleave-granularity': 'size',
+      'targets': ['str'] }}
+
 ##
 # @X86CPURegister32:
 #
diff --git a/qemu-options.hx b/qemu-options.hx
index 094a6c1d7c..9f9b7626de 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -467,6 +467,44 @@ SRST
         -numa hmat-cache,node-id=1,size=10K,level=1,associativity=direct,policy=write-back,line=8
 ERST
 
+DEF("cxl-fixed-memory-window", HAS_ARG, QEMU_OPTION_cxl_fixed_memory_window,
+    "-cxl-fixed-memory-window targets.0=firsttarget,targets.1=secondtarget,size=size[,interleave-granularity=granularity]\n",
+    QEMU_ARCH_ALL)
+SRST
+``-cxl-fixed-memory-window targets.0=firsttarget,targets.1=secondtarget,size=size[,interleave-granularity=granularity]``
+    Define a CXL Fixed Memory Window (CFMW).
+
+    Described in the CXL 2.0 ECN: CEDT CFMWS & QTG _DSM.
+
+    They are regions of Host Physical Addresses (HPA) on a system which
+    may be interleaved across one or more CXL host bridges.  The system
+    software will assign particular devices into these windows and
+    configure the downstream Host-managed Device Memory (HDM) decoders
+    in root ports, switch ports and devices appropriately to meet the
+    interleave requirements before enabling the memory devices.
+
+    ``targets.X=firsttarget`` provides the mapping to CXL host bridges
+    which may be identified by the id provided in the -device entry.
+    Multiple entries are needed to specify all the targets when
+    the fixed memory window represents interleaved memory. X is the
+    target index from 0.
+
+    ``size=size`` sets the size of the CFMW. This must be a multiple of
+    256MiB. The region will be aligned to 256MiB but the location is
+    platform and configuration dependent.
+
+    ``interleave-granularity=granularity`` sets the granularity of
+    the interleave. The default is 256 bytes. Only granularities of
+    256, 512, 1024, 2048, 4096, 8192 and 16384 bytes are supported.
+
+    Example:
+
+    ::
+
        -cxl-fixed-memory-window targets.0=cxl.0,targets.1=cxl.1,size=128G,interleave-granularity=2k
+
+ERST
+
 DEF("add-fd", HAS_ARG, QEMU_OPTION_add_fd,
     "-add-fd fd=fd,set=set[,opaque=opaque]\n"
     "                Add 'fd' to fd 'set'\n", QEMU_ARCH_ALL)
diff --git a/softmmu/vl.c b/softmmu/vl.c
index 1fe028800f..d0196191ac 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -92,6 +92,7 @@
 #include "qemu/config-file.h"
 #include "qemu/qemu-options.h"
 #include "qemu/main-loop.h"
+#include "hw/cxl/cxl.h"
 #ifdef CONFIG_VIRTFS
 #include "fsdev/qemu-fsdev.h"
 #endif
@@ -117,6 +118,7 @@
 #include "qapi/qapi-events-run-state.h"
 #include "qapi/qapi-visit-block-core.h"
 #include "qapi/qapi-visit-compat.h"
+#include "qapi/qapi-visit-machine.h"
 #include "qapi/qapi-visit-ui.h"
 #include "qapi/qapi-commands-block-core.h"
 #include "qapi/qapi-commands-migration.h"
@@ -140,6 +142,11 @@ typedef struct BlockdevOptionsQueueEntry {
 
 typedef QSIMPLEQ_HEAD(, BlockdevOptionsQueueEntry) BlockdevOptionsQueue;
 
+typedef struct CXLFMWOptionQueueEntry {
+    CXLFixedMemoryWindowOptions *opts;
+    QSIMPLEQ_ENTRY(CXLFMWOptionQueueEntry) entry;
+} CXLFMWOptionQueueEntry;
+
 typedef struct ObjectOption {
     ObjectOptions *opts;
     QTAILQ_ENTRY(ObjectOption) next;
@@ -166,6 +173,7 @@ static int snapshot;
 static bool preconfig_requested;
 static QemuPluginList plugin_list = QTAILQ_HEAD_INITIALIZER(plugin_list);
 static BlockdevOptionsQueue bdo_queue = QSIMPLEQ_HEAD_INITIALIZER(bdo_queue);
+static QSIMPLEQ_HEAD(, CXLFMWOptionQueueEntry) CXLFMW_opts = QSIMPLEQ_HEAD_INITIALIZER(CXLFMW_opts);
 static bool nographic = false;
 static int mem_prealloc; /* force preallocation of physical target memory */
 static ram_addr_t ram_size;
@@ -1149,6 +1157,22 @@ static void parse_display(const char *p)
     }
 }
 
+static void parse_cxl_fixed_memory_window(const char *optarg)
+{
+    CXLFMWOptionQueueEntry *cfmws_entry;
+    Visitor *v;
+
+    v = qobject_input_visitor_new_str(optarg, "cxl-fixed-memory-window",
+                                      &error_fatal);
+    cfmws_entry = g_new(CXLFMWOptionQueueEntry, 1);
+    visit_type_CXLFixedMemoryWindowOptions(v, NULL, &cfmws_entry->opts, &error_fatal);
+    if (!cfmws_entry->opts) {
+        exit(1);
+    }
+    visit_free(v);
+    QSIMPLEQ_INSERT_TAIL(&CXLFMW_opts, cfmws_entry, entry);
+}
+
 static inline bool nonempty_str(const char *str)
 {
     return str && *str;
@@ -2020,6 +2044,19 @@ static void qemu_create_late_backends(void)
     qemu_semihosting_console_init();
 }
 
+static void cxl_set_opts(void)
+{
+    while (!QSIMPLEQ_EMPTY(&CXLFMW_opts)) {
+        CXLFMWOptionQueueEntry *cfmws_entry = QSIMPLEQ_FIRST(&CXLFMW_opts);
+
+        QSIMPLEQ_REMOVE_HEAD(&CXLFMW_opts, entry);
+        cxl_fixed_memory_window_options_set(current_machine, cfmws_entry->opts,
+                                            &error_fatal);
+        qapi_free_CXLFixedMemoryWindowOptions(cfmws_entry->opts);
+        g_free(cfmws_entry);
+    }
+}
+
 static bool have_custom_ram_size(void)
 {
     QemuOpts *opts = qemu_find_opts_singleton("memory");
@@ -2745,6 +2782,7 @@ void qmp_x_exit_preconfig(Error **errp)
 
     qemu_init_board();
     qemu_create_cli_devices();
+    cxl_fixed_memory_window_link_targets(errp);
     qemu_machine_creation_done();
 
     if (loadvm) {
@@ -2928,6 +2966,9 @@ void qemu_init(int argc, char **argv, char **envp)
                     exit(1);
                 }
                 break;
+            case QEMU_OPTION_cxl_fixed_memory_window:
+                parse_cxl_fixed_memory_window(optarg);
+                break;
             case QEMU_OPTION_display:
                 parse_display(optarg);
                 break;
@@ -3765,6 +3806,7 @@ void qemu_init(int argc, char **argv, char **envp)
 
     qemu_resolve_machine_memdev();
     parse_numa_opts(current_machine);
+    cxl_set_opts();
 
     if (vmstate_dump_file) {
         /* dump and exit */
-- 
2.32.0



^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 28/46] acpi/cxl: Introduce CFMWS structures in CEDT
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

The CEDT CXL Fixed Memory Window Structures (CFMWS)
define regions of the host physical address map which
(via an implementation-defined means) are configured such that
they have a particular interleave setup across one or more
CXL Host Bridges.

Reported-by: Alison Schofield <alison.schofield@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/acpi/cxl.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/hw/acpi/cxl.c b/hw/acpi/cxl.c
index 442f836a3e..50efc7f690 100644
--- a/hw/acpi/cxl.c
+++ b/hw/acpi/cxl.c
@@ -60,6 +60,64 @@ static void cedt_build_chbs(GArray *table_data, PXBDev *cxl)
     build_append_int_noprefix(table_data, memory_region_size(mr), 8);
 }
 
+/*
+ * CFMWS entries in CXL 2.0 ECN: CEDT CFMWS & QTG _DSM.
+ * Interleave ways encoding in CXL 2.0 ECN: 3, 6, 12 and 16-way memory
+ * interleaving.
+ */
+static void cedt_build_cfmws(GArray *table_data, MachineState *ms)
+{
+    CXLState *cxls = ms->cxl_devices_state;
+    GList *it;
+
+    for (it = cxls->fixed_windows; it; it = it->next) {
+        CXLFixedWindow *fw = it->data;
+        int i;
+
+        /* Type */
+        build_append_int_noprefix(table_data, 1, 1);
+
+        /* Reserved */
+        build_append_int_noprefix(table_data, 0, 1);
+
+        /* Record Length */
+        build_append_int_noprefix(table_data, 36 + 4 * fw->num_targets, 2);
+
+        /* Reserved */
+        build_append_int_noprefix(table_data, 0, 4);
+
+        /* Base HPA */
+        build_append_int_noprefix(table_data, fw->mr.addr, 8);
+
+        /* Window Size */
+        build_append_int_noprefix(table_data, fw->size, 8);
+
+        /* Host Bridge Interleave Ways */
+        build_append_int_noprefix(table_data, fw->enc_int_ways, 1);
+
+        /* Host Bridge Interleave Arithmetic */
+        build_append_int_noprefix(table_data, 0, 1);
+
+        /* Reserved */
+        build_append_int_noprefix(table_data, 0, 2);
+
+        /* Host Bridge Interleave Granularity */
+        build_append_int_noprefix(table_data, fw->enc_int_gran, 4);
+
+        /* Window Restrictions */
+        build_append_int_noprefix(table_data, 0x0f, 2); /* No restrictions */
+
+        /* QTG ID */
+        build_append_int_noprefix(table_data, 0, 2);
+
+        /* Host Bridge List (list of UIDs - currently bus_nr) */
+        for (i = 0; i < fw->num_targets; i++) {
+            g_assert(fw->target_hbs[i]);
+            build_append_int_noprefix(table_data, fw->target_hbs[i]->bus_nr, 4);
+        }
+    }
+}
+
 static int cxl_foreach_pxb_hb(Object *obj, void *opaque)
 {
     Aml *cedt = opaque;
@@ -86,6 +144,7 @@ void cxl_build_cedt(MachineState *ms, GArray *table_offsets, GArray *table_data,
     /* reserve space for CEDT header */
 
     object_child_foreach_recursive(object_get_root(), cxl_foreach_pxb_hb, cedt);
+    cedt_build_cfmws(cedt->buf, ms);
 
     /* copy AML table into ACPI tables blob and patch header there */
     g_array_append_vals(table_data, cedt->buf->data, cedt->buf->len);
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 29/46] hw/pci-host/gpex-acpi: Add support for dsdt construction for pxb-cxl
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

This adds code to instantiate the slightly extended ACPI root port
description in DSDT as per the CXL 2.0 specification.

Basically a cut and paste job from the i386/pc code.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
v7:
* Hoist is_cxl assignment to declaration. (Alex)
 hw/arm/Kconfig          |  1 +
 hw/pci-host/gpex-acpi.c | 20 +++++++++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index 6945330030..d926dc0e4d 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -29,6 +29,7 @@ config ARM_VIRT
     select ACPI_APEI
     select ACPI_VIOT
     select VIRTIO_MEM_SUPPORTED
+    select ACPI_CXL
 
 config CHEETAH
     bool
diff --git a/hw/pci-host/gpex-acpi.c b/hw/pci-host/gpex-acpi.c
index e7e162a00a..7c7316bc96 100644
--- a/hw/pci-host/gpex-acpi.c
+++ b/hw/pci-host/gpex-acpi.c
@@ -5,6 +5,7 @@
 #include "hw/pci/pci_bus.h"
 #include "hw/pci/pci_bridge.h"
 #include "hw/pci/pcie_host.h"
+#include "hw/acpi/cxl.h"
 
 static void acpi_dsdt_add_pci_route_table(Aml *dev, uint32_t irq)
 {
@@ -139,6 +140,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig *cfg)
         QLIST_FOREACH(bus, &bus->child, sibling) {
             uint8_t bus_num = pci_bus_num(bus);
             uint8_t numa_node = pci_bus_numa_node(bus);
+            bool is_cxl = pci_bus_is_cxl(bus);
 
             if (!pci_bus_is_root(bus)) {
                 continue;
@@ -154,8 +156,16 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig *cfg)
             }
 
             dev = aml_device("PC%.02X", bus_num);
-            aml_append(dev, aml_name_decl("_HID", aml_string("PNP0A08")));
-            aml_append(dev, aml_name_decl("_CID", aml_string("PNP0A03")));
+            if (is_cxl) {
+                struct Aml *pkg = aml_package(2);
+                aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0016")));
+                aml_append(pkg, aml_eisaid("PNP0A08"));
+                aml_append(pkg, aml_eisaid("PNP0A03"));
+                aml_append(dev, aml_name_decl("_CID", pkg));
+            } else {
+                aml_append(dev, aml_name_decl("_HID", aml_string("PNP0A08")));
+                aml_append(dev, aml_name_decl("_CID", aml_string("PNP0A03")));
+            }
             aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
             aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
             aml_append(dev, aml_name_decl("_STR", aml_unicode("pxb Device")));
@@ -175,7 +185,11 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig *cfg)
                             cfg->pio.base, 0, 0, 0);
             aml_append(dev, aml_name_decl("_CRS", crs));
 
-            acpi_dsdt_add_pci_osc(dev);
+            if (is_cxl) {
+                build_cxl_osc_method(dev);
+            } else {
+                acpi_dsdt_add_pci_osc(dev);
+            }
 
             aml_append(scope, dev);
         }
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 30/46] pci/pcie_port: Add pci_find_port_by_pn()
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

Simple function to search a PCIBus to find a port by
its port number.

CXL interleave decoding uses the port number as a target,
so it is necessary to locate the port when doing interleave
decoding.

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/pci/pcie_port.c         | 25 +++++++++++++++++++++++++
 include/hw/pci/pcie_port.h |  2 ++
 2 files changed, 27 insertions(+)

diff --git a/hw/pci/pcie_port.c b/hw/pci/pcie_port.c
index e95c1e5519..687e4e763a 100644
--- a/hw/pci/pcie_port.c
+++ b/hw/pci/pcie_port.c
@@ -136,6 +136,31 @@ static void pcie_port_class_init(ObjectClass *oc, void *data)
     device_class_set_props(dc, pcie_port_props);
 }
 
+PCIDevice *pcie_find_port_by_pn(PCIBus *bus, uint8_t pn)
+{
+    int devfn;
+
+    for (devfn = 0; devfn < ARRAY_SIZE(bus->devices); devfn++) {
+        PCIDevice *d = bus->devices[devfn];
+        PCIEPort *port;
+
+        if (!d || !pci_is_express(d) || !d->exp.exp_cap) {
+            continue;
+        }
+
+        if (!object_dynamic_cast(OBJECT(d), TYPE_PCIE_PORT)) {
+            continue;
+        }
+
+        port = PCIE_PORT(d);
+        if (port->port == pn) {
+            return d;
+        }
+    }
+
+    return NULL;
+}
+
 static const TypeInfo pcie_port_type_info = {
     .name = TYPE_PCIE_PORT,
     .parent = TYPE_PCI_BRIDGE,
diff --git a/include/hw/pci/pcie_port.h b/include/hw/pci/pcie_port.h
index e25b289ce8..7b8193061a 100644
--- a/include/hw/pci/pcie_port.h
+++ b/include/hw/pci/pcie_port.h
@@ -39,6 +39,8 @@ struct PCIEPort {
 
 void pcie_port_init_reg(PCIDevice *d);
 
+PCIDevice *pcie_find_port_by_pn(PCIBus *bus, uint8_t pn);
+
 #define TYPE_PCIE_SLOT "pcie-slot"
 OBJECT_DECLARE_SIMPLE_TYPE(PCIESlot, PCIE_SLOT)
 
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 31/46] CXL/cxl_component: Add cxl_get_hb_cstate()
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

Accessor to get hold of the CXL state for a CXL host bridge
without exposing the internals of the implementation.

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/pci-bridge/pci_expander_bridge.c | 7 +++++++
 include/hw/cxl/cxl_component.h      | 2 ++
 2 files changed, 9 insertions(+)

diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index e11a967916..de534c44ab 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -81,6 +81,13 @@ static GList *pxb_dev_list;
 #define TYPE_PXB_CXL_HOST "pxb-cxl-host"
 #define PXB_CXL_HOST(obj) OBJECT_CHECK(CXLHost, (obj), TYPE_PXB_CXL_HOST)
 
+CXLComponentState *cxl_get_hb_cstate(PCIHostState *hb)
+{
+    CXLHost *host = PXB_CXL_HOST(hb);
+
+    return &host->cxl_cstate;
+}
+
 static int pxb_bus_num(PCIBus *bus)
 {
     PXBDev *pxb = convert_to_pxb(bus->parent_dev);
diff --git a/include/hw/cxl/cxl_component.h b/include/hw/cxl/cxl_component.h
index 8dae21cfc6..65779ce7ed 100644
--- a/include/hw/cxl/cxl_component.h
+++ b/include/hw/cxl/cxl_component.h
@@ -202,4 +202,6 @@ static inline hwaddr cxl_decode_ig(int ig)
     return 1 << (ig + 8);
 }
 
+CXLComponentState *cxl_get_hb_cstate(PCIHostState *hb);
+
 #endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 32/46] mem/cxl_type3: Add read and write functions for associated hostmem.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

Once a read or write reaches a CXL type 3 device, the HDM decoders
on the device are used to establish the Device Physical Address
which should be accessed.  These functions perform the required maths
and then directly access the hostmem->mr to fulfil the actual
operation.  Note that failed writes are silent, but failed reads
return poison.  Note this is based loosely on:

https://lore.kernel.org/qemu-devel/20200817161853.593247-6-f4bug@amsat.org/
[RFC PATCH 0/9] hw/misc: Add support for interleaved memory accesses

Only lightly tested so far.  More complex test cases yet to be written.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 hw/mem/cxl_type3.c          | 81 +++++++++++++++++++++++++++++++++++++
 include/hw/cxl/cxl_device.h |  5 +++
 2 files changed, 86 insertions(+)

diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index 244eb5dc91..e498bacee5 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -160,6 +160,87 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp)
                      &ct3d->cxl_dstate.device_registers);
 }
 
+/* TODO: Support multiple HDM decoders and DPA skip */
+static bool cxl_type3_dpa(CXLType3Dev *ct3d, hwaddr host_addr, uint64_t *dpa)
+{
+    uint32_t *cache_mem = ct3d->cxl_cstate.crb.cache_mem_registers;
+    uint64_t decoder_base, decoder_size, hpa_offset;
+    uint32_t hdm0_ctrl;
+    int ig, iw;
+
+    decoder_base = (((uint64_t)cache_mem[R_CXL_HDM_DECODER0_BASE_HI] << 32) |
+                    cache_mem[R_CXL_HDM_DECODER0_BASE_LO]);
+    if ((uint64_t)host_addr < decoder_base) {
+        return false;
+    }
+
+    hpa_offset = (uint64_t)host_addr - decoder_base;
+
+    decoder_size = ((uint64_t)cache_mem[R_CXL_HDM_DECODER0_SIZE_HI] << 32) |
+        cache_mem[R_CXL_HDM_DECODER0_SIZE_LO];
+    if (hpa_offset >= decoder_size) {
+        return false;
+    }
+
+    hdm0_ctrl = cache_mem[R_CXL_HDM_DECODER0_CTRL];
+    iw = FIELD_EX32(hdm0_ctrl, CXL_HDM_DECODER0_CTRL, IW);
+    ig = FIELD_EX32(hdm0_ctrl, CXL_HDM_DECODER0_CTRL, IG);
+
+    *dpa = (MAKE_64BIT_MASK(0, 8 + ig) & hpa_offset) |
+        ((MAKE_64BIT_MASK(8 + ig + iw, 64 - 8 - ig - iw) & hpa_offset) >> iw);
+
+    return true;
+}
+
+MemTxResult cxl_type3_read(PCIDevice *d, hwaddr host_addr, uint64_t *data,
+                           unsigned size, MemTxAttrs attrs)
+{
+    CXLType3Dev *ct3d = CT3(d);
+    uint64_t dpa_offset;
+    MemoryRegion *mr;
+
+    /* TODO support volatile region */
+    mr = host_memory_backend_get_memory(ct3d->hostmem);
+    if (!mr) {
+        return MEMTX_ERROR;
+    }
+
+    if (!cxl_type3_dpa(ct3d, host_addr, &dpa_offset)) {
+        return MEMTX_ERROR;
+    }
+
+    if (dpa_offset >= int128_get64(mr->size)) {
+        return MEMTX_ERROR;
+    }
+
+    return memory_region_dispatch_read(mr, dpa_offset, data,
+                                       size_memop(size), attrs);
+}
+
+MemTxResult cxl_type3_write(PCIDevice *d, hwaddr host_addr, uint64_t data,
+                            unsigned size, MemTxAttrs attrs)
+{
+    CXLType3Dev *ct3d = CT3(d);
+    uint64_t dpa_offset;
+    MemoryRegion *mr;
+
+    mr = host_memory_backend_get_memory(ct3d->hostmem);
+    if (!mr) {
+        return MEMTX_OK;
+    }
+
+    if (!cxl_type3_dpa(ct3d, host_addr, &dpa_offset)) {
+        return MEMTX_OK;
+    }
+
+    if (dpa_offset >= int128_get64(mr->size)) {
+        return MEMTX_OK;
+    }
+
+    return memory_region_dispatch_write(mr, dpa_offset, data,
+                                        size_memop(size), attrs);
+}
+
 static void ct3d_reset(DeviceState *dev)
 {
     CXLType3Dev *ct3d = CT3(dev);
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 288cc11772..f9faf87312 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -262,4 +262,9 @@ struct CXLType3Class {
                     uint64_t offset);
 };
 
+MemTxResult cxl_type3_read(PCIDevice *d, hwaddr host_addr, uint64_t *data,
+                           unsigned size, MemTxAttrs attrs);
+MemTxResult cxl_type3_write(PCIDevice *d, hwaddr host_addr, uint64_t data,
+                            unsigned size, MemTxAttrs attrs);
+
 #endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 33/46] cxl/cxl-host: Add memops for CFMWS region.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

These memops perform interleave decoding, walking down the
CXL topology from CFMWS described host interleave
decoder via CXL host bridge HDM decoders, through the CXL
root ports and finally call CXL type 3 specific read and write
functions.

Note that, whilst functional, the current implementation does
not support:
* switches
* multiple HDM decoders at a given level.
* unaligned accesses across the interleave boundaries

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
---
 hw/cxl/cxl-host-stubs.c |   2 +
 hw/cxl/cxl-host.c       | 128 ++++++++++++++++++++++++++++++++++++++++
 include/hw/cxl/cxl.h    |   2 +
 3 files changed, 132 insertions(+)

diff --git a/hw/cxl/cxl-host-stubs.c b/hw/cxl/cxl-host-stubs.c
index d24282ec1c..fcb7aded1f 100644
--- a/hw/cxl/cxl-host-stubs.c
+++ b/hw/cxl/cxl-host-stubs.c
@@ -12,3 +12,5 @@ void cxl_fixed_memory_window_options_set(MachineState *ms,
                                          Error **errp) {};
 
 void cxl_fixed_memory_window_link_targets(Error **errp) {};
+
+const MemoryRegionOps cfmws_ops;
diff --git a/hw/cxl/cxl-host.c b/hw/cxl/cxl-host.c
index f25713236d..a1eafa89bb 100644
--- a/hw/cxl/cxl-host.c
+++ b/hw/cxl/cxl-host.c
@@ -15,6 +15,10 @@
 
 #include "qapi/qapi-visit-machine.h"
 #include "hw/cxl/cxl.h"
+#include "hw/pci/pci_bus.h"
+#include "hw/pci/pci_bridge.h"
+#include "hw/pci/pci_host.h"
+#include "hw/pci/pcie_port.h"
 
 void cxl_fixed_memory_window_options_set(MachineState *ms,
                                          CXLFixedMemoryWindowOptions *object,
@@ -92,3 +96,127 @@ void cxl_fixed_memory_window_link_targets(Error **errp)
         }
     }
 }
+
+/* TODO: support multiple HDM decoders */
+static bool cxl_hdm_find_target(uint32_t *cache_mem, hwaddr addr,
+                                uint8_t *target)
+{
+    uint32_t ctrl;
+    uint32_t ig_enc;
+    uint32_t iw_enc;
+    uint32_t target_reg;
+    uint32_t target_idx;
+
+    ctrl = cache_mem[R_CXL_HDM_DECODER0_CTRL];
+    if (!FIELD_EX32(ctrl, CXL_HDM_DECODER0_CTRL, COMMITTED)) {
+        return false;
+    }
+
+    ig_enc = FIELD_EX32(ctrl, CXL_HDM_DECODER0_CTRL, IG);
+    iw_enc = FIELD_EX32(ctrl, CXL_HDM_DECODER0_CTRL, IW);
+    target_idx = (addr / cxl_decode_ig(ig_enc)) % (1 << iw_enc);
+
+    if (target_idx < 4) {
+        target_reg = cache_mem[R_CXL_HDM_DECODER0_TARGET_LIST_LO];
+        target_reg >>= target_idx * 8;
+    } else {
+        target_reg = cache_mem[R_CXL_HDM_DECODER0_TARGET_LIST_HI];
+        target_reg >>= (target_idx - 4) * 8;
+    }
+    *target = target_reg & 0xff;
+
+    return true;
+}
+
+static PCIDevice *cxl_cfmws_find_device(CXLFixedWindow *fw, hwaddr addr)
+{
+    CXLComponentState *hb_cstate;
+    PCIHostState *hb;
+    int rb_index;
+    uint32_t *cache_mem;
+    uint8_t target;
+    bool target_found;
+    PCIDevice *rp, *d;
+
+    /* Address is relative to memory region. Convert to HPA */
+    addr += fw->base;
+
+    rb_index = (addr / cxl_decode_ig(fw->enc_int_gran)) % fw->num_targets;
+    hb = PCI_HOST_BRIDGE(fw->target_hbs[rb_index]->cxl.cxl_host_bridge);
+    if (!hb || !hb->bus || !pci_bus_is_cxl(hb->bus)) {
+        return NULL;
+    }
+
+    hb_cstate = cxl_get_hb_cstate(hb);
+    if (!hb_cstate) {
+        return NULL;
+    }
+
+    cache_mem = hb_cstate->crb.cache_mem_registers;
+
+    target_found = cxl_hdm_find_target(cache_mem, addr, &target);
+    if (!target_found) {
+        return NULL;
+    }
+
+    rp = pcie_find_port_by_pn(hb->bus, target);
+    if (!rp) {
+        return NULL;
+    }
+
+    d = pci_bridge_get_sec_bus(PCI_BRIDGE(rp))->devices[0];
+
+    if (!d || !object_dynamic_cast(OBJECT(d), TYPE_CXL_TYPE3_DEV)) {
+        return NULL;
+    }
+
+    return d;
+}
+
+static MemTxResult cxl_read_cfmws(void *opaque, hwaddr addr, uint64_t *data,
+                                  unsigned size, MemTxAttrs attrs)
+{
+    CXLFixedWindow *fw = opaque;
+    PCIDevice *d;
+
+    d = cxl_cfmws_find_device(fw, addr);
+    if (d == NULL) {
+        *data = 0;
+        /* Reads to invalid address return poison */
+        return MEMTX_ERROR;
+    }
+
+    return cxl_type3_read(d, addr + fw->base, data, size, attrs);
+}
+
+static MemTxResult cxl_write_cfmws(void *opaque, hwaddr addr,
+                                   uint64_t data, unsigned size,
+                                   MemTxAttrs attrs)
+{
+    CXLFixedWindow *fw = opaque;
+    PCIDevice *d;
+
+    d = cxl_cfmws_find_device(fw, addr);
+    if (d == NULL) {
+        /* Writes to invalid address are silent */
+        return MEMTX_OK;
+    }
+
+    return cxl_type3_write(d, addr + fw->base, data, size, attrs);
+}
+
+const MemoryRegionOps cfmws_ops = {
+    .read_with_attrs = cxl_read_cfmws,
+    .write_with_attrs = cxl_write_cfmws,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = true,
+    },
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = true,
+    },
+};
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 5abc307ef4..14194acead 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -45,4 +45,6 @@ void cxl_fixed_memory_window_options_set(MachineState *ms,
                                          Error **errp);
 void cxl_fixed_memory_window_link_targets(Error **errp);
 
+extern const MemoryRegionOps cfmws_ops;
+
 #endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 34/46] RFC: softmmu/memory: Add ops to memory_region_ram_init_from_file
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

In order to implement memory interleaving we need a means to proxy
the calls. Adding mem_ops allows such proxying.

Note this should have no impact on use cases that do not go through
memory_region_dispatch_read()/write(). For now, only file-backed
hostmem is handled, to seek feedback on the approach before extending
it to other hostmem backends.

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
---
 softmmu/memory.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/softmmu/memory.c b/softmmu/memory.c
index 8060c6de78..99bd817150 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1606,6 +1606,15 @@ void memory_region_init_ram_from_file(MemoryRegion *mr,
     Error *err = NULL;
     memory_region_init(mr, owner, name, size);
     mr->ram = true;
+
+    /*
+     * ops used only when directly accessing via
+     * - memory_region_dispatch_read()
+     * - memory_region_dispatch_write()
+     */
+    mr->ops = &ram_device_mem_ops;
+    mr->opaque = mr;
+
     mr->readonly = readonly;
     mr->terminates = true;
     mr->destructor = memory_region_destructor_ram;
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 35/46] hw/cxl/component Add a dumb HDM decoder handler
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

Add a trivial handler for now to cover the root bridge, where we
could do some error checking in the future.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 hw/cxl/cxl-component-utils.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/hw/cxl/cxl-component-utils.c b/hw/cxl/cxl-component-utils.c
index 443a11c837..110ec9864e 100644
--- a/hw/cxl/cxl-component-utils.c
+++ b/hw/cxl/cxl-component-utils.c
@@ -32,6 +32,31 @@ static uint64_t cxl_cache_mem_read_reg(void *opaque, hwaddr offset,
     }
 }
 
+static void dumb_hdm_handler(CXLComponentState *cxl_cstate, hwaddr offset,
+                             uint32_t value)
+{
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+    uint32_t *cache_mem = cregs->cache_mem_registers;
+    bool should_commit = false;
+
+    switch (offset) {
+    case A_CXL_HDM_DECODER0_CTRL:
+        should_commit = FIELD_EX32(value, CXL_HDM_DECODER0_CTRL, COMMIT);
+        break;
+    default:
+        break;
+    }
+
+    memory_region_transaction_begin();
+    stl_le_p((uint8_t *)cache_mem + offset, value);
+    if (should_commit) {
+        ARRAY_FIELD_DP32(cache_mem, CXL_HDM_DECODER0_CTRL, COMMIT, 0);
+        ARRAY_FIELD_DP32(cache_mem, CXL_HDM_DECODER0_CTRL, ERR, 0);
+        ARRAY_FIELD_DP32(cache_mem, CXL_HDM_DECODER0_CTRL, COMMITTED, 1);
+    }
+    memory_region_transaction_commit();
+}
+
 static void cxl_cache_mem_write_reg(void *opaque, hwaddr offset, uint64_t value,
                                     unsigned size)
 {
@@ -45,6 +70,12 @@ static void cxl_cache_mem_write_reg(void *opaque, hwaddr offset, uint64_t value,
     }
     if (cregs->special_ops && cregs->special_ops->write) {
         cregs->special_ops->write(cxl_cstate, offset, value, size);
+        return;
+    }
+
+    if (offset >= A_CXL_HDM_DECODER_CAPABILITY &&
+        offset <= A_CXL_HDM_DECODER0_TARGET_LIST_HI) {
+        dumb_hdm_handler(cxl_cstate, offset, value);
     } else {
         cregs->cache_mem_registers[offset / 4] = value;
     }
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 36/46] i386/pc: Enable CXL fixed memory windows
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Jonathan Cameron <jonathan.cameron@huawei.com>

Add the CFMWS memory regions to the memory map and adjust the
PCI window to avoid overlapping the same memory.

Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
---
 hw/i386/pc.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 7a18dce529..5ece806d2b 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -816,7 +816,7 @@ void pc_memory_init(PCMachineState *pcms,
     MachineClass *mc = MACHINE_GET_CLASS(machine);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
     X86MachineState *x86ms = X86_MACHINE(pcms);
-    hwaddr cxl_base;
+    hwaddr cxl_base, cxl_resv_end = 0;
 
     assert(machine->ram_size == x86ms->below_4g_mem_size +
                                 x86ms->above_4g_mem_size);
@@ -924,6 +924,24 @@ void pc_memory_init(PCMachineState *pcms,
         e820_add_entry(cxl_base, cxl_size, E820_RESERVED);
         memory_region_init(mr, OBJECT(machine), "cxl_host_reg", cxl_size);
         memory_region_add_subregion(system_memory, cxl_base, mr);
+        cxl_resv_end = cxl_base + cxl_size;
+        if (machine->cxl_devices_state->fixed_windows) {
+            hwaddr cxl_fmw_base;
+            GList *it;
+
+            cxl_fmw_base = ROUND_UP(cxl_base + cxl_size, 256 * MiB);
+            for (it = machine->cxl_devices_state->fixed_windows; it; it = it->next) {
+                CXLFixedWindow *fw = it->data;
+
+                fw->base = cxl_fmw_base;
+                memory_region_init_io(&fw->mr, OBJECT(machine), &cfmws_ops, fw,
+                                      "cxl-fixed-memory-region", fw->size);
+                memory_region_add_subregion(system_memory, fw->base, &fw->mr);
+                e820_add_entry(fw->base, fw->size, E820_RESERVED);
+                cxl_fmw_base += fw->size;
+                cxl_resv_end = cxl_fmw_base;
+            }
+        }
     }
 
     /* Initialize PC system firmware */
@@ -953,6 +971,10 @@ void pc_memory_init(PCMachineState *pcms,
         if (!pcmc->broken_reserved_end) {
             res_mem_end += memory_region_size(&machine->device_memory->mr);
         }
+
+        if (machine->cxl_devices_state->is_enabled) {
+            res_mem_end = cxl_resv_end;
+        }
         *val = cpu_to_le64(ROUND_UP(res_mem_end, 1 * GiB));
         fw_cfg_add_file(fw_cfg, "etc/reserved-memory-end", val, sizeof(*val));
     }
@@ -989,6 +1011,13 @@ uint64_t pc_pci_hole64_start(void)
     if (ms->cxl_devices_state->host_mr.addr) {
         hole64_start = ms->cxl_devices_state->host_mr.addr +
             memory_region_size(&ms->cxl_devices_state->host_mr);
+        if (ms->cxl_devices_state->fixed_windows) {
+            GList *it;
+            for (it = ms->cxl_devices_state->fixed_windows; it; it = it->next) {
+                CXLFixedWindow *fw = it->data;
+                hole64_start = fw->mr.addr + memory_region_size(&fw->mr);
+            }
+        }
     } else if (pcmc->has_reserved_memory && ms->device_memory->base) {
         hole64_start = ms->device_memory->base;
         if (!pcmc->broken_reserved_end) {
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 37/46] tests/acpi: q35: Allow addition of a CXL test.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Add exceptions for the DSDT and the new CEDT table, specific to
the CXL test added in the following patch.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 tests/data/acpi/q35/CEDT.cxl                | 0
 tests/data/acpi/q35/DSDT.cxl                | 0
 tests/qtest/bios-tables-test-allowed-diff.h | 2 ++
 3 files changed, 2 insertions(+)

diff --git a/tests/data/acpi/q35/CEDT.cxl b/tests/data/acpi/q35/CEDT.cxl
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/tests/data/acpi/q35/DSDT.cxl b/tests/data/acpi/q35/DSDT.cxl
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index dfb8523c8b..7c7f9fbc44 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1 +1,3 @@
 /* List of comma-separated changed AML files to ignore */
+"tests/data/acpi/q35/DSDT.cxl",
+"tests/data/acpi/q35/CEDT.cxl",
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 38/46] qtests/bios-tables-test: Add a test for CXL emulation.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

The DSDT includes several CXL-specific elements and the CEDT
table is only present if CXL is enabled.

The test exercises all current functionality, with several
CFMWS and CHBS structures in the CEDT, and ACPI0016/ACPI0017
devices and _OSC entries in the DSDT.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 tests/qtest/bios-tables-test.c | 44 ++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
index c4a2d1e166..d43503a42d 100644
--- a/tests/qtest/bios-tables-test.c
+++ b/tests/qtest/bios-tables-test.c
@@ -1537,6 +1537,49 @@ static void test_acpi_q35_viot(void)
     free_test_data(&data);
 }
 
+static void test_acpi_q35_cxl(void)
+{
+    gchar *tmp_path = g_dir_make_tmp("qemu-test-cxl.XXXXXX", NULL);
+    gchar *params;
+
+    test_data data = {
+        .machine = MACHINE_Q35,
+        .variant = ".cxl",
+    };
+    /*
+     * A complex CXL setup.
+     */
+    params = g_strdup_printf(" -machine cxl=on"
+                             " -object memory-backend-file,id=cxl-mem1,mem-path=%s,size=256M"
+                             " -object memory-backend-file,id=cxl-mem2,mem-path=%s,size=256M"
+                             " -object memory-backend-file,id=cxl-mem3,mem-path=%s,size=256M"
+                             " -object memory-backend-file,id=cxl-mem4,mem-path=%s,size=256M"
+                             " -object memory-backend-file,id=lsa1,mem-path=%s,size=256M"
+                             " -object memory-backend-file,id=lsa2,mem-path=%s,size=256M"
+                             " -object memory-backend-file,id=lsa3,mem-path=%s,size=256M"
+                             " -object memory-backend-file,id=lsa4,mem-path=%s,size=256M"
+                             " -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1"
+                             " -device pxb-cxl,bus_nr=222,bus=pcie.0,id=cxl.2"
+                             " -device cxl-rp,port=0,bus=cxl.1,id=rp1,chassis=0,slot=2"
+                             " -device cxl-type3,bus=rp1,memdev=cxl-mem1,lsa=lsa1,size=256M"
+                             " -device cxl-rp,port=1,bus=cxl.1,id=rp2,chassis=0,slot=3"
+                             " -device cxl-type3,bus=rp2,memdev=cxl-mem2,lsa=lsa2,size=256M"
+                             " -device cxl-rp,port=0,bus=cxl.2,id=rp3,chassis=0,slot=5"
+                             " -device cxl-type3,bus=rp3,memdev=cxl-mem3,lsa=lsa3,size=256M"
+                             " -device cxl-rp,port=1,bus=cxl.2,id=rp4,chassis=0,slot=6"
+                             " -device cxl-type3,bus=rp4,memdev=cxl-mem4,lsa=lsa4,size=256M"
+                             " -cxl-fixed-memory-window targets.0=cxl.1,size=4G,interleave-granularity=8k"
+                             " -cxl-fixed-memory-window targets.0=cxl.1,targets.1=cxl.2,size=4G,interleave-granularity=8k",
+                             tmp_path, tmp_path, tmp_path, tmp_path,
+                             tmp_path, tmp_path, tmp_path, tmp_path);
+    test_acpi_one(params, &data);
+
+    g_free(params);
+    g_assert(g_rmdir(tmp_path) == 0);
+    g_free(tmp_path);
+    free_test_data(&data);
+}
+
 static void test_acpi_virt_viot(void)
 {
     test_data data = {
@@ -1742,6 +1785,7 @@ int main(int argc, char *argv[])
             qtest_add_func("acpi/q35/kvm/dmar", test_acpi_q35_kvm_dmar);
         }
         qtest_add_func("acpi/q35/viot", test_acpi_q35_viot);
+        qtest_add_func("acpi/q35/cxl", test_acpi_q35_cxl);
         qtest_add_func("acpi/q35/slic", test_acpi_q35_slic);
     } else if (strcmp(arch, "aarch64") == 0) {
         if (has_tcg) {
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 39/46] tests/acpi: Add tables for CXL emulation.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Add the expected tables that differ from the normal Q35 tables when
running the CXL test.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 tests/data/acpi/q35/CEDT.cxl                | Bin 0 -> 184 bytes
 tests/data/acpi/q35/DSDT.cxl                | Bin 0 -> 9627 bytes
 tests/qtest/bios-tables-test-allowed-diff.h |   2 --
 3 files changed, 2 deletions(-)

diff --git a/tests/data/acpi/q35/CEDT.cxl b/tests/data/acpi/q35/CEDT.cxl
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..b8fa06b00e65712e91e0a5ea0d9277e0146d1c00 100644
GIT binary patch
literal 184
zcmZ>EbqU$Qz`(%x(aGQ0BUr&HBEVSz2pEB4AU23*U{GMV2P7eE5T6mshKVRJ@Sw=U
r)I#JL88kqeKtKSd14gp~1^Iy(qF)E31_T6{AT-z>kXmGQAh!SjnYIc6

literal 0
HcmV?d00001

diff --git a/tests/data/acpi/q35/DSDT.cxl b/tests/data/acpi/q35/DSDT.cxl
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..e1d1299a67abd02a055c58f9306bf4a796b7d4c0 100644
GIT binary patch
literal 9627
zcmeHN&2JmW9i1g9X|-HQONzE^`9p+b_bWw0`6EeNq%pZmk+emcCaE|94R9$bt!$^r
zB8h=GhEZ7o64!^K4&$Ie52alO=&k(+iW+DSEzpC3qG*5uL3`*W#}-9=NX)+9%#J)m
zQb2qxP#u>0n|*J7^JZuEt!CD%dyStRVa)hb?u=Wkr*kh=-8B9T#u%N`uTzShXU$D7
zS7;gWNX%$nkMnaJa%SqAUT&pe{B*<r(D&bb&o|luTfOVtUTp{O-0)W!fiABGmPIE{
ze!5l9wVGS5Rdq`lvsEm4cJ{tynk`qHMa@>$t1+{+Qu0`8d7^cu?#6CjVy<^?v0EAI
zY$Rqar&<2nvCkIvpZxILo7sEM|Mhz}FIh1Fj^fwE@3DXm{#D;P=y#p>I~R;=MCaEQ
zyR+|i_+?S%(3QZak~_92qN)~qmzrD8d9|#|+c;INR5Na75$$qo^~8d;|GhcJZ0a2P
z`*pwH|E)M>&K2gAO`$Sx7OVM&RB1pPscLQ(tBo?V8N2^5&SK5$4#l%C<||&hTJu`z
z)HAJmBg+CR@dGMqzwKL&-?eXbeAe;f{eF*ir*C!I?1Q$={_&r&9_v{%qHPIl`rS!t
z%l(He#u!@jm8YmR)Zfjm`BFXG;$4jN3usEUR6857e9LxD&paKKKteUQj-SqX0E>GR
zMeETjnP!i*t6=!dXNFNQ*4a9K4@HozxplaFW@Vc-Gpn&oYwQs9P;=|(a<#E2>yC;0
zZD?4>nkpD?Mc;~==Alhq^HKLqd7Co^G<tIc=Rx#Al$nPd&N)(RxZACXhxd<H9k75I
z<{@UFiz<mneY9D1>kT7*&$dk58VX$W-AWn<FWua%=TBi4{K2+x<Kh&!xlH~PF61iZ
zni0P*XcV})%1Sw1EqO1cn?`QgJ<U=_s&4*Nr1=Cz{eG}Ph~SEGfl(dJuTpW3rJl@!
zGYUm@1$6L3te^76t6sf4-CYi+#{Fmi_|N)@x68NYJ}vM4om&p5u2+BHcc187WZk`;
z3swX(;2IxCHim7V^%7CzIvv=Zx$dscMQb8Rjz!oMYX|;~He<cESuMFsEM>RwYXln<
zIX`~cpuys}pYn-Ztu?C2({F4h<1vg*wp=-#XX4zz`4UUoRBlBO6l4%ZP-Mxe7}|lE
zByeFwEC8p)HO>Q6hQvhu92?<GV8e)*5*lHg2`FbGBqmzJu~E*HEr>E=N@!Ft0p(1D
z#FSYuuLPFY6XQ$?jR~e~JVI6Hn5J`#GbMCPFlFNrsyfFto#UJ-q2szvgsRR7P3MHB
zb3)gNP}ON_I!#Tdsp~|j>ckUB>FG&L=cKL^p{mo;bXuBDOV^1|)fv}x#x<RBT_-|S
zC!U4M{j@cmwyqPQs&h)yIi=~G(sd$KbxvzKr!}3^x=w_u&KXVTjHYu&*NITo>1a9~
zO{b&lM5yXaXgU*`&V;TLp{g^f=}c-mle$iXs?J$W=d7l4R@aG8)tS<CrZk-?T_-|S
z=Ppg>E=}hyT_-|S=Wb2sZcXQIT_-|SC*CA5`s8zHPSZK3>qMyP+@tB-qv_nE>qMyP
z+{>BS@JYCrGjrj?Zm(dvk3wR4A$DG4&TGthorzFo?$en2H0C~?iBM(k*O>b?=6;=t
zP-Py_m<KfG0iB6ZWgg^A=`sg7Q$FJl3Z{H;BP6E0g9}>D1uf@-o)e+SX~ICWih&mD
zC8XwNYTz7+Ljy&Cv?7QikV#>n0>>@MV8oK`Gmun3w+$4blm-J8SZSaNlnhirw+$2_
zS|bfqV8e)Vss<{c+XjjdE#g=hsKAC%sF6d-Km}BWs!kZFsFpKfpbC@>6rprQGEjt4
zCk#|zITHq|K*>M_l;<P^MJRQ`Kn0dFVW0|>3{*fllMEE0)CmI>Sk8ojDo`>|0p(0G
zP=xY+!axO<Ghv_#lnhirIg<<&q0|Wj6<E%MfhtfkPyyvkGEjt4Ck#|zITHq|K*>M_
zlrzad5lWpfP=V!47^ngz0~JutBm+e#b;3XemNQ|X3X}{~Ksl2P6rt1!0~J`#gn=qh
zGEf2KOfpb}QYQ>lU^x>8szAv=1(Y+%KoLrvFi?TzOc<yFB?A>u&LjgxD0RX>1(q{m
zpbC@>R6seC3>2Z%2?G^a&V+#~P%=;f<xDbAgi<FARA4z12C6{GKn0XD$v_cGoiI>=
z<xCi;0wn_#P|hR+MJRQ`Kn0dFVW0|>3{*fllMEE0)CmI>Sk8ojDo`>|0p(0GP=rz^
z3{+q_69%e4$v_2^Gs!>^N}VuJf#pmXr~)Me6;RG314Srx!axxz28u{EP=u<1B2)}i
zVZuNaCK;&0Bm-5LFi?dF167!0pbC==RAItE6($T+VUmF=Ofpb~2?JG_Fi?d_2C6X0
zKouqo6p_5UFi=FeW4trTKoR0L$dH(_Z(*Q_WZ%L-5y`$K14StNmJAdjmWt+Euu#^u
zJN%#39{odlXPkbr&FkNOI!gbg(y9incNo>$*(@CQY>o~t9Xyj^?d5Eq&X?#=phMA2
z&6dt$HK03r)!N*^BFkjYil>g3&bqZp0BV`Uv=#r+IGf}vL08yKDznCLECp9LtQkgU
zXhaQ5FUu2nN65-04;#xhv>0Tf+4aP3YxYOeY%&UWV|acuJc+S-k(%M_Ks{;#T9ZvB
zT3)HnuF<)*$xCKvJ&FP0)_6SEs@Lkq&5f`Pl%C4N?74yLSUmaC*>g(v9M7IZ`_$~Y
z)adNF=2n@si={Ly_l4T6$YZ2;mGmx8?+&DQcS!Gs>AeB*78bO=w0ciT@A35BKzeV7
z^j?@gKbSuD80qs$`aDmcA4s3yA$>kfzciSB>@m_WDe0GZ`lW&NOFN`r3ey(`(~mz!
z`ht?az|$87(ie6}UkK9|2h&eHM*5<XzR1%T2htaJNMA(y7(L1c(>o%)e0bos0@Gcr
z-|A(laX#hoL-RVUZg3h~3B5YJYYbH<%0^Z<G%u5Sb$9_8s!o)RtZrzYVF{S#nG}Y%
zlA-EE*~sdK=1oek4zDUh)rqo^)eX&qU3ztRe;KMyl#Q%zXk72stHVppP<5hgWOYO1
zbWX1hZ#qNOi88%9XWygs?5&x4IftzlY^&H&tob`<zAJVoiq`o@EB@75zAi_pg~<tT
znsTcv>Edf|2CrWJe){!S-gxcu>uayP!J39!T|OO-)+x&>7$4go+lGx*eRvWroqudw
zcJunzD9Ez3Tg_Jtw^XScZms;Hfd%jk9hM9hb=|UuMfM~iqi#LFoCe&>HquyYE>%iq
zz^D+T{@8i02MrR9oXWb@QYN#Qp=OAUqp?)NZ7jVU=~|r)_O6@BWG<wN<5<1VPBx!7
z|8z`lZIDnBK+Erg#Yh$~zdPR<yg(8HYX-~iLsynh_xx0OjAT7tB4e;B)G)IC)Y5LZ
z?pDklWSDLxe=Zs62W%}c*6(`B&bkq>^}z50*1o%J|D@v^x7SeQ2Wx{Vx!}P<+?4Lz
z?ZHyqmbH(%lbyEJN1Bg<QP;bNhkB0gd$4wc+Sq27+i2CDXS*kcV0>iY_N`gbV5nno
z5xmQ5w6U0viJe8NaFwmmEr+(X=Hh5^Lv^7&msdx3b9vG|Q*YQibo1!ZGE_f0FSszG
zZvrbaSW3`~gN5kkk*#aK^Bj$7%zyl0dSG0eJsRDZ0p)BX5w}c+)dqSGO-*Cdv=JvU
zY~#Yk)ILm}LN(vYXO6OP#?wTiG3A_z(Ir0d!#S0ChNZp*>>{I%*xnHoJ61|T)vfTB
zY6vHL<b3#WIqWWbs2I-?>l3q;%T?$HTZwY&J|m`X(vI*AU^?K1Y<uq)==w^v%1pOW
zTV{0b@^hD^ax>bdus3yU^scZdjsN9e8y9|mv;3RqKbUu3`19vG>=jcQYk`hmZpRpV
z#7C)te5;w$yx5KjZJ>`MF>N-VAjWahLnkLdb!prAgu$=zjp*K`JJZ@$bn4_JEVRLb
zWMq@i>dM6JwzV0&-L@iZHMshEdlml&v9+$#Sp#TN>`!@YoeGTYf-DLWoi!`6U6Kn+
zqT3}do*dV%tXdAEeY_j%j0IQQsQbpc!GiT0K274DFJ4u=am&~C!N2JDU-*KLP<E23
zSJDsr{no=#SAKyjz3@te-I!t#tR=Be*xzxC!t3-AmBG6TjVN1;DEvIeh_e4bq89mg
z68N~`e`G|R#EAOg$PwjZ-28mxh`NI-cZWul&9Gi5M$}>PggW59w2TimlmYZC!@DIt
zQhSE8JTG23`0h6U=%eJsV}=}{;$L)|6^sY=1KTiBZs6?-f1hFeKCrjrDa<0q{>>)?
zzfS+)_xtn@KK(_sX~Ilw$p1=QVZT%-5W9yfpNChTSFdcMixxt;U9Gk*I-(n1F|;cm
z;R<gTDr_1zpTwsuVcU3ylN12(f^21&mkb6|5?}AcU>f4<9sXrZ7N#Ly8L3=(R}>#~
z#Kq%+zbmH4)8hvJ0_0SzTCCQ@4WQ3!9#w)qAMxHfT-QDOo^9;GGve##k-mPcl^l;5
za6p_NZzau`6~W_?q7!t5L^$7hbXS{OPc5Bu#s33SXY=J1*y~Y3<0BWf#m<;BMsK&y
z$qDXlPH~(DMRO}&&t$~6H0Y9V)HBbC0QcYE18LD?_?QC~9+fA@=%k4k^2Lw{Y(x}1
MB1c&Y2`UZtUm5%vhyVZp

literal 0
HcmV?d00001

diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index 7c7f9fbc44..dfb8523c8b 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1,3 +1 @@
 /* List of comma-separated changed AML files to ignore */
-"tests/data/acpi/q35/DSDT.cxl",
-"tests/data/acpi/q35/CEDT.cxl",
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 40/46] qtest/cxl: Add more complex test cases with CFMWs
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

From: Ben Widawsky <ben.widawsky@intel.com>

Add CXL Fixed Memory Windows to the CXL tests.

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Co-developed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 tests/qtest/cxl-test.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/tests/qtest/cxl-test.c b/tests/qtest/cxl-test.c
index 148bc94340..0bb93b0191 100644
--- a/tests/qtest/cxl-test.c
+++ b/tests/qtest/cxl-test.c
@@ -9,11 +9,13 @@
 #include "libqtest-single.h"
 
 #define QEMU_PXB_CMD "-machine q35,cxl=on " \
-                     "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "
+                     "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "  \
+                     "-cxl-fixed-memory-window targets.0=cxl.0,size=4G "
 
-#define QEMU_2PXB_CMD "-machine q35,cxl=on " \
+#define QEMU_2PXB_CMD "-machine q35,cxl=on "                            \
                       "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "  \
-                      "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 "
+                      "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 " \
+                      "-cxl-fixed-memory-window targets.0=cxl.0,targets.1=cxl.1,size=4G "
 
 #define QEMU_RP "-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 "
 
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 41/46] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Code is based on the i386/pc enablement.
The memory layout places space for 16 host bridge register regions after
the GIC_REDIST2 region in the extended memmap.
The CFMWs are placed above the extended memmap.

Only create the CEDT table if cxl=on is set for the machine.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 hw/arm/virt-acpi-build.c | 33 +++++++++++++++++++++++++++++++++
 hw/arm/virt.c            | 40 +++++++++++++++++++++++++++++++++++++++-
 include/hw/arm/virt.h    |  1 +
 3 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 449fab0080..86a2f40437 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -39,9 +39,11 @@
 #include "hw/acpi/aml-build.h"
 #include "hw/acpi/utils.h"
 #include "hw/acpi/pci.h"
+#include "hw/acpi/cxl.h"
 #include "hw/acpi/memory_hotplug.h"
 #include "hw/acpi/generic_event_device.h"
 #include "hw/acpi/tpm.h"
+#include "hw/cxl/cxl.h"
 #include "hw/pci/pcie_host.h"
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_bus.h"
@@ -157,10 +159,29 @@ static void acpi_dsdt_add_virtio(Aml *scope,
     }
 }
 
+/* Uses local definition of AcpiBuildState so can't easily be common code */
+static void build_acpi0017(Aml *table)
+{
+    Aml *dev, *scope, *method;
+
+    scope =  aml_scope("_SB");
+    dev = aml_device("CXLM");
+    aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0017")));
+
+    method = aml_method("_STA", 0, AML_NOTSERIALIZED);
+    aml_append(method, aml_return(aml_int(0x01)));
+    aml_append(dev, method);
+
+    aml_append(scope, dev);
+    aml_append(table, scope);
+}
+
 static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
                               uint32_t irq, VirtMachineState *vms)
 {
     int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
+    bool cxl_present = false;
+    PCIBus *bus = vms->bus;
     struct GPEXConfig cfg = {
         .mmio32 = memmap[VIRT_PCIE_MMIO],
         .pio    = memmap[VIRT_PCIE_PIO],
@@ -174,6 +195,14 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
     }
 
     acpi_dsdt_add_gpex(scope, &cfg);
+    QLIST_FOREACH(bus, &vms->bus->child, sibling) {
+        if (pci_bus_is_cxl(bus)) {
+            cxl_present = true;
+        }
+    }
+    if (cxl_present) {
+        build_acpi0017(scope);
+    }
 }
 
 static void acpi_dsdt_add_gpio(Aml *scope, const MemMapEntry *gpio_memmap,
@@ -991,6 +1020,10 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
                        vms->oem_table_id);
         }
     }
+    if (ms->cxl_devices_state->is_enabled) {
+        cxl_build_cedt(ms, table_offsets, tables_blob, tables->linker,
+                       vms->oem_id, vms->oem_table_id);
+    }
 
     if (ms->nvdimms_state->is_enabled) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 6c0f2ef9c7..31fdebc7c7 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -78,6 +78,7 @@
 #include "hw/virtio/virtio-mem-pci.h"
 #include "hw/virtio/virtio-iommu.h"
 #include "hw/char/pl011.h"
+#include "hw/cxl/cxl.h"
 #include "qemu/guest-random.h"
 
 #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
@@ -178,6 +179,7 @@ static const MemMapEntry base_memmap[] = {
 static MemMapEntry extended_memmap[] = {
     /* Additional 64 MB redist region (can contain up to 512 redistributors) */
     [VIRT_HIGH_GIC_REDIST2] =   { 0x0, 64 * MiB },
+    [VIRT_CXL_HOST] =           { 0x0, 64 * KiB * 16 }, /* 16 UID */
     [VIRT_HIGH_PCIE_ECAM] =     { 0x0, 256 * MiB },
     /* Second PCIe window */
     [VIRT_HIGH_PCIE_MMIO] =     { 0x0, 512 * GiB },
@@ -1508,6 +1510,17 @@ static void create_pcie(VirtMachineState *vms)
     }
 }
 
+static void create_cxl_host_reg_region(VirtMachineState *vms)
+{
+    MemoryRegion *sysmem = get_system_memory();
+    MachineState *ms = MACHINE(vms);
+    MemoryRegion *mr = &ms->cxl_devices_state->host_mr;
+
+    memory_region_init(mr, OBJECT(ms), "cxl_host_reg",
+                       vms->memmap[VIRT_CXL_HOST].size);
+    memory_region_add_subregion(sysmem, vms->memmap[VIRT_CXL_HOST].base, mr);
+}
+
 static void create_platform_bus(VirtMachineState *vms)
 {
     DeviceState *dev;
@@ -1670,7 +1683,7 @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
 static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
 {
     MachineState *ms = MACHINE(vms);
-    hwaddr base, device_memory_base, device_memory_size, memtop;
+    hwaddr base, device_memory_base, device_memory_size, memtop, cxl_fmw_base;
     int i;
 
     vms->memmap = extended_memmap;
@@ -1762,6 +1775,20 @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
         memory_region_init(&ms->device_memory->mr, OBJECT(vms),
                            "device-memory", device_memory_size);
     }
+
+    if (ms->cxl_devices_state->fixed_windows) {
+        GList *it;
+
+        cxl_fmw_base = ROUND_UP(base, 256 * MiB);
+        for (it = ms->cxl_devices_state->fixed_windows; it; it = it->next) {
+            CXLFixedWindow *fw = it->data;
+
+            fw->base = cxl_fmw_base;
+            memory_region_init_io(&fw->mr, OBJECT(vms), &cfmws_ops, fw,
+                                  "cxl-fixed-memory-region", fw->size);
+            cxl_fmw_base += fw->size;
+        }
+    }
 }
 
 /*
@@ -2164,6 +2191,15 @@ static void machvirt_init(MachineState *machine)
         memory_region_add_subregion(sysmem, machine->device_memory->base,
                                     &machine->device_memory->mr);
     }
+    if (machine->cxl_devices_state->fixed_windows) {
+        GList *it;
+        for (it = machine->cxl_devices_state->fixed_windows; it;
+             it = it->next) {
+            CXLFixedWindow *fw = it->data;
+
+            memory_region_add_subregion(sysmem, fw->base, &fw->mr);
+        }
+    }
 
     virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
 
@@ -2190,6 +2226,7 @@ static void machvirt_init(MachineState *machine)
     create_rtc(vms);
 
     create_pcie(vms);
+    create_cxl_host_reg_region(vms);
 
     if (has_ged && aarch64 && firmware_loaded && virt_is_acpi_enabled(vms)) {
         vms->acpi_dev = create_acpi_ged(vms);
@@ -2845,6 +2882,7 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
     hc->unplug = virt_machine_device_unplug_cb;
     mc->nvdimm_supported = true;
     mc->smp_props.clusters_supported = true;
+    mc->cxl_supported = true;
     mc->auto_enable_numa_with_memhp = true;
     mc->auto_enable_numa_with_memdev = true;
     mc->default_ram_id = "mach-virt.ram";
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index c1ea17d0de..097e1f0c36 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -92,6 +92,7 @@ enum {
 /* indices of IO regions located after the RAM */
 enum {
     VIRT_HIGH_GIC_REDIST2 =  VIRT_LOWMEMMAP_LAST,
+    VIRT_CXL_HOST,
     VIRT_HIGH_PCIE_ECAM,
     VIRT_HIGH_PCIE_MMIO,
 };
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 41/46] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl
@ 2022-03-06 17:41   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Code based on i386/pc enablement.
The memory layout places space for 16 host bridge register regions after
the GIC_REDIST2 in the extended memmap.
The CFMWs are placed above the extended memmap.

Only create the CEDT table if cxl=on set for the machine.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 hw/arm/virt-acpi-build.c | 33 +++++++++++++++++++++++++++++++++
 hw/arm/virt.c            | 40 +++++++++++++++++++++++++++++++++++++++-
 include/hw/arm/virt.h    |  1 +
 3 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 449fab0080..86a2f40437 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -39,9 +39,11 @@
 #include "hw/acpi/aml-build.h"
 #include "hw/acpi/utils.h"
 #include "hw/acpi/pci.h"
+#include "hw/acpi/cxl.h"
 #include "hw/acpi/memory_hotplug.h"
 #include "hw/acpi/generic_event_device.h"
 #include "hw/acpi/tpm.h"
+#include "hw/cxl/cxl.h"
 #include "hw/pci/pcie_host.h"
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_bus.h"
@@ -157,10 +159,29 @@ static void acpi_dsdt_add_virtio(Aml *scope,
     }
 }
 
+/* Uses local definition of AcpiBuildState so can't easily be common code */
+static void build_acpi0017(Aml *table)
+{
+    Aml *dev, *scope, *method;
+
+    scope =  aml_scope("_SB");
+    dev = aml_device("CXLM");
+    aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0017")));
+
+    method = aml_method("_STA", 0, AML_NOTSERIALIZED);
+    aml_append(method, aml_return(aml_int(0x01)));
+    aml_append(dev, method);
+
+    aml_append(scope, dev);
+    aml_append(table, scope);
+}
+
 static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
                               uint32_t irq, VirtMachineState *vms)
 {
     int ecam_id = VIRT_ECAM_ID(vms->highmem_ecam);
+    bool cxl_present = false;
+    PCIBus *bus = vms->bus;
     struct GPEXConfig cfg = {
         .mmio32 = memmap[VIRT_PCIE_MMIO],
         .pio    = memmap[VIRT_PCIE_PIO],
@@ -174,6 +195,14 @@ static void acpi_dsdt_add_pci(Aml *scope, const MemMapEntry *memmap,
     }
 
     acpi_dsdt_add_gpex(scope, &cfg);
+    QLIST_FOREACH(bus, &vms->bus->child, sibling) {
+        if (pci_bus_is_cxl(bus)) {
+            cxl_present = true;
+        }
+    }
+    if (cxl_present) {
+        build_acpi0017(scope);
+    }
 }
 
 static void acpi_dsdt_add_gpio(Aml *scope, const MemMapEntry *gpio_memmap,
@@ -991,6 +1020,10 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
                        vms->oem_table_id);
         }
     }
+    if (ms->cxl_devices_state->is_enabled) {
+        cxl_build_cedt(ms, table_offsets, tables_blob, tables->linker,
+                       vms->oem_id, vms->oem_table_id);
+    }
 
     if (ms->nvdimms_state->is_enabled) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 6c0f2ef9c7..31fdebc7c7 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -78,6 +78,7 @@
 #include "hw/virtio/virtio-mem-pci.h"
 #include "hw/virtio/virtio-iommu.h"
 #include "hw/char/pl011.h"
+#include "hw/cxl/cxl.h"
 #include "qemu/guest-random.h"
 
 #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
@@ -178,6 +179,7 @@ static const MemMapEntry base_memmap[] = {
 static MemMapEntry extended_memmap[] = {
     /* Additional 64 MB redist region (can contain up to 512 redistributors) */
     [VIRT_HIGH_GIC_REDIST2] =   { 0x0, 64 * MiB },
+    [VIRT_CXL_HOST] =           { 0x0, 64 * KiB * 16 }, /* 16 UID */
     [VIRT_HIGH_PCIE_ECAM] =     { 0x0, 256 * MiB },
     /* Second PCIe window */
     [VIRT_HIGH_PCIE_MMIO] =     { 0x0, 512 * GiB },
@@ -1508,6 +1510,17 @@ static void create_pcie(VirtMachineState *vms)
     }
 }
 
+static void create_cxl_host_reg_region(VirtMachineState *vms)
+{
+    MemoryRegion *sysmem = get_system_memory();
+    MachineState *ms = MACHINE(vms);
+    MemoryRegion *mr = &ms->cxl_devices_state->host_mr;
+
+    memory_region_init(mr, OBJECT(ms), "cxl_host_reg",
+                       vms->memmap[VIRT_CXL_HOST].size);
+    memory_region_add_subregion(sysmem, vms->memmap[VIRT_CXL_HOST].base, mr);
+}
+
 static void create_platform_bus(VirtMachineState *vms)
 {
     DeviceState *dev;
@@ -1670,7 +1683,7 @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
 static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
 {
     MachineState *ms = MACHINE(vms);
-    hwaddr base, device_memory_base, device_memory_size, memtop;
+    hwaddr base, device_memory_base, device_memory_size, memtop, cxl_fmw_base;
     int i;
 
     vms->memmap = extended_memmap;
@@ -1762,6 +1775,20 @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
         memory_region_init(&ms->device_memory->mr, OBJECT(vms),
                            "device-memory", device_memory_size);
     }
+
+    if (ms->cxl_devices_state->fixed_windows) {
+        GList *it;
+
+        cxl_fmw_base = ROUND_UP(base, 256 * MiB);
+        for (it = ms->cxl_devices_state->fixed_windows; it; it = it->next) {
+            CXLFixedWindow *fw = it->data;
+
+            fw->base = cxl_fmw_base;
+            memory_region_init_io(&fw->mr, OBJECT(vms), &cfmws_ops, fw,
+                                  "cxl-fixed-memory-region", fw->size);
+            cxl_fmw_base += fw->size;
+        }
+    }
 }
 
 /*
@@ -2164,6 +2191,15 @@ static void machvirt_init(MachineState *machine)
         memory_region_add_subregion(sysmem, machine->device_memory->base,
                                     &machine->device_memory->mr);
     }
+    if (machine->cxl_devices_state->fixed_windows) {
+        GList *it;
+        for (it = machine->cxl_devices_state->fixed_windows; it;
+             it = it->next) {
+            CXLFixedWindow *fw = it->data;
+
+            memory_region_add_subregion(sysmem, fw->base, &fw->mr);
+        }
+    }
 
     virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
 
@@ -2190,6 +2226,7 @@ static void machvirt_init(MachineState *machine)
     create_rtc(vms);
 
     create_pcie(vms);
+    create_cxl_host_reg_region(vms);
 
     if (has_ged && aarch64 && firmware_loaded && virt_is_acpi_enabled(vms)) {
         vms->acpi_dev = create_acpi_ged(vms);
@@ -2845,6 +2882,7 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
     hc->unplug = virt_machine_device_unplug_cb;
     mc->nvdimm_supported = true;
     mc->smp_props.clusters_supported = true;
+    mc->cxl_supported = true;
     mc->auto_enable_numa_with_memhp = true;
     mc->auto_enable_numa_with_memdev = true;
     mc->default_ram_id = "mach-virt.ram";
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index c1ea17d0de..097e1f0c36 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -92,6 +92,7 @@ enum {
 /* indices of IO regions located after the RAM */
 enum {
     VIRT_HIGH_GIC_REDIST2 =  VIRT_LOWMEMMAP_LAST,
+    VIRT_CXL_HOST,
     VIRT_HIGH_PCIE_ECAM,
     VIRT_HIGH_PCIE_MMIO,
 };
-- 
2.32.0




* [PATCH v7 42/46] qtest/cxl: Add aarch64 virt test for CXL
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Add a single complex case for aarch64 virt machine.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 tests/qtest/cxl-test.c  | 48 +++++++++++++++++++++++++++++++++--------
 tests/qtest/meson.build |  1 +
 2 files changed, 40 insertions(+), 9 deletions(-)

diff --git a/tests/qtest/cxl-test.c b/tests/qtest/cxl-test.c
index 0bb93b0191..c96469a435 100644
--- a/tests/qtest/cxl-test.c
+++ b/tests/qtest/cxl-test.c
@@ -17,6 +17,11 @@
                       "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 " \
                       "-cxl-fixed-memory-window targets.0=cxl.0,targets.1=cxl.1,size=4G "
 
+#define QEMU_VIRT_2PXB_CMD "-machine virt,cxl=on "                      \
+                      "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "  \
+                      "-device pxb-cxl,id=cxl.1,bus=pcie.0,bus_nr=53 "  \
+                      "-cxl-fixed-memory-window targets.0=cxl.0,targets.1=cxl.1,size=4G "
+
 #define QEMU_RP "-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 "
 
 /* Dual ports on first pxb */
@@ -134,18 +139,43 @@ static void cxl_2pxb_4rp_4t3d(void)
     qtest_end();
 }
 
+static void cxl_virt_2pxb_4rp_4t3d(void)
+{
+    g_autoptr(GString) cmdline = g_string_new(NULL);
+    char template[] = "/tmp/cxl-test-XXXXXX";
+    const char *tmpfs;
+
+    tmpfs = mkdtemp(template);
+
+    g_string_printf(cmdline, QEMU_VIRT_2PXB_CMD QEMU_4RP QEMU_4T3D,
+                    tmpfs, tmpfs, tmpfs, tmpfs, tmpfs, tmpfs,
+                    tmpfs, tmpfs);
+
+    qtest_start(cmdline->str);
+    qtest_end();
+}
+
 int main(int argc, char **argv)
 {
+    const char *arch = qtest_get_arch();
+
     g_test_init(&argc, &argv, NULL);
 
-    qtest_add_func("/pci/cxl/basic_hostbridge", cxl_basic_hb);
-    qtest_add_func("/pci/cxl/basic_pxb", cxl_basic_pxb);
-    qtest_add_func("/pci/cxl/pxb_with_window", cxl_pxb_with_window);
-    qtest_add_func("/pci/cxl/pxb_x2_with_window", cxl_2pxb_with_window);
-    qtest_add_func("/pci/cxl/rp", cxl_root_port);
-    qtest_add_func("/pci/cxl/rp_x2", cxl_2root_port);
-    qtest_add_func("/pci/cxl/type3_device", cxl_t3d);
-    qtest_add_func("/pci/cxl/rp_x2_type3_x2", cxl_1pxb_2rp_2t3d);
-    qtest_add_func("/pci/cxl/pxb_x2_root_port_x4_type3_x4", cxl_2pxb_4rp_4t3d);
+    if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) {
+        qtest_add_func("/pci/cxl/basic_hostbridge", cxl_basic_hb);
+        qtest_add_func("/pci/cxl/basic_pxb", cxl_basic_pxb);
+        qtest_add_func("/pci/cxl/pxb_with_window", cxl_pxb_with_window);
+        qtest_add_func("/pci/cxl/pxb_x2_with_window", cxl_2pxb_with_window);
+        qtest_add_func("/pci/cxl/rp", cxl_root_port);
+        qtest_add_func("/pci/cxl/rp_x2", cxl_2root_port);
+        qtest_add_func("/pci/cxl/type3_device", cxl_t3d);
+        qtest_add_func("/pci/cxl/rp_x2_type3_x2", cxl_1pxb_2rp_2t3d);
+        qtest_add_func("/pci/cxl/pxb_x2_root_port_x4_type3_x4",
+                       cxl_2pxb_4rp_4t3d);
+    } else if (strcmp(arch, "aarch64") == 0) {
+        qtest_add_func("/pci/cxl/virt/pxb_x2_root_port_x4_type3_x4",
+                       cxl_virt_2pxb_4rp_4t3d);
+    }
+
     return g_test_run();
 }
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 7e072d9a84..f384b2096d 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -222,6 +222,7 @@ qtests_aarch64 = \
   (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) +        \
   (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) +  \
   (config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
+  qtests_cxl +                                                                                  \
   ['arm-cpu-features',
    'numa-test',
    'boot-serial-test',
-- 
2.32.0




* [PATCH v7 43/46] docs/cxl: Add initial Compute eXpress Link (CXL) documentation.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Provide an introduction to the main components of a CXL system,
with detailed explanation of memory interleaving, example command
lines and kernel configuration.

This was a challenging document to write due to the need to extract
only that subset of CXL information which is relevant to either
users of QEMU emulation of CXL or to those interested in the
implementation.  Much of CXL is concerned with specific elements of
the protocol, management of memory pooling, etc., which is simply
not relevant to what is currently planned for CXL emulation
in QEMU.  All comments welcome.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 docs/system/device-emulation.rst |   1 +
 docs/system/devices/cxl.rst      | 302 +++++++++++++++++++++++++++++++
 2 files changed, 303 insertions(+)

diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index 0b3a3d73ad..2da2bd5d64 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -83,6 +83,7 @@ Emulated Devices
    :maxdepth: 1
 
    devices/can.rst
+   devices/cxl.rst
    devices/ivshmem.rst
    devices/net.rst
    devices/nvme.rst
diff --git a/docs/system/devices/cxl.rst b/docs/system/devices/cxl.rst
new file mode 100644
index 0000000000..6871c26efd
--- /dev/null
+++ b/docs/system/devices/cxl.rst
@@ -0,0 +1,302 @@
+Compute Express Link (CXL)
+==========================
+From the view of a single host, CXL is an interconnect standard that
+targets accelerators and memory devices attached to a CXL host.
+This description will focus on those aspects visible either to
+software running on a QEMU emulated host or to the internals of
+functional emulation. As such, it will skip over many of the
+electrical and protocol elements that are of more interest
+for real hardware and that dominate more general introductions to CXL.
+It will also completely ignore the fabric management aspects of CXL
+by considering only a single host and a static configuration.
+
+CXL shares many concepts and much of the infrastructure of PCI Express,
+with CXL Host Bridges, which have CXL Root Ports which may be directly
+attached to CXL or PCI End Points. Alternatively there may be CXL Switches
+with CXL and PCI Endpoints attached below them.  In many cases additional
+control and capabilities are exposed via PCI Express interfaces.
+This sharing of interfaces, and hence of emulation code, is reflected
+in how the devices are emulated in QEMU. In most cases the various
+CXL elements are built upon equivalent PCIe devices.
+
+CXL devices support the following interfaces:
+
+* Most conventional PCIe interfaces
+
+  - Configuration space access
+  - BAR mapped memory accesses used for registers and mailboxes.
+  - MSI/MSI-X
+  - AER
+  - DOE mailboxes
+  - IDE
+  - Many other PCI Express defined interfaces.
+
+* Memory operations
+
+  - Equivalent of accessing DRAM / NVDIMMs. Any access / feature
+    supported by the host for normal memory should also work for
+    CXL attached memory devices.
+
+* Cache operations. These are mostly irrelevant to QEMU emulation as
+  QEMU is not emulating a coherency protocol. Any emulation related
+  to these will be device specific and is out of scope for this
+  document.
+
+CXL 2.0 Device Types
+--------------------
+CXL 2.0 End Points are often categorized into three types.
+
+**Type 1:** These support coherent caching of host memory.  An example
+might be a crypto accelerator.  They may also have device private memory,
+accessible via means such as PCI memory reads and writes to BARs.
+
+**Type 2:** These support coherent caching of host memory and host
+managed device memory (HDM) for which the coherency protocol is managed
+by the host. This is a complex topic, so for more information on CXL
+coherency see the CXL 2.0 specification.
+
+**Type 3 Memory devices:**  These devices act as a means of attaching
+additional memory (HDM) to a CXL host including both volatile and
+persistent memory. The CXL topology may support interleaving across a
+number of Type 3 memory devices using HDM Decoders in the host, host
+bridge, switch upstream port and endpoints.
+
+Scope of CXL emulation in QEMU
+------------------------------
+The focus of CXL emulation is CXL revision 2.0 and later. Earlier CXL
+revisions defined a smaller set of features, leaving much of the control
+interface as implementation defined or device specific, making generic
+emulation challenging with host specific firmware being responsible
+for setup and the Endpoints being presented to operating systems
+as Root Complex Integrated End Points. CXL rev 2.0 looks a lot
+more like PCI Express, with fully specified discoverability
+of the CXL topology.
+
+CXL System components
+----------------------
+A CXL system is made up of a Host with a number of 'standard components',
+the control and capabilities of which are discoverable by system software
+using means described in the CXL 2.0 specification.
+
+CXL Fixed Memory Windows (CFMW)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+A CFMW consists of a particular range of Host Physical Address space
+which is routed to particular CXL Host Bridges.  At the time of generic
+software initialization it will have a particular interleaving
+configuration and an associated Quality of Service Throttling Group (QTG).
+This information is available to system software when making
+decisions about how to configure interleave across available CXL
+memory devices.  It is provided as CFMW Structures (CFMWS) in
+the CXL Early Discovery Table (CEDT), an ACPI table.
+
+Note: QTG 0 is the only one currently supported in QEMU.
+
+CXL Host Bridge (CXL HB)
+~~~~~~~~~~~~~~~~~~~~~~~~
+A CXL host bridge is similar to the PCIe equivalent, but with a
+specification defined register interface called CXL Host Bridge
+Component Registers (CHBCR). The location of this CHBCR MMIO
+space is described to system software via a CXL Host Bridge
+Structure (CHBS) in the CEDT ACPI table.  The actual interfaces
+are identical to those used for other parts of the CXL hierarchy
+as CXL Component Registers in PCI BARs.
+
+Interfaces provided include:
+
+* Configuration of HDM Decoders to route CXL Memory accesses with
+  a particular Host Physical Address range to the target port
+  below which the CXL device servicing that address lies.  This
+  may be a mapping to a single Root Port (RP) or across a set of
+  target RPs.
+
+CXL Root Ports (CXL RP)
+~~~~~~~~~~~~~~~~~~~~~~~
+A CXL Root Port serves the same purpose as a PCIe Root Port.
+There are a number of CXL specific Designated Vendor Specific
+Extended Capabilities (DVSEC) in PCIe Configuration Space
+and associated component register access via PCI BARs.
+
+CXL Switch
+~~~~~~~~~~
+Not yet implemented in QEMU.
+
+Here we consider a simple CXL switch with only a single
+virtual hierarchy. Whilst more complex devices exist, their
+visibility to a particular host is generally the same as for
+a simple switch design. Hosts often have no awareness
+of complex rerouting and device pooling; they simply see
+devices being hot added or hot removed.
+
+A CXL switch has a similar architecture to those in PCIe,
+with a single upstream port, internal PCI bus and multiple
+downstream ports.
+
+Both the CXL upstream and downstream ports have CXL specific
+DVSECs in configuration space, and component registers in PCI
+BARs.  The Upstream Port has the configuration interfaces for
+the HDM decoders which route incoming memory accesses to the
+appropriate downstream port.
+
+CXL Memory Devices - Type 3
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+CXL type 3 devices use a PCI class code and are intended to be supported
+by a generic operating system driver. They have HDM decoders,
+though in these EP devices the decoder is responsible not for
+routing but for translation of the incoming host physical address (HPA)
+into a Device Physical Address (DPA).
+
+CXL Memory Interleave
+---------------------
+To understand the interaction of different CXL hardware components which
+are emulated in QEMU, let us consider a memory read in a fully configured
+CXL topology.  Note that system software is responsible for configuration
+of all components with the exception of the CFMWs. System software is
+responsible for allocating appropriate ranges from within the CFMWs
+and exposing those via normal memory configurations as would be done
+for system RAM.
+
+Example system Topology. x marks the match in each decoder level::
+
+  |<------------------SYSTEM PHYSICAL ADDRESS MAP (1)----------------->|
+  |    __________   __________________________________   __________    |
+  |   |          | |                                  | |          |   |
+  |   | CFMW 0   | |  CXL Fixed Memory Window 1       | | CFMW 2   |   |
+  |   | HB0 only | |  Configured to interleave memory | | HB1 only |   |
+  |   |          | |  memory accesses across HB0/HB1  | |          |   |
+  |   |__________| |_____x____________________________| |__________|   |
+           |             |                     |             |
+           |             |                     |             |
+           |             |                     |             |
+           |       Interleave Decoder          |             |
+           |       Matches this HB             |             |
+           \_____________|                     |_____________/
+               __________|__________      _____|_______________
+              |                     |    |                     |
+       (2)    | CXL HB 0            |    | CXL HB 1            |
+              | HB IntLv Decoders   |    | HB IntLv Decoders   |
+              | PCI/CXL Root Bus 0c |    | PCI/CXL Root Bus 0d |
+              |                     |    |                     |
+              |___x_________________|    |_____________________|
+                  |                |       |               |
+                  |                |       |               |
+       A HB 0 HDM Decoder          |       |               |
+       matches this Port           |       |               |
+                  |                |       |               |
+       ___________|___   __________|__   __|_________   ___|_________
+   (3)|  Root Port 0  | | Root Port 1 | | Root Port 2| | Root Port 3 |
+      |  Appears in   | | Appears in  | | Appears in | | Appears in  |
+      |  PCI topology | | PCI Topology| | PCI Topo   | | PCI Topo    |
+      |  As 0c:00.0   | | as 0c:01.0  | | as de:00.0 | | as de:01.0  |
+      |_______________| |_____________| |____________| |_____________|
+            |                  |               |              |
+            |                  |               |              |
+       _____|_________   ______|______   ______|_____   ______|_______
+   (4)|     x         | |             | |            | |              |
+      | CXL Type3 0   | | CXL Type3 1 | | CXL Type3 2| | CXL Type 3 3 |
+      |               | |             | |            | |              |
+      | PMEM0(Vol LSA)| | PMEM1 (...) | | PMEM2 (...)| | PMEM3 (...)  |
+      | Decoder to go | |             | |            | |              |
+      | from host PA  | | PCI 0e:00.0 | | PCI df:00.0| | PCI e0:00.0  |
+      | to device PA  | |             | |            | |              |
+      | PCI as 0d:00.0| |             | |            | |              |
+      |_______________| |_____________| |____________| |______________|
+
+Notes:
+
+(1) **3 CXL Fixed Memory Windows (CFMW)** corresponding to different
+    ranges of the system physical address map.  Each CFMW has a
+    particular interleave setup across the CXL Host Bridges (HB).
+    CFMW0 provides uninterleaved access to HB0, CFMW2 provides
+    uninterleaved access to HB1. CFMW1 provides interleaved memory access
+    across HB0 and HB1.
+
+(2) **Two CXL Host Bridges**. Each of these has 2 CXL Root Ports and
+    programmable HDM decoders to route memory accesses either to
+    a single port or interleave them across multiple ports.
+    A complex configuration here might be to use the following HDM
+    decoders in HB0. HDM0 routes CFMW0 requests to RP0 and hence
+    part of CXL Type3 0. HDM1 routes CFMW0 requests from a
+    different region of the CFMW0 PA range to RP1 and hence part
+    of CXL Type3 1.  HDM2 routes yet another PA range from within
+    CFMW0 to be interleaved across RP0 and RP1, providing 2 way
+    interleave of part of the memory provided by CXL Type3 0 and
+    CXL Type3 1. HDM3 routes those interleaved accesses from
+    CFMW1 that target HB0 to RP0 and another part of the memory of
+    CXL Type3 0 (as part of a 2 way interleave at the system level
+    across, for example, CXL Type3 0 and CXL Type3 2).
+    HDM4 is used to enable system wide 4 way interleave across all
+    the present CXL Type3 devices, by interleaving those (interleaved)
+    requests that HB0 receives from CFMW1 across RP0 and
+    RP1 and hence to yet more regions of the memory of the
+    attached Type3 devices.  Note this is a representative subset
+    of the full range of possible HDM decoder configurations in this
+    topology.
+
+(3) **Four CXL Root Ports.** In this case the CXL Type 3 devices are
+    directly attached to these ports.
+
+(4) **Four CXL Type3 memory expansion devices.**  These will each have
+    HDM decoders, but in this case rather than performing interleave
+    they will take the Host Physical Addresses of accesses and map
+    them to their own local Device Physical Address Space (DPA).
+
+Example command lines
+---------------------
+A very simple setup with just one directly attached CXL Type 3 device::
+
+  qemu-system-aarch64 -M virt,gic-version=3,cxl=on -m 4g,maxmem=8G,slots=8 -cpu max \
+  ...
+  -object memory-backend-file,id=cxl-mem1,share=on,mem-path=/tmp/cxltest.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa1,share=on,mem-path=/tmp/lsa.raw,size=256M \
+  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
+  -device cxl-rp,port=0,bus=cxl.1,id=root_port13,chassis=0,slot=2 \
+  -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-lsa1,id=cxl-pmem0,size=256M \
+  -cxl-fixed-memory-window targets.0=cxl.1,size=4G
+
+A setup suitable for 4 way interleave. Only one fixed window provided, to enable 2 way
+interleave across 2 CXL host bridges.  Each host bridge has 2 CXL Root Ports, with
+the CXL Type3 device directly attached (no switches).::
+
+  qemu-system-aarch64 -M virt,gic-version=3,cxl=on -m 4g,maxmem=8G,slots=8 -cpu max \
+  ...
+  -object memory-backend-file,id=cxl-mem1,share=on,mem-path=/tmp/cxltest.raw,size=256M \
+  -object memory-backend-file,id=cxl-mem2,share=on,mem-path=/tmp/cxltest2.raw,size=256M \
+  -object memory-backend-file,id=cxl-mem3,share=on,mem-path=/tmp/cxltest3.raw,size=256M \
+  -object memory-backend-file,id=cxl-mem4,share=on,mem-path=/tmp/cxltest4.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa1,share=on,mem-path=/tmp/lsa.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa2,share=on,mem-path=/tmp/lsa2.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa3,share=on,mem-path=/tmp/lsa3.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa4,share=on,mem-path=/tmp/lsa4.raw,size=256M \
+  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
+  -device pxb-cxl,bus_nr=222,bus=pcie.0,id=cxl.2 \
+  -device cxl-rp,port=0,bus=cxl.1,id=root_port13,chassis=0,slot=2 \
+  -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-lsa1,id=cxl-pmem0,size=256M \
+  -device cxl-rp,port=1,bus=cxl.1,id=root_port14,chassis=0,slot=3 \
+  -device cxl-type3,bus=root_port14,memdev=cxl-mem2,lsa=cxl-lsa2,id=cxl-pmem1,size=256M \
+  -device cxl-rp,port=0,bus=cxl.2,id=root_port15,chassis=0,slot=5 \
+  -device cxl-type3,bus=root_port15,memdev=cxl-mem3,lsa=cxl-lsa3,id=cxl-pmem2,size=256M \
+  -device cxl-rp,port=1,bus=cxl.2,id=root_port16,chassis=0,slot=6 \
+  -device cxl-type3,bus=root_port16,memdev=cxl-mem4,lsa=cxl-lsa4,id=cxl-pmem3,size=256M \
+  -cxl-fixed-memory-window targets.0=cxl.1,targets.1=cxl.2,size=4G,interleave-granularity=8k
+
+Kernel Configuration Options
+----------------------------
+
+In Linux 5.18 the following options are necessary to make use of
+OS management of CXL memory devices as described here.
+
+* CONFIG_CXL_BUS
+* CONFIG_CXL_PCI
+* CONFIG_CXL_ACPI
+* CONFIG_CXL_PMEM
+* CONFIG_CXL_MEM
+* CONFIG_CXL_PORT
+* CONFIG_CXL_REGION
+
+References
+----------
+
+ - Consortium website for specifications etc:
+   http://www.computeexpresslink.org
+ - Compute Express Link Revision 2.0 specification, October 2020
+ - CEDT CFMWS & QTG _DSM ECN May 2021
-- 
2.32.0



* [PATCH v7 43/46] docs/cxl: Add initial Compute eXpress Link (CXL) documentation.
@ 2022-03-06 17:41   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Provide an introduction to the main components of a CXL system,
with detailed explanation of memory interleaving, example command
lines and kernel configuration.

This was a challenging document to write due to the need to extract
only that subset of CXL information which is relevant to either
users of QEMU emulation of CXL or to those interested in the
implementation.  Much of CXL is concerned with specific elements of
the protocol, management of memory pooling etc which is simply
not relevant to what is currently planned for CXL emulation
in QEMU.  All comments welcome

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 docs/system/device-emulation.rst |   1 +
 docs/system/devices/cxl.rst      | 302 +++++++++++++++++++++++++++++++
 2 files changed, 303 insertions(+)

diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index 0b3a3d73ad..2da2bd5d64 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -83,6 +83,7 @@ Emulated Devices
    :maxdepth: 1
 
    devices/can.rst
+   devices/cxl.rst
    devices/ivshmem.rst
    devices/net.rst
    devices/nvme.rst
diff --git a/docs/system/devices/cxl.rst b/docs/system/devices/cxl.rst
new file mode 100644
index 0000000000..6871c26efd
--- /dev/null
+++ b/docs/system/devices/cxl.rst
@@ -0,0 +1,302 @@
+Compute Express Link (CXL)
+==========================
+From the view of a single host, CXL is an interconnect standard that
+targets accelerators and memory devices attached to a CXL host.
+This description will focus on those aspects visible either to
+software running on a QEMU emulated host or to the internals of
+functional emulation. As such, it will skip over many of the
+electrical and protocol elements that would be more of interest
+for real hardware and will dominate more general introductions to CXL.
+It will also completely ignore the fabric management aspects of CXL
+by considering only a single host and a static configuration.
+
+CXL shares many concepts and much of the infrastructure of PCI Express,
+with CXL Host Bridges, which have CXL Root Ports which may be directly
+attached to CXL or PCI End Points. Alternatively there may be CXL Switches
+with CXL and PCI Endpoints attached below them.  In many cases additional
+control and capabilities are exposed via PCI Express interfaces.
+This sharing of interfaces and hence emulation code is is reflected
+in how the devices are emulated in QEMU. In most cases the various
+CXL elements are built upon an equivalent PCIe devices.
+
+CXL devices support the following interfaces:
+
+* Most conventional PCIe interfaces
+
+  - Configuration space access
+  - BAR mapped memory accesses used for registers and mailboxes.
+  - MSI/MSI-X
+  - AER
+  - DOE mailboxes
+  - IDE
+  - Many other PCI express defined interfaces..
+
+* Memory operations
+
+  - Equivalent of accessing DRAM / NVDIMMs. Any access / feature
+    supported by the host for normal memory should also work for
+    CXL attached memory devices.
+
+* Cache operations. The are mostly irrelevant to QEMU emulation as
+  QEMU is not emulating a coherency protocol. Any emulation related
+  to these will be device specific and is out of the scope of this
+  document.
+
+CXL 2.0 Device Types
+--------------------
+CXL 2.0 End Points are often categorized into three types.
+
+**Type 1:** These support coherent caching of host memory.  An example
+might be a crypto accelerator.  They may also have device private memory
+accessible via means such as PCI memory reads and writes to BARs.
+
+**Type 2:** These support coherent caching of host memory and host
+managed device memory (HDM) for which the coherency protocol is managed
+by the host. This is a complex topic, so for more information on CXL
+coherency see the CXL 2.0 specification.
+
+**Type 3 Memory devices:**  These devices act as a means of attaching
+additional memory (HDM) to a CXL host including both volatile and
+persistent memory. The CXL topology may support interleaving across a
+number of Type 3 memory devices using HDM Decoders in the host, host
+bridge, switch upstream port and endpoints.
+
+Scope of CXL emulation in QEMU
+------------------------------
+The focus of CXL emulation is CXL revision 2.0 and later. Earlier CXL
+revisions defined a smaller set of features, leaving much of the control
+interface as implementation defined or device specific. That made
+generic emulation challenging: host specific firmware was responsible
+for setup, and the Endpoints were presented to operating systems
+as Root Complex Integrated End Points. CXL rev 2.0 looks a lot
+more like PCI Express, with fully specified discoverability
+of the CXL topology.
+
+CXL System components
+----------------------
+A CXL system is made up of a Host with a number of 'standard
+components', the control and capabilities of which are discoverable
+by system software using means described in the CXL 2.0 specification.
+
+CXL Fixed Memory Windows (CFMW)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+A CFMW consists of a particular range of Host Physical Address space
+which is routed to particular CXL Host Bridges.  By the time generic
+software initialization runs, it has a particular interleaving
+configuration and an associated Quality of Service Throttling Group
+(QTG).  This information is available to system software when making
+decisions about how to configure interleave across available CXL
+memory devices.  It is provided as CFMW Structures (CFMWS) in
+the CXL Early Discovery Table (CEDT), an ACPI table.
+
+Note: QTG 0 is the only one currently supported in QEMU.
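+
+For example (using the option syntax of the example command lines later
+in this document, with hypothetical host bridge ids), a CFMW interleaved
+across two host bridges might be declared as::
+
+  -cxl-fixed-memory-window targets.0=cxl_hb0,targets.1=cxl_hb1,size=4G,interleave-granularity=8k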
+
+CXL Host Bridge (CXL HB)
+~~~~~~~~~~~~~~~~~~~~~~~~
+A CXL host bridge is similar to the PCIe equivalent, but with a
+specification defined register interface called CXL Host Bridge
+Component Registers (CHBCR). The location of this CHBCR MMIO
+space is described to system software via a CXL Host Bridge
+Structure (CHBS) in the CEDT ACPI table.  The actual interfaces
+are identical to those used for other parts of the CXL hierarchy
+as CXL Component Registers in PCI BARs.
+
+Interfaces provided include:
+
+* Configuration of HDM Decoders to route CXL Memory accesses within
+  a particular Host Physical Address range to the target port
+  below which the CXL device servicing that address lies.  This
+  may be a mapping to a single Root Port (RP) or across a set of
+  target RPs.
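+
+The port selection performed by an interleaved HDM decoder amounts to
+modular arithmetic on the incoming address. As an illustrative sketch
+only (not the QEMU implementation; power-of-2 ways and granularity
+assumed)::
+
+  def target_port(hpa, base, ways, granularity):
+      # Which target Root Port services this Host Physical Address?
+      offset = hpa - base
+      return (offset // granularity) % ways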
+
+CXL Root Ports (CXL RP)
+~~~~~~~~~~~~~~~~~~~~~~~
+A CXL Root Port serves the same purpose as a PCIe Root Port.
+There are a number of CXL specific Designated Vendor Specific
+Extended Capabilities (DVSEC) in PCIe Configuration Space
+and associated component register access via PCI BARs.
+
+CXL Switch
+~~~~~~~~~~
+Not yet implemented in QEMU.
+
+Here we consider a simple CXL switch with only a single
+virtual hierarchy. Whilst more complex devices exist, their
+visibility to a particular host is generally the same as for
+a simple switch design. Hosts often have no awareness
+of complex rerouting and device pooling; they simply see
+devices being hot added or hot removed.
+
+A CXL switch has a similar architecture to those in PCIe,
+with a single upstream port, internal PCI bus and multiple
+downstream ports.
+
+Both the CXL upstream and downstream ports have CXL specific
+DVSECs in configuration space, and component registers in PCI
+BARs.  The Upstream Port has the configuration interfaces for
+the HDM decoders which route incoming memory accesses to the
+appropriate downstream port.
+
+CXL Memory Devices - Type 3
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+CXL type 3 devices use a PCI class code and are intended to be supported
+by a generic operating system driver. They have HDM decoders,
+though in these End Point devices the decoder is responsible not
+for routing but for translating the incoming Host Physical Address
+(HPA) into a Device Physical Address (DPA).
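+
+For a device taking part in an N-way interleave, the HPA to DPA
+translation can be sketched as follows (illustrative only, assuming
+power-of-2 interleave ways and granularity; not the QEMU
+implementation)::
+
+  def hpa_to_dpa(hpa, base, ways, granularity):
+      # Strip out the other devices' interleaved chunks to get a
+      # contiguous Device Physical Address.
+      offset = hpa - base
+      chunk = offset // granularity
+      return (chunk // ways) * granularity + offset % granularity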
+
+CXL Memory Interleave
+---------------------
+To understand the interaction of different CXL hardware components which
+are emulated in QEMU, let us consider a memory read in a fully configured
+CXL topology.  Note that system software is responsible for configuration
+of all components with the exception of the CFMWs. System software is
+responsible for allocating appropriate ranges from within the CFMWs
+and exposing those via normal memory configurations as would be done
+for system RAM.
+
+Example system Topology. x marks the match in each decoder level::
+
+  |<------------------SYSTEM PHYSICAL ADDRESS MAP (1)----------------->|
+  |    __________   __________________________________   __________    |
+  |   |          | |                                  | |          |   |
+  |   | CFMW 0   | |  CXL Fixed Memory Window 1       | | CFMW 2   |   |
+  |   | HB0 only | |  Configured to interleave memory | | HB1 only |   |
+  |   |          | |  accesses across HB0 and HB1     | |          |   |
+  |   |__________| |_____x____________________________| |__________|   |
+           |             |                     |             |
+           |             |                     |             |
+           |             |                     |             |
+           |       Interleave Decoder          |             |
+           |       Matches this HB             |             |
+           \_____________|                     |_____________/
+               __________|__________      _____|_______________
+              |                     |    |                     |
+       (2)    | CXL HB 0            |    | CXL HB 1            |
+              | HB IntLv Decoders   |    | HB IntLv Decoders   |
+              | PCI/CXL Root Bus 0c |    | PCI/CXL Root Bus 0d |
+              |                     |    |                     |
+              |___x_________________|    |_____________________|
+                  |                |       |               |
+                  |                |       |               |
+       A HB 0 HDM Decoder          |       |               |
+       matches this Port           |       |               |
+                  |                |       |               |
+       ___________|___   __________|__   __|_________   ___|_________
+   (3)|  Root Port 0  | | Root Port 1 | | Root Port 2| | Root Port 3 |
+      |  Appears in   | | Appears in  | | Appears in | | Appears in  |
+      |  PCI topology | | PCI Topology| | PCI Topo   | | PCI Topo    |
+      |  As 0c:00.0   | | as 0c:01.0  | | as de:00.0 | | as de:01.0  |
+      |_______________| |_____________| |____________| |_____________|
+            |                  |               |              |
+            |                  |               |              |
+       _____|_________   ______|______   ______|_____   ______|_______
+   (4)|     x         | |             | |            | |              |
+      | CXL Type3 0   | | CXL Type3 1 | | CXL Type3 2| | CXL Type3 3  |
+      |               | |             | |            | |              |
+      | PMEM0(Vol LSA)| | PMEM1 (...) | | PMEM2 (...)| | PMEM3 (...)  |
+      | Decoder to go | |             | |            | |              |
+      | from host PA  | | PCI 0e:00.0 | | PCI df:00.0| | PCI e0:00.0  |
+      | to device PA  | |             | |            | |              |
+      | PCI as 0d:00.0| |             | |            | |              |
+      |_______________| |_____________| |____________| |______________|
+
+Notes:
+
+(1) **3 CXL Fixed Memory Windows (CFMW)** corresponding to different
+    ranges of the system physical address map.  Each CFMW has a
+    particular interleave setup across the CXL Host Bridges (HB).
+    CFMW0 provides uninterleaved access to HB0, CFMW2 provides
+    uninterleaved access to HB1 and CFMW1 provides interleaved
+    memory access across HB0 and HB1.
+
+(2) **Two CXL Host Bridges**. Each of these has 2 CXL Root Ports and
+    programmable HDM decoders to route memory accesses either to
+    a single port or interleave them across multiple ports.
+    A complex configuration here might be to use the following HDM
+    decoders in HB0. HDM0 routes CFMW0 requests to RP0 and hence
+    part of CXL Type3 0. HDM1 routes CFMW0 requests from a
+    different region of the CFMW0 PA range to RP1 and hence part
+    of CXL Type3 1.  HDM2 routes yet another PA range from within
+    CFMW0 to be interleaved across RP0 and RP1, providing 2 way
+    interleave of part of the memory provided by CXL Type3 0 and
+    CXL Type3 1. HDM3 routes those interleaved accesses from
+    CFMW1 that target HB0 to RP0 and another part of the memory of
+    CXL Type3 0 (as part of a 2 way interleave at the system level
+    across, for example, CXL Type3 0 and CXL Type3 2).
+    HDM4 is used to enable system wide 4 way interleave across all
+    the present CXL Type3 devices, by interleaving those (interleaved)
+    requests that HB0 receives from CFMW1 across RP0 and
+    RP1 and hence to yet more regions of the memory of the
+    attached Type3 devices.  Note this is a representative subset
+    of the full range of possible HDM decoder configurations in this
+    topology.
+
+(3) **Four CXL Root Ports.** In this case the CXL Type 3 devices are
+    directly attached to these ports.
+
+(4) **Four CXL Type3 memory expansion devices.**  These will each have
+    HDM decoders, but in this case rather than performing interleave
+    they will take the Host Physical Addresses of accesses and map
+    them to their own local Device Physical Address Space (DPA).
+
+Example command lines
+---------------------
+A very simple setup with just one directly attached CXL Type 3 device::
+
+  qemu-system-aarch64 -M virt,gic-version=3,cxl=on -m 4g,maxmem=8G,slots=8 -cpu max \
+  ...
+  -object memory-backend-file,id=cxl-mem1,share=on,mem-path=/tmp/cxltest.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa1,share=on,mem-path=/tmp/lsa.raw,size=256M \
+  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
+  -device cxl-rp,port=0,bus=cxl.1,id=root_port13,chassis=0,slot=2 \
+  -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-lsa1,id=cxl-pmem0,size=256M \
+  -cxl-fixed-memory-window targets.0=cxl.1,size=4G
+
+A setup suitable for 4 way interleave. Only one fixed memory window is
+provided, configured to interleave 2 ways across the 2 CXL host bridges.
+Each host bridge has 2 CXL Root Ports, with a CXL Type3 device directly
+attached to each (no switches)::
+
+  qemu-system-aarch64 -M virt,gic-version=3,cxl=on -m 4g,maxmem=8G,slots=8 -cpu max \
+  ...
+  -object memory-backend-file,id=cxl-mem1,share=on,mem-path=/tmp/cxltest.raw,size=256M \
+  -object memory-backend-file,id=cxl-mem2,share=on,mem-path=/tmp/cxltest2.raw,size=256M \
+  -object memory-backend-file,id=cxl-mem3,share=on,mem-path=/tmp/cxltest3.raw,size=256M \
+  -object memory-backend-file,id=cxl-mem4,share=on,mem-path=/tmp/cxltest4.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa1,share=on,mem-path=/tmp/lsa.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa2,share=on,mem-path=/tmp/lsa2.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa3,share=on,mem-path=/tmp/lsa3.raw,size=256M \
+  -object memory-backend-file,id=cxl-lsa4,share=on,mem-path=/tmp/lsa4.raw,size=256M \
+  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
+  -device pxb-cxl,bus_nr=222,bus=pcie.0,id=cxl.2 \
+  -device cxl-rp,port=0,bus=cxl.1,id=root_port13,chassis=0,slot=2 \
+  -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-lsa1,id=cxl-pmem0,size=256M \
+  -device cxl-rp,port=1,bus=cxl.1,id=root_port14,chassis=0,slot=3 \
+  -device cxl-type3,bus=root_port14,memdev=cxl-mem2,lsa=cxl-lsa2,id=cxl-pmem1,size=256M \
+  -device cxl-rp,port=0,bus=cxl.2,id=root_port15,chassis=0,slot=5 \
+  -device cxl-type3,bus=root_port15,memdev=cxl-mem3,lsa=cxl-lsa3,id=cxl-pmem2,size=256M \
+  -device cxl-rp,port=1,bus=cxl.2,id=root_port16,chassis=0,slot=6 \
+  -device cxl-type3,bus=root_port16,memdev=cxl-mem4,lsa=cxl-lsa4,id=cxl-pmem3,size=256M \
+  -cxl-fixed-memory-window targets.0=cxl.1,targets.1=cxl.2,size=4G,interleave-granularity=8k
+
+Kernel Configuration Options
+----------------------------
+
+In Linux 5.18 the following options are necessary to make use of
+OS management of CXL memory devices as described here.
+
+* CONFIG_CXL_BUS
+* CONFIG_CXL_PCI
+* CONFIG_CXL_ACPI
+* CONFIG_CXL_PMEM
+* CONFIG_CXL_MEM
+* CONFIG_CXL_PORT
+* CONFIG_CXL_REGION
+
+References
+----------
+
+ - Consortium website for specifications etc:
+   http://www.computeexpresslink.org
+ - Compute Express Link Revision 2.0 specification, October 2020
+ - CEDT CFMWS & QTG _DSM ECN May 2021
-- 
2.32.0



^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 44/46] pci-bridge/cxl_upstream: Add a CXL switch upstream port
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

An initial simple upstream port emulation to allow the creation
of CXL switches. The Device ID has been allocated for this use.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 hw/pci-bridge/cxl_upstream.c | 206 +++++++++++++++++++++++++++++++++++
 hw/pci-bridge/meson.build    |   2 +-
 include/hw/cxl/cxl.h         |   4 +
 3 files changed, 211 insertions(+), 1 deletion(-)

diff --git a/hw/pci-bridge/cxl_upstream.c b/hw/pci-bridge/cxl_upstream.c
new file mode 100644
index 0000000000..253404e927
--- /dev/null
+++ b/hw/pci-bridge/cxl_upstream.c
@@ -0,0 +1,206 @@
+/*
+ * Emulated CXL Switch Upstream Port
+ *
+ * Copyright (c) 2022 Huawei Technologies.
+ *
+ * Based on xio3130_upstream.c
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "hw/pci/msi.h"
+#include "hw/pci/pcie.h"
+#include "hw/pci/pcie_port.h"
+
+#define CXL_UPSTREAM_PORT_MSI_NR_VECTOR 1
+
+#define CXL_UPSTREAM_PORT_MSI_OFFSET 0x70
+#define CXL_UPSTREAM_PORT_PCIE_CAP_OFFSET 0x90
+#define CXL_UPSTREAM_PORT_AER_OFFSET 0x100
+#define CXL_UPSTREAM_PORT_DVSEC_OFFSET \
+    (CXL_UPSTREAM_PORT_AER_OFFSET + PCI_ERR_SIZEOF)
+
+typedef struct CXLUpstreamPort
+{
+    /*< private >*/
+    PCIEPort parent_obj;
+
+    /*< public >*/
+    CXLComponentState cxl_cstate;
+} CXLUpstreamPort;
+
+CXLComponentState *cxl_usp_to_cstate(CXLUpstreamPort *usp)
+{
+    return &usp->cxl_cstate;
+}
+
+static void cxl_usp_dvsec_write_config(PCIDevice *dev, uint32_t addr,
+                                       uint32_t val, int len)
+{
+    CXLUpstreamPort *usp = CXL_USP(dev);
+
+    if (range_contains(&usp->cxl_cstate.dvsecs[EXTENSIONS_PORT_DVSEC], addr)) {
+        uint8_t *reg = &dev->config[addr];
+        addr -= usp->cxl_cstate.dvsecs[EXTENSIONS_PORT_DVSEC].lob;
+        if (addr == PORT_CONTROL_OFFSET) {
+            if (pci_get_word(reg) & PORT_CONTROL_UNMASK_SBR) {
+                /* unmask SBR */
+            }
+            if (pci_get_word(reg) & PORT_CONTROL_ALT_MEMID_EN) {
+                /* Alt Memory & ID Space Enable */
+            }
+        }
+    }
+}
+
+static void cxl_usp_write_config(PCIDevice *d, uint32_t address,
+                                 uint32_t val, int len)
+{
+    pci_bridge_write_config(d, address, val, len);
+    pcie_cap_flr_write_config(d, address, val, len);
+    pcie_aer_write_config(d, address, val, len);
+
+    cxl_usp_dvsec_write_config(d, address, val, len);
+}
+
+static void latch_registers(CXLUpstreamPort *usp)
+{
+    uint32_t *reg_state = usp->cxl_cstate.crb.cache_mem_registers;
+
+    cxl_component_register_init_common(reg_state, CXL2_UPSTREAM_PORT);
+    ARRAY_FIELD_DP32(reg_state, CXL_HDM_DECODER_CAPABILITY, TARGET_COUNT, 8);
+}
+
+static void cxl_usp_reset(DeviceState *qdev)
+{
+    PCIDevice *d = PCI_DEVICE(qdev);
+    CXLUpstreamPort *usp = CXL_USP(qdev);
+
+    pci_bridge_reset(qdev);
+    pcie_cap_deverr_reset(d);
+    latch_registers(usp);
+}
+
+static void build_dvsecs(CXLComponentState *cxl)
+{
+    uint8_t *dvsec;
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_port_extensions){ 0 };
+    cxl_component_create_dvsec(cxl, EXTENSIONS_PORT_DVSEC_LENGTH,
+                               EXTENSIONS_PORT_DVSEC,
+                               EXTENSIONS_PORT_DVSEC_REVID, dvsec);
+    dvsec = (uint8_t *)&(struct cxl_dvsec_port_flexbus){
+        .cap                     = 0x26, /* IO, Mem, non-MLD */
+        .ctrl                    = 0,
+        .status                  = 0x26, /* same */
+        .rcvd_mod_ts_data_phase1 = 0xef, /* WTF? */
+    };
+    cxl_component_create_dvsec(cxl, PCIE_FLEXBUS_PORT_DVSEC_LENGTH_2_0,
+                               PCIE_FLEXBUS_PORT_DVSEC,
+                               PCIE_FLEXBUS_PORT_DVSEC_REVID_2_0, dvsec);
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_register_locator){
+        .rsvd         = 0,
+        .reg0_base_lo = RBI_COMPONENT_REG | CXL_COMPONENT_REG_BAR_IDX,
+        .reg0_base_hi = 0,
+    };
+    cxl_component_create_dvsec(cxl, REG_LOC_DVSEC_LENGTH, REG_LOC_DVSEC,
+                               REG_LOC_DVSEC_REVID, dvsec);
+}
+
+static void cxl_usp_realize(PCIDevice *d, Error **errp)
+{
+    PCIEPort *p = PCIE_PORT(d);
+    CXLUpstreamPort *usp = CXL_USP(d);
+    CXLComponentState *cxl_cstate = &usp->cxl_cstate;
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+    MemoryRegion *component_bar = &cregs->component_registers;
+    int rc;
+
+    pci_bridge_initfn(d, TYPE_PCIE_BUS);
+    pcie_port_init_reg(d);
+
+    rc = msi_init(d, CXL_UPSTREAM_PORT_MSI_OFFSET,
+                  CXL_UPSTREAM_PORT_MSI_NR_VECTOR, true, true, errp);
+    if (rc) {
+        assert(rc == -ENOTSUP);
+        goto err_bridge;
+    }
+
+    rc = pcie_cap_init(d, CXL_UPSTREAM_PORT_PCIE_CAP_OFFSET,
+                       PCI_EXP_TYPE_UPSTREAM, p->port, errp);
+    if (rc < 0) {
+        goto err_msi;
+    }
+
+    pcie_cap_flr_init(d);
+    pcie_cap_deverr_init(d);
+    rc = pcie_aer_init(d, PCI_ERR_VER, CXL_UPSTREAM_PORT_AER_OFFSET,
+                       PCI_ERR_SIZEOF, errp);
+    if (rc) {
+        goto err_cap;
+    }
+
+    cxl_cstate->dvsec_offset = CXL_UPSTREAM_PORT_DVSEC_OFFSET;
+    cxl_cstate->pdev = d;
+    build_dvsecs(cxl_cstate);
+    cxl_component_register_block_init(OBJECT(d), cxl_cstate, TYPE_CXL_USP);
+    pci_register_bar(d, CXL_COMPONENT_REG_BAR_IDX,
+                     PCI_BASE_ADDRESS_SPACE_MEMORY |
+                     PCI_BASE_ADDRESS_MEM_TYPE_64,
+                     component_bar);
+
+    return;
+
+err_cap:
+    pcie_cap_exit(d);
+err_msi:
+    msi_uninit(d);
+err_bridge:
+    pci_bridge_exitfn(d);
+}
+
+static void cxl_usp_exitfn(PCIDevice *d)
+{
+    pcie_aer_exit(d);
+    pcie_cap_exit(d);
+    msi_uninit(d);
+    pci_bridge_exitfn(d);
+}
+
+static void cxl_upstream_class_init(ObjectClass *oc, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(oc);
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(oc);
+
+    k->is_bridge = true;
+    k->config_write = cxl_usp_write_config;
+    k->realize = cxl_usp_realize;
+    k->exit = cxl_usp_exitfn;
+    k->vendor_id = 0x19e5; /* Huawei */
+    k->device_id = 0xa128; /* Emulated CXL Switch Upstream Port */
+    k->revision = 0;
+    set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
+    dc->desc = "CXL Switch Upstream Port";
+    dc->reset = cxl_usp_reset;
+}
+
+static const TypeInfo cxl_usp_info = {
+    .name = TYPE_CXL_USP,
+    .parent = TYPE_PCIE_PORT,
+    .instance_size = sizeof(CXLUpstreamPort),
+    .class_init = cxl_upstream_class_init,
+    .interfaces = (InterfaceInfo[]) {
+        { INTERFACE_PCIE_DEVICE },
+        { INTERFACE_CXL_DEVICE },
+        { }
+    },
+};
+
+static void cxl_usp_register_type(void)
+{
+    type_register_static(&cxl_usp_info);
+}
+
+type_init(cxl_usp_register_type);
diff --git a/hw/pci-bridge/meson.build b/hw/pci-bridge/meson.build
index b6d26a03d5..4a8c26b5a1 100644
--- a/hw/pci-bridge/meson.build
+++ b/hw/pci-bridge/meson.build
@@ -5,7 +5,7 @@ pci_ss.add(when: 'CONFIG_IOH3420', if_true: files('ioh3420.c'))
 pci_ss.add(when: 'CONFIG_PCIE_PORT', if_true: files('pcie_root_port.c', 'gen_pcie_root_port.c', 'pcie_pci_bridge.c'))
 pci_ss.add(when: 'CONFIG_PXB', if_true: files('pci_expander_bridge.c'))
 pci_ss.add(when: 'CONFIG_XIO3130', if_true: files('xio3130_upstream.c', 'xio3130_downstream.c'))
-pci_ss.add(when: 'CONFIG_CXL', if_true: files('cxl_root_port.c'))
+pci_ss.add(when: 'CONFIG_CXL', if_true: files('cxl_root_port.c', 'cxl_upstream.c'))
 
 # NewWorld PowerMac
 pci_ss.add(when: 'CONFIG_DEC_PCI', if_true: files('dec.c'))
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 14194acead..a9a2e6c405 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -46,5 +46,9 @@ void cxl_fixed_memory_window_options_set(MachineState *ms,
 void cxl_fixed_memory_window_link_targets(Error **errp);
 
 extern const MemoryRegionOps cfmws_ops;
+#define TYPE_CXL_USP "cxl-upstream"
 
+typedef struct CXLUpstreamPort CXLUpstreamPort;
+DECLARE_INSTANCE_CHECKER(CXLUpstreamPort, CXL_USP, TYPE_CXL_USP)
+CXLComponentState *cxl_usp_to_cstate(CXLUpstreamPort *usp);
 #endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread


* [PATCH v7 45/46] pci-bridge/cxl_downstream: Add a CXL switch downstream port
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Emulation of a simple CXL Switch downstream port.
The Device ID has been allocated for this use.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 hw/pci-bridge/cxl_downstream.c | 229 +++++++++++++++++++++++++++++++++
 hw/pci-bridge/meson.build      |   2 +-
 2 files changed, 230 insertions(+), 1 deletion(-)

diff --git a/hw/pci-bridge/cxl_downstream.c b/hw/pci-bridge/cxl_downstream.c
new file mode 100644
index 0000000000..8e66f632a5
--- /dev/null
+++ b/hw/pci-bridge/cxl_downstream.c
@@ -0,0 +1,229 @@
+/*
+ * Emulated CXL Switch Downstream Port
+ *
+ * Copyright (c) 2022 Huawei Technologies.
+ *
+ * Based on xio3130_downstream.c
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "hw/pci/msi.h"
+#include "hw/pci/pcie.h"
+#include "hw/pci/pcie_port.h"
+#include "qapi/error.h"
+
+typedef struct CXLDownStreamPort {
+    /*< private >*/
+    PCIESlot parent_obj;
+
+    /*< public >*/
+    CXLComponentState cxl_cstate;
+} CXLDownstreamPort;
+
+#define TYPE_CXL_DSP "cxl-downstream"
+DECLARE_INSTANCE_CHECKER(CXLDownstreamPort, CXL_DSP, TYPE_CXL_DSP)
+
+#define CXL_DOWNSTREAM_PORT_MSI_OFFSET 0x70
+#define CXL_DOWNSTREAM_PORT_MSI_NR_VECTOR 1
+#define CXL_DOWNSTREAM_PORT_EXP_OFFSET 0x90
+#define CXL_DOWNSTREAM_PORT_AER_OFFSET 0x100
+#define CXL_DOWNSTREAM_PORT_DVSEC_OFFSET        \
+    (CXL_DOWNSTREAM_PORT_AER_OFFSET + PCI_ERR_SIZEOF)
+
+static void latch_registers(CXLDownstreamPort *dsp)
+{
+    uint32_t *reg_state = dsp->cxl_cstate.crb.cache_mem_registers;
+
+    cxl_component_register_init_common(reg_state, CXL2_DOWNSTREAM_PORT);
+}
+
+/* TODO: Look at sharing this code across all CXL port types */
+static void cxl_dsp_dvsec_write_config(PCIDevice *dev, uint32_t addr,
+                                      uint32_t val, int len)
+{
+    CXLDownstreamPort *dsp = CXL_DSP(dev);
+    CXLComponentState *cxl_cstate = &dsp->cxl_cstate;
+
+    if (range_contains(&cxl_cstate->dvsecs[EXTENSIONS_PORT_DVSEC], addr)) {
+        uint8_t *reg = &dev->config[addr];
+        addr -= cxl_cstate->dvsecs[EXTENSIONS_PORT_DVSEC].lob;
+        if (addr == PORT_CONTROL_OFFSET) {
+            if (pci_get_word(reg) & PORT_CONTROL_UNMASK_SBR) {
+                /* unmask SBR */
+            }
+            if (pci_get_word(reg) & PORT_CONTROL_ALT_MEMID_EN) {
+                /* Alt Memory & ID Space Enable */
+            }
+        }
+    }
+}
+
+static void cxl_dsp_config_write(PCIDevice *d, uint32_t address,
+                                 uint32_t val, int len)
+{
+    uint16_t slt_ctl, slt_sta;
+
+    pcie_cap_slot_get(d, &slt_ctl, &slt_sta);
+    pci_bridge_write_config(d, address, val, len);
+    pcie_cap_flr_write_config(d, address, val, len);
+    pcie_cap_slot_write_config(d, slt_ctl, slt_sta, address, val, len);
+    pcie_aer_write_config(d, address, val, len);
+
+    cxl_dsp_dvsec_write_config(d, address, val, len);
+}
+
+static void cxl_dsp_reset(DeviceState *qdev)
+{
+    PCIDevice *d = PCI_DEVICE(qdev);
+    CXLDownstreamPort *dsp = CXL_DSP(qdev);
+
+    pcie_cap_deverr_reset(d);
+    pcie_cap_slot_reset(d);
+    pcie_cap_arifwd_reset(d);
+    pci_bridge_reset(qdev);
+
+    latch_registers(dsp);
+}
+
+static void build_dvsecs(CXLComponentState *cxl)
+{
+    uint8_t *dvsec;
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_port_extensions){ 0 };
+    cxl_component_create_dvsec(cxl, EXTENSIONS_PORT_DVSEC_LENGTH,
+                               EXTENSIONS_PORT_DVSEC,
+                               EXTENSIONS_PORT_DVSEC_REVID, dvsec);
+    dvsec = (uint8_t *)&(struct cxl_dvsec_port_flexbus){
+        .cap                     = 0x26, /* IO, Mem, non-MLD */
+        .ctrl                    = 0,
+        .status                  = 0x26, /* same */
+        .rcvd_mod_ts_data_phase1 = 0xef, /* WTF? */
+    };
+    cxl_component_create_dvsec(cxl, PCIE_FLEXBUS_PORT_DVSEC_LENGTH_2_0,
+                               PCIE_FLEXBUS_PORT_DVSEC,
+                               PCIE_FLEXBUS_PORT_DVSEC_REVID_2_0, dvsec);
+
+    dvsec = (uint8_t *)&(struct cxl_dvsec_register_locator){
+        .rsvd         = 0,
+        .reg0_base_lo = RBI_COMPONENT_REG | CXL_COMPONENT_REG_BAR_IDX,
+        .reg0_base_hi = 0,
+    };
+    cxl_component_create_dvsec(cxl, REG_LOC_DVSEC_LENGTH, REG_LOC_DVSEC,
+                               REG_LOC_DVSEC_REVID, dvsec);
+}
+
+static void cxl_dsp_realize(PCIDevice *d, Error **errp)
+{
+    PCIEPort *p = PCIE_PORT(d);
+    PCIESlot *s = PCIE_SLOT(d);
+    CXLDownstreamPort *dsp = CXL_DSP(d);
+    CXLComponentState *cxl_cstate = &dsp->cxl_cstate;
+    ComponentRegisters *cregs = &cxl_cstate->crb;
+    MemoryRegion *component_bar = &cregs->component_registers;
+    int rc;
+
+    pci_bridge_initfn(d, TYPE_PCIE_BUS);
+    pcie_port_init_reg(d);
+
+    rc = msi_init(d, CXL_DOWNSTREAM_PORT_MSI_OFFSET,
+                  CXL_DOWNSTREAM_PORT_MSI_NR_VECTOR,
+                  true, true, errp);
+    if (rc) {
+        assert(rc == -ENOTSUP);
+        goto err_bridge;
+    }
+
+    rc = pcie_cap_init(d, CXL_DOWNSTREAM_PORT_EXP_OFFSET,
+                       PCI_EXP_TYPE_DOWNSTREAM, p->port,
+                       errp);
+    if (rc < 0) {
+        goto err_msi;
+    }
+
+    pcie_cap_flr_init(d);
+    pcie_cap_deverr_init(d);
+    pcie_cap_slot_init(d, s);
+    pcie_cap_arifwd_init(d);
+
+    pcie_chassis_create(s->chassis);
+    rc = pcie_chassis_add_slot(s);
+    if (rc < 0) {
+        error_setg(errp, "Can't add chassis slot, error %d", rc);
+        goto err_pcie_cap;
+    }
+
+    rc = pcie_aer_init(d, PCI_ERR_VER, CXL_DOWNSTREAM_PORT_AER_OFFSET,
+                       PCI_ERR_SIZEOF, errp);
+    if (rc < 0) {
+        goto err_chassis;
+    }
+
+    cxl_cstate->dvsec_offset = CXL_DOWNSTREAM_PORT_DVSEC_OFFSET;
+    cxl_cstate->pdev = d;
+    build_dvsecs(cxl_cstate);
+    cxl_component_register_block_init(OBJECT(d), cxl_cstate, TYPE_CXL_DSP);
+    pci_register_bar(d, CXL_COMPONENT_REG_BAR_IDX,
+                     PCI_BASE_ADDRESS_SPACE_MEMORY |
+                         PCI_BASE_ADDRESS_MEM_TYPE_64,
+                     component_bar);
+
+    return;
+
+ err_chassis:
+    pcie_chassis_del_slot(s);
+ err_pcie_cap:
+    pcie_cap_exit(d);
+ err_msi:
+    msi_uninit(d);
+ err_bridge:
+    pci_bridge_exitfn(d);
+}
+
+static void cxl_dsp_exitfn(PCIDevice *d)
+{
+    PCIESlot *s = PCIE_SLOT(d);
+
+    pcie_aer_exit(d);
+    pcie_chassis_del_slot(s);
+    pcie_cap_exit(d);
+    msi_uninit(d);
+    pci_bridge_exitfn(d);
+}
+
+static void cxl_dsp_class_init(ObjectClass *oc, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(oc);
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(oc);
+
+    k->is_bridge = true;
+    k->config_write = cxl_dsp_config_write;
+    k->realize = cxl_dsp_realize;
+    k->exit = cxl_dsp_exitfn;
+    k->vendor_id = 0x19e5; /* Huawei */
+    k->device_id = 0xa129; /* Emulated CXL Switch Downstream Port */
+    k->revision = 0;
+    set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
+    dc->desc = "CXL Switch Downstream Port";
+    dc->reset = cxl_dsp_reset;
+}
+
+static const TypeInfo cxl_dsp_info = {
+    .name = TYPE_CXL_DSP,
+    .instance_size = sizeof(CXLDownstreamPort),
+    .parent = TYPE_PCIE_SLOT,
+    .class_init = cxl_dsp_class_init,
+    .interfaces = (InterfaceInfo[]) {
+        { INTERFACE_PCIE_DEVICE },
+        { INTERFACE_CXL_DEVICE },
+        { }
+    },
+};
+
+static void cxl_dsp_register_type(void)
+{
+    type_register_static(&cxl_dsp_info);
+}
+
+type_init(cxl_dsp_register_type);
diff --git a/hw/pci-bridge/meson.build b/hw/pci-bridge/meson.build
index 4a8c26b5a1..53cc53314a 100644
--- a/hw/pci-bridge/meson.build
+++ b/hw/pci-bridge/meson.build
@@ -5,7 +5,7 @@ pci_ss.add(when: 'CONFIG_IOH3420', if_true: files('ioh3420.c'))
 pci_ss.add(when: 'CONFIG_PCIE_PORT', if_true: files('pcie_root_port.c', 'gen_pcie_root_port.c', 'pcie_pci_bridge.c'))
 pci_ss.add(when: 'CONFIG_PXB', if_true: files('pci_expander_bridge.c'))
 pci_ss.add(when: 'CONFIG_XIO3130', if_true: files('xio3130_upstream.c', 'xio3130_downstream.c'))
-pci_ss.add(when: 'CONFIG_CXL', if_true: files('cxl_root_port.c', 'cxl_upstream.c'))
+pci_ss.add(when: 'CONFIG_CXL', if_true: files('cxl_root_port.c', 'cxl_upstream.c', 'cxl_downstream.c'))
 
 # NewWorld PowerMac
 pci_ss.add(when: 'CONFIG_DEC_PCI', if_true: files('dec.c'))
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* [PATCH v7 46/46] cxl/cxl-host: Support interleave decoding with one level of switches.
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 17:41   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-06 17:41 UTC (permalink / raw)
  To: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Michael S . Tsirkin, Igor Mammedov, Markus Armbruster
  Cc: linux-cxl, Ben Widawsky, Peter Maydell,
	Shameerali Kolothum Thodi, Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

Extend the walk of the CXL bus during interleave decoding to take
into account one layer of switches.

Whilst CXL 2.0 theoretically allows multiple levels of switches, one
level is expected to cover the vast majority of use cases and is
currently all that the proposed Linux support provides.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 hw/cxl/cxl-host.c | 44 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)

diff --git a/hw/cxl/cxl-host.c b/hw/cxl/cxl-host.c
index a1eafa89bb..ac20d9e2f5 100644
--- a/hw/cxl/cxl-host.c
+++ b/hw/cxl/cxl-host.c
@@ -130,8 +130,9 @@ static bool cxl_hdm_find_target(uint32_t *cache_mem, hwaddr addr,
 
 static PCIDevice *cxl_cfmws_find_device(CXLFixedWindow *fw, hwaddr addr)
 {
-    CXLComponentState *hb_cstate;
+    CXLComponentState *hb_cstate, *usp_cstate;
     PCIHostState *hb;
+    CXLUpstreamPort *usp;
     int rb_index;
     uint32_t *cache_mem;
     uint8_t target;
@@ -166,7 +167,46 @@ static PCIDevice *cxl_cfmws_find_device(CXLFixedWindow *fw, hwaddr addr)
 
     d = pci_bridge_get_sec_bus(PCI_BRIDGE(rp))->devices[0];
 
-    if (!d || !object_dynamic_cast(OBJECT(d), TYPE_CXL_TYPE3_DEV)) {
+    if (!d) {
+        return NULL;
+    }
+
+    if (object_dynamic_cast(OBJECT(d), TYPE_CXL_TYPE3_DEV)) {
+        return d;
+    }
+
+    /*
+     * Could also be a switch.  Note only one level of switching currently
+     * supported.
+     */
+    if (!object_dynamic_cast(OBJECT(d), TYPE_CXL_USP)) {
+        return NULL;
+    }
+    usp = CXL_USP(d);
+
+    usp_cstate = cxl_usp_to_cstate(usp);
+    if (!usp_cstate) {
+        return NULL;
+    }
+
+    cache_mem = usp_cstate->crb.cache_mem_registers;
+
+    target_found = cxl_hdm_find_target(cache_mem, addr, &target);
+    if (!target_found) {
+        return NULL;
+    }
+
+    d = pcie_find_port_by_pn(&PCI_BRIDGE(d)->sec_bus, target);
+    if (!d) {
+        return NULL;
+    }
+
+    d = pci_bridge_get_sec_bus(PCI_BRIDGE(d))->devices[0];
+    if (!d) {
+        return NULL;
+    }
+
+    if (!object_dynamic_cast(OBJECT(d), TYPE_CXL_TYPE3_DEV)) {
         return NULL;
     }
 
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 24/46] acpi/cxl: Add _OSC implementation (9.14.2)
  2022-03-06 17:41   ` Jonathan Cameron via
@ 2022-03-06 21:31     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 124+ messages in thread
From: Michael S. Tsirkin @ 2022-03-06 21:31 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Igor Mammedov, Markus Armbruster, linux-cxl, Ben Widawsky,
	Peter Maydell, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

On Sun, Mar 06, 2022 at 05:41:15PM +0000, Jonathan Cameron wrote:
> From: Ben Widawsky <ben.widawsky@intel.com>
> 
> CXL 2.0 specification adds 2 new dwords to the existing _OSC definition
> from PCIe. The new dwords are accessed with a new uuid. This
> implementation supports what is in the specification.
> 
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  hw/acpi/Kconfig       |   5 ++
>  hw/acpi/cxl-stub.c    |  12 +++++
>  hw/acpi/cxl.c         | 104 ++++++++++++++++++++++++++++++++++++++++++
>  hw/acpi/meson.build   |   4 +-
>  hw/i386/acpi-build.c  |  15 ++++--
>  include/hw/acpi/cxl.h |  23 ++++++++++
>  6 files changed, 157 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/acpi/Kconfig b/hw/acpi/Kconfig
> index 19caebde6c..3703aca212 100644
> --- a/hw/acpi/Kconfig
> +++ b/hw/acpi/Kconfig
> @@ -5,6 +5,7 @@ config ACPI_X86
>      bool
>      select ACPI
>      select ACPI_NVDIMM
> +    select ACPI_CXL
>      select ACPI_CPU_HOTPLUG
>      select ACPI_MEMORY_HOTPLUG
>      select ACPI_HMAT
> @@ -66,3 +67,7 @@ config ACPI_ERST
>      bool
>      default y
>      depends on ACPI && PCI
> +
> +config ACPI_CXL
> +    bool
> +    depends on ACPI
> diff --git a/hw/acpi/cxl-stub.c b/hw/acpi/cxl-stub.c
> new file mode 100644
> index 0000000000..15bc21076b
> --- /dev/null
> +++ b/hw/acpi/cxl-stub.c
> @@ -0,0 +1,12 @@
> +
> +/*
> + * Stubs for ACPI platforms that don't support CXL
> + */
> +#include "qemu/osdep.h"
> +#include "hw/acpi/aml-build.h"
> +#include "hw/acpi/cxl.h"
> +
> +void build_cxl_osc_method(Aml *dev)
> +{
> +    g_assert_not_reached();
> +}
> diff --git a/hw/acpi/cxl.c b/hw/acpi/cxl.c
> new file mode 100644
> index 0000000000..7124d5a1a3
> --- /dev/null
> +++ b/hw/acpi/cxl.c
> @@ -0,0 +1,104 @@
> +/*
> + * CXL ACPI Implementation
> + *
> + * Copyright(C) 2020 Intel Corporation.
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see <http://www.gnu.org/licenses/>
> + */
> +
> +#include "qemu/osdep.h"
> +#include "hw/cxl/cxl.h"
> +#include "hw/acpi/acpi.h"
> +#include "hw/acpi/aml-build.h"
> +#include "hw/acpi/bios-linker-loader.h"
> +#include "hw/acpi/cxl.h"
> +#include "qapi/error.h"
> +#include "qemu/uuid.h"
> +
> +static Aml *__build_cxl_osc_method(void)
> +{
> +    Aml *method, *if_uuid, *else_uuid, *if_arg1_not_1, *if_cxl, *if_caps_masked;
> +    Aml *a_ctrl = aml_local(0);
> +    Aml *a_cdw1 = aml_name("CDW1");
> +
> +    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
> +    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
> +
> +    /* 9.14.2.1.4 */

List spec name and version pls?

> +    if_uuid = aml_if(
> +        aml_lor(aml_equal(aml_arg(0),
> +                          aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")),
> +                aml_equal(aml_arg(0),
> +                          aml_touuid("68F2D50B-C469-4D8A-BD3D-941A103FD3FC"))));
> +    aml_append(if_uuid, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
> +    aml_append(if_uuid, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
> +
> +    aml_append(if_uuid, aml_store(aml_name("CDW3"), a_ctrl));
> +
> +    /* This is all the same as what's used for PCIe */

Referring to what exactly?
Better to also document the meaning.


> +    aml_append(if_uuid,
> +               aml_and(aml_name("CTRL"), aml_int(0x1F), aml_name("CTRL")));
> +
> +    if_arg1_not_1 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(0x1))));
> +    /* Unknown revision */
> +    aml_append(if_arg1_not_1, aml_or(a_cdw1, aml_int(0x08), a_cdw1));
> +    aml_append(if_uuid, if_arg1_not_1);
> +
> +    if_caps_masked = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
> +    /* Capability bits were masked */
> +    aml_append(if_caps_masked, aml_or(a_cdw1, aml_int(0x10), a_cdw1));
> +    aml_append(if_uuid, if_caps_masked);
> +
> +    aml_append(if_uuid, aml_store(aml_name("CDW2"), aml_name("SUPP")));
> +    aml_append(if_uuid, aml_store(aml_name("CDW3"), aml_name("CTRL")));
> +
> +    if_cxl = aml_if(aml_equal(
> +        aml_arg(0), aml_touuid("68F2D50B-C469-4D8A-BD3D-941A103FD3FC")));
> +    /* CXL support field */
> +    aml_append(if_cxl, aml_create_dword_field(aml_arg(3), aml_int(12), "CDW4"));
> +    /* CXL capabilities */
> +    aml_append(if_cxl, aml_create_dword_field(aml_arg(3), aml_int(16), "CDW5"));
> +    aml_append(if_cxl, aml_store(aml_name("CDW4"), aml_name("SUPC")));
> +    aml_append(if_cxl, aml_store(aml_name("CDW5"), aml_name("CTRC")));
> +
> +    /* CXL 2.0 Port/Device Register access */
> +    aml_append(if_cxl,
> +               aml_or(aml_name("CDW5"), aml_int(0x1), aml_name("CDW5")));
> +    aml_append(if_uuid, if_cxl);
> +
> +    /* Update DWORD3 (the return value) */
> +    aml_append(if_uuid, aml_store(a_ctrl, aml_name("CDW3")));
> +
> +    aml_append(if_uuid, aml_return(aml_arg(3)));
> +    aml_append(method, if_uuid);
> +
> +    else_uuid = aml_else();
> +
> +    /* unrecognized uuid */
> +    aml_append(else_uuid,
> +               aml_or(aml_name("CDW1"), aml_int(0x4), aml_name("CDW1")));
> +    aml_append(else_uuid, aml_return(aml_arg(3)));
> +    aml_append(method, else_uuid);
> +
> +    return method;
> +}
> +
> +void build_cxl_osc_method(Aml *dev)
> +{
> +    aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
> +    aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
> +    aml_append(dev, aml_name_decl("SUPC", aml_int(0)));
> +    aml_append(dev, aml_name_decl("CTRC", aml_int(0)));
> +    aml_append(dev, __build_cxl_osc_method());
> +}
> diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
> index 8bea2e6933..cea2f5f93a 100644
> --- a/hw/acpi/meson.build
> +++ b/hw/acpi/meson.build
> @@ -13,6 +13,7 @@ acpi_ss.add(when: 'CONFIG_ACPI_MEMORY_HOTPLUG', if_false: files('acpi-mem-hotplu
>  acpi_ss.add(when: 'CONFIG_ACPI_NVDIMM', if_true: files('nvdimm.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_NVDIMM', if_false: files('acpi-nvdimm-stub.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_PCI', if_true: files('pci.c'))
> +acpi_ss.add(when: 'CONFIG_ACPI_CXL', if_true: files('cxl.c'), if_false: files('cxl-stub.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_VMGENID', if_true: files('vmgenid.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_HW_REDUCED', if_true: files('generic_event_device.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_HMAT', if_true: files('hmat.c'))
> @@ -33,4 +34,5 @@ softmmu_ss.add_all(when: 'CONFIG_ACPI', if_true: acpi_ss)
>  softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('acpi-stub.c', 'aml-build-stub.c',
>                                                    'acpi-x86-stub.c', 'ipmi-stub.c', 'ghes-stub.c',
>                                                    'acpi-mem-hotplug-stub.c', 'acpi-cpu-hotplug-stub.c',
> -                                                  'acpi-pci-hotplug-stub.c', 'acpi-nvdimm-stub.c'))
> +                                                  'acpi-pci-hotplug-stub.c', 'acpi-nvdimm-stub.c',
> +                                                  'cxl-stub.c'))
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 0a28dd6d4e..b5a4b663f2 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -66,6 +66,7 @@
>  #include "hw/acpi/aml-build.h"
>  #include "hw/acpi/utils.h"
>  #include "hw/acpi/pci.h"
> +#include "hw/acpi/cxl.h"
>  
>  #include "qom/qom-qobject.h"
>  #include "hw/i386/amd_iommu.h"
> @@ -1574,11 +1575,15 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>              aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
>              aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
>              if (pci_bus_is_cxl(bus)) {
> -                aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
> -                aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
> -
> -                /* Expander bridges do not have ACPI PCI Hot-plug enabled */
> -                aml_append(dev, build_q35_osc_method(true));
> +                struct Aml *pkg = aml_package(2);
> +
> +                aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0016")));
> +                aml_append(pkg, aml_eisaid("PNP0A08"));
> +                aml_append(pkg, aml_eisaid("PNP0A03"));
> +                aml_append(dev, aml_name_decl("_CID", pkg));
> +                aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
> +                aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> +                build_cxl_osc_method(dev);
>              } else if (pci_bus_is_express(bus)) {
>                  aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
>                  aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
> diff --git a/include/hw/acpi/cxl.h b/include/hw/acpi/cxl.h
> new file mode 100644
> index 0000000000..7b8f3b8a2e
> --- /dev/null
> +++ b/include/hw/acpi/cxl.h
> @@ -0,0 +1,23 @@
> +/*
> + * Copyright (C) 2020 Intel Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> +
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> +
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef HW_ACPI_CXL_H
> +#define HW_ACPI_CXL_H
> +
> +void build_cxl_osc_method(Aml *dev);
> +
> +#endif
> -- 
> 2.32.0



* Re: [PATCH v7 24/46] acpi/cxl: Add _OSC implementation (9.14.2)
@ 2022-03-06 21:31     ` Michael S. Tsirkin
  0 siblings, 0 replies; 124+ messages in thread
From: Michael S. Tsirkin @ 2022-03-06 21:31 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Peter Maydell, Ben Widawsky, qemu-devel, Samarth Saxena,
	Chris Browy, linuxarm, linux-cxl, Markus Armbruster,
	Shreyas Shah, Saransh Gupta1, Shameerali Kolothum Thodi,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, Alex Bennée,
	Philippe Mathieu-Daudé

On Sun, Mar 06, 2022 at 05:41:15PM +0000, Jonathan Cameron wrote:
> From: Ben Widawsky <ben.widawsky@intel.com>
> 
> CXL 2.0 specification adds 2 new dwords to the existing _OSC definition
> from PCIe. The new dwords are accessed with a new uuid. This
> implementation supports what is in the specification.
> 
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  hw/acpi/Kconfig       |   5 ++
>  hw/acpi/cxl-stub.c    |  12 +++++
>  hw/acpi/cxl.c         | 104 ++++++++++++++++++++++++++++++++++++++++++
>  hw/acpi/meson.build   |   4 +-
>  hw/i386/acpi-build.c  |  15 ++++--
>  include/hw/acpi/cxl.h |  23 ++++++++++
>  6 files changed, 157 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/acpi/Kconfig b/hw/acpi/Kconfig
> index 19caebde6c..3703aca212 100644
> --- a/hw/acpi/Kconfig
> +++ b/hw/acpi/Kconfig
> @@ -5,6 +5,7 @@ config ACPI_X86
>      bool
>      select ACPI
>      select ACPI_NVDIMM
> +    select ACPI_CXL
>      select ACPI_CPU_HOTPLUG
>      select ACPI_MEMORY_HOTPLUG
>      select ACPI_HMAT
> @@ -66,3 +67,7 @@ config ACPI_ERST
>      bool
>      default y
>      depends on ACPI && PCI
> +
> +config ACPI_CXL
> +    bool
> +    depends on ACPI
> diff --git a/hw/acpi/cxl-stub.c b/hw/acpi/cxl-stub.c
> new file mode 100644
> index 0000000000..15bc21076b
> --- /dev/null
> +++ b/hw/acpi/cxl-stub.c
> @@ -0,0 +1,12 @@
> +
> +/*
> + * Stubs for ACPI platforms that don't support CXL
> + */
> +#include "qemu/osdep.h"
> +#include "hw/acpi/aml-build.h"
> +#include "hw/acpi/cxl.h"
> +
> +void build_cxl_osc_method(Aml *dev)
> +{
> +    g_assert_not_reached();
> +}
> diff --git a/hw/acpi/cxl.c b/hw/acpi/cxl.c
> new file mode 100644
> index 0000000000..7124d5a1a3
> --- /dev/null
> +++ b/hw/acpi/cxl.c
> @@ -0,0 +1,104 @@
> +/*
> + * CXL ACPI Implementation
> + *
> + * Copyright(C) 2020 Intel Corporation.
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see <http://www.gnu.org/licenses/>
> + */
> +
> +#include "qemu/osdep.h"
> +#include "hw/cxl/cxl.h"
> +#include "hw/acpi/acpi.h"
> +#include "hw/acpi/aml-build.h"
> +#include "hw/acpi/bios-linker-loader.h"
> +#include "hw/acpi/cxl.h"
> +#include "qapi/error.h"
> +#include "qemu/uuid.h"
> +
> +static Aml *__build_cxl_osc_method(void)
> +{
> +    Aml *method, *if_uuid, *else_uuid, *if_arg1_not_1, *if_cxl, *if_caps_masked;
> +    Aml *a_ctrl = aml_local(0);
> +    Aml *a_cdw1 = aml_name("CDW1");
> +
> +    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
> +    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
> +
> +    /* 9.14.2.1.4 */

List spec name and version pls?

> +    if_uuid = aml_if(
> +        aml_lor(aml_equal(aml_arg(0),
> +                          aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")),
> +                aml_equal(aml_arg(0),
> +                          aml_touuid("68F2D50B-C469-4D8A-BD3D-941A103FD3FC"))));
> +    aml_append(if_uuid, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
> +    aml_append(if_uuid, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
> +
> +    aml_append(if_uuid, aml_store(aml_name("CDW3"), a_ctrl));
> +
> +    /* This is all the same as what's used for PCIe */

Referring to what exactly?
Better to also document the meaning.


> +    aml_append(if_uuid,
> +               aml_and(aml_name("CTRL"), aml_int(0x1F), aml_name("CTRL")));
> +
> +    if_arg1_not_1 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(0x1))));
> +    /* Unknown revision */
> +    aml_append(if_arg1_not_1, aml_or(a_cdw1, aml_int(0x08), a_cdw1));
> +    aml_append(if_uuid, if_arg1_not_1);
> +
> +    if_caps_masked = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
> +    /* Capability bits were masked */
> +    aml_append(if_caps_masked, aml_or(a_cdw1, aml_int(0x10), a_cdw1));
> +    aml_append(if_uuid, if_caps_masked);
> +
> +    aml_append(if_uuid, aml_store(aml_name("CDW2"), aml_name("SUPP")));
> +    aml_append(if_uuid, aml_store(aml_name("CDW3"), aml_name("CTRL")));
> +
> +    if_cxl = aml_if(aml_equal(
> +        aml_arg(0), aml_touuid("68F2D50B-C469-4D8A-BD3D-941A103FD3FC")));
> +    /* CXL support field */
> +    aml_append(if_cxl, aml_create_dword_field(aml_arg(3), aml_int(12), "CDW4"));
> +    /* CXL capabilities */
> +    aml_append(if_cxl, aml_create_dword_field(aml_arg(3), aml_int(16), "CDW5"));
> +    aml_append(if_cxl, aml_store(aml_name("CDW4"), aml_name("SUPC")));
> +    aml_append(if_cxl, aml_store(aml_name("CDW5"), aml_name("CTRC")));
> +
> +    /* CXL 2.0 Port/Device Register access */
> +    aml_append(if_cxl,
> +               aml_or(aml_name("CDW5"), aml_int(0x1), aml_name("CDW5")));
> +    aml_append(if_uuid, if_cxl);
> +
> +    /* Update DWORD3 (the return value) */
> +    aml_append(if_uuid, aml_store(a_ctrl, aml_name("CDW3")));
> +
> +    aml_append(if_uuid, aml_return(aml_arg(3)));
> +    aml_append(method, if_uuid);
> +
> +    else_uuid = aml_else();
> +
> +    /* unrecognized uuid */
> +    aml_append(else_uuid,
> +               aml_or(aml_name("CDW1"), aml_int(0x4), aml_name("CDW1")));
> +    aml_append(else_uuid, aml_return(aml_arg(3)));
> +    aml_append(method, else_uuid);
> +
> +    return method;
> +}
> +
> +void build_cxl_osc_method(Aml *dev)
> +{
> +    aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
> +    aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
> +    aml_append(dev, aml_name_decl("SUPC", aml_int(0)));
> +    aml_append(dev, aml_name_decl("CTRC", aml_int(0)));
> +    aml_append(dev, __build_cxl_osc_method());
> +}
> diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
> index 8bea2e6933..cea2f5f93a 100644
> --- a/hw/acpi/meson.build
> +++ b/hw/acpi/meson.build
> @@ -13,6 +13,7 @@ acpi_ss.add(when: 'CONFIG_ACPI_MEMORY_HOTPLUG', if_false: files('acpi-mem-hotplu
>  acpi_ss.add(when: 'CONFIG_ACPI_NVDIMM', if_true: files('nvdimm.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_NVDIMM', if_false: files('acpi-nvdimm-stub.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_PCI', if_true: files('pci.c'))
> +acpi_ss.add(when: 'CONFIG_ACPI_CXL', if_true: files('cxl.c'), if_false: files('cxl-stub.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_VMGENID', if_true: files('vmgenid.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_HW_REDUCED', if_true: files('generic_event_device.c'))
>  acpi_ss.add(when: 'CONFIG_ACPI_HMAT', if_true: files('hmat.c'))
> @@ -33,4 +34,5 @@ softmmu_ss.add_all(when: 'CONFIG_ACPI', if_true: acpi_ss)
>  softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('acpi-stub.c', 'aml-build-stub.c',
>                                                    'acpi-x86-stub.c', 'ipmi-stub.c', 'ghes-stub.c',
>                                                    'acpi-mem-hotplug-stub.c', 'acpi-cpu-hotplug-stub.c',
> -                                                  'acpi-pci-hotplug-stub.c', 'acpi-nvdimm-stub.c'))
> +                                                  'acpi-pci-hotplug-stub.c', 'acpi-nvdimm-stub.c',
> +                                                  'cxl-stub.c'))
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 0a28dd6d4e..b5a4b663f2 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -66,6 +66,7 @@
>  #include "hw/acpi/aml-build.h"
>  #include "hw/acpi/utils.h"
>  #include "hw/acpi/pci.h"
> +#include "hw/acpi/cxl.h"
>  
>  #include "qom/qom-qobject.h"
>  #include "hw/i386/amd_iommu.h"
> @@ -1574,11 +1575,15 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>              aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
>              aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
>              if (pci_bus_is_cxl(bus)) {
> -                aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
> -                aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
> -
> -                /* Expander bridges do not have ACPI PCI Hot-plug enabled */
> -                aml_append(dev, build_q35_osc_method(true));
> +                struct Aml *pkg = aml_package(2);
> +
> +                aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0016")));
> +                aml_append(pkg, aml_eisaid("PNP0A08"));
> +                aml_append(pkg, aml_eisaid("PNP0A03"));
> +                aml_append(dev, aml_name_decl("_CID", pkg));
> +                aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
> +                aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> +                build_cxl_osc_method(dev);
>              } else if (pci_bus_is_express(bus)) {
>                  aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
>                  aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
> diff --git a/include/hw/acpi/cxl.h b/include/hw/acpi/cxl.h
> new file mode 100644
> index 0000000000..7b8f3b8a2e
> --- /dev/null
> +++ b/include/hw/acpi/cxl.h
> @@ -0,0 +1,23 @@
> +/*
> + * Copyright (C) 2020 Intel Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> +
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> +
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef HW_ACPI_CXL_H
> +#define HW_ACPI_CXL_H
> +
> +void build_cxl_osc_method(Aml *dev);
> +
> +#endif
> -- 
> 2.32.0




* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-06 17:40 ` Jonathan Cameron via
@ 2022-03-06 21:33   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 124+ messages in thread
From: Michael S. Tsirkin @ 2022-03-06 21:33 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Igor Mammedov, Markus Armbruster, linux-cxl, Ben Widawsky,
	Peter Maydell, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

On Sun, Mar 06, 2022 at 05:40:51PM +0000, Jonathan Cameron wrote:
> Ideally I'd love it if we could start picking up the earlier
> sections of this series as I think those have been reasonably
> well reviewed and should not be particularly controversial.
> (perhaps up to patch 15 in line with what Michael Tsirkin suggested
> on v5).

Well, true, but given we are entering freeze this will leave
us with half-baked devices which can't be used.
At this point, if we can't merge it up to the documentation patch,
I think we should wait until after the release.

> There is one core memory handling related patch (34) marked as RFC.
> Whilst its impact seems small to me, I'm not sure it is the best way
> to meet our requirements wrt interleaving.
> 
> Changes since v6:
> 
> Thanks to all who have taken a look.
> Small amount of reordering was necessary due to LSA fix in patch 17.
> Test moved forwards to patch 22 and so all intermediate patches
> move -1 in the series.
> 
> (New stuff)
> - Switch support.  Needed to support more interesting topologies.
> (Ben Widawsky)
> - Patch 17: Fix reversed condition on presence of LSA that meant these never
>   got properly initialized. Related change needed to ensure test for cxl_type3
>   always needs an LSA. We can relax this later when adding volatile memory
>   support.
> (Markus Armbruster)
> - Patch 27: Change -cxl-fixed-memory-window option handling to use
>   qobject_input_visitor_new_str().  This changed the required handling of
>   targets parameter to require an array index and hence test and docs updates.
>   e.g. targets.1=cxl_hb0,targets.2=cxl_hb1
>   (Patches 38,40,42,43)
> - Missing structure element docs and version number (optimistic :)
> (Alex Bennée)
> - Added Reviewed-by tags.  Thanks!
> - Series wise: Switch to compiler.h QEMU_BUILD_BUG_ON/MSG QEMU_PACKED
>   and QEMU_ALIGNED as Alex suggested in patch 20.
> - Patch 6: Dropped documentation for a non-existent lock.
>            Added error code suitable for unimplemented commands.
> 	   Reordered code for better readability.
> - Patch 9: Reorder as suggested to avoid a goto.
> - Patch 16: Add LOG_UNIMP message where feature not yet implemented.
>             Drop "Explain" comment that doesn't explain anything.
> - Patch 18: Drop pointless void * cast.
>             Add assertion as suggested (without divide)
> - Patch 19: Use pstrcpy rather than snprintf for a fixed string.
>             The compiler.h comment was in this patch but affects a
> 	    number of other patches as well.
> - Patch 20: Move structure CXLType3Dev to header when originally
>             introduced so changes are more obvious in this patch.
> - Patch 21: Substantial refactor to resolve unclear use of sizeof
>             on the LSA command header. Now uses a variable length
> 	    last element so we can use offsetof()
> - Patch 22: Use g_autoptr() to avoid need for explicit free in tests
>   	    Similar in later patches.
> - Patch 29: Minor reorganization as suggested.
> 	    
> (Tidy up from me)
> - Trivial stuff like moving header includes to patch where first used.
> - Patch 17: Drop ifndef protections from TYPE_CXL_TYPE3_DEV as there
>             doesn't seem to be a reason.
> 
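A hypothetical command line sketching how the array-indexed targets syntax from the changelog above composes with the rest of the series (all device IDs and sizes are invented for illustration; the option spelling follows this v7 posting):

```shell
# Sketch only: IDs and sizes are made up; syntax follows the v7 changelog.
qemu-system-x86_64 -machine q35,cxl=on \
    -device pxb-cxl,id=cxl_hb0,bus=pcie.0 \
    -device pxb-cxl,id=cxl_hb1,bus=pcie.0 \
    -cxl-fixed-memory-window targets.1=cxl_hb0,targets.2=cxl_hb1,size=4G
```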
> Series organized to allow it to be taken in stages if the maintainers
> prefer that approach. Most sets end with the addition of appropriate
> tests (TBD for final set)
> 
> Patches 0-15 - CXL PXB
> Patches 16-22 - Type 3 Device, Root Port
> Patches 23-40 - ACPI, board elements and interleave decoding to enable x86 hosts
> Patches 41-42 - arm64 support on virt.
> Patch 43 - Initial documentation
> Patches 44-46 - Switch support.
> 
> Gitlab CI is proving challenging to get a completely clean bill of health
> as there seem to be some intermittent failures in common with the
> main QEMU gitlab. In particular an ASAN leak error that appears in some
> upstream CI runs and build-oss-fuzz timeouts.
> Results at http://gitlab.com/jic23/qemu cxl-v7-draft-2-for-test
> which also includes the DOE/CDAT patches serial number support which
> will form part of a future series.
> 
> Updated background info:
> 
> Looking in particular for:
> * Review of the PCI interactions
> * x86 and ARM machine interactions (particularly the memory maps)
> * Review of the interleaving approach - is the basic idea
>   acceptable?
> * Review of the command line interface.
> * CXL related review welcome but much of that got reviewed
>   in earlier versions and hasn't changed substantially.
> 
> Big TODOs:
> 
> * Volatile memory devices (easy but it's more code so left for now).
> * Hotplug?  May not need much but it's not tested yet!
> * More tests and tighter verification that values written to hardware
>   are actually valid - stuff that real hardware would check.
> * Testing, testing and more testing.  I have been running a basic
>   set of ARM and x86 tests on this, but there is always room for
>   more tests and greater automation.
> * CFMWS flags as requested by Ben.
> 
> Why do we want QEMU emulation of CXL?
> 
> As Ben stated in V3, QEMU support has been critical to getting OS
> software written given lack of availability of hardware supporting the
> latest CXL features (coupled with very high demand for support being
> ready in a timely fashion). What has become clear since Ben's v3
> is that the situation is a continuous one. Whilst we can't talk about
> them yet, CXL 3.0 features and OS support have been prototyped on
> top of this support and a lot of the ongoing kernel work is being
> tested against these patches. The kernel CXL mocking code allows
> some forms of testing, but QEMU provides a more versatile and
> extensible platform.
> 
> Other features on the qemu-list that build on these include PCI-DOE
> /CDAT support from the Avery Design team further showing how this
> code is useful. Whilst not directly related this is also the test
> platform for work on PCI IDE/CMA + related DMTF SPDM as CXL both
> utilizes and extends those technologies and is likely to be an early
> adopter.
> Refs:
> CMA Kernel: https://lore.kernel.org/all/20210804161839.3492053-1-Jonathan.Cameron@huawei.com/
> CMA Qemu: https://lore.kernel.org/qemu-devel/1624665723-5169-1-git-send-email-cbrowy@avery-design.com/
> DOE Qemu: https://lore.kernel.org/qemu-devel/1623329999-15662-1-git-send-email-cbrowy@avery-design.com/
> 
> As can be seen there is non-trivial interaction with other areas of
> Qemu, particularly PCI and keeping this set up to date is proving
> a burden we'd rather do without :)
> 
> Ben mentioned a few other good reasons in v3:
> https://lore.kernel.org/qemu-devel/20210202005948.241655-1-ben.widawsky@intel.com/
> 
> What we have here is about what you need for it to be useful for testing
> current kernel code.  Note the kernel code is moving fast so
> since v4, some features have been introduced we don't yet support in
> QEMU (e.g. use of the PCIe serial number extended capability).
> 
> All comments welcome.
> 
> Additional info that was here in v5 is now in the documentation patch.
> 
> Thanks,
> 
> Jonathan
> 
> Ben Widawsky (24):
>   hw/pci/cxl: Add a CXL component type (interface)
>   hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5)
>   hw/cxl/device: Introduce a CXL device (8.2.8)
>   hw/cxl/device: Implement the CAP array (8.2.8.1-2)
>   hw/cxl/device: Implement basic mailbox (8.2.8.4)
>   hw/cxl/device: Add memory device utilities
>   hw/cxl/device: Add cheap EVENTS implementation (8.2.9.1)
>   hw/cxl/device: Timestamp implementation (8.2.9.3)
>   hw/cxl/device: Add log commands (8.2.9.4) + CEL
>   hw/pxb: Use a type for realizing expanders
>   hw/pci/cxl: Create a CXL bus type
>   hw/pxb: Allow creation of a CXL PXB (host bridge)
>   hw/cxl/rp: Add a root port
>   hw/cxl/device: Add a memory device (8.2.8.5)
>   hw/cxl/device: Implement MMIO HDM decoding (8.2.5.12)
>   hw/cxl/device: Add some trivial commands
>   hw/cxl/device: Plumb real Label Storage Area (LSA) sizing
>   hw/cxl/device: Implement get/set Label Storage Area (LSA)
>   hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142)
>   acpi/cxl: Add _OSC implementation (9.14.2)
>   acpi/cxl: Create the CEDT (9.14.1)
>   acpi/cxl: Introduce CFMWS structures in CEDT
>   hw/cxl/component Add a dumb HDM decoder handler
>   qtest/cxl: Add more complex test cases with CFMWs
> 
> Jonathan Cameron (22):
>   MAINTAINERS: Add entry for Compute Express Link Emulation
>   cxl: Machine level control on whether CXL support is enabled
>   qtest/cxl: Introduce initial test for pxb-cxl only.
>   qtests/cxl: Add initial root port and CXL type3 tests
>   hw/cxl/component: Add utils for interleave parameter encoding/decoding
>   hw/cxl/host: Add support for CXL Fixed Memory Windows.
>   hw/pci-host/gpex-acpi: Add support for dsdt construction for pxb-cxl
>   pci/pcie_port: Add pci_find_port_by_pn()
>   CXL/cxl_component: Add cxl_get_hb_cstate()
>   mem/cxl_type3: Add read and write functions for associated hostmem.
>   cxl/cxl-host: Add memops for CFMWS region.
>   RFC: softmmu/memory: Add ops to memory_region_ram_init_from_file
>   i386/pc: Enable CXL fixed memory windows
>   tests/acpi: q35: Allow addition of a CXL test.
>   qtests/bios-tables-test: Add a test for CXL emulation.
>   tests/acpi: Add tables for CXL emulation.
>   hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances
>     pxb-cxl
>   qtest/cxl: Add aarch64 virt test for CXL
>   docs/cxl: Add initial Compute eXpress Link (CXL) documentation.
>   pci-bridge/cxl_upstream: Add a CXL switch upstream port
>   pci-bridge/cxl_downstream: Add a CXL switch downstream port
>   cxl/cxl-host: Support interleave decoding with one level of switches.
> 
>  MAINTAINERS                         |   7 +
>  docs/system/device-emulation.rst    |   1 +
>  docs/system/devices/cxl.rst         | 302 +++++++++++++++++
>  hw/Kconfig                          |   1 +
>  hw/acpi/Kconfig                     |   5 +
>  hw/acpi/cxl-stub.c                  |  12 +
>  hw/acpi/cxl.c                       | 231 +++++++++++++
>  hw/acpi/meson.build                 |   4 +-
>  hw/arm/Kconfig                      |   1 +
>  hw/arm/virt-acpi-build.c            |  33 ++
>  hw/arm/virt.c                       |  40 ++-
>  hw/core/machine.c                   |  28 ++
>  hw/cxl/Kconfig                      |   3 +
>  hw/cxl/cxl-component-utils.c        | 284 ++++++++++++++++
>  hw/cxl/cxl-device-utils.c           | 265 +++++++++++++++
>  hw/cxl/cxl-host-stubs.c             |  16 +
>  hw/cxl/cxl-host.c                   | 262 +++++++++++++++
>  hw/cxl/cxl-mailbox-utils.c          | 485 ++++++++++++++++++++++++++++
>  hw/cxl/meson.build                  |  12 +
>  hw/i386/acpi-build.c                |  57 +++-
>  hw/i386/pc.c                        |  57 +++-
>  hw/mem/Kconfig                      |   5 +
>  hw/mem/cxl_type3.c                  | 352 ++++++++++++++++++++
>  hw/mem/meson.build                  |   1 +
>  hw/meson.build                      |   1 +
>  hw/pci-bridge/Kconfig               |   5 +
>  hw/pci-bridge/cxl_downstream.c      | 229 +++++++++++++
>  hw/pci-bridge/cxl_root_port.c       | 231 +++++++++++++
>  hw/pci-bridge/cxl_upstream.c        | 206 ++++++++++++
>  hw/pci-bridge/meson.build           |   1 +
>  hw/pci-bridge/pci_expander_bridge.c | 172 +++++++++-
>  hw/pci-bridge/pcie_root_port.c      |   6 +-
>  hw/pci-host/gpex-acpi.c             |  20 +-
>  hw/pci/pci.c                        |  21 +-
>  hw/pci/pcie_port.c                  |  25 ++
>  include/hw/acpi/cxl.h               |  28 ++
>  include/hw/arm/virt.h               |   1 +
>  include/hw/boards.h                 |   2 +
>  include/hw/cxl/cxl.h                |  54 ++++
>  include/hw/cxl/cxl_component.h      | 207 ++++++++++++
>  include/hw/cxl/cxl_device.h         | 270 ++++++++++++++++
>  include/hw/cxl/cxl_pci.h            | 156 +++++++++
>  include/hw/pci/pci.h                |  14 +
>  include/hw/pci/pci_bridge.h         |  20 ++
>  include/hw/pci/pci_bus.h            |   7 +
>  include/hw/pci/pci_ids.h            |   1 +
>  include/hw/pci/pcie_port.h          |   2 +
>  qapi/machine.json                   |  18 ++
>  qemu-options.hx                     |  38 +++
>  scripts/device-crash-test           |   1 +
>  softmmu/memory.c                    |   9 +
>  softmmu/vl.c                        |  42 +++
>  tests/data/acpi/q35/CEDT.cxl        | Bin 0 -> 184 bytes
>  tests/data/acpi/q35/DSDT.cxl        | Bin 0 -> 9627 bytes
>  tests/qtest/bios-tables-test.c      |  44 +++
>  tests/qtest/cxl-test.c              | 181 +++++++++++
>  tests/qtest/meson.build             |   5 +
>  57 files changed, 4456 insertions(+), 25 deletions(-)
>  create mode 100644 docs/system/devices/cxl.rst
>  create mode 100644 hw/acpi/cxl-stub.c
>  create mode 100644 hw/acpi/cxl.c
>  create mode 100644 hw/cxl/Kconfig
>  create mode 100644 hw/cxl/cxl-component-utils.c
>  create mode 100644 hw/cxl/cxl-device-utils.c
>  create mode 100644 hw/cxl/cxl-host-stubs.c
>  create mode 100644 hw/cxl/cxl-host.c
>  create mode 100644 hw/cxl/cxl-mailbox-utils.c
>  create mode 100644 hw/cxl/meson.build
>  create mode 100644 hw/mem/cxl_type3.c
>  create mode 100644 hw/pci-bridge/cxl_downstream.c
>  create mode 100644 hw/pci-bridge/cxl_root_port.c
>  create mode 100644 hw/pci-bridge/cxl_upstream.c
>  create mode 100644 include/hw/acpi/cxl.h
>  create mode 100644 include/hw/cxl/cxl.h
>  create mode 100644 include/hw/cxl/cxl_component.h
>  create mode 100644 include/hw/cxl/cxl_device.h
>  create mode 100644 include/hw/cxl/cxl_pci.h
>  create mode 100644 tests/data/acpi/q35/CEDT.cxl
>  create mode 100644 tests/data/acpi/q35/DSDT.cxl
>  create mode 100644 tests/qtest/cxl-test.c
> 
> -- 
> 2.32.0



* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
@ 2022-03-06 21:33   ` Michael S. Tsirkin
  0 siblings, 0 replies; 124+ messages in thread
From: Michael S. Tsirkin @ 2022-03-06 21:33 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Peter Maydell, Ben Widawsky, qemu-devel, Samarth Saxena,
	Chris Browy, linuxarm, linux-cxl, Markus Armbruster,
	Shreyas Shah, Saransh Gupta1, Shameerali Kolothum Thodi,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, Alex Bennée,
	Philippe Mathieu-Daudé

On Sun, Mar 06, 2022 at 05:40:51PM +0000, Jonathan Cameron wrote:
> Ideally I'd love it if we could start picking up the earlier
> sections of this series as I think those have been reasonably
> well reviewed and should not be particularly controversial.
> (perhaps up to patch 15 inline with what Michael Tsirkin suggested
> on v5).

Well true but given we are entering freeze this will leave
us with a half baked devices which cant be used.
At this point if we can't merge it up to documentation then
I think we should wait until after the release.

> There is one core memory handling related patch (34) marked as RFC.
> Whilst it's impact seems small to me, I'm not sure it is the best way
> to meet our requirements wrt interleaving.
> 
> Changes since v7:
> 
> Thanks to all who have taken a look.
> Small amount of reordering was necessary due to LSA fix in patch 17.
> Test moved forwards to patch 22 and so all intermediate patches
> move -1 in the series.
> 
> (New stuff)
> - Switch support.  Needed to support more interesting topologies.
> (Ben Widawsky)
> - Patch 17: Fix reversed condition on presence of LSA that meant these never
>   got properly initialized. Related change needed to ensure test for cxl_type3
>   always needs an LSA. We can relax this later when adding volatile memory
>   support.
> (Markus Armbuster)
> - Patch 27: Change -cxl-fixed-memory-window option handling to use
>   qobject_input_visitor_new_str().  This changed the required handling of
>   targets parameter to require an array index and hence test and docs updates.
>   e.g. targets.1=cxl_hb0,targets.2=cxl_hb1
>   (Patches 38,40,42,43)
> - Missing structure element docs and version number (optimisitic :)
> (Alex Bennée)
> - Added Reviewed-by tags.  Thanks!.
> - Series wise: Switch to compiler.h QEMU_BUILD_BUG_ON/MSG QEMU_PACKED
>   and QEMU_ALIGNED as Alex suggested in patch 20.
> - Patch 6: Dropped documentation for a non-existent lock.
>            Added error code suitable for unimplemented commands.
> 	   Reordered code for better readability.
> - Patch 9: Reorder as suggested to avoid a goto.
> - Patch 16: Add LOG_UNIMP message where feature not yet implemented.
>             Drop "Explain" comment that doesn't explain anything.
> - Patch 18: Drop pointless void * cast.
>             Add assertion as suggested (without divide)
> - Patch 19: Use pstrcpy rather than snprintf for a fixed string.
>             The compiler.h comment was in this patch but affects a
> 	    number of other patches as well.
> - Patch 20: Move structure CXLType3Dev to header when originally
>             introduced so changes are more obvious in this patch.
> - Patch 21: Substantial refactor to resolve unclear use of sizeof
>             on the LSA command header. Now uses a variable length
> 	    last element so we can use offsetof()
> - Patch 22: Use g_autoptr() to avoid need for explicit free in tests
>   	    Similar in later patches.
> - Patch 29: Minor reorganziation as suggested.
> 	    
> (Tidy up from me)
> - Trivial stuff like moving header includes to patch where first used.
> - Patch 17: Drop ifndef protections from TYPE_CXL_TYPE3_DEV as there
>             doesn't seem to be a reason.
> 
> Series organized to allow it to be taken in stages if the maintainers
> prefer that approach. Most sets end with the addition of appropriate
> tests (TBD for final set)
> 
> Patches 0-15 - CXL PXB
> Patches 16-22 - Type 3 Device, Root Port
> Patches 23-40 - ACPI, board elements and interleave decoding to enable x86 hosts
> Patches 41-42 - arm64 support on virt.
> Patch 43 - Initial documentation
> Patches 44-46 - Switch support.
> 
> Gitlab CI is proving challenging to get a completely clean bill of health
> as there seem to be some intermittent failures in common with the
> main QEMU gitlab. In particular an ASAN leak error that appears in some
> upstream CI runs and build-oss-fuzz timeouts.
> Results at http://gitlab.com/jic23/qemu cxl-v7-draft-2-for-test
> which also includes the DOE/CDAT patches serial number support which
> will form part of a future series.
> 
> Updated background info:
> 
> Looking in particular for:
> * Review of the PCI interactions
> * x86 and ARM machine interactions (particularly the memory maps)
> * Review of the interleaving approach - is the basic idea
>   acceptable?
> * Review of the command line interface.
> * CXL related review welcome but much of that got reviewed
>   in earlier versions and hasn't changed substantially.
> 
> Big TODOs:
> 
> * Volatile memory devices (easy but it's more code so left for now).
> * Hotplug?  May not need much but it's not tested yet!
> * More tests and tighter verification that values written to hardware
>   are actually valid - stuff that real hardware would check.
> * Testing, testing and more testing.  I have been running a basic
>   set of ARM and x86 tests on this, but there is always room for
>   more tests and greater automation.
> * CFMWS flags as requested by Ben.
> 
> Why do we want QEMU emulation of CXL?
> 
> As Ben stated in V3, QEMU support has been critical to getting OS
> software written, given the lack of availability of hardware supporting the
> latest CXL features (coupled with very high demand for support being
> ready in a timely fashion). What has become clear since Ben's v3
> is that this situation is an ongoing one. Whilst we can't talk about
> them yet, CXL 3.0 features and OS support have been prototyped on
> top of this support and a lot of the ongoing kernel work is being
> tested against these patches. The kernel CXL mocking code allows
> some forms of testing, but QEMU provides a more versatile and
> extensible platform.
> 
> Other features on the qemu-list that build on these include PCI-DOE
> /CDAT support from the Avery Design team, further showing how this
> code is useful. Whilst not directly related, this is also the test
> platform for work on PCI IDE/CMA + related DMTF SPDM, as CXL both
> utilizes and extends those technologies and is likely to be an early
> adopter.
> Refs:
> CMA Kernel: https://lore.kernel.org/all/20210804161839.3492053-1-Jonathan.Cameron@huawei.com/
> CMA Qemu: https://lore.kernel.org/qemu-devel/1624665723-5169-1-git-send-email-cbrowy@avery-design.com/
> DOE Qemu: https://lore.kernel.org/qemu-devel/1623329999-15662-1-git-send-email-cbrowy@avery-design.com/
> 
> As can be seen, there is non-trivial interaction with other areas of
> QEMU, particularly PCI, and keeping this set up to date is proving
> a burden we'd rather do without :)
> 
> Ben mentioned a few other good reasons in v3:
> https://lore.kernel.org/qemu-devel/20210202005948.241655-1-ben.widawsky@intel.com/
> 
> What we have here is about what you need for it to be useful for testing
> current kernel code.  Note the kernel code is moving fast, so
> since v4 some features have been introduced that we don't yet support in
> QEMU (e.g. use of the PCIe serial number extended capability).
> 
> All comments welcome.
> 
> Additional info that was here in v5 is now in the documentation patch.
> 
> Thanks,
> 
> Jonathan
> 
> Ben Widawsky (24):
>   hw/pci/cxl: Add a CXL component type (interface)
>   hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5)
>   hw/cxl/device: Introduce a CXL device (8.2.8)
>   hw/cxl/device: Implement the CAP array (8.2.8.1-2)
>   hw/cxl/device: Implement basic mailbox (8.2.8.4)
>   hw/cxl/device: Add memory device utilities
>   hw/cxl/device: Add cheap EVENTS implementation (8.2.9.1)
>   hw/cxl/device: Timestamp implementation (8.2.9.3)
>   hw/cxl/device: Add log commands (8.2.9.4) + CEL
>   hw/pxb: Use a type for realizing expanders
>   hw/pci/cxl: Create a CXL bus type
>   hw/pxb: Allow creation of a CXL PXB (host bridge)
>   hw/cxl/rp: Add a root port
>   hw/cxl/device: Add a memory device (8.2.8.5)
>   hw/cxl/device: Implement MMIO HDM decoding (8.2.5.12)
>   hw/cxl/device: Add some trivial commands
>   hw/cxl/device: Plumb real Label Storage Area (LSA) sizing
>   hw/cxl/device: Implement get/set Label Storage Area (LSA)
>   hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142)
>   acpi/cxl: Add _OSC implementation (9.14.2)
>   acpi/cxl: Create the CEDT (9.14.1)
>   acpi/cxl: Introduce CFMWS structures in CEDT
>   hw/cxl/component Add a dumb HDM decoder handler
>   qtest/cxl: Add more complex test cases with CFMWs
> 
> Jonathan Cameron (22):
>   MAINTAINERS: Add entry for Compute Express Link Emulation
>   cxl: Machine level control on whether CXL support is enabled
>   qtest/cxl: Introduce initial test for pxb-cxl only.
>   qtests/cxl: Add initial root port and CXL type3 tests
>   hw/cxl/component: Add utils for interleave parameter encoding/decoding
>   hw/cxl/host: Add support for CXL Fixed Memory Windows.
>   hw/pci-host/gpex-acpi: Add support for dsdt construction for pxb-cxl
>   pci/pcie_port: Add pci_find_port_by_pn()
>   CXL/cxl_component: Add cxl_get_hb_cstate()
>   mem/cxl_type3: Add read and write functions for associated hostmem.
>   cxl/cxl-host: Add memops for CFMWS region.
>   RFC: softmmu/memory: Add ops to memory_region_ram_init_from_file
>   i386/pc: Enable CXL fixed memory windows
>   tests/acpi: q35: Allow addition of a CXL test.
>   qtests/bios-tables-test: Add a test for CXL emulation.
>   tests/acpi: Add tables for CXL emulation.
>   hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances
>     pxb-cxl
>   qtest/cxl: Add aarch64 virt test for CXL
>   docs/cxl: Add initial Compute eXpress Link (CXL) documentation.
>   pci-bridge/cxl_upstream: Add a CXL switch upstream port
>   pci-bridge/cxl_downstream: Add a CXL switch downstream port
>   cxl/cxl-host: Support interleave decoding with one level of switches.
> 
>  MAINTAINERS                         |   7 +
>  docs/system/device-emulation.rst    |   1 +
>  docs/system/devices/cxl.rst         | 302 +++++++++++++++++
>  hw/Kconfig                          |   1 +
>  hw/acpi/Kconfig                     |   5 +
>  hw/acpi/cxl-stub.c                  |  12 +
>  hw/acpi/cxl.c                       | 231 +++++++++++++
>  hw/acpi/meson.build                 |   4 +-
>  hw/arm/Kconfig                      |   1 +
>  hw/arm/virt-acpi-build.c            |  33 ++
>  hw/arm/virt.c                       |  40 ++-
>  hw/core/machine.c                   |  28 ++
>  hw/cxl/Kconfig                      |   3 +
>  hw/cxl/cxl-component-utils.c        | 284 ++++++++++++++++
>  hw/cxl/cxl-device-utils.c           | 265 +++++++++++++++
>  hw/cxl/cxl-host-stubs.c             |  16 +
>  hw/cxl/cxl-host.c                   | 262 +++++++++++++++
>  hw/cxl/cxl-mailbox-utils.c          | 485 ++++++++++++++++++++++++++++
>  hw/cxl/meson.build                  |  12 +
>  hw/i386/acpi-build.c                |  57 +++-
>  hw/i386/pc.c                        |  57 +++-
>  hw/mem/Kconfig                      |   5 +
>  hw/mem/cxl_type3.c                  | 352 ++++++++++++++++++++
>  hw/mem/meson.build                  |   1 +
>  hw/meson.build                      |   1 +
>  hw/pci-bridge/Kconfig               |   5 +
>  hw/pci-bridge/cxl_downstream.c      | 229 +++++++++++++
>  hw/pci-bridge/cxl_root_port.c       | 231 +++++++++++++
>  hw/pci-bridge/cxl_upstream.c        | 206 ++++++++++++
>  hw/pci-bridge/meson.build           |   1 +
>  hw/pci-bridge/pci_expander_bridge.c | 172 +++++++++-
>  hw/pci-bridge/pcie_root_port.c      |   6 +-
>  hw/pci-host/gpex-acpi.c             |  20 +-
>  hw/pci/pci.c                        |  21 +-
>  hw/pci/pcie_port.c                  |  25 ++
>  include/hw/acpi/cxl.h               |  28 ++
>  include/hw/arm/virt.h               |   1 +
>  include/hw/boards.h                 |   2 +
>  include/hw/cxl/cxl.h                |  54 ++++
>  include/hw/cxl/cxl_component.h      | 207 ++++++++++++
>  include/hw/cxl/cxl_device.h         | 270 ++++++++++++++++
>  include/hw/cxl/cxl_pci.h            | 156 +++++++++
>  include/hw/pci/pci.h                |  14 +
>  include/hw/pci/pci_bridge.h         |  20 ++
>  include/hw/pci/pci_bus.h            |   7 +
>  include/hw/pci/pci_ids.h            |   1 +
>  include/hw/pci/pcie_port.h          |   2 +
>  qapi/machine.json                   |  18 ++
>  qemu-options.hx                     |  38 +++
>  scripts/device-crash-test           |   1 +
>  softmmu/memory.c                    |   9 +
>  softmmu/vl.c                        |  42 +++
>  tests/data/acpi/q35/CEDT.cxl        | Bin 0 -> 184 bytes
>  tests/data/acpi/q35/DSDT.cxl        | Bin 0 -> 9627 bytes
>  tests/qtest/bios-tables-test.c      |  44 +++
>  tests/qtest/cxl-test.c              | 181 +++++++++++
>  tests/qtest/meson.build             |   5 +
>  57 files changed, 4456 insertions(+), 25 deletions(-)
>  create mode 100644 docs/system/devices/cxl.rst
>  create mode 100644 hw/acpi/cxl-stub.c
>  create mode 100644 hw/acpi/cxl.c
>  create mode 100644 hw/cxl/Kconfig
>  create mode 100644 hw/cxl/cxl-component-utils.c
>  create mode 100644 hw/cxl/cxl-device-utils.c
>  create mode 100644 hw/cxl/cxl-host-stubs.c
>  create mode 100644 hw/cxl/cxl-host.c
>  create mode 100644 hw/cxl/cxl-mailbox-utils.c
>  create mode 100644 hw/cxl/meson.build
>  create mode 100644 hw/mem/cxl_type3.c
>  create mode 100644 hw/pci-bridge/cxl_downstream.c
>  create mode 100644 hw/pci-bridge/cxl_root_port.c
>  create mode 100644 hw/pci-bridge/cxl_upstream.c
>  create mode 100644 include/hw/acpi/cxl.h
>  create mode 100644 include/hw/cxl/cxl.h
>  create mode 100644 include/hw/cxl/cxl_component.h
>  create mode 100644 include/hw/cxl/cxl_device.h
>  create mode 100644 include/hw/cxl/cxl_pci.h
>  create mode 100644 tests/data/acpi/q35/CEDT.cxl
>  create mode 100644 tests/data/acpi/q35/DSDT.cxl
>  create mode 100644 tests/qtest/cxl-test.c
> 
> -- 
> 2.32.0



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-06 21:33   ` Michael S. Tsirkin
@ 2022-03-07  9:39     ` Jonathan Cameron
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-07  9:39 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Maydell, Ben Widawsky, qemu-devel, Samarth Saxena,
	Chris Browy, linuxarm, linux-cxl, Markus Armbruster,
	Shreyas Shah, Saransh Gupta1, Shameerali Kolothum Thodi,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Peter Xu, David Hildenbrand

On Sun, 6 Mar 2022 16:33:40 -0500
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Sun, Mar 06, 2022 at 05:40:51PM +0000, Jonathan Cameron wrote:
> > Ideally I'd love it if we could start picking up the earlier
> > sections of this series as I think those have been reasonably
> > well reviewed and should not be particularly controversial.
> > (perhaps up to patch 15 inline with what Michael Tsirkin suggested
> > on v5).  
> 
> Well, true, but given we are entering freeze this will leave
> us with half-baked devices which can't be used.
> At this point, if we can't merge it up to the documentation, then
> I think we should wait until after the release.

Makes sense.

If any of the memory maintainers can take a look at patch 34 that would
be great, as to my mind that patch and the related interleave decoding are
the big unknowns in this set. I just realized I haven't cc'd everyone
I should have for that - I've added them here and will make sure to CC them
all on V8.

Thanks.

Jonathan

> 
> > There is one core memory handling related patch (34) marked as RFC.
> > Whilst its impact seems small to me, I'm not sure it is the best way
> > to meet our requirements wrt interleaving.
> > 
> > Changes since v7:
> > 
> > Thanks to all who have taken a look.
> > Small amount of reordering was necessary due to LSA fix in patch 17.
> > Test moved forwards to patch 22 and so all intermediate patches
> > move -1 in the series.
> > 
> > (New stuff)
> > - Switch support.  Needed to support more interesting topologies.
> > (Ben Widawsky)
> > - Patch 17: Fix reversed condition on presence of LSA that meant these never
> >   got properly initialized. Related change needed to ensure test for cxl_type3
> >   always needs an LSA. We can relax this later when adding volatile memory
> >   support.
> > (Markus Armbruster)
> > - Patch 27: Change -cxl-fixed-memory-window option handling to use
> >   qobject_input_visitor_new_str().  This changed the required handling of
> >   targets parameter to require an array index and hence test and docs updates.
> >   e.g. targets.1=cxl_hb0,targets.2=cxl_hb1
> >   (Patches 38,40,42,43)
> > - Missing structure element docs and version number (optimistic :)
> > (Alex Bennée)
> > - Added Reviewed-by tags.  Thanks!
> > - Series wise: Switch to compiler.h QEMU_BUILD_BUG_ON/MSG QEMU_PACKED
> >   and QEMU_ALIGNED as Alex suggested in patch 20.
> > - Patch 6: Dropped documentation for a non-existent lock.
> >            Added error code suitable for unimplemented commands.
> > 	   Reordered code for better readability.
> > - Patch 9: Reorder as suggested to avoid a goto.
> > - Patch 16: Add LOG_UNIMP message where feature not yet implemented.
> >             Drop "Explain" comment that doesn't explain anything.
> > - Patch 18: Drop pointless void * cast.
> >             Add assertion as suggested (without divide)
> > - Patch 19: Use pstrcpy rather than snprintf for a fixed string.
> >             The compiler.h comment was in this patch but affects a
> > 	    number of other patches as well.
> > - Patch 20: Move structure CXLType3Dev to header when originally
> >             introduced so changes are more obvious in this patch.
> > - Patch 21: Substantial refactor to resolve unclear use of sizeof
> >             on the LSA command header. Now uses a variable length
> > 	    last element so we can use offsetof()
> > - Patch 22: Use g_autoptr() to avoid need for explicit free in tests
> >   	    Similar in later patches.
> > - Patch 29: Minor reorganization as suggested.
> > 	    
> > (Tidy up from me)
> > - Trivial stuff like moving header includes to patch where first used.
> > - Patch 17: Drop ifndef protections from TYPE_CXL_TYPE3_DEV as there
> >             doesn't seem to be a reason.
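[Editor's note: as a concrete illustration of the array-index `targets` syntax described in the Patch 27 changelog entry above, an invocation might look like the following. This is a sketch only: the host-bridge ids (`cxl_hb0`/`cxl_hb1`), bus wiring, and window size are made up for the example, and the `targets.N` indices simply follow the example given in the changelog.]

```shell
# Hypothetical command line; ids and size are illustrative.
qemu-system-x86_64 -machine q35,cxl=on \
    -device pxb-cxl,id=cxl_hb0,bus=pcie.0 \
    -device pxb-cxl,id=cxl_hb1,bus=pcie.0 \
    -cxl-fixed-memory-window size=4G,targets.1=cxl_hb0,targets.2=cxl_hb1
```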
> > 
> > Series organized to allow it to be taken in stages if the maintainers
> > prefer that approach. Most sets end with the addition of appropriate
> > tests (TBD for final set)
> > 
> > Patches 0-15 - CXL PXB
> > Patches 16-22 - Type 3 Device, Root Port
> > Patches 23-40 - ACPI, board elements and interleave decoding to enable x86 hosts
> > Patches 41-42 - arm64 support on virt.
> > Patch 43 - Initial documentation
> > Patches 44-46 - Switch support.
> > 
> > It is proving challenging to get a completely clean bill of health from
> > Gitlab CI, as there seem to be some intermittent failures in common with
> > the main QEMU gitlab: in particular, an ASAN leak error that appears in
> > some upstream CI runs, and build-oss-fuzz timeouts.
> > Results are at http://gitlab.com/jic23/qemu branch cxl-v7-draft-2-for-test,
> > which also includes the DOE/CDAT patches and serial number support that
> > will form part of a future series.
> > 
> > Updated background info:
> > 
> > Looking in particular for:
> > * Review of the PCI interactions
> > * x86 and ARM machine interactions (particularly the memory maps)
> > * Review of the interleaving approach - is the basic idea
> >   acceptable?
> > * Review of the command line interface.
> > * CXL related review welcome but much of that got reviewed
> >   in earlier versions and hasn't changed substantially.
> > 
> > Big TODOs:
> > 
> > * Volatile memory devices (easy but it's more code so left for now).
> > * Hotplug?  May not need much but it's not tested yet!
> > * More tests and tighter verification that values written to hardware
> >   are actually valid - stuff that real hardware would check.
> > * Testing, testing and more testing.  I have been running a basic
> >   set of ARM and x86 tests on this, but there is always room for
> >   more tests and greater automation.
> > * CFMWS flags as requested by Ben.
> > 
> > Why do we want QEMU emulation of CXL?
> > 
> > As Ben stated in V3, QEMU support has been critical to getting OS
> > software written, given the lack of availability of hardware supporting the
> > latest CXL features (coupled with very high demand for support being
> > ready in a timely fashion). What has become clear since Ben's v3
> > is that this situation is an ongoing one. Whilst we can't talk about
> > them yet, CXL 3.0 features and OS support have been prototyped on
> > top of this support and a lot of the ongoing kernel work is being
> > tested against these patches. The kernel CXL mocking code allows
> > some forms of testing, but QEMU provides a more versatile and
> > extensible platform.
> > 
> > Other features on the qemu-list that build on these include PCI-DOE
> > /CDAT support from the Avery Design team, further showing how this
> > code is useful. Whilst not directly related, this is also the test
> > platform for work on PCI IDE/CMA + related DMTF SPDM, as CXL both
> > utilizes and extends those technologies and is likely to be an early
> > adopter.
> > Refs:
> > CMA Kernel: https://lore.kernel.org/all/20210804161839.3492053-1-Jonathan.Cameron@huawei.com/
> > CMA Qemu: https://lore.kernel.org/qemu-devel/1624665723-5169-1-git-send-email-cbrowy@avery-design.com/
> > DOE Qemu: https://lore.kernel.org/qemu-devel/1623329999-15662-1-git-send-email-cbrowy@avery-design.com/
> > 
> > As can be seen, there is non-trivial interaction with other areas of
> > QEMU, particularly PCI, and keeping this set up to date is proving
> > a burden we'd rather do without :)
> > 
> > Ben mentioned a few other good reasons in v3:
> > https://lore.kernel.org/qemu-devel/20210202005948.241655-1-ben.widawsky@intel.com/
> > 
> > What we have here is about what you need for it to be useful for testing
> > current kernel code.  Note the kernel code is moving fast, so
> > since v4 some features have been introduced that we don't yet support in
> > QEMU (e.g. use of the PCIe serial number extended capability).
> > 
> > All comments welcome.
> > 
> > Additional info that was here in v5 is now in the documentation patch.
> > 
> > Thanks,
> > 
> > Jonathan
> > 
> > Ben Widawsky (24):
> >   hw/pci/cxl: Add a CXL component type (interface)
> >   hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5)
> >   hw/cxl/device: Introduce a CXL device (8.2.8)
> >   hw/cxl/device: Implement the CAP array (8.2.8.1-2)
> >   hw/cxl/device: Implement basic mailbox (8.2.8.4)
> >   hw/cxl/device: Add memory device utilities
> >   hw/cxl/device: Add cheap EVENTS implementation (8.2.9.1)
> >   hw/cxl/device: Timestamp implementation (8.2.9.3)
> >   hw/cxl/device: Add log commands (8.2.9.4) + CEL
> >   hw/pxb: Use a type for realizing expanders
> >   hw/pci/cxl: Create a CXL bus type
> >   hw/pxb: Allow creation of a CXL PXB (host bridge)
> >   hw/cxl/rp: Add a root port
> >   hw/cxl/device: Add a memory device (8.2.8.5)
> >   hw/cxl/device: Implement MMIO HDM decoding (8.2.5.12)
> >   hw/cxl/device: Add some trivial commands
> >   hw/cxl/device: Plumb real Label Storage Area (LSA) sizing
> >   hw/cxl/device: Implement get/set Label Storage Area (LSA)
> >   hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142)
> >   acpi/cxl: Add _OSC implementation (9.14.2)
> >   acpi/cxl: Create the CEDT (9.14.1)
> >   acpi/cxl: Introduce CFMWS structures in CEDT
> >   hw/cxl/component Add a dumb HDM decoder handler
> >   qtest/cxl: Add more complex test cases with CFMWs
> > 
> > Jonathan Cameron (22):
> >   MAINTAINERS: Add entry for Compute Express Link Emulation
> >   cxl: Machine level control on whether CXL support is enabled
> >   qtest/cxl: Introduce initial test for pxb-cxl only.
> >   qtests/cxl: Add initial root port and CXL type3 tests
> >   hw/cxl/component: Add utils for interleave parameter encoding/decoding
> >   hw/cxl/host: Add support for CXL Fixed Memory Windows.
> >   hw/pci-host/gpex-acpi: Add support for dsdt construction for pxb-cxl
> >   pci/pcie_port: Add pci_find_port_by_pn()
> >   CXL/cxl_component: Add cxl_get_hb_cstate()
> >   mem/cxl_type3: Add read and write functions for associated hostmem.
> >   cxl/cxl-host: Add memops for CFMWS region.
> >   RFC: softmmu/memory: Add ops to memory_region_ram_init_from_file
> >   i386/pc: Enable CXL fixed memory windows
> >   tests/acpi: q35: Allow addition of a CXL test.
> >   qtests/bios-tables-test: Add a test for CXL emulation.
> >   tests/acpi: Add tables for CXL emulation.
> >   hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances
> >     pxb-cxl
> >   qtest/cxl: Add aarch64 virt test for CXL
> >   docs/cxl: Add initial Compute eXpress Link (CXL) documentation.
> >   pci-bridge/cxl_upstream: Add a CXL switch upstream port
> >   pci-bridge/cxl_downstream: Add a CXL switch downstream port
> >   cxl/cxl-host: Support interleave decoding with one level of switches.
> > 
> >  MAINTAINERS                         |   7 +
> >  docs/system/device-emulation.rst    |   1 +
> >  docs/system/devices/cxl.rst         | 302 +++++++++++++++++
> >  hw/Kconfig                          |   1 +
> >  hw/acpi/Kconfig                     |   5 +
> >  hw/acpi/cxl-stub.c                  |  12 +
> >  hw/acpi/cxl.c                       | 231 +++++++++++++
> >  hw/acpi/meson.build                 |   4 +-
> >  hw/arm/Kconfig                      |   1 +
> >  hw/arm/virt-acpi-build.c            |  33 ++
> >  hw/arm/virt.c                       |  40 ++-
> >  hw/core/machine.c                   |  28 ++
> >  hw/cxl/Kconfig                      |   3 +
> >  hw/cxl/cxl-component-utils.c        | 284 ++++++++++++++++
> >  hw/cxl/cxl-device-utils.c           | 265 +++++++++++++++
> >  hw/cxl/cxl-host-stubs.c             |  16 +
> >  hw/cxl/cxl-host.c                   | 262 +++++++++++++++
> >  hw/cxl/cxl-mailbox-utils.c          | 485 ++++++++++++++++++++++++++++
> >  hw/cxl/meson.build                  |  12 +
> >  hw/i386/acpi-build.c                |  57 +++-
> >  hw/i386/pc.c                        |  57 +++-
> >  hw/mem/Kconfig                      |   5 +
> >  hw/mem/cxl_type3.c                  | 352 ++++++++++++++++++++
> >  hw/mem/meson.build                  |   1 +
> >  hw/meson.build                      |   1 +
> >  hw/pci-bridge/Kconfig               |   5 +
> >  hw/pci-bridge/cxl_downstream.c      | 229 +++++++++++++
> >  hw/pci-bridge/cxl_root_port.c       | 231 +++++++++++++
> >  hw/pci-bridge/cxl_upstream.c        | 206 ++++++++++++
> >  hw/pci-bridge/meson.build           |   1 +
> >  hw/pci-bridge/pci_expander_bridge.c | 172 +++++++++-
> >  hw/pci-bridge/pcie_root_port.c      |   6 +-
> >  hw/pci-host/gpex-acpi.c             |  20 +-
> >  hw/pci/pci.c                        |  21 +-
> >  hw/pci/pcie_port.c                  |  25 ++
> >  include/hw/acpi/cxl.h               |  28 ++
> >  include/hw/arm/virt.h               |   1 +
> >  include/hw/boards.h                 |   2 +
> >  include/hw/cxl/cxl.h                |  54 ++++
> >  include/hw/cxl/cxl_component.h      | 207 ++++++++++++
> >  include/hw/cxl/cxl_device.h         | 270 ++++++++++++++++
> >  include/hw/cxl/cxl_pci.h            | 156 +++++++++
> >  include/hw/pci/pci.h                |  14 +
> >  include/hw/pci/pci_bridge.h         |  20 ++
> >  include/hw/pci/pci_bus.h            |   7 +
> >  include/hw/pci/pci_ids.h            |   1 +
> >  include/hw/pci/pcie_port.h          |   2 +
> >  qapi/machine.json                   |  18 ++
> >  qemu-options.hx                     |  38 +++
> >  scripts/device-crash-test           |   1 +
> >  softmmu/memory.c                    |   9 +
> >  softmmu/vl.c                        |  42 +++
> >  tests/data/acpi/q35/CEDT.cxl        | Bin 0 -> 184 bytes
> >  tests/data/acpi/q35/DSDT.cxl        | Bin 0 -> 9627 bytes
> >  tests/qtest/bios-tables-test.c      |  44 +++
> >  tests/qtest/cxl-test.c              | 181 +++++++++++
> >  tests/qtest/meson.build             |   5 +
> >  57 files changed, 4456 insertions(+), 25 deletions(-)
> >  create mode 100644 docs/system/devices/cxl.rst
> >  create mode 100644 hw/acpi/cxl-stub.c
> >  create mode 100644 hw/acpi/cxl.c
> >  create mode 100644 hw/cxl/Kconfig
> >  create mode 100644 hw/cxl/cxl-component-utils.c
> >  create mode 100644 hw/cxl/cxl-device-utils.c
> >  create mode 100644 hw/cxl/cxl-host-stubs.c
> >  create mode 100644 hw/cxl/cxl-host.c
> >  create mode 100644 hw/cxl/cxl-mailbox-utils.c
> >  create mode 100644 hw/cxl/meson.build
> >  create mode 100644 hw/mem/cxl_type3.c
> >  create mode 100644 hw/pci-bridge/cxl_downstream.c
> >  create mode 100644 hw/pci-bridge/cxl_root_port.c
> >  create mode 100644 hw/pci-bridge/cxl_upstream.c
> >  create mode 100644 include/hw/acpi/cxl.h
> >  create mode 100644 include/hw/cxl/cxl.h
> >  create mode 100644 include/hw/cxl/cxl_component.h
> >  create mode 100644 include/hw/cxl/cxl_device.h
> >  create mode 100644 include/hw/cxl/cxl_pci.h
> >  create mode 100644 tests/data/acpi/q35/CEDT.cxl
> >  create mode 100644 tests/data/acpi/q35/DSDT.cxl
> >  create mode 100644 tests/qtest/cxl-test.c
> > 
> > -- 
> > 2.32.0  
> 
> 



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
@ 2022-03-07  9:39     ` Jonathan Cameron
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-07  9:39 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Maydell, Ben Widawsky, qemu-devel, Samarth Saxena,
	Chris Browy, linuxarm, linux-cxl, Markus Armbruster,
	Shreyas Shah, Saransh Gupta1, Shameerali Kolothum Thodi,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Peter Xu, David Hildenbrand

On Sun, 6 Mar 2022 16:33:40 -0500
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Sun, Mar 06, 2022 at 05:40:51PM +0000, Jonathan Cameron wrote:
> > Ideally I'd love it if we could start picking up the earlier
> > sections of this series as I think those have been reasonably
> > well reviewed and should not be particularly controversial.
> > (perhaps up to patch 15 inline with what Michael Tsirkin suggested
> > on v5).  
> 
> Well true but given we are entering freeze this will leave
> us with a half baked devices which cant be used.
> At this point if we can't merge it up to documentation then
> I think we should wait until after the release.

Makes sense.

If any of the memory maintainers can take a look at patch 34 that would
be great as to my mind that and the related interleave decoding in general is
the big unknown in this set. I just realized I haven't cc'd everyone
I should have for that - added them here and I'll make sure to CC them
all on V8.

Thanks.

Jonathan

> 
> > There is one core memory handling related patch (34) marked as RFC.
> > Whilst it's impact seems small to me, I'm not sure it is the best way
> > to meet our requirements wrt interleaving.
> > 
> > Changes since v7:
> > 
> > Thanks to all who have taken a look.
> > Small amount of reordering was necessary due to LSA fix in patch 17.
> > Test moved forwards to patch 22 and so all intermediate patches
> > move -1 in the series.
> > 
> > (New stuff)
> > - Switch support.  Needed to support more interesting topologies.
> > (Ben Widawsky)
> > - Patch 17: Fix reversed condition on presence of LSA that meant these never
> >   got properly initialized. Related change needed to ensure test for cxl_type3
> >   always needs an LSA. We can relax this later when adding volatile memory
> >   support.
> > (Markus Armbuster)
> > - Patch 27: Change -cxl-fixed-memory-window option handling to use
> >   qobject_input_visitor_new_str().  This changed the required handling of
> >   targets parameter to require an array index and hence test and docs updates.
> >   e.g. targets.1=cxl_hb0,targets.2=cxl_hb1
> >   (Patches 38,40,42,43)
> > - Missing structure element docs and version number (optimisitic :)
> > (Alex Bennée)
> > - Added Reviewed-by tags.  Thanks!.
> > - Series wise: Switch to compiler.h QEMU_BUILD_BUG_ON/MSG QEMU_PACKED
> >   and QEMU_ALIGNED as Alex suggested in patch 20.
> > - Patch 6: Dropped documentation for a non-existent lock.
> >            Added error code suitable for unimplemented commands.
> > 	   Reordered code for better readability.
> > - Patch 9: Reorder as suggested to avoid a goto.
> > - Patch 16: Add LOG_UNIMP message where feature not yet implemented.
> >             Drop "Explain" comment that doesn't explain anything.
> > - Patch 18: Drop pointless void * cast.
> >             Add assertion as suggested (without divide)
> > - Patch 19: Use pstrcpy rather than snprintf for a fixed string.
> >             The compiler.h comment was in this patch but affects a
> > 	    number of other patches as well.
> > - Patch 20: Move structure CXLType3Dev to header when originally
> >             introduced so changes are more obvious in this patch.
> > - Patch 21: Substantial refactor to resolve unclear use of sizeof
> >             on the LSA command header. Now uses a variable length
> > 	    last element so we can use offsetof()
> > - Patch 22: Use g_autoptr() to avoid need for explicit free in tests
> >   	    Similar in later patches.
> > - Patch 29: Minor reorganziation as suggested.
> > 	    
> > (Tidy up from me)
> > - Trivial stuff like moving header includes to patch where first used.
> > - Patch 17: Drop ifndef protections from TYPE_CXL_TYPE3_DEV as there
> >             doesn't seem to be a reason.
> > 
> > Series organized to allow it to be taken in stages if the maintainers
> > prefer that approach. Most sets end with the addition of appropriate
> > tests (TBD for final set)
> > 
> > Patches 0-15 - CXL PXB
> > Patches 16-22 - Type 3 Device, Root Port
> > Patches 23-40 - ACPI, board elements and interleave decoding to enable x86 hosts
> > Patches 41-42 - arm64 support on virt.
> > Patch 43 - Initial documentation
> > Patches 44-46 - Switch support.
> > 
> > Gitlab CI is proving challenging to get a completely clean bill of health
> > as there seem to be some intermittent failures in common with the
> > main QEMU gitlab. In particular an ASAN leak error that appears in some
> > upstream CI runs and build-oss-fuzz timeouts.
> > Results at http://gitlab.com/jic23/qemu cxl-v7-draft-2-for-test
> > which also includes the DOE/CDAT patches serial number support which
> > will form part of a future series.
> > 
> > Updated background info:
> > 
> > Looking in particular for:
> > * Review of the PCI interactions
> > * x86 and ARM machine interactions (particularly the memory maps)
> > * Review of the interleaving approach - is the basic idea
> >   acceptable?
> > * Review of the command line interface.
> > * CXL related review welcome but much of that got reviewed
> >   in earlier versions and hasn't changed substantially.
> > 
> > Big TODOs:
> > 
> > * Volatile memory devices (easy but it's more code so left for now).
> > * Hotplug?  May not need much but it's not tested yet!
> > * More tests and tighter verification that values written to hardware
> >   are actually valid - stuff that real hardware would check.
> > * Testing, testing and more testing.  I have been running a basic
> >   set of ARM and x86 tests on this, but there is always room for
> >   more tests and greater automation.
> > * CFMWS flags as requested by Ben.
> > 
> > Why do we want QEMU emulation of CXL?
> > 
> > As Ben stated in V3, QEMU support has been critical to getting OS
> > software written given lack of availability of hardware supporting the
> > latest CXL features (coupled with very high demand for support being
> > ready in a timely fashion). What has become clear since Ben's v3
> > is that situation is a continuous one. Whilst we can't talk about
> > them yet, CXL 3.0 features and OS support have been prototyped on
> > top of this support and a lot of the ongoing kernel work is being
> > tested against these patches. The kernel CXL mocking code allows
> > some forms of testing, but QEMU provides a more versatile and
> > exensible platform.
> > 
> > Other features on the qemu-list that build on these include PCI-DOE
> > /CDAT support from the Avery Design team further showing how this
> > code is useful. Whilst not directly related this is also the test
> > platform for work on PCI IDE/CMA + related DMTF SPDM as CXL both
> > utilizes and extends those technologies and is likely to be an early
> > adopter.
> > Refs:
> > CMA Kernel: https://lore.kernel.org/all/20210804161839.3492053-1-Jonathan.Cameron@huawei.com/
> > CMA Qemu: https://lore.kernel.org/qemu-devel/1624665723-5169-1-git-send-email-cbrowy@avery-design.com/
> > DOE Qemu: https://lore.kernel.org/qemu-devel/1623329999-15662-1-git-send-email-cbrowy@avery-design.com/
> > 
> > As can be seen there is non trivial interaction with other areas of
> > Qemu, particularly PCI and keeping this set up to date is proving
> > a burden we'd rather do without :)
> > 
> > Ben mentioned a few other good reasons in v3:
> > https://lore.kernel.org/qemu-devel/20210202005948.241655-1-ben.widawsky@intel.com/
> > 
> > What we have here is about what you need for it to be useful for testing
> > current kernel code.  Note the kernel code is moving fast, so since
> > v4 some features have been introduced that we don't yet support in
> > QEMU (e.g. use of the PCIe serial number extended capability).
> > 
> > All comments welcome.
> > 
> > Additional info that was here in v5 is now in the documentation patch.
> > 
> > Thanks,
> > 
> > Jonathan
> > 
> > Ben Widawsky (24):
> >   hw/pci/cxl: Add a CXL component type (interface)
> >   hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5)
> >   hw/cxl/device: Introduce a CXL device (8.2.8)
> >   hw/cxl/device: Implement the CAP array (8.2.8.1-2)
> >   hw/cxl/device: Implement basic mailbox (8.2.8.4)
> >   hw/cxl/device: Add memory device utilities
> >   hw/cxl/device: Add cheap EVENTS implementation (8.2.9.1)
> >   hw/cxl/device: Timestamp implementation (8.2.9.3)
> >   hw/cxl/device: Add log commands (8.2.9.4) + CEL
> >   hw/pxb: Use a type for realizing expanders
> >   hw/pci/cxl: Create a CXL bus type
> >   hw/pxb: Allow creation of a CXL PXB (host bridge)
> >   hw/cxl/rp: Add a root port
> >   hw/cxl/device: Add a memory device (8.2.8.5)
> >   hw/cxl/device: Implement MMIO HDM decoding (8.2.5.12)
> >   hw/cxl/device: Add some trivial commands
> >   hw/cxl/device: Plumb real Label Storage Area (LSA) sizing
> >   hw/cxl/device: Implement get/set Label Storage Area (LSA)
> >   hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142)
> >   acpi/cxl: Add _OSC implementation (9.14.2)
> >   acpi/cxl: Create the CEDT (9.14.1)
> >   acpi/cxl: Introduce CFMWS structures in CEDT
> >   hw/cxl/component Add a dumb HDM decoder handler
> >   qtest/cxl: Add more complex test cases with CFMWs
> > 
> > Jonathan Cameron (22):
> >   MAINTAINERS: Add entry for Compute Express Link Emulation
> >   cxl: Machine level control on whether CXL support is enabled
> >   qtest/cxl: Introduce initial test for pxb-cxl only.
> >   qtests/cxl: Add initial root port and CXL type3 tests
> >   hw/cxl/component: Add utils for interleave parameter encoding/decoding
> >   hw/cxl/host: Add support for CXL Fixed Memory Windows.
> >   hw/pci-host/gpex-acpi: Add support for dsdt construction for pxb-cxl
> >   pci/pcie_port: Add pci_find_port_by_pn()
> >   CXL/cxl_component: Add cxl_get_hb_cstate()
> >   mem/cxl_type3: Add read and write functions for associated hostmem.
> >   cxl/cxl-host: Add memops for CFMWS region.
> >   RFC: softmmu/memory: Add ops to memory_region_ram_init_from_file
> >   i386/pc: Enable CXL fixed memory windows
> >   tests/acpi: q35: Allow addition of a CXL test.
> >   qtests/bios-tables-test: Add a test for CXL emulation.
> >   tests/acpi: Add tables for CXL emulation.
> >   hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances
> >     pxb-cxl
> >   qtest/cxl: Add aarch64 virt test for CXL
> >   docs/cxl: Add initial Compute eXpress Link (CXL) documentation.
> >   pci-bridge/cxl_upstream: Add a CXL switch upstream port
> >   pci-bridge/cxl_downstream: Add a CXL switch downstream port
> >   cxl/cxl-host: Support interleave decoding with one level of switches.
> > 
> >  MAINTAINERS                         |   7 +
> >  docs/system/device-emulation.rst    |   1 +
> >  docs/system/devices/cxl.rst         | 302 +++++++++++++++++
> >  hw/Kconfig                          |   1 +
> >  hw/acpi/Kconfig                     |   5 +
> >  hw/acpi/cxl-stub.c                  |  12 +
> >  hw/acpi/cxl.c                       | 231 +++++++++++++
> >  hw/acpi/meson.build                 |   4 +-
> >  hw/arm/Kconfig                      |   1 +
> >  hw/arm/virt-acpi-build.c            |  33 ++
> >  hw/arm/virt.c                       |  40 ++-
> >  hw/core/machine.c                   |  28 ++
> >  hw/cxl/Kconfig                      |   3 +
> >  hw/cxl/cxl-component-utils.c        | 284 ++++++++++++++++
> >  hw/cxl/cxl-device-utils.c           | 265 +++++++++++++++
> >  hw/cxl/cxl-host-stubs.c             |  16 +
> >  hw/cxl/cxl-host.c                   | 262 +++++++++++++++
> >  hw/cxl/cxl-mailbox-utils.c          | 485 ++++++++++++++++++++++++++++
> >  hw/cxl/meson.build                  |  12 +
> >  hw/i386/acpi-build.c                |  57 +++-
> >  hw/i386/pc.c                        |  57 +++-
> >  hw/mem/Kconfig                      |   5 +
> >  hw/mem/cxl_type3.c                  | 352 ++++++++++++++++++++
> >  hw/mem/meson.build                  |   1 +
> >  hw/meson.build                      |   1 +
> >  hw/pci-bridge/Kconfig               |   5 +
> >  hw/pci-bridge/cxl_downstream.c      | 229 +++++++++++++
> >  hw/pci-bridge/cxl_root_port.c       | 231 +++++++++++++
> >  hw/pci-bridge/cxl_upstream.c        | 206 ++++++++++++
> >  hw/pci-bridge/meson.build           |   1 +
> >  hw/pci-bridge/pci_expander_bridge.c | 172 +++++++++-
> >  hw/pci-bridge/pcie_root_port.c      |   6 +-
> >  hw/pci-host/gpex-acpi.c             |  20 +-
> >  hw/pci/pci.c                        |  21 +-
> >  hw/pci/pcie_port.c                  |  25 ++
> >  include/hw/acpi/cxl.h               |  28 ++
> >  include/hw/arm/virt.h               |   1 +
> >  include/hw/boards.h                 |   2 +
> >  include/hw/cxl/cxl.h                |  54 ++++
> >  include/hw/cxl/cxl_component.h      | 207 ++++++++++++
> >  include/hw/cxl/cxl_device.h         | 270 ++++++++++++++++
> >  include/hw/cxl/cxl_pci.h            | 156 +++++++++
> >  include/hw/pci/pci.h                |  14 +
> >  include/hw/pci/pci_bridge.h         |  20 ++
> >  include/hw/pci/pci_bus.h            |   7 +
> >  include/hw/pci/pci_ids.h            |   1 +
> >  include/hw/pci/pcie_port.h          |   2 +
> >  qapi/machine.json                   |  18 ++
> >  qemu-options.hx                     |  38 +++
> >  scripts/device-crash-test           |   1 +
> >  softmmu/memory.c                    |   9 +
> >  softmmu/vl.c                        |  42 +++
> >  tests/data/acpi/q35/CEDT.cxl        | Bin 0 -> 184 bytes
> >  tests/data/acpi/q35/DSDT.cxl        | Bin 0 -> 9627 bytes
> >  tests/qtest/bios-tables-test.c      |  44 +++
> >  tests/qtest/cxl-test.c              | 181 +++++++++++
> >  tests/qtest/meson.build             |   5 +
> >  57 files changed, 4456 insertions(+), 25 deletions(-)
> >  create mode 100644 docs/system/devices/cxl.rst
> >  create mode 100644 hw/acpi/cxl-stub.c
> >  create mode 100644 hw/acpi/cxl.c
> >  create mode 100644 hw/cxl/Kconfig
> >  create mode 100644 hw/cxl/cxl-component-utils.c
> >  create mode 100644 hw/cxl/cxl-device-utils.c
> >  create mode 100644 hw/cxl/cxl-host-stubs.c
> >  create mode 100644 hw/cxl/cxl-host.c
> >  create mode 100644 hw/cxl/cxl-mailbox-utils.c
> >  create mode 100644 hw/cxl/meson.build
> >  create mode 100644 hw/mem/cxl_type3.c
> >  create mode 100644 hw/pci-bridge/cxl_downstream.c
> >  create mode 100644 hw/pci-bridge/cxl_root_port.c
> >  create mode 100644 hw/pci-bridge/cxl_upstream.c
> >  create mode 100644 include/hw/acpi/cxl.h
> >  create mode 100644 include/hw/cxl/cxl.h
> >  create mode 100644 include/hw/cxl/cxl_component.h
> >  create mode 100644 include/hw/cxl/cxl_device.h
> >  create mode 100644 include/hw/cxl/cxl_pci.h
> >  create mode 100644 tests/data/acpi/q35/CEDT.cxl
> >  create mode 100644 tests/data/acpi/q35/DSDT.cxl
> >  create mode 100644 tests/qtest/cxl-test.c
> > 
> > -- 
> > 2.32.0  
> 
> 



* Re: [PATCH v7 24/46] acpi/cxl: Add _OSC implementation (9.14.2)
  2022-03-06 21:31     ` Michael S. Tsirkin
@ 2022-03-07 17:01       ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-07 17:01 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linuxarm, qemu-devel, Alex Bennée, Marcel Apfelbaum,
	Igor Mammedov, Markus Armbruster, linux-cxl, Ben Widawsky,
	Peter Maydell, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé,
	Saransh Gupta1, Shreyas Shah, Chris Browy, Samarth Saxena,
	Dan Williams

On Sun, 6 Mar 2022 16:31:05 -0500
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Sun, Mar 06, 2022 at 05:41:15PM +0000, Jonathan Cameron wrote:
> > From: Ben Widawsky <ben.widawsky@intel.com>
> > 
> > CXL 2.0 specification adds 2 new dwords to the existing _OSC definition
> > from PCIe. The new dwords are accessed with a new uuid. This
> > implementation supports what is in the specification.
> > 
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

Question for Ben inline.

> > ---
> >  hw/acpi/Kconfig       |   5 ++
> >  hw/acpi/cxl-stub.c    |  12 +++++
> >  hw/acpi/cxl.c         | 104 ++++++++++++++++++++++++++++++++++++++++++
> >  hw/acpi/meson.build   |   4 +-
> >  hw/i386/acpi-build.c  |  15 ++++--
> >  include/hw/acpi/cxl.h |  23 ++++++++++
> >  6 files changed, 157 insertions(+), 6 deletions(-)
> > 
> > diff --git a/hw/acpi/Kconfig b/hw/acpi/Kconfig
> > index 19caebde6c..3703aca212 100644
> > --- a/hw/acpi/Kconfig
> > +++ b/hw/acpi/Kconfig
> > @@ -5,6 +5,7 @@ config ACPI_X86
> >      bool
> >      select ACPI
> >      select ACPI_NVDIMM
> > +    select ACPI_CXL
> >      select ACPI_CPU_HOTPLUG
> >      select ACPI_MEMORY_HOTPLUG
> >      select ACPI_HMAT
> > @@ -66,3 +67,7 @@ config ACPI_ERST
> >      bool
> >      default y
> >      depends on ACPI && PCI
> > +
> > +config ACPI_CXL
> > +    bool
> > +    depends on ACPI
> > diff --git a/hw/acpi/cxl-stub.c b/hw/acpi/cxl-stub.c
> > new file mode 100644
> > index 0000000000..15bc21076b
> > --- /dev/null
> > +++ b/hw/acpi/cxl-stub.c
> > @@ -0,0 +1,12 @@
> > +
> > +/*
> > + * Stubs for ACPI platforms that don't support CXl
> > + */
> > +#include "qemu/osdep.h"
> > +#include "hw/acpi/aml-build.h"
> > +#include "hw/acpi/cxl.h"
> > +
> > +void build_cxl_osc_method(Aml *dev)
> > +{
> > +    g_assert_not_reached();
> > +}
> > diff --git a/hw/acpi/cxl.c b/hw/acpi/cxl.c
> > new file mode 100644
> > index 0000000000..7124d5a1a3
> > --- /dev/null
> > +++ b/hw/acpi/cxl.c
> > @@ -0,0 +1,104 @@
> > +/*
> > + * CXL ACPI Implementation
> > + *
> > + * Copyright(C) 2020 Intel Corporation.
> > + *
> > + * This library is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2 of the License, or (at your option) any later version.
> > + *
> > + * This library is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with this library; if not, see <http://www.gnu.org/licenses/>
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "hw/cxl/cxl.h"
> > +#include "hw/acpi/acpi.h"
> > +#include "hw/acpi/aml-build.h"
> > +#include "hw/acpi/bios-linker-loader.h"
> > +#include "hw/acpi/cxl.h"
> > +#include "qapi/error.h"
> > +#include "qemu/uuid.h"
> > +
> > +static Aml *__build_cxl_osc_method(void)
> > +{
> > +    Aml *method, *if_uuid, *else_uuid, *if_arg1_not_1, *if_cxl, *if_caps_masked;
> > +    Aml *a_ctrl = aml_local(0);
> > +    Aml *a_cdw1 = aml_name("CDW1");
> > +
> > +    method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
> > +    aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
> > +
> > +    /* 9.14.2.1.4 */  
> 
> List spec name and version pls?

Added.

> 
> > +    if_uuid = aml_if(
> > +        aml_lor(aml_equal(aml_arg(0),
> > +                          aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")),
> > +                aml_equal(aml_arg(0),
> > +                          aml_touuid("68F2D50B-C469-4D8A-BD3D-941A103FD3FC"))));
> > +    aml_append(if_uuid, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
> > +    aml_append(if_uuid, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
> > +
> > +    aml_append(if_uuid, aml_store(aml_name("CDW3"), a_ctrl));
> > +
> > +    /* This is all the same as what's used for PCIe */  
> 
> Referring to what exactly?
> Better to also document the meaning.

Wise advice.  Having added documentation, it is clear this was strangely ordered
and contained at least one bug.  I guess I took this a bit too much on trust.

> 
> 
> > +    aml_append(if_uuid,
> > +               aml_and(aml_name("CTRL"), aml_int(0x1F), aml_name("CTRL")));

This should be
		    aml_and(a_ctrl, aml_int(0x1F), a_ctrl)
as we haven't stored anything to CTRL yet.  The a_ctrl variable ends up named
Local0 when disassembled, which isn't particularly intuitive.

> > +
> > +    if_arg1_not_1 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(0x1))));
> > +    /* Unknown revision */
> > +    aml_append(if_arg1_not_1, aml_or(a_cdw1, aml_int(0x08), a_cdw1));
> > +    aml_append(if_uuid, if_arg1_not_1);
> > +
> > +    if_caps_masked = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
> > +    /* Capability bits were masked */
> > +    aml_append(if_caps_masked, aml_or(a_cdw1, aml_int(0x10), a_cdw1));
> > +    aml_append(if_uuid, if_caps_masked);
> > +
> > +    aml_append(if_uuid, aml_store(aml_name("CDW2"), aml_name("SUPP")));
> > +    aml_append(if_uuid, aml_store(aml_name("CDW3"), aml_name("CTRL")));
> > +
> > +    if_cxl = aml_if(aml_equal(
> > +        aml_arg(0), aml_touuid("68F2D50B-C469-4D8A-BD3D-941A103FD3FC")));
> > +    /* CXL support field */
> > +    aml_append(if_cxl, aml_create_dword_field(aml_arg(3), aml_int(12), "CDW4"));
> > +    /* CXL capabilities */
> > +    aml_append(if_cxl, aml_create_dword_field(aml_arg(3), aml_int(16), "CDW5"));
> > +    aml_append(if_cxl, aml_store(aml_name("CDW4"), aml_name("SUPC")));
> > +    aml_append(if_cxl, aml_store(aml_name("CDW5"), aml_name("CTRC")));
> > +
> > +    /* CXL 2.0 Port/Device Register access */

Ben, this seems to be wrong to me.  CDW5 is the CXL control register
and the only defined bit in the 2.0 spec (and no ECR or errata seems to
have changed it) is bit 0 as CXL Memory Error Reporting Control.
We can't control access to the Port/Device registers as that's only
specified in the 'supported' value in CDW4.

We also shouldn't be setting bits the OS didn't ask for.

Obviously it was a long time ago now, but can you recall what was intended here?

Thanks,

Jonathan

 
> > +    aml_append(if_cxl,
> > +               aml_or(aml_name("CDW5"), aml_int(0x1), aml_name("CDW5")));
> > +    aml_append(if_uuid, if_cxl);
> > +
> > +    /* Update DWORD3 (the return value) */
> > +    aml_append(if_uuid, aml_store(a_ctrl, aml_name("CDW3")));
> > +
> > +    aml_append(if_uuid, aml_return(aml_arg(3)));
> > +    aml_append(method, if_uuid);
> > +
> > +    else_uuid = aml_else();
> > +
> > +    /* unrecognized uuid */
> > +    aml_append(else_uuid,
> > +               aml_or(aml_name("CDW1"), aml_int(0x4), aml_name("CDW1")));
> > +    aml_append(else_uuid, aml_return(aml_arg(3)));
> > +    aml_append(method, else_uuid);
> > +
> > +    return method;
> > +}
> > +
> > +void build_cxl_osc_method(Aml *dev)
> > +{
> > +    aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
> > +    aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
> > +    aml_append(dev, aml_name_decl("SUPC", aml_int(0)));
> > +    aml_append(dev, aml_name_decl("CTRC", aml_int(0)));
> > +    aml_append(dev, __build_cxl_osc_method());
> > +}
> > diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
> > index 8bea2e6933..cea2f5f93a 100644
> > --- a/hw/acpi/meson.build
> > +++ b/hw/acpi/meson.build
> > @@ -13,6 +13,7 @@ acpi_ss.add(when: 'CONFIG_ACPI_MEMORY_HOTPLUG', if_false: files('acpi-mem-hotplu
> >  acpi_ss.add(when: 'CONFIG_ACPI_NVDIMM', if_true: files('nvdimm.c'))
> >  acpi_ss.add(when: 'CONFIG_ACPI_NVDIMM', if_false: files('acpi-nvdimm-stub.c'))
> >  acpi_ss.add(when: 'CONFIG_ACPI_PCI', if_true: files('pci.c'))
> > +acpi_ss.add(when: 'CONFIG_ACPI_CXL', if_true: files('cxl.c'), if_false: files('cxl-stub.c'))
> >  acpi_ss.add(when: 'CONFIG_ACPI_VMGENID', if_true: files('vmgenid.c'))
> >  acpi_ss.add(when: 'CONFIG_ACPI_HW_REDUCED', if_true: files('generic_event_device.c'))
> >  acpi_ss.add(when: 'CONFIG_ACPI_HMAT', if_true: files('hmat.c'))
> > @@ -33,4 +34,5 @@ softmmu_ss.add_all(when: 'CONFIG_ACPI', if_true: acpi_ss)
> >  softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('acpi-stub.c', 'aml-build-stub.c',
> >                                                    'acpi-x86-stub.c', 'ipmi-stub.c', 'ghes-stub.c',
> >                                                    'acpi-mem-hotplug-stub.c', 'acpi-cpu-hotplug-stub.c',
> > -                                                  'acpi-pci-hotplug-stub.c', 'acpi-nvdimm-stub.c'))
> > +                                                  'acpi-pci-hotplug-stub.c', 'acpi-nvdimm-stub.c',
> > +                                                  'cxl-stub.c'))
> > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> > index 0a28dd6d4e..b5a4b663f2 100644
> > --- a/hw/i386/acpi-build.c
> > +++ b/hw/i386/acpi-build.c
> > @@ -66,6 +66,7 @@
> >  #include "hw/acpi/aml-build.h"
> >  #include "hw/acpi/utils.h"
> >  #include "hw/acpi/pci.h"
> > +#include "hw/acpi/cxl.h"
> >  
> >  #include "qom/qom-qobject.h"
> >  #include "hw/i386/amd_iommu.h"
> > @@ -1574,11 +1575,15 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
> >              aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> >              aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
> >              if (pci_bus_is_cxl(bus)) {
> > -                aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
> > -                aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
> > -
> > -                /* Expander bridges do not have ACPI PCI Hot-plug enabled */
> > -                aml_append(dev, build_q35_osc_method(true));
> > +                struct Aml *pkg = aml_package(2);
> > +
> > +                aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0016")));
> > +                aml_append(pkg, aml_eisaid("PNP0A08"));
> > +                aml_append(pkg, aml_eisaid("PNP0A03"));
> > +                aml_append(dev, aml_name_decl("_CID", pkg));
> > +                aml_append(dev, aml_name_decl("_ADR", aml_int(0)));
> > +                aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> > +                build_cxl_osc_method(dev);
> >              } else if (pci_bus_is_express(bus)) {
> >                  aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A08")));
> >                  aml_append(dev, aml_name_decl("_CID", aml_eisaid("PNP0A03")));
> > diff --git a/include/hw/acpi/cxl.h b/include/hw/acpi/cxl.h
> > new file mode 100644
> > index 0000000000..7b8f3b8a2e
> > --- /dev/null
> > +++ b/include/hw/acpi/cxl.h
> > @@ -0,0 +1,23 @@
> > +/*
> > + * Copyright (C) 2020 Intel Corporation
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > +
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > +
> > + * You should have received a copy of the GNU General Public License along
> > + * with this program; if not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#ifndef HW_ACPI_CXL_H
> > +#define HW_ACPI_CXL_H
> > +
> > +void build_cxl_osc_method(Aml *dev);
> > +
> > +#endif
> > -- 
> > 2.32.0  
> 




* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-07  9:39     ` Jonathan Cameron
@ 2022-03-09  8:15       ` Peter Xu
  -1 siblings, 0 replies; 124+ messages in thread
From: Peter Xu @ 2022-03-09  8:15 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Michael S. Tsirkin, Peter Maydell, Ben Widawsky, qemu-devel,
	Samarth Saxena, Chris Browy, linuxarm, linux-cxl,
	Markus Armbruster, Shreyas Shah, Saransh Gupta1,
	Shameerali Kolothum Thodi, Marcel Apfelbaum, Igor Mammedov,
	Dan Williams, Alex Bennée, Philippe Mathieu-Daudé,
	Paolo Bonzini, David Hildenbrand

On Mon, Mar 07, 2022 at 09:39:18AM +0000, Jonathan Cameron via wrote:
> If any of the memory maintainers can take a look at patch 34 that would
> be great as to my mind that and the related interleave decoding in general is
> the big unknown in this set. I just realized I haven't cc'd everyone
> I should have for that - added them here and I'll make sure to CC them
> all on V8.

https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/

Having mr->ops set but with memory_access_is_direct() returning true sounds
weird to me.

Sorry to have no understanding of the whole picture, but.. could you share
more on what's the interleaving requirement on the proxying, and why it
can't be done with adding some IO memory regions as sub-regions upon the
file one?

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-09  8:15       ` Peter Xu
@ 2022-03-09 11:28         ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-09 11:28 UTC (permalink / raw)
  To: Peter Xu
  Cc: Michael S. Tsirkin, Peter Maydell, Ben Widawsky, qemu-devel,
	Samarth Saxena, Chris Browy, linuxarm, linux-cxl,
	Markus Armbruster, Shreyas Shah, Saransh Gupta1,
	Shameerali Kolothum Thodi, Marcel Apfelbaum, Igor Mammedov,
	Dan Williams, Alex Bennée, Philippe Mathieu-Daudé,
	Paolo Bonzini, David Hildenbrand

On Wed, 9 Mar 2022 16:15:24 +0800
Peter Xu <peterx@redhat.com> wrote:

> On Mon, Mar 07, 2022 at 09:39:18AM +0000, Jonathan Cameron via wrote:
> > If any of the memory maintainers can take a look at patch 34 that would
> > be great as to my mind that and the related interleave decoding in general is
> > the big unknown in this set. I just realized I haven't cc'd everyone
> > I should have for that - added them here and I'll make sure to CC them
> > all on V8.  

Hi Peter,

> 
> https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> 
> Having mr->ops set but with memory_access_is_direct() returning true sounds
> weird to me.
> 
> Sorry to have no understanding of the whole picture, but.. could you share
> more on what's the interleaving requirement on the proxying, and why it
> can't be done with adding some IO memory regions as sub-regions upon the
> file one?

The proxying requirement is simply a means to read/write to a computed address
within a memory region. There may well be a better way to do that.

If I understand your suggestion correctly you would need a very high
number of IO memory regions to be created dynamically when particular sets of
registers across multiple devices in the topology are all programmed.

The interleave can be as fine as 256 bytes across up to 16 devices, each
potentially many terabytes in size.
So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
IO regions.  Even for a minimal useful test case of largest interleave
set of 16x 256MB devices (256MB is minimum size the specification allows per
decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
Any idea if that approach would scale sensibly to this number of regions?

There are also complexities in getting all the information in one place to
work out which IO memory region maps where in PA space. The current solution is
to do that mapping in the same way the hardware does which is hierarchical,
so we walk the path to the device, picking directions based on each interleave
decoder that we meet.
Obviously this is a bit slow but I only really care about correctness at the
moment.  I can think of various approaches to speeding it up but I'm not sure
if we will ever care about performance.

https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
has the logic for that and as you can see it's fairly simple because we are always
going down the topology following the decoders.

Below I have mapped out an algorithm I think would work for doing it with
IO memory regions as subregions.

We could fake the whole thing by limiting ourselves to small host
memory windows which are always directly backed, but then I wouldn't
achieve the main aim of this which is to provide a test base for the OS code.
To do that I need real interleave so I can seed the files with test patterns
and verify the accesses hit the correct locations. Emulating what the hardware
is actually doing on a device by device basis is the easiest way I have
come up with to do that.

Let me try to provide some more background so you hopefully don't have
to have read the specs to follow what is going on!
There is an example of a directly connected (no switches) topology in the
docs:

https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst

The overall picture is we have a large number of CXL Type 3 memory devices,
which at runtime (by OS at boot/on hotplug) are configured into various
interleaving sets with hierarchical decoding at the host + host bridge
+ switch levels. For test setups I probably need to go to around 32 devices
so I can hit various configurations simultaneously.
No individual device has visibility of the full interleave setup - hence
the walk in the existing code through the various decoders to find the
final Device Physical address.

At the host level the host provides a set of Physical Address windows with
a fixed interleave decoding across the different host bridges in the system
(CXL Fixed Memory Windows, CFMWs).
On a real system these have to be large enough to allow for any memory
devices that might be hotplugged and all possible configurations (so
with 2 host bridges you need at least 3 windows in the many TB range,
much worse as the number of host bridges goes up). It'll be worse than
this when we have QoS groups, but the current Qemu code just puts all
the windows in group 0.  Hence my first thought of just putting memory
behind those doesn't scale (a similar approach to this was in the
earliest versions of this patch set - though the full access path
wasn't wired up).

The granularity can be any power of 2 from 256 bytes to 16 kbytes.

Next each host bridge has programmable address decoders which take the
incoming (often already interleaved) memory access and direct them to
appropriate root ports.  The root ports can be connected to a switch
which has additional address decoders in the upstream port to decide
which downstream port to route to.  Note we currently only support 1 level
of switches but it's easy to make this algorithm recursive to support
multiple switch levels (currently the kernel proposals only support 1 level)

Finally the End Point with the actual memory receives the interleaved request,
takes the full address and (for power of 2 decoding - we don't yet support
3, 6 and 12 way, which is more complex and has no kernel support yet)
drops a few address bits and adds an offset for the decoder used to
calculate its own device physical address.  Note a device will support
multiple interleave sets for different parts of its file once we add
multiple decoder support (on the todo list).

So the current solution is straightforward (with the exception of that
proxying) because it follows the same decoding as used in real hardware
to route the memory accesses. As a result we get a read/write to a
device physical address and hence proxy that.  If any of the decoders
along the path are not configured then we error out at that stage.

To create the equivalent with IO subregions I think we'd have to do the
following (this might be mediated by some central entity that
doesn't currently exist, or done on demand from whichever CXL device
happens to have its decoder set up last):

1) Wait for a decoder commit (enable) on any component. Goto 2.
2) Walk the topology (up to host decoder, down to memory device)
If a complete interleaving path has been configured -
   i.e. we have committed decoders all the way to the memory
   device goto step 3, otherwise return to step 1 to wait for
   more decoders to be committed.
3) For the memory region being supplied by the memory device,
   add subregions to map the device physical address (address
   in the file) for each interleave stride to the appropriate
   host Physical Address.
4) Return to step 1 to wait for more decoders to commit.

So in summary, we can do it with IO regions, but there are a lot of them
and the setup is somewhat complex as we don't have one single point in
time where we know all the necessary information is available to compute
the right addresses.

Looking forward to your suggestions if I haven't caused more confusion!

Thanks,

Jonathan


> 
> Thanks,
> 


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-09 11:28         ` Jonathan Cameron via
@ 2022-03-10  8:02           ` Peter Xu
  -1 siblings, 0 replies; 124+ messages in thread
From: Peter Xu @ 2022-03-10  8:02 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Michael S. Tsirkin, Peter Maydell, Ben Widawsky, qemu-devel,
	Samarth Saxena, Chris Browy, linuxarm, linux-cxl,
	Markus Armbruster, Shreyas Shah, Saransh Gupta1,
	Shameerali Kolothum Thodi, Marcel Apfelbaum, Igor Mammedov,
	Dan Williams, Alex Bennée, Philippe Mathieu-Daudé,
	Paolo Bonzini, David Hildenbrand

On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:
> Hi Peter,

Hi, Jonathan,

> 
> > 
> > https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> > 
> > Having mr->ops set but with memory_access_is_direct() returning true sounds
> > weird to me.
> > 
> > Sorry to have no understanding of the whole picture, but.. could you share
> > more on what's the interleaving requirement on the proxying, and why it
> > can't be done with adding some IO memory regions as sub-regions upon the
> > file one?
> 
> The proxying requirement is simply a means to read/write to a computed address
> within a memory region. There may well be a better way to do that.
> 
> If I understand your suggestion correctly you would need a very high
> number of IO memory regions to be created dynamically when particular sets of
> registers across multiple devices in the topology are all programmed.
> 
> The interleave can be 256 bytes across up to 16x, many terabyte, devices.
> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
> IO regions.  Even for a minimal useful test case of largest interleave
> set of 16x 256MB devices (256MB is minimum size the specification allows per
> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
> Any idea if that approach would scale sensibly to this number of regions?
> 
> There are also complexities to getting all the information in one place to
> work out which IO memory regions maps where in PA space. Current solution is
> to do that mapping in the same way the hardware does which is hierarchical,
> so we walk the path to the device, picking directions based on each interleave
> decoder that we meet.
> Obviously this is a bit slow but I only really care about correctness at the
> moment.  I can think of various approaches to speeding it up but I'm not sure
> if we will ever care about performance.
> 
> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
> has the logic for that and as you can see it's fairly simple because we are always
> going down the topology following the decoders.
> 
> Below I have mapped out an algorithm I think would work for doing it with
> IO memory regions as subregions.
> 
> We could fake the whole thing by limiting ourselves to small host
> memory windows which are always directly backed, but then I wouldn't
> achieve the main aim of this which is to provide a test base for the OS code.
> To do that I need real interleave so I can seed the files with test patterns
> and verify the accesses hit the correct locations. Emulating what the hardware
> is actually doing on a device by device basis is the easiest way I have
> come up with to do that.
> 
> Let me try to provide some more background so you hopefully don't have
> to have read the specs to follow what is going on!
> There are an example for directly connected (no switches) topology in the
> docs
> 
> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
> 
> The overall picture is we have a large number of CXL Type 3 memory devices,
> which at runtime (by OS at boot/on hotplug) are configured into various
> interleaving sets with hierarchical decoding at the host + host bridge
> + switch levels. For test setups I probably need to go to around 32 devices
> so I can hit various configurations simultaneously.
> No individual device has visibility of the full interleave setup - hence
> the walk in the existing code through the various decoders to find the
> final Device Physical address.
> 
> At the host level the host provides a set of Physical Address windows with
> a fixed interleave decoding across the different host bridges in the system
> (CXL Fixed Memory windows, CFMWs)
> On a real system these have to be large enough to allow for any memory
> devices that might be hotplugged and all possible configurations (so
> with 2 host bridges you need at least 3 windows in the many TB range,
> much worse as the number of host bridges goes up). It'll be worse than
> this when we have QoS groups, but the current Qemu code just puts all
> the windows in group 0.  Hence my first thought of just putting memory
> behind those doesn't scale (a similar approach to this was in the
> earliest versions of this patch set - though the full access path
> wasn't wired up).
> 
> The granularity can be in powers of 2 from 256 bytes to 16 kbytes
> 
> Next each host bridge has programmable address decoders which take the
> incoming (often already interleaved) memory access and direct them to
> appropriate root ports.  The root ports can be connected to a switch
> which has additional address decoders in the upstream port to decide
> which downstream port to route to.  Note we currently only support 1 level
> of switches but it's easy to make this algorithm recursive to support
> multiple switch levels (currently the kernel proposals only support 1 level)
> 
> Finally the End Point with the actual memory receives the interleaved request and
> takes the full address and (for power of 2 decoding - we don't yet support
> 3,6 and 12 way which is more complex and there is no kernel support yet)
> it drops a few address bits and adds an offset for the decoder used to
> calculate it's own device physical address.  Note device will support
> multiple interleave sets for different parts of it's file once we add
> multiple decoder support (on the todo list).
> 
> So the current solution is straight forward (with the exception of that
> proxying) because it follows the same decoding as used in real hardware
> to route the memory accesses. As a result we get a read/write to a
> device physical address and hence proxy that.  If any of the decoders
> along the path are not configured then we error out at that stage.
> 
> To create the equivalent as IO subregions I think we'd have to do the
> following from (this might be mediated by some central entity that
> doesn't currently exist, or done on demand from which ever CXL device
> happens to have it's decoder set up last)
> 
> 1) Wait for a decoder commit (enable) on any component. Goto 2.
> 2) Walk the topology (up to host decoder, down to memory device)
> If a complete interleaving path has been configured -
>    i.e. we have committed decoders all the way to the memory
>    device goto step 3, otherwise return to step 1 to wait for
>    more decoders to be committed.
> 3) For the memory region being supplied by the memory device,
>    add subregions to map the device physical address (address
>    in the file) for each interleave stride to the appropriate
>    host Physical Address.
> 4) Return to step 1 to wait for more decoders to commit.
> 
> So summary is we can do it with IO regions, but there are a lot of them
> and the setup is somewhat complex as we don't have one single point in
> time where we know all the necessary information is available to compute
> the right addresses.
> 
> Looking forward to your suggestions if I haven't caused more confusion!

Thanks for the write-up - I must confess it's a lot! :)

I only learned what CXL is today, and I'm not very experienced in device
modeling either, so please bear with me with stupid questions..

IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
That's a normal IO region, which looks very reasonable.

However I'm confused why patch "RFC: softmmu/memory: Add ops to
memory_region_ram_init_from_file" helped.

Per my knowledge, all the memory accesses upon this CFMW window are already
trapped using this IO region.  There can be multiple memory file objects
underneath, and when read/write happens the object will be decoded from
cxl_cfmws_find_device() as you referenced.

However I see nowhere that these memory objects got mapped as sub-regions
into the parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
be trapped.

To ask in another way: what will happen if you simply revert this RFC
patch?  What will go wrong?

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
@ 2022-03-10  8:02           ` Peter Xu
  0 siblings, 0 replies; 124+ messages in thread
From: Peter Xu @ 2022-03-10  8:02 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Peter Maydell, Ben Widawsky, Michael S. Tsirkin,
	Markus Armbruster, Samarth Saxena, Chris Browy, qemu-devel,
	linux-cxl, linuxarm, Shreyas Shah, Saransh Gupta1, Paolo Bonzini,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:
> Hi Peter,

Hi, Jonathan,

> 
> > 
> > https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> > 
> > Having mr->ops set but with memory_access_is_direct() returning true sounds
> > weird to me.
> > 
> > Sorry to have no understanding of the whole picture, but.. could you share
> > more on what's the interleaving requirement on the proxying, and why it
> > can't be done with adding some IO memory regions as sub-regions upon the
> > file one?
> 
> The proxying requirement is simply a means to read/write to a computed address
> within a memory region. There may well be a better way to do that.
> 
> If I understand your suggestion correctly you would need a very high
> number of IO memory regions to be created dynamically when particular sets of
> registers across multiple devices in the topology are all programmed.
> 
> The interleave can be 256 bytes across up to 16x, many terabyte, devices.
> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
> IO regions.  Even for a minimal useful test case of largest interleave
> set of 16x 256MB devices (256MB is minimum size the specification allows per
> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
> Any idea if that approach would scale sensibly to this number of regions?
> 
> There are also complexities to getting all the information in one place to
> work out which IO memory regions maps where in PA space. Current solution is
> to do that mapping in the same way the hardware does which is hierarchical,
> so we walk the path to the device, picking directions based on each interleave
> decoder that we meet.
> Obviously this is a bit slow but I only really care about correctness at the
> moment.  I can think of various approaches to speeding it up but I'm not sure
> if we will ever care about performance.
> 
> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
> has the logic for that and as you can see it's fairly simple because we are always
> going down the topology following the decoders.
> 
> Below I have mapped out an algorithm I think would work for doing it with
> IO memory regions as subregions.
> 
> We could fake the whole thing by limiting ourselves to small host
> memory windows which are always directly backed, but then I wouldn't
> achieve the main aim of this which is to provide a test base for the OS code.
> To do that I need real interleave so I can seed the files with test patterns
> and verify the accesses hit the correct locations. Emulating what the hardware
> is actually doing on a device by device basis is the easiest way I have
> come up with to do that.
> 
> Let me try to provide some more background so you hopefully don't have
> to have read the specs to follow what is going on!
> There are an example for directly connected (no switches) topology in the
> docs
> 
> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
> 
> The overall picture is we have a large number of CXL Type 3 memory devices,
> which at runtime (by OS at boot/on hotplug) are configured into various
> interleaving sets with hierarchical decoding at the host + host bridge
> + switch levels. For test setups I probably need to go to around 32 devices
> so I can hit various configurations simultaneously.
> No individual device has visibility of the full interleave setup - hence
> the walk in the existing code through the various decoders to find the
> final Device Physical address.
> 
> At the host level the host provides a set of Physical Address windows with
> a fixed interleave decoding across the different host bridges in the system
> (CXL Fixed Memory windows, CFMWs)
> On a real system these have to be large enough to allow for any memory
> devices that might be hotplugged and all possible configurations (so
> with 2 host bridges you need at least 3 windows in the many TB range,
> much worse as the number of host bridges goes up). It'll be worse than
> this when we have QoS groups, but the current Qemu code just puts all
> the windows in group 0.  Hence my first thought of just putting memory
> behind those doesn't scale (a similar approach to this was in the
> earliest versions of this patch set - though the full access path
> wasn't wired up).
> 
> The granularity can be any power of 2 from 256 bytes to 16 kbytes.
> 
> Next each host bridge has programmable address decoders which take the
> incoming (often already interleaved) memory access and direct them to
> appropriate root ports.  The root ports can be connected to a switch
> which has additional address decoders in the upstream port to decide
> which downstream port to route to.  Note we currently only support 1 level
> of switches but it's easy to make this algorithm recursive to support
> multiple switch levels (currently the kernel proposals only support 1 level)
> 
> Finally the End Point with the actual memory receives the interleaved request and
> takes the full address and (for power of 2 decoding - we don't yet support
> 3, 6 and 12 way, which is more complex and there is no kernel support yet)
> it drops a few address bits and adds an offset for the decoder used to
> calculate its own device physical address.  Note a device will support
> multiple interleave sets for different parts of its file once we add
> multiple decoder support (on the todo list).
> 
> So the current solution is straightforward (with the exception of that
> proxying) because it follows the same decoding as used in real hardware
> to route the memory accesses. As a result we get a read/write to a
> device physical address and hence proxy that.  If any of the decoders
> along the path are not configured then we error out at that stage.
> 
> To create the equivalent as IO subregions I think we'd have to do the
> following (this might be mediated by some central entity that
> doesn't currently exist, or done on demand from whichever CXL device
> happens to have its decoder set up last)
> 
> 1) Wait for a decoder commit (enable) on any component. Goto 2.
> 2) Walk the topology (up to host decoder, down to memory device)
> If a complete interleaving path has been configured -
>    i.e. we have committed decoders all the way to the memory
>    device goto step 3, otherwise return to step 1 to wait for
>    more decoders to be committed.
> 3) For the memory region being supplied by the memory device,
>    add subregions to map the device physical address (address
>    in the file) for each interleave stride to the appropriate
>    host Physical Address.
> 4) Return to step 1 to wait for more decoders to commit.
> 
> So summary is we can do it with IO regions, but there are a lot of them
> and the setup is somewhat complex as we don't have one single point in
> time where we know all the necessary information is available to compute
> the right addresses.
> 
> Looking forward to your suggestions if I haven't caused more confusion!

Thanks for the write-up - I must confess it's a lot! :)

I only learned what CXL is today, and I'm not very experienced in
device modeling either, so please bear with my stupid questions..

IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
That's a normal IO region, which looks very reasonable.

However I'm confused why patch "RFC: softmmu/memory: Add ops to
memory_region_ram_init_from_file" helped.

To my knowledge, all the memory accesses upon this CFMW window are trapped
using this IO region already.  There can be multiple memory file objects
underneath, and when read/write happens the object will be decoded from
cxl_cfmws_find_device() as you referenced.

However I see nowhere that these memory objects got mapped as sub-regions
into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
be trapped.

To ask in another way: what will happen if you simply revert this RFC
patch?  What will go wrong?

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-10  8:02           ` Peter Xu
@ 2022-03-16 16:50             ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-16 16:50 UTC (permalink / raw)
  To: Peter Xu
  Cc: Michael S. Tsirkin, Peter Maydell, Ben Widawsky, qemu-devel,
	Samarth Saxena, Chris Browy, linuxarm, linux-cxl,
	Markus Armbruster, Shreyas Shah, Saransh Gupta1,
	Shameerali Kolothum Thodi, Marcel Apfelbaum, Igor Mammedov,
	Dan Williams, Alex Bennée, Philippe Mathieu-Daudé,
	Paolo Bonzini, David Hildenbrand

On Thu, 10 Mar 2022 16:02:22 +0800
Peter Xu <peterx@redhat.com> wrote:

> On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:
> > Hi Peter,  
> 
> Hi, Jonathan,
> 
> >   
> > > 
> > > https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> > > 
> > > Having mr->ops set but with memory_access_is_direct() returning true sounds
> > > weird to me.
> > > 
> > > Sorry to have no understanding of the whole picture, but.. could you share
> > > more on what's the interleaving requirement on the proxying, and why it
> > > can't be done with adding some IO memory regions as sub-regions upon the
> > > file one?  
> > 
> > The proxying requirement is simply a means to read/write to a computed address
> > within a memory region. There may well be a better way to do that.
> > 
> > If I understand your suggestion correctly you would need a very high
> > number of IO memory regions to be created dynamically when particular sets of
> > registers across multiple devices in the topology are all programmed.
> > 
> > The interleave can be as fine as 256 bytes across up to 16 devices of many terabytes each.
> > So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
> > IO regions.  Even for a minimal useful test case of largest interleave
> > set of 16x 256MB devices (256MB is minimum size the specification allows per
> > decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
> > Any idea if that approach would scale sensibly to this number of regions?
> > 
> > There are also complexities to getting all the information in one place to
> > work out which IO memory regions maps where in PA space. Current solution is
> > to do that mapping in the same way the hardware does which is hierarchical,
> > so we walk the path to the device, picking directions based on each interleave
> > decoder that we meet.
> > Obviously this is a bit slow but I only really care about correctness at the
> > moment.  I can think of various approaches to speeding it up but I'm not sure
> > if we will ever care about performance.
> > 
> > https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
> > has the logic for that and as you can see it's fairly simple because we are always
> > going down the topology following the decoders.
> > 
> > Below I have mapped out an algorithm I think would work for doing it with
> > IO memory regions as subregions.
> > 
> > We could fake the whole thing by limiting ourselves to small host
> > memory windows which are always directly backed, but then I wouldn't
> > achieve the main aim of this which is to provide a test base for the OS code.
> > To do that I need real interleave so I can seed the files with test patterns
> > and verify the accesses hit the correct locations. Emulating what the hardware
> > is actually doing on a device by device basis is the easiest way I have
> > come up with to do that.
> > 
> > Let me try to provide some more background so you hopefully don't have
> > to have read the specs to follow what is going on!
> > There is an example of a directly connected (no switches) topology in the
> > docs
> > 
> > https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
> > 
> > The overall picture is we have a large number of CXL Type 3 memory devices,
> > which at runtime (by OS at boot/on hotplug) are configured into various
> > interleaving sets with hierarchical decoding at the host + host bridge
> > + switch levels. For test setups I probably need to go to around 32 devices
> > so I can hit various configurations simultaneously.
> > No individual device has visibility of the full interleave setup - hence
> > the walk in the existing code through the various decoders to find the
> > final Device Physical address.
> > 
> > At the host level the host provides a set of Physical Address windows with
> > a fixed interleave decoding across the different host bridges in the system
> > (CXL Fixed Memory windows, CFMWs)
> > On a real system these have to be large enough to allow for any memory
> > devices that might be hotplugged and all possible configurations (so
> > with 2 host bridges you need at least 3 windows in the many TB range,
> > much worse as the number of host bridges goes up). It'll be worse than
> > this when we have QoS groups, but the current QEMU code just puts all
> > the windows in group 0.  Hence my first thought of just putting memory
> > behind those doesn't scale (a similar approach to this was in the
> > earliest versions of this patch set - though the full access path
> > wasn't wired up).
> > 
> > The granularity can be any power of 2 from 256 bytes to 16 kbytes.
> > 
> > Next each host bridge has programmable address decoders which take the
> > incoming (often already interleaved) memory access and direct them to
> > appropriate root ports.  The root ports can be connected to a switch
> > which has additional address decoders in the upstream port to decide
> > which downstream port to route to.  Note we currently only support 1 level
> > of switches but it's easy to make this algorithm recursive to support
> > multiple switch levels (currently the kernel proposals only support 1 level)
> > 
> > Finally the End Point with the actual memory receives the interleaved request and
> > takes the full address and (for power of 2 decoding - we don't yet support
> > 3, 6 and 12 way, which is more complex and there is no kernel support yet)
> > it drops a few address bits and adds an offset for the decoder used to
> > calculate its own device physical address.  Note a device will support
> > multiple interleave sets for different parts of its file once we add
> > multiple decoder support (on the todo list).
> > 
> > So the current solution is straightforward (with the exception of that
> > proxying) because it follows the same decoding as used in real hardware
> > to route the memory accesses. As a result we get a read/write to a
> > device physical address and hence proxy that.  If any of the decoders
> > along the path are not configured then we error out at that stage.
> > 
> > To create the equivalent as IO subregions I think we'd have to do the
> > following (this might be mediated by some central entity that
> > doesn't currently exist, or done on demand from whichever CXL device
> > happens to have its decoder set up last)
> > 
> > 1) Wait for a decoder commit (enable) on any component. Goto 2.
> > 2) Walk the topology (up to host decoder, down to memory device)
> > If a complete interleaving path has been configured -
> >    i.e. we have committed decoders all the way to the memory
> >    device goto step 3, otherwise return to step 1 to wait for
> >    more decoders to be committed.
> > 3) For the memory region being supplied by the memory device,
> >    add subregions to map the device physical address (address
> >    in the file) for each interleave stride to the appropriate
> >    host Physical Address.
> > 4) Return to step 1 to wait for more decoders to commit.
> > 
> > So summary is we can do it with IO regions, but there are a lot of them
> > and the setup is somewhat complex as we don't have one single point in
> > time where we know all the necessary information is available to compute
> > the right addresses.
> > 
> > Looking forward to your suggestions if I haven't caused more confusion!  

Hi Peter,

> 
> Thanks for the write-up - I must confess it's a lot! :)
> 
> I only learned what CXL is today, and I'm not very experienced in
> device modeling either, so please bear with my stupid questions..
> 
> IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
> That's a normal IO region, which looks very reasonable.
> 
> However I'm confused why patch "RFC: softmmu/memory: Add ops to
> memory_region_ram_init_from_file" helped.
> 
> To my knowledge, all the memory accesses upon this CFMW window are trapped
> using this IO region already.  There can be multiple memory file objects
> underneath, and when read/write happens the object will be decoded from
> cxl_cfmws_find_device() as you referenced.

Yes.

> 
> However I see nowhere that these memory objects got mapped as sub-regions
> into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
> be trapped.

As you note they aren't mapped into the parent mr, hence we are trapping.
The parent mem_ops are responsible for decoding the 'which device' +
'what address in device memory space'. Once we've gotten that info
the question is how do I actually do the access?

Mapping as subregions seems unwise due to the huge number required.

> 
> To ask in another way: what will happen if you simply revert this RFC
> patch?  What will go wrong?

The call to memory_region_dispatch_read()
https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/mem/cxl_type3.c#L556

would call memory_region_access_valid(), which calls
mr->ops->valid.accepts(); that is set to
unassigned_mem_accepts() and hence...
you get a MEMTX_DECODE_ERROR back and an exception in the
guest.

That wouldn't happen with a non-proxied access to the RAM, as
those paths never use the ops: memory_access_is_direct() is called
and a simple memcpy is used without any involvement of the ops.

Is there a better way to proxy those writes to the backing files?

I was fishing a bit in the dark here and saw the existing ops defined
for a different purpose for VFIO

4a2e242bbb ("memory: Don't use memcpy for ram_device regions")

and those allowed the use of memory_region_dispatch_write() to work.

Hence the RFC marking on that patch :)

Thanks,

Jonathan



> 
> Thanks,
> 




* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-16 16:50             ` Jonathan Cameron via
@ 2022-03-16 17:16               ` Mark Cave-Ayland
  -1 siblings, 0 replies; 124+ messages in thread
From: Mark Cave-Ayland @ 2022-03-16 17:16 UTC (permalink / raw)
  To: Jonathan Cameron, Peter Xu
  Cc: Peter Maydell, Ben Widawsky, Michael S. Tsirkin,
	Markus Armbruster, Samarth Saxena, Chris Browy, qemu-devel,
	linux-cxl, linuxarm, Shreyas Shah, Saransh Gupta1, Paolo Bonzini,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On 16/03/2022 16:50, Jonathan Cameron via wrote:

> On Thu, 10 Mar 2022 16:02:22 +0800
> Peter Xu <peterx@redhat.com> wrote:
> 
>> On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:
>>> Hi Peter,
>>
>> Hi, Jonathan,
>>
>>>    
>>>>
>>>> https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
>>>>
>>>> Having mr->ops set but with memory_access_is_direct() returning true sounds
>>>> weird to me.
>>>>
>>>> Sorry to have no understanding of the whole picture, but.. could you share
>>>> more on what's the interleaving requirement on the proxying, and why it
>>>> can't be done with adding some IO memory regions as sub-regions upon the
>>>> file one?
>>>
>>> The proxying requirement is simply a means to read/write to a computed address
>>> within a memory region. There may well be a better way to do that.
>>>
>>> If I understand your suggestion correctly you would need a very high
>>> number of IO memory regions to be created dynamically when particular sets of
>>> registers across multiple devices in the topology are all programmed.
>>>
>>> The interleave can be 256 bytes across up to 16x, many terabyte, devices.
>>> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
>>> IO regions.  Even for a minimal useful test case of largest interleave
>>> set of 16x 256MB devices (256MB is minimum size the specification allows per
>>> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
>>> Any idea if that approach would scale sensibly to this number of regions?
>>>
>>> There are also complexities to getting all the information in one place to
>>> work out which IO memory regions maps where in PA space. Current solution is
>>> to do that mapping in the same way the hardware does which is hierarchical,
>>> so we walk the path to the device, picking directions based on each interleave
>>> decoder that we meet.
>>> Obviously this is a bit slow but I only really care about correctness at the
>>> moment.  I can think of various approaches to speeding it up but I'm not sure
>>> if we will ever care about performance.
>>>
>>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
>>> has the logic for that and as you can see it's fairly simple because we are always
>>> going down the topology following the decoders.
>>>
>>> Below I have mapped out an algorithm I think would work for doing it with
>>> IO memory regions as subregions.
>>>
>>> We could fake the whole thing by limiting ourselves to small host
>>> memory windows which are always directly backed, but then I wouldn't
>>> achieve the main aim of this which is to provide a test base for the OS code.
>>> To do that I need real interleave so I can seed the files with test patterns
>>> and verify the accesses hit the correct locations. Emulating what the hardware
>>> is actually doing on a device by device basis is the easiest way I have
>>> come up with to do that.
>>>
>>> Let me try to provide some more background so you hopefully don't have
>>> to have read the specs to follow what is going on!
>>> There is an example of a directly connected (no switches) topology in the
>>> docs
>>>
>>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
>>>
>>> The overall picture is we have a large number of CXL Type 3 memory devices,
>>> which at runtime (by OS at boot/on hotplug) are configured into various
>>> interleaving sets with hierarchical decoding at the host + host bridge
>>> + switch levels. For test setups I probably need to go to around 32 devices
>>> so I can hit various configurations simultaneously.
>>> No individual device has visibility of the full interleave setup - hence
>>> the walk in the existing code through the various decoders to find the
>>> final Device Physical address.
>>>
>>> At the host level the host provides a set of Physical Address windows with
>>> a fixed interleave decoding across the different host bridges in the system
>>> (CXL Fixed Memory windows, CFMWs)
>>> On a real system these have to be large enough to allow for any memory
>>> devices that might be hotplugged and all possible configurations (so
>>> with 2 host bridges you need at least 3 windows in the many TB range,
>>> much worse as the number of host bridges goes up). It'll be worse than
>>> this when we have QoS groups, but the current QEMU code just puts all
>>> the windows in group 0.  Hence my first thought of just putting memory
>>> behind those doesn't scale (a similar approach to this was in the
>>> earliest versions of this patch set - though the full access path
>>> wasn't wired up).
>>>
>>> The granularity can be any power of 2 from 256 bytes to 16 kbytes.
>>>
>>> Next each host bridge has programmable address decoders which take the
>>> incoming (often already interleaved) memory access and direct them to
>>> appropriate root ports.  The root ports can be connected to a switch
>>> which has additional address decoders in the upstream port to decide
>>> which downstream port to route to.  Note we currently only support 1 level
>>> of switches but it's easy to make this algorithm recursive to support
>>> multiple switch levels (currently the kernel proposals only support 1 level)
>>>
>>> Finally the End Point with the actual memory receives the interleaved request and
>>> takes the full address and (for power of 2 decoding - we don't yet support
>>> 3, 6 and 12 way, which is more complex and there is no kernel support yet)
>>> it drops a few address bits and adds an offset for the decoder used to
>>> calculate its own device physical address.  Note a device will support
>>> multiple interleave sets for different parts of its file once we add
>>> multiple decoder support (on the todo list).
>>>
>>> So the current solution is straightforward (with the exception of that
>>> proxying) because it follows the same decoding as used in real hardware
>>> to route the memory accesses. As a result we get a read/write to a
>>> device physical address and hence proxy that.  If any of the decoders
>>> along the path are not configured then we error out at that stage.
>>>
>>> To create the equivalent as IO subregions I think we'd have to do the
>>> following (this might be mediated by some central entity that
>>> doesn't currently exist, or done on demand from whichever CXL device
>>> happens to have its decoder set up last)
>>>
>>> 1) Wait for a decoder commit (enable) on any component. Goto 2.
>>> 2) Walk the topology (up to host decoder, down to memory device)
>>> If a complete interleaving path has been configured -
>>>     i.e. we have committed decoders all the way to the memory
>>>     device goto step 3, otherwise return to step 1 to wait for
>>>     more decoders to be committed.
>>> 3) For the memory region being supplied by the memory device,
>>>     add subregions to map the device physical address (address
>>>     in the file) for each interleave stride to the appropriate
>>>     host Physical Address.
>>> 4) Return to step 1 to wait for more decoders to commit.
>>>
>>> So the summary is we can do it with IO regions, but there are a lot of them
>>> and the setup is somewhat complex as we don't have one single point in
>>> time where we know all the necessary information is available to compute
>>> the right addresses.
>>>
>>> Looking forward to your suggestions if I haven't caused more confusion!
> 
> Hi Peter,
> 
>>
>> Thanks for the write-up - I must confess it's a lot! :)
>>
>> I only learned what CXL is today, and I'm not very experienced in
>> device modeling either, so please bear with my stupid questions..
>>
>> IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
>> That's a normal IO region, which looks very reasonable.
>>
>> However I'm confused why patch "RFC: softmmu/memory: Add ops to
>> memory_region_ram_init_from_file" helped.
>>
>> Per my knowledge, all the memory accesses upon this CFMW window are already
>> trapped using this IO region.  There can be multiple memory file objects
>> underneath, and when read/write happens the object will be decoded from
>> cxl_cfmws_find_device() as you referenced.
> 
> Yes.
> 
>>
>> However I see nowhere that these memory objects got mapped as sub-regions
>> into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
>> be trapped.
> 
> As you note they aren't mapped into the parent mr, hence we are trapping.
> The parent mem_ops are responsible for decoding the 'which device' +
> 'what address in device memory space'. Once we've gotten that info
> the question is how do I actually do the access?
> 
> Mapping as subregions seems unwise due to the huge number required.
> 
>>
>> To ask in another way: what will happen if you simply revert this RFC
>> patch?  What will go wrong?
> 
> The call to memory_region_dispatch_read()
> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/mem/cxl_type3.c#L556
> 
> would call memory_region_access_valid() that calls
> mr->ops->valid.accepts() which is set to
> unassigned_mem_accepts() and hence...
> you get a MEMTX_DECODE_ERROR back and an exception in the
> guest.
> 
> That wouldn't happen with a non-proxied access to the RAM, as
> those paths never use the ops: memory_access_is_direct() is called
> and a simple memcpy is used without any involvement of the ops.
> 
> Is there a better way to proxy those writes to the backing files?
> 
> I was fishing a bit in the dark here and saw the existing ops defined
> for a different purpose for VFIO
> 
> 4a2e242bbb ("memory: Don't use memcpy for ram_device regions")
> 
> and those allowed the use of memory_region_dispatch_write() to work.
> 
> Hence the RFC marking on that patch :)

FWIW I had a similar issue implementing manual aliasing in one of my q800 patches 
where I found that dispatching a read to a non-IO memory region didn't work with 
memory_region_dispatch_read(). The solution in my case was to switch to using the 
address space API instead, which whilst requiring an absolute address for the target 
address space, handles the dispatch correctly across all different memory region types.

Have a look at 
https://gitlab.com/mcayland/qemu/-/commit/318e12579c7570196187652da13542db86b8c722 to 
see how I did this in macio_alias_read().

IIRC from my experiments in this area, my conclusion was that 
memory_region_dispatch_read() can only work correctly if mapping directly between 2 
IO memory regions, and for anything else you need to use the address space API.


ATB,

Mark.


^ permalink raw reply	[flat|nested] 124+ messages in thread


* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-16 17:16               ` Mark Cave-Ayland
@ 2022-03-16 17:58                 ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-16 17:58 UTC (permalink / raw)
  To: Mark Cave-Ayland
  Cc: Peter Xu, Peter Maydell, Ben Widawsky, Michael S. Tsirkin,
	Markus Armbruster, Samarth Saxena, Chris Browy, qemu-devel,
	linux-cxl, linuxarm, Shreyas Shah, Saransh Gupta1, Paolo Bonzini,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On Wed, 16 Mar 2022 17:16:55 +0000
Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> wrote:

> On 16/03/2022 16:50, Jonathan Cameron via wrote:
> 
> > On Thu, 10 Mar 2022 16:02:22 +0800
> > Peter Xu <peterx@redhat.com> wrote:
> >   
> >> On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:  
> >>> Hi Peter,  
> >>
> >> Hi, Jonathan,
> >>  
> >>>      
> >>>>
> >>>> https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> >>>>
> >>>> Having mr->ops set but with memory_access_is_direct() returning true sounds
> >>>> weird to me.
> >>>>
> >>>> Sorry to have no understanding of the whole picture, but.. could you share
> >>>> more on what's the interleaving requirement on the proxying, and why it
> >>>> can't be done with adding some IO memory regions as sub-regions upon the
> >>>> file one?  
> >>>
> >>> The proxying requirement is simply a means to read/write to a computed address
> >>> within a memory region. There may well be a better way to do that.
> >>>
> >>> If I understand your suggestion correctly you would need a very high
> >>> number of IO memory regions to be created dynamically when particular sets of
> >>> registers across multiple devices in the topology are all programmed.
> >>>
> >>> The interleave can be 256 bytes across up to 16x, many terabyte, devices.
> >>> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
> >>> IO regions.  Even for a minimal useful test case of largest interleave
> >>> set of 16x 256MB devices (256MB is minimum size the specification allows per
> >>> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
> >>> Any idea if that approach would scale sensibly to this number of regions?
> >>>
> >>> There are also complexities to getting all the information in one place to
> >>> work out which IO memory regions map where in PA space. The current solution is
> >>> to do that mapping in the same way the hardware does which is hierarchical,
> >>> so we walk the path to the device, picking directions based on each interleave
> >>> decoder that we meet.
> >>> Obviously this is a bit slow but I only really care about correctness at the
> >>> moment.  I can think of various approaches to speeding it up but I'm not sure
> >>> if we will ever care about performance.
> >>>
> >>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
> >>> has the logic for that and as you can see it's fairly simple because we are always
> >>> going down the topology following the decoders.
> >>>
> >>> Below I have mapped out an algorithm I think would work for doing it with
> >>> IO memory regions as subregions.
> >>>
> >>> We could fake the whole thing by limiting ourselves to small host
> >>> memory windows which are always directly backed, but then I wouldn't
> >>> achieve the main aim of this which is to provide a test base for the OS code.
> >>> To do that I need real interleave so I can seed the files with test patterns
> >>> and verify the accesses hit the correct locations. Emulating what the hardware
> >>> is actually doing on a device by device basis is the easiest way I have
> >>> come up with to do that.
> >>>
> >>> Let me try to provide some more background so you hopefully don't have
> >>> to have read the specs to follow what is going on!
> >>> There is an example of a directly connected (no switches) topology in the
> >>> docs
> >>>
> >>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
> >>>
> >>> The overall picture is we have a large number of CXL Type 3 memory devices,
> >>> which at runtime (by OS at boot/on hotplug) are configured into various
> >>> interleaving sets with hierarchical decoding at the host + host bridge
> >>> + switch levels. For test setups I probably need to go to around 32 devices
> >>> so I can hit various configurations simultaneously.
> >>> No individual device has visibility of the full interleave setup - hence
> >>> the walk in the existing code through the various decoders to find the
> >>> final Device Physical address.
> >>>
> >>> At the host level the host provides a set of Physical Address windows with
> >>> a fixed interleave decoding across the different host bridges in the system
> >>> (CXL Fixed Memory windows, CFMWs)
> >>> On a real system these have to be large enough to allow for any memory
> >>> devices that might be hotplugged and all possible configurations (so
> >>> with 2 host bridges you need at least 3 windows in the many TB range,
> >>> much worse as the number of host bridges goes up). It'll be worse than
> >>> this when we have QoS groups, but the current QEMU code just puts all
> >>> the windows in group 0.  Hence my first thought of just putting memory
> >>> behind those doesn't scale (a similar approach to this was in the
> >>> earliest versions of this patch set - though the full access path
> >>> wasn't wired up).
> >>>
> >>> The granularity can be in powers of 2 from 256 bytes to 16 kbytes
> >>>
> >>> Next each host bridge has programmable address decoders which take the
> >>> incoming (often already interleaved) memory access and direct them to
> >>> appropriate root ports.  The root ports can be connected to a switch
> >>> which has additional address decoders in the upstream port to decide
> >>> which downstream port to route to.  Note we currently only support 1 level
> >>> of switches but it's easy to make this algorithm recursive to support
> >>> multiple switch levels (currently the kernel proposals only support 1 level)
> >>>
> >>> Finally the End Point with the actual memory receives the interleaved request,
> >>> takes the full address and (for power-of-2 decoding - we don't yet support
> >>> 3, 6 and 12 way, which is more complex and has no kernel support yet)
> >>> drops a few address bits and adds an offset for the decoder used to
> >>> calculate its own device physical address.  Note the device will support
> >>> multiple interleave sets for different parts of its file once we add
> >>> multiple decoder support (on the todo list).
> >>>
> >>> So the current solution is straightforward (with the exception of that
> >>> proxying) because it follows the same decoding as used in real hardware
> >>> to route the memory accesses. As a result we get a read/write to a
> >>> device physical address and hence proxy that.  If any of the decoders
> >>> along the path are not configured then we error out at that stage.
> >>>
> >>> To create the equivalent with IO subregions I think we'd have to do the
> >>> following (this might be mediated by some central entity that
> >>> doesn't currently exist, or be done on demand by whichever CXL device
> >>> happens to have its decoder set up last):
> >>>
> >>> 1) Wait for a decoder commit (enable) on any component. Goto 2.
> >>> 2) Walk the topology (up to host decoder, down to memory device)
> >>> If a complete interleaving path has been configured -
> >>>     i.e. we have committed decoders all the way to the memory
> >>>     device - go to step 3; otherwise return to step 1 to wait for
> >>>     more decoders to be committed.
> >>> 3) For the memory region being supplied by the memory device,
> >>>     add subregions to map the device physical address (address
> >>>     in the file) for each interleave stride to the appropriate
> >>>     host Physical Address.
> >>> 4) Return to step 1 to wait for more decoders to commit.
> >>>
> >>> So the summary is we can do it with IO regions, but there are a lot of them
> >>> and the setup is somewhat complex as we don't have one single point in
> >>> time where we know all the necessary information is available to compute
> >>> the right addresses.
> >>>
> >>> Looking forward to your suggestions if I haven't caused more confusion!  
> > 
> > Hi Peter,
> >   
> >>
> >> Thanks for the write-up - I must confess it's a lot! :)
> >>
> >> I only learned what CXL is today, and I'm not very experienced in
> >> device modeling either, so please bear with my stupid questions..
> >>
> >> IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
> >> That's a normal IO region, which looks very reasonable.
> >>
> >> However I'm confused why patch "RFC: softmmu/memory: Add ops to
> >> memory_region_ram_init_from_file" helped.
> >>
> >> Per my knowledge, all the memory accesses upon this CFMW window are already
> >> trapped using this IO region.  There can be multiple memory file objects
> >> underneath, and when read/write happens the object will be decoded from
> >> cxl_cfmws_find_device() as you referenced.  
> > 
> > Yes.
> >   
> >>
> >> However I see nowhere that these memory objects got mapped as sub-regions
> >> into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
> >> be trapped.  
> > 
> > As you note they aren't mapped into the parent mr, hence we are trapping.
> > The parent mem_ops are responsible for decoding the 'which device' +
> > 'what address in device memory space'. Once we've gotten that info
> > the question is how do I actually do the access?
> > 
> > Mapping as subregions seems unwise due to the huge number required.
> >   
> >>
> >> To ask in another way: what will happen if you simply revert this RFC
> >> patch?  What will go wrong?  
> > 
> > The call to memory_region_dispatch_read()
> > https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/mem/cxl_type3.c#L556
> > 
> > would call memory_region_access_valid() that calls
> > mr->ops->valid.accepts() which is set to
> > unassigned_mem_accepts() and hence...
> > you get a MEMTX_DECODE_ERROR back and an exception in the
> > guest.
> > 
> > That wouldn't happen with a non-proxied access to the RAM, as
> > those paths never use the ops: memory_access_is_direct() is called
> > and a simple memcpy is used without any involvement of the ops.
> > 
> > Is there a better way to proxy those writes to the backing files?
> > 
> > I was fishing a bit in the dark here and saw the existing ops defined
> > for a different purpose for VFIO
> > 
> > 4a2e242bbb ("memory: Don't use memcpy for ram_device regions")
> > 
> > and those allowed the use of memory_region_dispatch_write() to work.
> > 
> > Hence the RFC marking on that patch :)  
> 
> FWIW I had a similar issue implementing manual aliasing in one of my q800 patches 
> where I found that dispatching a read to a non-IO memory region didn't work with 
> memory_region_dispatch_read(). The solution in my case was to switch to using the 
> address space API instead, which whilst requiring an absolute address for the target 
> address space, handles the dispatch correctly across all different memory region types.
> 
> Have a look at 
> https://gitlab.com/mcayland/qemu/-/commit/318e12579c7570196187652da13542db86b8c722 to 
> see how I did this in macio_alias_read().
> 
> IIRC from my experiments in this area, my conclusion was that 
> memory_region_dispatch_read() can only work correctly if mapping directly between 2 
> IO memory regions, and for anything else you need to use the address space API.

Hi Mark,

I'd wondered about the address space API as an alternative approach.

From that reference it looks like you have the memory mapped into the system
address space and are providing an alias to it.  That's something I'd ideally
like to avoid doing, as there is no meaningful way to do it, so I'd just be
hiding the memory somewhere up high.  The memory should only be accessible
through the one route.

I think I could spin a separate address space for this purpose (one per CXL type 3
device probably) but that seems like another nasty hack to make. I'll try a quick
prototype of this tomorrow.

What do people think is the least horrible way to do this?

Thanks for the suggestion and I'm glad I'm not the only one trying to get this
sort of thing to work ;)

Jonathan

> 
> 
> ATB,
> 
> Mark.
> 


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
@ 2022-03-16 17:58                 ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-16 17:58 UTC (permalink / raw)
  To: Mark Cave-Ayland
  Cc: Peter Xu, Peter Maydell, Ben Widawsky, Michael S. Tsirkin,
	Markus Armbruster, Samarth Saxena, Chris Browy, qemu-devel,
	linux-cxl, linuxarm, Shreyas Shah, Saransh Gupta1, Paolo Bonzini,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On Wed, 16 Mar 2022 17:16:55 +0000
Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> wrote:

> On 16/03/2022 16:50, Jonathan Cameron via wrote:
> 
> > On Thu, 10 Mar 2022 16:02:22 +0800
> > Peter Xu <peterx@redhat.com> wrote:
> >   
> >> On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:  
> >>> Hi Peter,  
> >>
> >> Hi, Jonathan,
> >>  
> >>>      
> >>>>
> >>>> https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> >>>>
> >>>> Having mr->ops set but with memory_access_is_direct() returning true sounds
> >>>> weird to me.
> >>>>
> >>>> Sorry to have no understanding of the whole picture, but.. could you share
> >>>> more on what's the interleaving requirement on the proxying, and why it
> >>>> can't be done with adding some IO memory regions as sub-regions upon the
> >>>> file one?  
> >>>
> >>> The proxying requirement is simply a means to read/write to a computed address
> >>> within a memory region. There may well be a better way to do that.
> >>>
> >>> If I understand your suggestion correctly you would need a very high
> >>> number of IO memory regions to be created dynamically when particular sets of
> >>> registers across multiple devices in the topology are all programmed.
> >>>
> >>> The interleave can be 256 bytes across up to 16x, many terabyte, devices.
> >>> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
> >>> IO regions.  Even for a minimal useful test case of largest interleave
> >>> set of 16x 256MB devices (256MB is minimum size the specification allows per
> >>> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
> >>> Any idea if that approach would scale sensibly to this number of regions?
> >>>
> >>> There are also complexities to getting all the information in one place to
> >>> work out which IO memory regions maps where in PA space. Current solution is
> >>> to do that mapping in the same way the hardware does which is hierarchical,
> >>> so we walk the path to the device, picking directions based on each interleave
> >>> decoder that we meet.
> >>> Obviously this is a bit slow but I only really care about correctness at the
> >>> moment.  I can think of various approaches to speeding it up but I'm not sure
> >>> if we will ever care about performance.
> >>>
> >>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
> >>> has the logic for that and as you can see it's fairly simple because we are always
> >>> going down the topology following the decoders.
> >>>
> >>> Below I have mapped out an algorithm I think would work for doing it with
> >>> IO memory regions as subregions.
> >>>
> >>> We could fake the whole thing by limiting ourselves to small host
> >>> memory windows which are always directly backed, but then I wouldn't
> >>> achieve the main aim of this which is to provide a test base for the OS code.
> >>> To do that I need real interleave so I can seed the files with test patterns
> >>> and verify the accesses hit the correct locations. Emulating what the hardware
> >>> is actually doing on a device by device basis is the easiest way I have
> >>> come up with to do that.
> >>>
> >>> Let me try to provide some more background so you hopefully don't have
> >>> to have read the specs to follow what is going on!
> >>> There are an example for directly connected (no switches) topology in the
> >>> docs
> >>>
> >>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
> >>>
> >>> The overall picture is we have a large number of CXL Type 3 memory devices,
> >>> which at runtime (by OS at boot/on hotplug) are configured into various
> >>> interleaving sets with hierarchical decoding at the host + host bridge
> >>> + switch levels. For test setups I probably need to go to around 32 devices
> >>> so I can hit various configurations simultaneously.
> >>> No individual device has visibility of the full interleave setup - hence
> >>> the walk in the existing code through the various decoders to find the
> >>> final Device Physical address.
> >>>
> >>> At the host level the host provides a set of Physical Address windows with
> >>> a fixed interleave decoding across the different host bridges in the system
> >>> (CXL Fixed Memory windows, CFMWs)
> >>> On a real system these have to be large enough to allow for any memory
> >>> devices that might be hotplugged and all possible configurations (so
> >>> with 2 host bridges you need at least 3 windows in the many TB range,
> >>> much worse as the number of host bridges goes up). It'll be worse than
> >>> this when we have QoS groups, but the current QEMU code just puts all
> >>> the windows in group 0.  Hence my first thought of just putting memory
> >>> behind those doesn't scale (a similar approach to this was in the
> >>> earliest versions of this patch set - though the full access path
> >>> wasn't wired up).
> >>>
> >>> The granularity can be in powers of 2 from 256 bytes to 16 kbytes.
> >>>
> >>> Next each host bridge has programmable address decoders which take the
> >>> incoming (often already interleaved) memory access and direct them to
> >>> appropriate root ports.  The root ports can be connected to a switch
> >>> which has additional address decoders in the upstream port to decide
> >>> which downstream port to route to.  Note we currently only support 1 level
> >>> of switches but it's easy to make this algorithm recursive to support
> >>> multiple switch levels (currently the kernel proposals only support 1 level)
> >>>
> >>> Finally the End Point with the actual memory receives the interleaved request,
> >>> takes the full address and (for power of 2 decoding - we don't yet support
> >>> 3, 6 and 12 way, which is more complex and has no kernel support yet)
> >>> drops a few address bits and adds an offset for the decoder used to
> >>> calculate its own device physical address.  Note a device will support
> >>> multiple interleave sets for different parts of its file once we add
> >>> multiple decoder support (on the todo list).
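For the power-of-2 case, that "drop a few address bits and add an offset" step can be sketched as follows. The function name and the `skip` parameter are illustrative, not the device's actual register layout:

```python
def hpa_to_dpa(hpa, base, granularity, ways, skip=0):
    """Power-of-2 de-interleave: remove the interleave-way address bits.

    'base' is the HPA where this decoder's range starts, 'granularity'
    the interleave granularity in bytes (256B..16KB, power of 2), 'ways'
    the number of interleaved targets (power of 2), and 'skip' an
    optional offset into the device.  The 3/6/12-way modes mentioned
    above need modulo arithmetic instead and are not handled here.
    """
    offset = hpa - base
    low = offset % granularity              # bits below the way-select field
    high = offset // (granularity * ways)   # bits above the way-select field
    return skip + high * granularity + low
```

For a 2-way, 256-byte interleave starting at HPA 0, the strides at HPA 0 and 512 both land on the first device, at device physical addresses 0 and 256 respectively.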
> >>>
> >>> So the current solution is straightforward (with the exception of that
> >>> proxying) because it follows the same decoding as used in real hardware
> >>> to route the memory accesses. As a result we get a read/write to a
> >>> device physical address and hence proxy that.  If any of the decoders
> >>> along the path are not configured then we error out at that stage.
> >>>
> >>> To create the equivalent as IO subregions I think we'd have to do the
> >>> following (this might be mediated by some central entity that
> >>> doesn't currently exist, or done on demand by whichever CXL device
> >>> happens to have its decoder set up last):
> >>>
> >>> 1) Wait for a decoder commit (enable) on any component. Goto 2.
> >>> 2) Walk the topology (up to host decoder, down to memory device)
> >>> If a complete interleaving path has been configured -
> >>>     i.e. we have committed decoders all the way to the memory
> >>>     device goto step 3, otherwise return to step 1 to wait for
> >>>     more decoders to be committed.
> >>> 3) For the memory region being supplied by the memory device,
> >>>     add subregions to map the device physical address (address
> >>>     in the file) for each interleave stride to the appropriate
> >>>     host Physical Address.
> >>> 4) Return to step 1 to wait for more decoders to commit.
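Step 3 is where the region count blows up: each device needs one subregion per interleave stride. A toy enumeration of those (HPA, DPA) pairs for a single device (hypothetical helper, not QEMU code):

```python
def stride_map(window_base, window_size, granularity, ways, position):
    """Enumerate (hpa, dpa) subregion pairs for one device in a window.

    Models step 3 above: one subregion per interleave stride.
    'position' is the device's slot within the interleave set.
    """
    out = []
    dpa = 0
    # The device owns every 'ways'-th stride, starting at its position.
    for hpa in range(window_base + position * granularity,
                     window_base + window_size, granularity * ways):
        out.append((hpa, dpa))
        dpa += granularity
    return out
```

Each device therefore contributes window_size / (granularity * ways) subregions, which is what makes the approach so expensive at realistic sizes.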
> >>>
> >>> So the summary is: we can do it with IO regions, but there are a lot of them
> >>> and the setup is somewhat complex as we don't have one single point in
> >>> time where we know all the necessary information is available to compute
> >>> the right addresses.
> >>>
> >>> Looking forward to your suggestions if I haven't caused more confusion!  
> > 
> > Hi Peter,
> >   
> >>
> >> Thanks for the write up - I must confess they're a lot! :)
> >>
> >> I merely only learned what is CXL today, and I'm not very experienced on
> >> device modeling either, so please bear with me with stupid questions..
> >>
> >> IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
> >> That's a normal IO region, which looks very reasonable.
> >>
> >> However I'm confused why patch "RFC: softmmu/memory: Add ops to
> >> memory_region_ram_init_from_file" helped.
> >>
> >> Per my knowledge, all the memory accesses upon this CFMW window are trapped
> >> using this IO region already.  There can be multiple memory file objects
> >> underneath, and when read/write happens the object will be decoded from
> >> cxl_cfmws_find_device() as you referenced.  
> > 
> > Yes.
> >   
> >>
> >> However I see nowhere that these memory objects got mapped as sub-regions
> >> into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
> >> be trapped.  
> > 
> > As you note they aren't mapped into the parent mr, hence we are trapping.
> > The parent mem_ops are responsible for decoding the 'which device' +
> > 'what address in device memory space'. Once we've gotten that info
> > the question is how do I actually do the access?
> > 
> > Mapping as subregions seems unwise due to the huge number required.
> >   
> >>
> >> To ask in another way: what will happen if you simply revert this RFC
> >> patch?  What will go wrong?  
> > 
> > The call to memory_region_dispatch_read()
> > https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/mem/cxl_type3.c#L556
> > 
> > would call memory_region_access_valid() that calls
> > mr->ops->valid.accepts() which is set to
> > unassigned_mem_accepts() and hence...
> > you get a MEMTX_DECODE_ERROR back and an exception in the
> > guest.
> > 
> > That wouldn't happen with a non-proxied access to the RAM, as
> > those paths never use the ops: memory_access_is_direct() is called
> > and memcpy is simply used without any involvement of the ops.
> > 
> > Is there a better way to proxy those writes to the backing files?
> > 
> > I was fishing a bit in the dark here and saw the existing ops defined
> > for a different purpose for VFIO
> > 
> > 4a2e242bbb ("memory: Don't use memcpy for ram_device regions")
> > 
> > and those allowed the use of memory_region_dispatch_write() to work.
> > 
> > Hence the RFC marking on that patch :)  
> 
> FWIW I had a similar issue implementing manual aliasing in one of my q800 patches 
> where I found that dispatching a read to a non-IO memory region didn't work with 
> memory_region_dispatch_read(). The solution in my case was to switch to using the 
> address space API instead, which whilst requiring an absolute address for the target 
> address space, handles the dispatch correctly across all different memory region types.
> 
> Have a look at 
> https://gitlab.com/mcayland/qemu/-/commit/318e12579c7570196187652da13542db86b8c722 to 
> see how I did this in macio_alias_read().
> 
> IIRC from my experiments in this area, my conclusion was that 
> memory_region_dispatch_read() can only work correctly if mapping directly between 2 
> IO memory regions, and for anything else you need to use the address space API.

Hi Mark,

I'd wondered about the address space API as an alternative approach.

From that reference it looks like you have the memory mapped into the system address
space and are providing an alias to that.  That's something I'd ideally like to
avoid doing as there is no meaningful way to do it so I'd just be hiding the memory
somewhere up high.  The memory should only be accessible through the one
route.

I think I could spin a separate address space for this purpose (one per CXL type 3
device probably) but that seems like another nasty hack to make. I'll try a quick
prototype of this tomorrow.

What do people think is the least horrible way to do this?

Thanks for the suggestion and I'm glad I'm not the only one trying to get this
sort of thing to work ;)

Jonathan

> 
> 
> ATB,
> 
> Mark.
> 



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-16 17:58                 ` Jonathan Cameron via
@ 2022-03-16 18:26                   ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-16 18:26 UTC (permalink / raw)
  To: Mark Cave-Ayland
  Cc: Peter Xu, Peter Maydell, Ben Widawsky, Michael S. Tsirkin,
	Markus Armbruster, Samarth Saxena, Chris Browy, qemu-devel,
	linux-cxl, linuxarm, Shreyas Shah, Saransh Gupta1, Paolo Bonzini,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On Wed, 16 Mar 2022 17:58:46 +0000
Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Wed, 16 Mar 2022 17:16:55 +0000
> Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> wrote:
> 
> > On 16/03/2022 16:50, Jonathan Cameron via wrote:
> >   
> > > On Thu, 10 Mar 2022 16:02:22 +0800
> > > Peter Xu <peterx@redhat.com> wrote:
> > >     
> > >> On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:    
> > >>> Hi Peter,    
> > >>
> > >> Hi, Jonathan,
> > >>    
> > >>>        
> > >>>>
> > >>>> https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> > >>>>
> > >>>> Having mr->ops set but with memory_access_is_direct() returning true sounds
> > >>>> weird to me.
> > >>>>
> > >>>> Sorry to have no understanding of the whole picture, but.. could you share
> > >>>> more on what's the interleaving requirement on the proxying, and why it
> > >>>> can't be done with adding some IO memory regions as sub-regions upon the
> > >>>> file one?    
> > >>>
> > >>> The proxying requirement is simply a means to read/write to a computed address
> > >>> within a memory region. There may well be a better way to do that.
> > >>>
> > >>> If I understand your suggestion correctly you would need a very high
> > >>> number of IO memory regions to be created dynamically when particular sets of
> > >>> registers across multiple devices in the topology are all programmed.
> > >>>
> > >>> The interleave can be 256 bytes across up to 16x, many terabyte, devices.
> > >>> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
> > >>> IO regions.  Even for a minimal useful test case of largest interleave
> > >>> set of 16x 256MB devices (256MB is minimum size the specification allows per
> > >>> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
> > >>> Any idea if that approach would scale sensibly to this number of regions?
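Those per-device counts are easy to sanity-check: at 256-byte granularity there is one subregion per 256-byte stride of each device (a quick back-of-the-envelope script, not part of the patch set):

```python
# One IO subregion per interleave stride per device, at the minimum
# 256-byte interleave granularity.

TB = 1 << 40
MB = 1 << 20

def subregions_per_device(device_size, granularity=256):
    return device_size // granularity

print(subregions_per_device(1 * TB))    # 16x 1TB case: ~4.3e9 per device
print(subregions_per_device(256 * MB))  # 16x 256MB case: ~1.0e6 per device
```

So even the minimal useful configuration lands around a million regions per device, which is the scaling concern raised above.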
> > >>>
> > >>> There are also complexities to getting all the information in one place to
> > >>> work out which IO memory regions maps where in PA space. Current solution is
> > >>> to do that mapping in the same way the hardware does which is hierarchical,
> > >>> so we walk the path to the device, picking directions based on each interleave
> > >>> decoder that we meet.
> > >>> Obviously this is a bit slow but I only really care about correctness at the
> > >>> moment.  I can think of various approaches to speeding it up but I'm not sure
> > >>> if we will ever care about performance.
> > >>>
> > >>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
> > >>> has the logic for that and as you can see it's fairly simple because we are always
> > >>> going down the topology following the decoders.
> > >>>
> > >>> Below I have mapped out an algorithm I think would work for doing it with
> > >>> IO memory regions as subregions.
> > >>>
> > >>> We could fake the whole thing by limiting ourselves to small host
> > >>> memory windows which are always directly backed, but then I wouldn't
> > >>> achieve the main aim of this which is to provide a test base for the OS code.
> > >>> To do that I need real interleave so I can seed the files with test patterns
> > >>> and verify the accesses hit the correct locations. Emulating what the hardware
> > >>> is actually doing on a device by device basis is the easiest way I have
> > >>> come up with to do that.
> > >>>
> > >>> Let me try to provide some more background so you hopefully don't have
> > >>> to have read the specs to follow what is going on!
> > >>> There is an example of a directly connected (no switches) topology in the
> > >>> docs
> > >>>
> > >>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
> > >>>
> > >>> The overall picture is we have a large number of CXL Type 3 memory devices,
> > >>> which at runtime (by OS at boot/on hotplug) are configured into various
> > >>> interleaving sets with hierarchical decoding at the host + host bridge
> > >>> + switch levels. For test setups I probably need to go to around 32 devices
> > >>> so I can hit various configurations simultaneously.
> > >>> No individual device has visibility of the full interleave setup - hence
> > >>> the walk in the existing code through the various decoders to find the
> > >>> final Device Physical address.
> > >>>
> > >>> At the host level the host provides a set of Physical Address windows with
> > >>> a fixed interleave decoding across the different host bridges in the system
> > >>> (CXL Fixed Memory windows, CFMWs)
> > >>> On a real system these have to be large enough to allow for any memory
> > >>> devices that might be hotplugged and all possible configurations (so
> > >>> with 2 host bridges you need at least 3 windows in the many TB range,
> > >>> much worse as the number of host bridges goes up). It'll be worse than
> > >>> this when we have QoS groups, but the current QEMU code just puts all
> > >>> the windows in group 0.  Hence my first thought of just putting memory
> > >>> behind those doesn't scale (a similar approach to this was in the
> > >>> earliest versions of this patch set - though the full access path
> > >>> wasn't wired up).
> > >>>
> > >>> The granularity can be in powers of 2 from 256 bytes to 16 kbytes.
> > >>>
> > >>> Next each host bridge has programmable address decoders which take the
> > >>> incoming (often already interleaved) memory access and direct them to
> > >>> appropriate root ports.  The root ports can be connected to a switch
> > >>> which has additional address decoders in the upstream port to decide
> > >>> which downstream port to route to.  Note we currently only support 1 level
> > >>> of switches but it's easy to make this algorithm recursive to support
> > >>> multiple switch levels (currently the kernel proposals only support 1 level)
> > >>>
> > >>> Finally the End Point with the actual memory receives the interleaved request,
> > >>> takes the full address and (for power of 2 decoding - we don't yet support
> > >>> 3, 6 and 12 way, which is more complex and has no kernel support yet)
> > >>> drops a few address bits and adds an offset for the decoder used to
> > >>> calculate its own device physical address.  Note a device will support
> > >>> multiple interleave sets for different parts of its file once we add
> > >>> multiple decoder support (on the todo list).
> > >>>
> > >>> So the current solution is straightforward (with the exception of that
> > >>> proxying) because it follows the same decoding as used in real hardware
> > >>> to route the memory accesses. As a result we get a read/write to a
> > >>> device physical address and hence proxy that.  If any of the decoders
> > >>> along the path are not configured then we error out at that stage.
> > >>>
> > >>> To create the equivalent as IO subregions I think we'd have to do the
> > >>> following (this might be mediated by some central entity that
> > >>> doesn't currently exist, or done on demand by whichever CXL device
> > >>> happens to have its decoder set up last):
> > >>>
> > >>> 1) Wait for a decoder commit (enable) on any component. Goto 2.
> > >>> 2) Walk the topology (up to host decoder, down to memory device)
> > >>> If a complete interleaving path has been configured -
> > >>>     i.e. we have committed decoders all the way to the memory
> > >>>     device goto step 3, otherwise return to step 1 to wait for
> > >>>     more decoders to be committed.
> > >>> 3) For the memory region being supplied by the memory device,
> > >>>     add subregions to map the device physical address (address
> > >>>     in the file) for each interleave stride to the appropriate
> > >>>     host Physical Address.
> > >>> 4) Return to step 1 to wait for more decoders to commit.
> > >>>
> > >>> So the summary is: we can do it with IO regions, but there are a lot of them
> > >>> and the setup is somewhat complex as we don't have one single point in
> > >>> time where we know all the necessary information is available to compute
> > >>> the right addresses.
> > >>>
> > >>> Looking forward to your suggestions if I haven't caused more confusion!    
> > > 
> > > Hi Peter,
> > >     
> > >>
> > >> Thanks for the write up - I must confess they're a lot! :)
> > >>
> > >> I merely only learned what is CXL today, and I'm not very experienced on
> > >> device modeling either, so please bear with me with stupid questions..
> > >>
> > >> IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
> > >> That's a normal IO region, which looks very reasonable.
> > >>
> > >> However I'm confused why patch "RFC: softmmu/memory: Add ops to
> > >> memory_region_ram_init_from_file" helped.
> > >>
> > >> Per my knowledge, all the memory accesses upon this CFMW window are trapped
> > >> using this IO region already.  There can be multiple memory file objects
> > >> underneath, and when read/write happens the object will be decoded from
> > >> cxl_cfmws_find_device() as you referenced.    
> > > 
> > > Yes.
> > >     
> > >>
> > >> However I see nowhere that these memory objects got mapped as sub-regions
> > >> into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
> > >> be trapped.    
> > > 
> > > As you note they aren't mapped into the parent mr, hence we are trapping.
> > > The parent mem_ops are responsible for decoding the 'which device' +
> > > 'what address in device memory space'. Once we've gotten that info
> > > the question is how do I actually do the access?
> > > 
> > > Mapping as subregions seems unwise due to the huge number required.
> > >     
> > >>
> > >> To ask in another way: what will happen if you simply revert this RFC
> > >> patch?  What will go wrong?    
> > > 
> > > The call to memory_region_dispatch_read()
> > > https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/mem/cxl_type3.c#L556
> > > 
> > > would call memory_region_access_valid() that calls
> > > mr->ops->valid.accepts() which is set to
> > > unassigned_mem_accepts() and hence...
> > > you get a MEMTX_DECODE_ERROR back and an exception in the
> > > guest.
> > > 
> > > That wouldn't happen with a non-proxied access to the RAM, as
> > > those paths never use the ops: memory_access_is_direct() is called
> > > and memcpy is simply used without any involvement of the ops.
> > > 
> > > Is there a better way to proxy those writes to the backing files?
> > > 
> > > I was fishing a bit in the dark here and saw the existing ops defined
> > > for a different purpose for VFIO
> > > 
> > > 4a2e242bbb ("memory: Don't use memcpy for ram_device regions")
> > > 
> > > and those allowed the use of memory_region_dispatch_write() to work.
> > > 
> > > Hence the RFC marking on that patch :)    
> > 
> > FWIW I had a similar issue implementing manual aliasing in one of my q800 patches 
> > where I found that dispatching a read to a non-IO memory region didn't work with 
> > memory_region_dispatch_read(). The solution in my case was to switch to using the 
> > address space API instead, which whilst requiring an absolute address for the target 
> > address space, handles the dispatch correctly across all different memory region types.
> > 
> > Have a look at 
> > https://gitlab.com/mcayland/qemu/-/commit/318e12579c7570196187652da13542db86b8c722 to 
> > see how I did this in macio_alias_read().
> > 
> > IIRC from my experiments in this area, my conclusion was that 
> > memory_region_dispatch_read() can only work correctly if mapping directly between 2 
> > IO memory regions, and for anything else you need to use the address space API.  
> 
> Hi Mark,
> 
> I'd wondered about the address space API as an alternative approach.
> 
> From that reference it looks like you have the memory mapped into the system address
> space and are providing an alias to that.  That's something I'd ideally like to
> avoid doing as there is no meaningful way to do it so I'd just be hiding the memory
> somewhere up high.  The memory should only be accessible through the one
> route.
> 
> I think I could spin a separate address space for this purpose (one per CXL type 3
> device probably) but that seems like another nasty hack to make. I'll try a quick
> prototype of this tomorrow.

Turned out to be trivial, so it's already done.  Will send out as v8 unless anyone
feeds back that there is a major disadvantage to just spinning up one address space
per CXL type 3 device.  That will also mean dropping the RFC patch, as it's no longer
used :)

Thanks for the hint Mark.

Jonathan

> 
> What do people think is the least horrible way to do this?
> 
> Thanks for the suggestion and I'm glad I'm not the only one trying to get this
> sort of thing to work ;)
> 
> Jonathan
> 
> > 
> > 
> > ATB,
> > 
> > Mark.
> >   
> 



* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
@ 2022-03-16 18:26                   ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron via @ 2022-03-16 18:26 UTC (permalink / raw)
  To: Mark Cave-Ayland
  Cc: Peter Xu, Peter Maydell, Ben Widawsky, Michael S. Tsirkin,
	Markus Armbruster, Samarth Saxena, Chris Browy, qemu-devel,
	linux-cxl, linuxarm, Shreyas Shah, Saransh Gupta1, Paolo Bonzini,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On Wed, 16 Mar 2022 17:58:46 +0000
Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Wed, 16 Mar 2022 17:16:55 +0000
> Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> wrote:
> 
> > On 16/03/2022 16:50, Jonathan Cameron via wrote:
> >   
> > > On Thu, 10 Mar 2022 16:02:22 +0800
> > > Peter Xu <peterx@redhat.com> wrote:
> > >     
> > >> On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:    
> > >>> Hi Peter,    
> > >>
> > >> Hi, Jonathan,
> > >>    
> > >>>        
> > >>>>
> > >>>> https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> > >>>>
> > >>>> Having mr->ops set but with memory_access_is_direct() returning true sounds
> > >>>> weird to me.
> > >>>>
> > >>>> Sorry to have no understanding of the whole picture, but.. could you share
> > >>>> more on what's the interleaving requirement on the proxying, and why it
> > >>>> can't be done with adding some IO memory regions as sub-regions upon the
> > >>>> file one?    
> > >>>
> > >>> The proxying requirement is simply a means to read/write to a computed address
> > >>> within a memory region. There may well be a better way to do that.
> > >>>
> > >>> If I understand your suggestion correctly you would need a very high
> > >>> number of IO memory regions to be created dynamically when particular sets of
> > >>> registers across multiple devices in the topology are all programmed.
> > >>>
> > >>> The interleave can be 256 bytes across up to 16x, many terabyte, devices.
> > >>> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
> > >>> IO regions.  Even for a minimal useful test case of largest interleave
> > >>> set of 16x 256MB devices (256MB is minimum size the specification allows per
> > >>> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
> > >>> Any idea if that approach would scale sensibly to this number of regions?
> > >>>
> > >>> There are also complexities to getting all the information in one place to
> > >>> work out which IO memory regions maps where in PA space. Current solution is
> > >>> to do that mapping in the same way the hardware does which is hierarchical,
> > >>> so we walk the path to the device, picking directions based on each interleave
> > >>> decoder that we meet.
> > >>> Obviously this is a bit slow but I only really care about correctness at the
> > >>> moment.  I can think of various approaches to speeding it up but I'm not sure
> > >>> if we will ever care about performance.
> > >>>
> > >>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
> > >>> has the logic for that and as you can see it's fairly simple because we are always
> > >>> going down the topology following the decoders.
> > >>>
> > >>> Below I have mapped out an algorithm I think would work for doing it with
> > >>> IO memory regions as subregions.
> > >>>
> > >>> We could fake the whole thing by limiting ourselves to small host
> > >>> memory windows which are always directly backed, but then I wouldn't
> > >>> achieve the main aim of this which is to provide a test base for the OS code.
> > >>> To do that I need real interleave so I can seed the files with test patterns
> > >>> and verify the accesses hit the correct locations. Emulating what the hardware
> > >>> is actually doing on a device by device basis is the easiest way I have
> > >>> come up with to do that.
> > >>>
> > >>> Let me try to provide some more background so you hopefully don't have
> > >>> to have read the specs to follow what is going on!
> > >>> There are an example for directly connected (no switches) topology in the
> > >>> docs
> > >>>
> > >>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
> > >>>
> > >>> The overall picture is we have a large number of CXL Type 3 memory devices,
> > >>> which at runtime (by OS at boot/on hotplug) are configured into various
> > >>> interleaving sets with hierarchical decoding at the host + host bridge
> > >>> + switch levels. For test setups I probably need to go to around 32 devices
> > >>> so I can hit various configurations simultaneously.
> > >>> No individual device has visibility of the full interleave setup - hence
> > >>> the walk in the existing code through the various decoders to find the
> > >>> final Device Physical address.
> > >>>
> > >>> At the host level the host provides a set of Physical Address windows with
> > >>> a fixed interleave decoding across the different host bridges in the system
> > >>> (CXL Fixed Memory windows, CFMWs)
> > >>> On a real system these have to be large enough to allow for any memory
> > >>> devices that might be hotplugged and all possible configurations (so
> > >>> with 2 host bridges you need at least 3 windows in the many TB range,
> > >>> much worse as the number of host bridges goes up). It'll be worse than
> > >>> this when we have QoS groups, but the current Qemu code just puts all
> > >>> the windows in group 0.  Hence my first thought of just putting memory
> > >>> behind those doesn't scale (a similar approach to this was in the
> > >>> earliest versions of this patch set - though the full access path
> > >>> wasn't wired up).
> > >>>
> > >>> The granularity can be in powers of 2 from 256 bytes to 16 kbytes
> > >>>
> > >>> Next each host bridge has programmable address decoders which take the
> > >>> incoming (often already interleaved) memory access and direct them to
> > >>> appropriate root ports.  The root ports can be connected to a switch
> > >>> which has additional address decoders in the upstream port to decide
> > >>> which downstream port to route to.  Note we currently only support 1 level
> > >>> of switches but it's easy to make this algorithm recursive to support
> > >>> multiple switch levels (currently the kernel proposals only support 1 level)
> > >>>
> > >>> Finally the End Point with the actual memory receives the interleaved request and
> > >>> takes the full address and (for power of 2 decoding - we don't yet support
> > >>> 3,6 and 12 way which is more complex and there is no kernel support yet)
> > >>> it drops a few address bits and adds an offset for the decoder used to
> > >>> calculate it's own device physical address.  Note device will support
> > >>> multiple interleave sets for different parts of it's file once we add
> > >>> multiple decoder support (on the todo list).
> > >>>
> > >>> So the current solution is straight forward (with the exception of that
> > >>> proxying) because it follows the same decoding as used in real hardware
> > >>> to route the memory accesses. As a result we get a read/write to a
> > >>> device physical address and hence proxy that.  If any of the decoders
> > >>> along the path are not configured then we error out at that stage.
> > >>>
> > >>> To create the equivalent as IO subregions I think we'd have to do the
> > >>> following from (this might be mediated by some central entity that
> > >>> doesn't currently exist, or done on demand from which ever CXL device
> > >>> happens to have it's decoder set up last)
> > >>>
> > >>> 1) Wait for a decoder commit (enable) on any component. Goto 2.
> > >>> 2) Walk the topology (up to host decoder, down to memory device)
> > >>> If a complete interleaving path has been configured -
> > >>>     i.e. we have committed decoders all the way to the memory
> > >>>     device goto step 3, otherwise return to step 1 to wait for
> > >>>     more decoders to be committed.
> > >>> 3) For the memory region being supplied by the memory device,
> > >>>     add subregions to map the device physical address (address
> > >>>     in the file) for each interleave stride to the appropriate
> > >>>     host Physical Address.
> > >>> 4) Return to step 1 to wait for more decoders to commit.
> > >>>
> > >>> So summary is we can do it with IO regions, but there are a lot of them
> > >>> and the setup is somewhat complex as we don't have one single point in
> > >>> time where we know all the necessary information is available to compute
> > >>> the right addresses.
> > >>>
> > >>> Looking forward to your suggestions if I haven't caused more confusion!    
> > > 
> > > Hi Peter,
> > >     
> > >>
> > >> Thanks for the write up - I must confess they're a lot! :)
> > >>
> > >> I merely only learned what is CXL today, and I'm not very experienced on
> > >> device modeling either, so please bare with me with stupid questions..
> > >>
> > >> IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
> > >> That's a normal IO region, which looks very reasonable.
> > >>
> > >> However I'm confused why patch "RFC: softmmu/memory: Add ops to
> > >> memory_region_ram_init_from_file" helped.
> > >>
> > >> Per my knowledge, all the memory accesses upon this CFMW window trapped
> > >> using this IO region already.  There can be multiple memory file objects
> > >> underneath, and when read/write happens the object will be decoded from
> > >> cxl_cfmws_find_device() as you referenced.    
> > > 
> > > Yes.
> > >     
> > >>
> > >> However I see nowhere that these memory objects got mapped as sub-regions
> > >> into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
> > >> be trapped.    
> > > 
> > > AS you note they aren't mapped into the parent mr, hence we are trapping.
> > > The parent mem_ops are responsible for decoding the 'which device' +
> > > 'what address in device memory space'. Once we've gotten that info
> > > the question is how do I actually do the access?
> > > 
> > > Mapping as subregions seems unwise due to the huge number required.
> > >     
> > >>
> > >> To ask in another way: what will happen if you simply revert this RFC
> > >> patch?  What will go wrong?    
> > > 
> > > The call to memory_region_dispatch_read()
> > > https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/mem/cxl_type3.c#L556
> > > 
> > > would call memory_region_access_valid() that calls
> > > mr->ops->valid.accepts() which is set to
> > > unassigned_mem_accepts() and hence...
> > > you get back a MEMTX_DECODE_ERROR back and an exception in the
> > > guest.
> > > 
> > > That wouldn't happen with a non-proxied access to the RAM, as
> > > those paths never use the ops: memory_access_is_direct() is called
> > > and a simple memcpy is used without any involvement of the ops.
> > > 
> > > Is a better way to proxy those writes to the backing files?
> > > 
> > > I was fishing a bit in the dark here and saw the existing ops defined
> > > for a different purpose for VFIO
> > > 
> > > 4a2e242bbb ("memory: Don't use memcpy for ram_device regions")
> > > 
> > > and those allowed the use of memory_region_dispatch_write() to work.
> > > 
> > > Hence the RFC marking on that patch :)    
> > 
> > FWIW I had a similar issue implementing manual aliasing in one of my q800 patches 
> > where I found that dispatching a read to a non-IO memory region didn't work with 
> > memory_region_dispatch_read(). The solution in my case was to switch to using the 
> > address space API instead, which whilst requiring an absolute address for the target 
> > address space, handles the dispatch correctly across all different memory region types.
> > 
> > Have a look at 
> > https://gitlab.com/mcayland/qemu/-/commit/318e12579c7570196187652da13542db86b8c722 to 
> > see how I did this in macio_alias_read().
> > 
> > IIRC from my experiments in this area, my conclusion was that 
> > memory_region_dispatch_read() can only work correctly if mapping directly between 2 
> > IO memory regions, and for anything else you need to use the address space API.  
> 
> Hi Mark,
> 
> I'd wondered about the address space API as an alternative approach.
> 
> From that reference it looks like you have the memory mapped into the system address
> space and are providing an alias to that.  That's something I'd ideally like to
> avoid doing as there is no meaningful way to do it so I'd just be hiding the memory
> somewhere up high.  The memory should only be accessible through the one
> route.
> 
> I think I could spin a separate address space for this purpose (one per CXL type 3
> device probably) but that seems like another nasty hack to make. I'll try a quick
> prototype of this tomorrow.

Turned out to be trivial, so it's already done.  I'll send it out as v8 unless anyone
feeds back that there is a major disadvantage to just spinning up one address space
per CXL type 3 device.  That will also mean dropping the RFC patch, as it is no longer
needed :)

Thanks for the hint Mark.

Jonathan

> 
> What do people think is the least horrible way to do this?
> 
> Thanks for the suggestion and I'm glad I'm not the only one trying to get this
> sort of thing to work ;)
> 
> Jonathan
> 
> > 
> > 
> > ATB,
> > 
> > Mark.
> >   
> 



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-16 18:26                   ` Jonathan Cameron via
@ 2022-03-17  8:12                     ` Mark Cave-Ayland
  -1 siblings, 0 replies; 124+ messages in thread
From: Mark Cave-Ayland @ 2022-03-17  8:12 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Peter Xu, Peter Maydell, Ben Widawsky, Michael S. Tsirkin,
	Markus Armbruster, Samarth Saxena, Chris Browy, qemu-devel,
	linux-cxl, linuxarm, Shreyas Shah, Saransh Gupta1, Paolo Bonzini,
	Marcel Apfelbaum, Igor Mammedov, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On 16/03/2022 18:26, Jonathan Cameron via wrote:
> On Wed, 16 Mar 2022 17:58:46 +0000
> Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> 
>> On Wed, 16 Mar 2022 17:16:55 +0000
>> Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> wrote:
>>
>>> On 16/03/2022 16:50, Jonathan Cameron via wrote:
>>>    
>>>> On Thu, 10 Mar 2022 16:02:22 +0800
>>>> Peter Xu <peterx@redhat.com> wrote:
>>>>      
>>>>> On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:
>>>>>> Hi Peter,
>>>>>
>>>>> Hi, Jonathan,
>>>>>     
>>>>>>         
>>>>>>>
>>>>>>> https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
>>>>>>>
>>>>>>> Having mr->ops set but with memory_access_is_direct() returning true sounds
>>>>>>> weird to me.
>>>>>>>
>>>>>>> Sorry to have no understanding of the whole picture, but.. could you share
>>>>>>> more on what's the interleaving requirement on the proxying, and why it
>>>>>>> can't be done with adding some IO memory regions as sub-regions upon the
>>>>>>> file one?
>>>>>>
>>>>>> The proxying requirement is simply a means to read/write to a computed address
>>>>>> within a memory region. There may well be a better way to do that.
>>>>>>
>>>>>> If I understand your suggestion correctly you would need a very high
>>>>>> number of IO memory regions to be created dynamically when particular sets of
>>>>>> registers across multiple devices in the topology are all programmed.
>>>>>>
>>>>>> The interleave can be 256 bytes across up to 16x, many terabyte, devices.
>>>>>> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
>>>>>> IO regions.  Even for a minimal useful test case of largest interleave
>>>>>> set of 16x 256MB devices (256MB is minimum size the specification allows per
>>>>>> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
>>>>>> Any idea if that approach would scale sensibly to this number of regions?
>>>>>>
>>>>>> There are also complexities in getting all the information in one place to
>>>>>> work out which IO memory regions map where in PA space. The current solution is
>>>>>> to do that mapping in the same way the hardware does which is hierarchical,
>>>>>> so we walk the path to the device, picking directions based on each interleave
>>>>>> decoder that we meet.
>>>>>> Obviously this is a bit slow but I only really care about correctness at the
>>>>>> moment.  I can think of various approaches to speeding it up but I'm not sure
>>>>>> if we will ever care about performance.
>>>>>>
>>>>>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
>>>>>> has the logic for that and as you can see it's fairly simple because we are always
>>>>>> going down the topology following the decoders.
>>>>>>
>>>>>> Below I have mapped out an algorithm I think would work for doing it with
>>>>>> IO memory regions as subregions.
>>>>>>
>>>>>> We could fake the whole thing by limiting ourselves to small host
>>>>>> memory windows which are always directly backed, but then I wouldn't
>>>>>> achieve the main aim of this which is to provide a test base for the OS code.
>>>>>> To do that I need real interleave so I can seed the files with test patterns
>>>>>> and verify the accesses hit the correct locations. Emulating what the hardware
>>>>>> is actually doing on a device by device basis is the easiest way I have
>>>>>> come up with to do that.
>>>>>>
>>>>>> Let me try to provide some more background so you hopefully don't have
>>>>>> to have read the specs to follow what is going on!
>>>>>> There is an example of a directly connected (no switches) topology in the
>>>>>> docs
>>>>>>
>>>>>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
>>>>>>
>>>>>> The overall picture is we have a large number of CXL Type 3 memory devices,
>>>>>> which at runtime (by OS at boot/on hotplug) are configured into various
>>>>>> interleaving sets with hierarchical decoding at the host + host bridge
>>>>>> + switch levels. For test setups I probably need to go to around 32 devices
>>>>>> so I can hit various configurations simultaneously.
>>>>>> No individual device has visibility of the full interleave setup - hence
>>>>>> the walk in the existing code through the various decoders to find the
>>>>>> final Device Physical address.
>>>>>>
>>>>>> At the host level the host provides a set of Physical Address windows with
>>>>>> a fixed interleave decoding across the different host bridges in the system
>>>>>> (CXL Fixed Memory windows, CFMWs)
>>>>>> On a real system these have to be large enough to allow for any memory
>>>>>> devices that might be hotplugged and all possible configurations (so
>>>>>> with 2 host bridges you need at least 3 windows in the many TB range,
>>>>>> much worse as the number of host bridges goes up). It'll be worse than
>>>>>> this when we have QoS groups, but the current QEMU code just puts all
>>>>>> the windows in group 0.  Hence my first thought of just putting memory
>>>>>> behind those doesn't scale (a similar approach to this was in the
>>>>>> earliest versions of this patch set - though the full access path
>>>>>> wasn't wired up).
>>>>>>
>>>>>> The granularity can be in powers of 2 from 256 bytes to 16 kbytes
>>>>>>
>>>>>> Next each host bridge has programmable address decoders which take the
>>>>>> incoming (often already interleaved) memory access and direct them to
>>>>>> appropriate root ports.  The root ports can be connected to a switch
>>>>>> which has additional address decoders in the upstream port to decide
>>>>>> which downstream port to route to.  Note we currently only support 1 level
>>>>>> of switches but it's easy to make this algorithm recursive to support
>>>>>> multiple switch levels (currently the kernel proposals only support 1 level)
>>>>>>
>>>>>> Finally the End Point with the actual memory receives the interleaved request and
>>>>>> takes the full address and (for power of 2 decoding - we don't yet support
>>>>>> 3,6 and 12 way which is more complex and there is no kernel support yet)
>>>>>> it drops a few address bits and adds an offset for the decoder used to
>>>>>> calculate its own device physical address.  Note a device will support
>>>>>> multiple interleave sets for different parts of its file once we add
>>>>>> multiple decoder support (on the todo list).
>>>>>>
>>>>>> So the current solution is straightforward (with the exception of that
>>>>>> proxying) because it follows the same decoding as used in real hardware
>>>>>> to route the memory accesses. As a result we get a read/write to a
>>>>>> device physical address and hence proxy that.  If any of the decoders
>>>>>> along the path are not configured then we error out at that stage.
>>>>>>
>>>>>> To create the equivalent as IO subregions I think we'd have to do the
>>>>>> following (this might be mediated by some central entity that
>>>>>> doesn't currently exist, or done on demand from whichever CXL device
>>>>>> happens to have its decoder set up last):
>>>>>>
>>>>>> 1) Wait for a decoder commit (enable) on any component. Goto 2.
>>>>>> 2) Walk the topology (up to host decoder, down to memory device)
>>>>>> If a complete interleaving path has been configured -
>>>>>>      i.e. we have committed decoders all the way to the memory
>>>>>>      device goto step 3, otherwise return to step 1 to wait for
>>>>>>      more decoders to be committed.
>>>>>> 3) For the memory region being supplied by the memory device,
>>>>>>      add subregions to map the device physical address (address
>>>>>>      in the file) for each interleave stride to the appropriate
>>>>>>      host Physical Address.
>>>>>> 4) Return to step 1 to wait for more decoders to commit.
>>>>>>
>>>>>> So summary is we can do it with IO regions, but there are a lot of them
>>>>>> and the setup is somewhat complex as we don't have one single point in
>>>>>> time where we know all the necessary information is available to compute
>>>>>> the right addresses.
>>>>>>
>>>>>> Looking forward to your suggestions if I haven't caused more confusion!
>>>>
>>>> Hi Peter,
>>>>      
>>>>>
>>>>> Thanks for the write up - I must confess they're a lot! :)
>>>>>
>>>>> I merely only learned what is CXL today, and I'm not very experienced on
>>>>> device modeling either, so please bear with me with stupid questions..
>>>>>
>>>>> IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
>>>>> That's a normal IO region, which looks very reasonable.
>>>>>
>>>>> However I'm confused why patch "RFC: softmmu/memory: Add ops to
>>>>> memory_region_ram_init_from_file" helped.
>>>>>
>>>>> Per my knowledge, all the memory accesses upon this CFMW window trapped
>>>>> using this IO region already.  There can be multiple memory file objects
>>>>> underneath, and when read/write happens the object will be decoded from
>>>>> cxl_cfmws_find_device() as you referenced.
>>>>
>>>> Yes.
>>>>      
>>>>>
>>>>> However I see nowhere that these memory objects got mapped as sub-regions
>>>>> into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
>>>>> be trapped.
>>>>
>>>> As you note they aren't mapped into the parent mr, hence we are trapping.
>>>> The parent mem_ops are responsible for decoding the 'which device' +
>>>> 'what address in device memory space'. Once we've gotten that info
>>>> the question is how do I actually do the access?
>>>>
>>>> Mapping as subregions seems unwise due to the huge number required.
>>>>      
>>>>>
>>>>> To ask in another way: what will happen if you simply revert this RFC
>>>>> patch?  What will go wrong?
>>>>
>>>> The call to memory_region_dispatch_read()
>>>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/mem/cxl_type3.c#L556
>>>>
>>>> would call memory_region_access_valid() that calls
>>>> mr->ops->valid.accepts() which is set to
>>>> unassigned_mem_accepts() and hence...
>>>> you get back a MEMTX_DECODE_ERROR and an exception in the
>>>> guest.
>>>>
>>>> That wouldn't happen with a non-proxied access to the RAM, as
>>>> those paths never use the ops: memory_access_is_direct() is called
>>>> and a simple memcpy is used without any involvement of the ops.
>>>>
>>>> Is a better way to proxy those writes to the backing files?
>>>>
>>>> I was fishing a bit in the dark here and saw the existing ops defined
>>>> for a different purpose for VFIO
>>>>
>>>> 4a2e242bbb ("memory: Don't use memcpy for ram_device regions")
>>>>
>>>> and those allowed the use of memory_region_dispatch_write() to work.
>>>>
>>>> Hence the RFC marking on that patch :)
>>>
>>> FWIW I had a similar issue implementing manual aliasing in one of my q800 patches
>>> where I found that dispatching a read to a non-IO memory region didn't work with
>>> memory_region_dispatch_read(). The solution in my case was to switch to using the
>>> address space API instead, which whilst requiring an absolute address for the target
>>> address space, handles the dispatch correctly across all different memory region types.
>>>
>>> Have a look at
>>> https://gitlab.com/mcayland/qemu/-/commit/318e12579c7570196187652da13542db86b8c722 to
>>> see how I did this in macio_alias_read().
>>>
>>> IIRC from my experiments in this area, my conclusion was that
>>> memory_region_dispatch_read() can only work correctly if mapping directly between 2
>>> IO memory regions, and for anything else you need to use the address space API.
>>
>> Hi Mark,
>>
>> I'd wondered about the address space API as an alternative approach.
>>
>> From that reference it looks like you have the memory mapped into the system address
>> space and are providing an alias to that.  That's something I'd ideally like to
>> avoid doing as there is no meaningful way to do it so I'd just be hiding the memory
>> somewhere up high.  The memory should only be accessible through the one
>> route.
>>
>> I think I could spin a separate address space for this purpose (one per CXL type 3
>> device probably) but that seems like another nasty hack to make. I'll try a quick
>> prototype of this tomorrow.
> 
> Turned out to be trivial, so it's already done.  I'll send it out as v8 unless anyone
> feeds back that there is a major disadvantage to just spinning up one address space
> per CXL type 3 device.  That will also mean dropping the RFC patch, as it is no longer
> needed :)
> 
> Thanks for the hint Mark.
> 
> Jonathan

Ah great! As you've already noticed my particular case was performing partial 
decoding on a memory region, but there are no issues if you need to dispatch to 
another existing address space such as PCI/IOMMU. Creating a separate address space 
per device shouldn't be an issue either, as that's effectively how the PCI bus master 
requests are handled.

The address spaces are visible in "info mtree" so if you haven't already, I would 
recommend generating a dynamic name for the address space based upon the device 
name/address to make it easier for development and debugging.


ATB,

Mark.

^ permalink raw reply	[flat|nested] 124+ messages in thread


* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-17  8:12                     ` Mark Cave-Ayland
@ 2022-03-17 16:47                       ` Jonathan Cameron via
  -1 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-17 16:47 UTC (permalink / raw)
  To: Mark Cave-Ayland
  Cc: Peter Maydell, Shreyas Shah, Ben Widawsky, Michael S. Tsirkin,
	Marcel Apfelbaum, Samarth Saxena, Chris Browy, Markus Armbruster,
	Peter Xu, qemu-devel, linuxarm, linux-cxl, Igor Mammedov,
	Saransh Gupta1, Paolo Bonzini, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On Thu, 17 Mar 2022 08:12:56 +0000
Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> wrote:

> On 16/03/2022 18:26, Jonathan Cameron via wrote:
> > On Wed, 16 Mar 2022 17:58:46 +0000
> > Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> >   
> >> On Wed, 16 Mar 2022 17:16:55 +0000
> >> Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> wrote:
> >>  
> >>> On 16/03/2022 16:50, Jonathan Cameron via wrote:
> >>>      
> >>>> On Thu, 10 Mar 2022 16:02:22 +0800
> >>>> Peter Xu <peterx@redhat.com> wrote:
> >>>>        
> >>>>> On Wed, Mar 09, 2022 at 11:28:27AM +0000, Jonathan Cameron wrote:  
> >>>>>> Hi Peter,  
> >>>>>
> >>>>> Hi, Jonathan,
> >>>>>       
> >>>>>>           
> >>>>>>>
> >>>>>>> https://lore.kernel.org/qemu-devel/20220306174137.5707-35-Jonathan.Cameron@huawei.com/
> >>>>>>>
> >>>>>>> Having mr->ops set but with memory_access_is_direct() returning true sounds
> >>>>>>> weird to me.
> >>>>>>>
> >>>>>>> Sorry to have no understanding of the whole picture, but.. could you share
> >>>>>>> more on what's the interleaving requirement on the proxying, and why it
> >>>>>>> can't be done with adding some IO memory regions as sub-regions upon the
> >>>>>>> file one?  
> >>>>>>
> >>>>>> The proxying requirement is simply a means to read/write to a computed address
> >>>>>> within a memory region. There may well be a better way to do that.
> >>>>>>
> >>>>>> If I understand your suggestion correctly you would need a very high
> >>>>>> number of IO memory regions to be created dynamically when particular sets of
> >>>>>> registers across multiple devices in the topology are all programmed.
> >>>>>>
> >>>>>> The interleave can be 256 bytes across up to 16 many-terabyte devices.
> >>>>>> So assuming a simple set of 16 1TB devices I think you'd need about 4x10^9
> >>>>>> IO regions.  Even for a minimal useful test case of largest interleave
> >>>>>> set of 16x 256MB devices (256MB is minimum size the specification allows per
> >>>>>> decoded region at the device) and 16 way interleave we'd need 10^6 IO regions.
> >>>>>> Any idea if that approach would scale sensibly to this number of regions?
> >>>>>>
> >>>>>> There are also complexities in getting all the information in one place to
> >>>>>> work out which IO memory region maps where in PA space. The current solution
> >>>>>> is to do that mapping in the same way the hardware does, which is hierarchical:
> >>>>>> we walk the path to the device, picking directions based on each interleave
> >>>>>> decoder that we meet.
> >>>>>> Obviously this is a bit slow but I only really care about correctness at the
> >>>>>> moment.  I can think of various approaches to speeding it up but I'm not sure
> >>>>>> if we will ever care about performance.
> >>>>>>
> >>>>>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/cxl/cxl-host.c#L131
> >>>>>> has the logic for that and as you can see it's fairly simple because we are always
> >>>>>> going down the topology following the decoders.
> >>>>>>
> >>>>>> Below I have mapped out an algorithm I think would work for doing it with
> >>>>>> IO memory regions as subregions.
> >>>>>>
> >>>>>> We could fake the whole thing by limiting ourselves to small host
> >>>>>> memory windows which are always directly backed, but then I wouldn't
> >>>>>> achieve the main aim of this which is to provide a test base for the OS code.
> >>>>>> To do that I need real interleave so I can seed the files with test patterns
> >>>>>> and verify the accesses hit the correct locations. Emulating what the hardware
> >>>>>> is actually doing on a device by device basis is the easiest way I have
> >>>>>> come up with to do that.
> >>>>>>
> >>>>>> Let me try to provide some more background so you hopefully don't have
> >>>>>> to have read the specs to follow what is going on!
> >>>>>> There is an example of a directly connected (no switches) topology in the
> >>>>>> docs:
> >>>>>>
> >>>>>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/docs/system/devices/cxl.rst
> >>>>>>
> >>>>>> The overall picture is we have a large number of CXL Type 3 memory devices,
> >>>>>> which at runtime (by OS at boot/on hotplug) are configured into various
> >>>>>> interleaving sets with hierarchical decoding at the host + host bridge
> >>>>>> + switch levels. For test setups I probably need to go to around 32 devices
> >>>>>> so I can hit various configurations simultaneously.
> >>>>>> No individual device has visibility of the full interleave setup - hence
> >>>>>> the walk in the existing code through the various decoders to find the
> >>>>>> final Device Physical address.
> >>>>>>
> >>>>>> At the host level the host provides a set of Physical Address windows with
> >>>>>> a fixed interleave decoding across the different host bridges in the system
> >>>>>> (CXL Fixed Memory windows, CFMWs).
> >>>>>> On a real system these have to be large enough to allow for any memory
> >>>>>> devices that might be hotplugged and all possible configurations (so
> >>>>>> with 2 host bridges you need at least 3 windows in the many TB range,
> >>>>>> much worse as the number of host bridges goes up). It'll be worse than
> >>>>>> this when we have QoS groups, but the current Qemu code just puts all
> >>>>>> the windows in group 0.  Hence my first thought of just putting memory
> >>>>>> behind those doesn't scale (a similar approach to this was in the
> >>>>>> earliest versions of this patch set - though the full access path
> >>>>>> wasn't wired up).
> >>>>>>
> >>>>>> The granularity can be in powers of 2 from 256 bytes to 16 kbytes.
> >>>>>>
> >>>>>> Next each host bridge has programmable address decoders which take the
> >>>>>> incoming (often already interleaved) memory access and direct them to
> >>>>>> appropriate root ports.  The root ports can be connected to a switch
> >>>>>> which has additional address decoders in the upstream port to decide
> >>>>>> which downstream port to route to.  Note we currently only support 1 level
> >>>>>> of switches but it's easy to make this algorithm recursive to support
> >>>>>> multiple switch levels (currently the kernel proposals only support 1 level).
> >>>>>>
> >>>>>> Finally the End Point with the actual memory receives the interleaved request,
> >>>>>> takes the full address and (for power of 2 decoding - we don't yet support
> >>>>>> 3, 6 and 12 way, which is more complex and has no kernel support yet)
> >>>>>> drops a few address bits and adds an offset for the decoder used, to
> >>>>>> calculate its own device physical address.  Note a device will support
> >>>>>> multiple interleave sets for different parts of its file once we add
> >>>>>> multiple decoder support (on the todo list).
> >>>>>>
> >>>>>> So the current solution is straightforward (with the exception of that
> >>>>>> proxying) because it follows the same decoding as used in real hardware
> >>>>>> to route the memory accesses. As a result we get a read/write to a
> >>>>>> device physical address and hence proxy that.  If any of the decoders
> >>>>>> along the path are not configured then we error out at that stage.
> >>>>>>
> >>>>>> To create the equivalent as IO subregions I think we'd have to do the
> >>>>>> following (this might be mediated by some central entity that
> >>>>>> doesn't currently exist, or done on demand from whichever CXL device
> >>>>>> happens to have its decoder set up last):
> >>>>>>
> >>>>>> 1) Wait for a decoder commit (enable) on any component. Goto 2.
> >>>>>> 2) Walk the topology (up to host decoder, down to memory device)
> >>>>>> If a complete interleaving path has been configured -
> >>>>>>      i.e. we have committed decoders all the way to the memory
> >>>>>>      device goto step 3, otherwise return to step 1 to wait for
> >>>>>>      more decoders to be committed.
> >>>>>> 3) For the memory region being supplied by the memory device,
> >>>>>>      add subregions to map the device physical address (address
> >>>>>>      in the file) for each interleave stride to the appropriate
> >>>>>>      host Physical Address.
> >>>>>> 4) Return to step 1 to wait for more decoders to commit.
> >>>>>>
> >>>>>> So in summary: we can do it with IO regions, but there are a lot of them
> >>>>>> and the setup is somewhat complex as we don't have one single point in
> >>>>>> time where we know all the necessary information is available to compute
> >>>>>> the right addresses.
> >>>>>>
> >>>>>> Looking forward to your suggestions if I haven't caused more confusion!  
> >>>>
> >>>> Hi Peter,
> >>>>        
> >>>>>
> >>>>> Thanks for the write up - I must confess they're a lot! :)
> >>>>>
> >>>>> I only learned what CXL is today, and I'm not very experienced in
> >>>>> device modeling either, so please bear with me and my stupid questions..
> >>>>>
> >>>>> IIUC so far CXL traps these memory accesses using CXLFixedWindow.mr.
> >>>>> That's a normal IO region, which looks very reasonable.
> >>>>>
> >>>>> However I'm confused why patch "RFC: softmmu/memory: Add ops to
> >>>>> memory_region_ram_init_from_file" helped.
> >>>>>
> >>>>> Per my knowledge, all the memory accesses upon this CFMW window are already
> >>>>> trapped using this IO region.  There can be multiple memory file objects
> >>>>> underneath, and when a read/write happens the object will be decoded from
> >>>>> cxl_cfmws_find_device() as you referenced.  
> >>>>
> >>>> Yes.
> >>>>        
> >>>>>
> >>>>> However I see nowhere that these memory objects got mapped as sub-regions
> >>>>> into parent (CXLFixedWindow.mr).  Then I don't understand why they cannot
> >>>>> be trapped.  
> >>>>
> >>>> As you note they aren't mapped into the parent mr, hence we are trapping.
> >>>> The parent mem_ops are responsible for decoding the 'which device' +
> >>>> 'what address in device memory space'. Once we've gotten that info
> >>>> the question is how do I actually do the access?
> >>>>
> >>>> Mapping as subregions seems unwise due to the huge number required.
> >>>>        
> >>>>>
> >>>>> To ask in another way: what will happen if you simply revert this RFC
> >>>>> patch?  What will go wrong?  
> >>>>
> >>>> The call to memory_region_dispatch_read()
> >>>> https://gitlab.com/jic23/qemu/-/blob/cxl-v7-draft-2-for-test/hw/mem/cxl_type3.c#L556
> >>>>
> >>>> would call memory_region_access_valid() that calls
> >>>> mr->ops->valid.accepts() which is set to
> >>>> unassigned_mem_accepts() and hence...
> >>>> you get a MEMTX_DECODE_ERROR back and an exception in the
> >>>> guest.
> >>>>
> >>>> That wouldn't happen with a non-proxied access to the RAM, as those paths
> >>>> never use the ops: memory_access_is_direct() is called and a simple memcpy
> >>>> is used without any involvement of the ops.
> >>>>
> >>>> Is there a better way to proxy those writes to the backing files?
> >>>>
> >>>> I was fishing a bit in the dark here and saw the existing ops defined
> >>>> for a different purpose for VFIO
> >>>>
> >>>> 4a2e242bbb ("memory: Don't use memcpy for ram_device regions")
> >>>>
> >>>> and those allowed the use of memory_region_dispatch_write() to work.
> >>>>
> >>>> Hence the RFC marking on that patch :)  
> >>>
> >>> FWIW I had a similar issue implementing manual aliasing in one of my q800 patches
> >>> where I found that dispatching a read to a non-IO memory region didn't work with
> >>> memory_region_dispatch_read(). The solution in my case was to switch to using the
> >>> address space API instead, which whilst requiring an absolute address for the target
> >>> address space, handles the dispatch correctly across all different memory region types.
> >>>
> >>> Have a look at
> >>> https://gitlab.com/mcayland/qemu/-/commit/318e12579c7570196187652da13542db86b8c722 to
> >>> see how I did this in macio_alias_read().
> >>>
> >>> IIRC from my experiments in this area, my conclusion was that
> >>> memory_region_dispatch_read() can only work correctly if mapping directly between 2
> >>> IO memory regions, and for anything else you need to use the address space API.  
> >>
> >> Hi Mark,
> >>
> >> I'd wondered about the address space API as an alternative approach.
> >>
> >> From that reference it looks like you have the memory mapped into the system address
> >> space and are providing an alias to that.  That's something I'd ideally like to
> >> avoid doing as there is no meaningful way to do it so I'd just be hiding the memory
> >> somewhere up high.  The memory should only be accessible through the one
> >> route.
> >>
> >> I think I could spin a separate address space for this purpose (one per CXL type 3
> >> device probably) but that seems like another nasty hack to make. I'll try a quick
> >> prototype of this tomorrow.  
> > 
> > Turned out to be trivial so already done.  Will send out as v8 unless anyone
> > feeds back that there is a major disadvantage to just spinning up one address
> > space per CXL type3 device.  That will mean dropping the RFC patch as well,
> > as it is no longer used :)
> > 
> > Thanks for the hint Mark.
> > 
> > Jonathan  
> 
> Ah great! As you've already noticed my particular case was performing partial 
> decoding on a memory region, but there are no issues if you need to dispatch to 
> another existing address space such as PCI/IOMMU. Creating a separate address space 
> per device shouldn't be an issue either, as that's effectively how the PCI bus master 
> requests are handled.
> 
> The address spaces are visible in "info mtree" so if you haven't already, I would 
> recommend generating a dynamic name for the address space based upon the device 
> name/address to make it easier for development and debugging.
info mtree already provides the following with a static name:
address-space: cxl-type3-dpa-space
  0000000000000000-000000000fffffff (prio 0, nv-ram): cxl-mem2

So the device association is there anyway.  Hence I'm not sure a dynamic name adds
much on this occasion, and the code is simpler without making it dynamic.

Thanks,

Jonathan


> 
> 
> ATB,
> 
> Mark.
> 


^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-17 16:47                       ` Jonathan Cameron via
@ 2022-03-18  8:14                         ` Mark Cave-Ayland
  -1 siblings, 0 replies; 124+ messages in thread
From: Mark Cave-Ayland @ 2022-03-18  8:14 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Peter Maydell, Shreyas Shah, Ben Widawsky, Michael S. Tsirkin,
	Marcel Apfelbaum, Samarth Saxena, Chris Browy, Markus Armbruster,
	Peter Xu, qemu-devel, linuxarm, linux-cxl, Igor Mammedov,
	Saransh Gupta1, Paolo Bonzini, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On 17/03/2022 16:47, Jonathan Cameron via wrote:

>> Ah great! As you've already noticed my particular case was performing partial
>> decoding on a memory region, but there are no issues if you need to dispatch to
>> another existing address space such as PCI/IOMMU. Creating a separate address space
>> per device shouldn't be an issue either, as that's effectively how the PCI bus master
>> requests are handled.
>>
>> The address spaces are visible in "info mtree" so if you haven't already, I would
>> recommend generating a dynamic name for the address space based upon the device
>> name/address to make it easier for development and debugging.
> info mtree already provides the following with a static name:
> address-space: cxl-type3-dpa-space
>    0000000000000000-000000000fffffff (prio 0, nv-ram): cxl-mem2
> 
> So the device association is there anyway.  Hence I'm not sure a dynamic name adds
> much on this occasion, and the code is simpler without making it dynamic.

Is this using a single address space for multiple memory devices, or one per device 
as you were suggesting in the thread? If it is one per device and cxl-mem2 is the 
value of the -device id parameter, I still think it is worth adding the same device 
id into the address space name for the sake of a g_strdup_printf() and corresponding 
g_free().

Alas I don't currently have the time (and enough knowledge of CXL!) to do a more 
comprehensive review of the patches, but a quick skim of the series suggests it is 
quite mature. The only thing that I noticed was that there don't seem to be any 
trace-events added, which I think would be useful to aid driver developers if they 
need to debug some of the memory access routing.

Finally I should point out that there are a number of more experienced PCI developers 
on the CC list than me, and they should have the final say on patch review. So please 
consider these comments as recommendations based upon my development work on QEMU, 
and not as a NAK for proceeding with the series :)


ATB,

Mark.

^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: [PATCH v7 00/46] CXl 2.0 emulation Support
  2022-03-18  8:14                         ` Mark Cave-Ayland
@ 2022-03-18 10:08                           ` Jonathan Cameron via
  0 siblings, 0 replies; 124+ messages in thread
From: Jonathan Cameron @ 2022-03-18 10:08 UTC (permalink / raw)
  To: Mark Cave-Ayland
  Cc: Peter Maydell, Shreyas Shah, Ben Widawsky, Michael S. Tsirkin,
	Marcel Apfelbaum, Samarth Saxena, Chris Browy, Markus Armbruster,
	Peter Xu, qemu-devel, linuxarm, linux-cxl, Igor Mammedov,
	Saransh Gupta1, Paolo Bonzini, Dan Williams, David Hildenbrand,
	Alex Bennée, Shameerali Kolothum Thodi,
	Philippe Mathieu-Daudé

On Fri, 18 Mar 2022 08:14:58 +0000
Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> wrote:

> On 17/03/2022 16:47, Jonathan Cameron via wrote:
> 
> >> Ah great! As you've already noticed my particular case was performing partial
> >> decoding on a memory region, but there are no issues if you need to dispatch to
> >> another existing address space such as PCI/IOMMU. Creating a separate address space
> >> per device shouldn't be an issue either, as that's effectively how the PCI bus master
> >> requests are handled.
> >>
> >> The address spaces are visible in "info mtree" so if you haven't already, I would
> >> recommend generating a dynamic name for the address space based upon the device
> >> name/address to make it easier for development and debugging.  
> > info mtree already provides the following with a static name
> > address-space: cxl-type3-dpa-space
> >    0000000000000000-000000000fffffff (prio 0, nv-ram): cxl-mem2
> > 
> > So the device association is there anyway.  Hence I'm not sure a dynamic name adds
> > a lot on this occasion and code is simpler without making it dynamic.  
> 
> Is this using a single address space for multiple memory devices, or one per device 
> as you were suggesting in the thread? If it is one per device and cxl-mem2 is the 
> value of the -device id parameter, I still think it is worth adding the same device 
> id into the address space name for the sake of a g_strdup_printf() and corresponding 
> g_free().

One per device.  Ultimately, when I add volatile memory support, we'll possibly end up
having to add an mr as a container for the two hostmem mrs.  Looking again, the name
above is actually the id of the mr, not the type3 device, so it's probably better to
optionally use the type3 device name if available.

I'll make the name something like cxl-type3-dpa-space-cxl-pmem3 if an id is available
and fall back to cxl-type3-dpa-space as before if not.

> 
> Alas I don't currently have the time (and enough knowledge of CXL!) to do a more 
> comprehensive review of the patches, but a quick skim of the series suggests it seems 
> quite mature. The only thing that I noticed was that there doesn't seem to be any 
> trace-events added, which I think may be useful to aid driver developers if they need 
> to debug some of the memory access routing.

Good suggestion.  I'm inclined to add them in a follow-up patch though, because
this patch set is already somewhat unmanageable from the point of view of review.
I already have a number of other patches queued up for a second series adding
more functionality.
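
Such a follow-up might add entries along these lines to the relevant trace-events file, following the format described in QEMU's docs/devel/tracing.rst; the event names and arguments here are purely illustrative:

```
# hw/mem/trace-events (hypothetical entries, names illustrative)
cxl_type3_read(uint64_t dpa, unsigned size) "DPA 0x%" PRIx64 " size %u"
cxl_type3_write(uint64_t dpa, uint64_t val, unsigned size) "DPA 0x%" PRIx64 " val 0x%" PRIx64 " size %u"
```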

> 
> Finally I should point out that there are a number of more experienced PCI developers 
> on the CC list than me, and they should have the final say on patch review. So please 
> consider these comments as recommendations based upon my development work on QEMU, 
> and not as a NAK for proceeding with the series :)

No problem and thanks for your help as (I think) you've solved the biggest open issue :)

Jonathan

> 
> 
> ATB,
> 
> Mark.


^ permalink raw reply	[flat|nested] 124+ messages in thread


end of thread, other threads:[~2022-03-18 10:10 UTC | newest]

Thread overview: 124+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-06 17:40 [PATCH v7 00/46] CXl 2.0 emulation Support Jonathan Cameron
2022-03-06 17:40 ` Jonathan Cameron via
2022-03-06 17:40 ` [PATCH v7 01/46] hw/pci/cxl: Add a CXL component type (interface) Jonathan Cameron
2022-03-06 17:40   ` Jonathan Cameron via
2022-03-06 17:40 ` [PATCH v7 02/46] hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5) Jonathan Cameron
2022-03-06 17:40   ` Jonathan Cameron via
2022-03-06 17:40 ` [PATCH v7 03/46] MAINTAINERS: Add entry for Compute Express Link Emulation Jonathan Cameron
2022-03-06 17:40   ` Jonathan Cameron via
2022-03-06 17:40 ` [PATCH v7 04/46] hw/cxl/device: Introduce a CXL device (8.2.8) Jonathan Cameron
2022-03-06 17:40   ` Jonathan Cameron via
2022-03-06 17:40 ` [PATCH v7 05/46] hw/cxl/device: Implement the CAP array (8.2.8.1-2) Jonathan Cameron
2022-03-06 17:40   ` Jonathan Cameron via
2022-03-06 17:40 ` [PATCH v7 06/46] hw/cxl/device: Implement basic mailbox (8.2.8.4) Jonathan Cameron
2022-03-06 17:40   ` Jonathan Cameron via
2022-03-06 17:40 ` [PATCH v7 07/46] hw/cxl/device: Add memory device utilities Jonathan Cameron
2022-03-06 17:40   ` Jonathan Cameron via
2022-03-06 17:40 ` [PATCH v7 08/46] hw/cxl/device: Add cheap EVENTS implementation (8.2.9.1) Jonathan Cameron
2022-03-06 17:40   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 09/46] hw/cxl/device: Timestamp implementation (8.2.9.3) Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 10/46] hw/cxl/device: Add log commands (8.2.9.4) + CEL Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 11/46] hw/pxb: Use a type for realizing expanders Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 12/46] hw/pci/cxl: Create a CXL bus type Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 13/46] cxl: Machine level control on whether CXL support is enabled Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 14/46] hw/pxb: Allow creation of a CXL PXB (host bridge) Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 15/46] qtest/cxl: Introduce initial test for pxb-cxl only Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 16/46] hw/cxl/rp: Add a root port Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 17/46] hw/cxl/device: Add a memory device (8.2.8.5) Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 18/46] hw/cxl/device: Implement MMIO HDM decoding (8.2.5.12) Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 19/46] hw/cxl/device: Add some trivial commands Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 20/46] hw/cxl/device: Plumb real Label Storage Area (LSA) sizing Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 21/46] hw/cxl/device: Implement get/set Label Storage Area (LSA) Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 22/46] qtests/cxl: Add initial root port and CXL type3 tests Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 23/46] hw/cxl/component: Implement host bridge MMIO (8.2.5, table 142) Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 24/46] acpi/cxl: Add _OSC implementation (9.14.2) Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 21:31   ` Michael S. Tsirkin
2022-03-06 21:31     ` Michael S. Tsirkin
2022-03-07 17:01     ` Jonathan Cameron
2022-03-07 17:01       ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 25/46] acpi/cxl: Create the CEDT (9.14.1) Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 26/46] hw/cxl/component: Add utils for interleave parameter encoding/decoding Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 27/46] hw/cxl/host: Add support for CXL Fixed Memory Windows Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 28/46] acpi/cxl: Introduce CFMWS structures in CEDT Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 29/46] hw/pci-host/gpex-acpi: Add support for dsdt construction for pxb-cxl Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 30/46] pci/pcie_port: Add pci_find_port_by_pn() Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 31/46] CXL/cxl_component: Add cxl_get_hb_cstate() Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 32/46] mem/cxl_type3: Add read and write functions for associated hostmem Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 33/46] cxl/cxl-host: Add memops for CFMWS region Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 34/46] RFC: softmmu/memory: Add ops to memory_region_ram_init_from_file Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 35/46] hw/cxl/component Add a dumb HDM decoder handler Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 36/46] i386/pc: Enable CXL fixed memory windows Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 37/46] tests/acpi: q35: Allow addition of a CXL test Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 38/46] qtests/bios-tables-test: Add a test for CXL emulation Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 39/46] tests/acpi: Add tables " Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 40/46] qtest/cxl: Add more complex test cases with CFMWs Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 41/46] hw/arm/virt: Basic CXL enablement on pci_expander_bridge instances pxb-cxl Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 42/46] qtest/cxl: Add aarch64 virt test for CXL Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 43/46] docs/cxl: Add initial Compute eXpress Link (CXL) documentation Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 44/46] pci-bridge/cxl_upstream: Add a CXL switch upstream port Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 45/46] pci-bridge/cxl_downstream: Add a CXL switch downstream port Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 17:41 ` [PATCH v7 46/46] cxl/cxl-host: Support interleave decoding with one level of switches Jonathan Cameron
2022-03-06 17:41   ` Jonathan Cameron via
2022-03-06 21:33 ` [PATCH v7 00/46] CXl 2.0 emulation Support Michael S. Tsirkin
2022-03-06 21:33   ` Michael S. Tsirkin
2022-03-07  9:39   ` Jonathan Cameron via
2022-03-07  9:39     ` Jonathan Cameron
2022-03-09  8:15     ` Peter Xu
2022-03-09  8:15       ` Peter Xu
2022-03-09 11:28       ` Jonathan Cameron
2022-03-09 11:28         ` Jonathan Cameron via
2022-03-10  8:02         ` Peter Xu
2022-03-10  8:02           ` Peter Xu
2022-03-16 16:50           ` Jonathan Cameron
2022-03-16 16:50             ` Jonathan Cameron via
2022-03-16 17:16             ` Mark Cave-Ayland
2022-03-16 17:16               ` Mark Cave-Ayland
2022-03-16 17:58               ` Jonathan Cameron
2022-03-16 17:58                 ` Jonathan Cameron via
2022-03-16 18:26                 ` Jonathan Cameron
2022-03-16 18:26                   ` Jonathan Cameron via
2022-03-17  8:12                   ` Mark Cave-Ayland
2022-03-17  8:12                     ` Mark Cave-Ayland
2022-03-17 16:47                     ` Jonathan Cameron
2022-03-17 16:47                       ` Jonathan Cameron via
2022-03-18  8:14                       ` Mark Cave-Ayland
2022-03-18  8:14                         ` Mark Cave-Ayland
2022-03-18 10:08                         ` Jonathan Cameron
2022-03-18 10:08                           ` Jonathan Cameron via
