From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Ben Widawsky <ben.widawsky@intel.com>
Cc: "David Hildenbrand" <david@redhat.com>,
	"Vishal Verma" <vishal.l.verma@intel.com>,
	"John Groves (jgroves)" <jgroves@micron.com>,
	"Chris Browy" <cbrowy@avery-design.com>,
	qemu-devel@nongnu.org, linux-cxl@vger.kernel.org,
	"Markus Armbruster" <armbru@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Igor Mammedov" <imammedo@redhat.com>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Ira Weiny" <ira.weiny@intel.com>,
	"Philippe Mathieu-Daudé" <f4bug@amsat.org>
Subject: Re: [RFC PATCH v3 02/31] hw/cxl/component: Introduce CXL components (8.1.x, 8.2.5)
Date: Thu, 11 Feb 2021 17:08:45 +0000
Message-ID: <20210211170845.0000451d@Huawei.com>
In-Reply-To: <20210202005948.241655-3-ben.widawsky@intel.com>

On Mon, 1 Feb 2021 16:59:19 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> A CXL 2.0 component is any entity in the CXL topology. All components
> have an analogous function in PCIe. Except for the CXL host bridge, all
> have a PCIe config space that is accessible via the common PCIe
> mechanisms. CXL components are enumerated via DVSEC fields in the
> extended PCIe header space. CXL components will minimally implement some
> subset of CXL.mem and CXL.cache registers defined in 8.2.5 of the CXL
> 2.0 specification. Two headers and a utility library are introduced to
> support the minimum functionality needed to enumerate components.
> 
> The cxl_pci header manages bits associated with PCI, specifically the
> DVSEC and related fields. The cxl_component.h variant has data
> structures and APIs that are useful for drivers implementing any of the
> CXL 2.0 components. The library takes care of making use of the DVSEC
> bits and the CXL.[mem|cache] registers. Per spec, the registers are
> little endian.
> 
> None of the mechanisms required to enumerate a CXL-capable host bridge
> are introduced at this point.
> 
> Note that the CXL.mem and CXL.cache registers used are always 4B wide.
> It's possible in the future that this constraint will not hold.
> 
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
A few additions to previous comments.

> ---
>  MAINTAINERS                    |   6 +
>  hw/Kconfig                     |   1 +
>  hw/cxl/Kconfig                 |   3 +
>  hw/cxl/cxl-component-utils.c   | 208 +++++++++++++++++++++++++++++++++
>  hw/cxl/meson.build             |   3 +
>  hw/meson.build                 |   1 +
>  include/hw/cxl/cxl.h           |  17 +++
>  include/hw/cxl/cxl_component.h | 187 +++++++++++++++++++++++++++++
>  include/hw/cxl/cxl_pci.h       | 138 ++++++++++++++++++++++
>  9 files changed, 564 insertions(+)
>  create mode 100644 hw/cxl/Kconfig
>  create mode 100644 hw/cxl/cxl-component-utils.c
>  create mode 100644 hw/cxl/meson.build
>  create mode 100644 include/hw/cxl/cxl.h
>  create mode 100644 include/hw/cxl/cxl_component.h
>  create mode 100644 include/hw/cxl/cxl_pci.h
> 


> diff --git a/hw/cxl/cxl-component-utils.c b/hw/cxl/cxl-component-utils.c
> new file mode 100644
> index 0000000000..8d56ad5c7d
> --- /dev/null
> +++ b/hw/cxl/cxl-component-utils.c
> @@ -0,0 +1,208 @@
> +/*
> + * CXL Utility library for components
> + *
> + * Copyright(C) 2020 Intel Corporation.
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2. See the
> + * COPYING file in the top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/log.h"
> +#include "hw/pci/pci.h"
> +#include "hw/cxl/cxl.h"
> +
> +static uint64_t cxl_cache_mem_read_reg(void *opaque, hwaddr offset,
> +                                       unsigned size)
> +{
> +    CXLComponentState *cxl_cstate = opaque;
> +    ComponentRegisters *cregs = &cxl_cstate->crb;
> +
> +    assert(size == 4);
> +
> +    if (cregs->special_ops && cregs->special_ops->read) {
> +        return cregs->special_ops->read(cxl_cstate, offset, size);
> +    } else {
> +        return cregs->cache_mem_registers[offset / 4];
> +    }
> +}
> +
> +static void cxl_cache_mem_write_reg(void *opaque, hwaddr offset, uint64_t value,
> +                                    unsigned size)
> +{
> +    CXLComponentState *cxl_cstate = opaque;
> +    ComponentRegisters *cregs = &cxl_cstate->crb;
> +
> +    assert(size == 4);
> +
> +    if (cregs->special_ops && cregs->special_ops->write) {
> +        cregs->special_ops->write(cxl_cstate, offset, value, size);
> +    } else {
> +        cregs->cache_mem_registers[offset / 4] = value;
> +    }
> +}
> +
> +/*
> + * 8.2.3
> + *   The access restrictions specified in Section 8.2.2 also apply to CXL 2.0
> + *   Component Registers.
> + *
> + * 8.2.2
> + *   • A 32 bit register shall be accessed as a 4 Bytes quantity. Partial
> + *   reads are not permitted.
> + *   • A 64 bit register shall be accessed as a 8 Bytes quantity. Partial
> + *   reads are not permitted.
> + *
> + * As of the spec defined today, only 4 byte registers exist.

The exciting exception to this is the RAS header log, which is
defined as 512 bits.  I will seek clarification, but I think the spec
should probably say that it is a set of 32 bit registers.

A number of the other elements that we will probably want to fill in
with plausible values also seem to use 64 bit registers.
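
If 8 byte registers do become a thing, one option (a rough sketch only,
keeping the existing 32 bit backing array and the 4 byte callbacks; the
name below is illustrative) would be to widen just the valid access
sizes and let the memory core split the access:

    /*
     * Sketch: accept 4 or 8 byte guest accesses.  Because
     * impl.max_access_size stays at 4, QEMU's memory core splits an
     * 8 byte access into two 4 byte calls into the existing handlers.
     */
    static const MemoryRegionOps cache_mem_ops_wide = {
        .read = cxl_cache_mem_read_reg,
        .write = cxl_cache_mem_write_reg,
        .endianness = DEVICE_LITTLE_ENDIAN,
        .valid = {
            .min_access_size = 4,
            .max_access_size = 8,
            .unaligned = false,
        },
        .impl = {
            .min_access_size = 4,
            .max_access_size = 4,
        },
    };

That's just one possibility though; the RAS header log question needs
answering first.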

> + */
> +static const MemoryRegionOps cache_mem_ops = {
> +    .read = cxl_cache_mem_read_reg,
> +    .write = cxl_cache_mem_write_reg,
> +    .endianness = DEVICE_LITTLE_ENDIAN,
> +    .valid = {
> +        .min_access_size = 4,
> +        .max_access_size = 4,
> +        .unaligned = false,
> +    },
> +    .impl = {
> +        .min_access_size = 4,
> +        .max_access_size = 4,
> +    },
> +};
> +

..
> +
> +void cxl_component_register_init_common(uint32_t *reg_state, enum reg_type type)
> +{
> +    int caps = 0;
> +    switch (type) {
> +    case CXL2_DOWNSTREAM_PORT:
> +    case CXL2_DEVICE:
> +        /* CAP, RAS, Link */
> +        caps = 2;
> +        break;
> +    case CXL2_UPSTREAM_PORT:
> +    case CXL2_TYPE3_DEVICE:
> +    case CXL2_LOGICAL_DEVICE:
> +        /* + HDM */
> +        caps = 3;
> +        break;
> +    case CXL2_ROOT_PORT:
> +        /* + Extended Security, + Snoop */
> +        caps = 5;
> +        break;
> +    default:
> +        abort();
> +    }
> +
> +    memset(reg_state, 0, 0x1000);
> +
> +    /* CXL Capability Header Register */
> +    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, ID, 1);
> +    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, VERSION, 1);
> +    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, CACHE_MEM_VERSION, 1);
> +    ARRAY_FIELD_DP32(reg_state, CXL_CAPABILITY_HEADER, ARRAY_SIZE, caps);
> +
> +
> +#define init_cap_reg(reg, id, version)                                        \
> +    _Static_assert(CXL_##reg##_REGISTERS_OFFSET != 0, "Invalid cap offset\n");\
> +    do {                                                                      \
> +        int which = R_CXL_##reg##_CAPABILITY_HEADER;                          \
> +        reg_state[which] = FIELD_DP32(reg_state[which],                       \
> +                                      CXL_##reg##_CAPABILITY_HEADER, ID, id); \
> +        reg_state[which] =                                                    \
> +            FIELD_DP32(reg_state[which], CXL_##reg##_CAPABILITY_HEADER,       \
> +                       VERSION, version);                                     \
> +        reg_state[which] =                                                    \
> +            FIELD_DP32(reg_state[which], CXL_##reg##_CAPABILITY_HEADER, PTR,  \
> +                       CXL_##reg##_REGISTERS_OFFSET);                         \
> +    } while (0)

Seems like this would be cleaner using ARRAY_FIELD_DP32 as you did for the header.

    #define init_cap_reg(reg, id, version)                                        \
        _Static_assert(CXL_##reg##_REGISTERS_OFFSET != 0, "Invalid cap offset\n");\
        do {                                                                      \
            ARRAY_FIELD_DP32(reg_state, CXL_##reg##_CAPABILITY_HEADER, ID, id);   \
            ARRAY_FIELD_DP32(reg_state, CXL_##reg##_CAPABILITY_HEADER,            \
                             VERSION, version);                                   \
            ARRAY_FIELD_DP32(reg_state, CXL_##reg##_CAPABILITY_HEADER,            \
                             PTR, CXL_##reg##_REGISTERS_OFFSET);                  \
        } while (0)

I think that gives the same result.
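
For what it's worth, expanding init_cap_reg(RAS, 2, 1) with the
ARRAY_FIELD_DP32 form gives, for the ID field alone, roughly:

    /* Rough expansion sketch for the ID field of init_cap_reg(RAS, 2, 1) */
    reg_state[R_CXL_RAS_CAPABILITY_HEADER] =
        FIELD_DP32(reg_state[R_CXL_RAS_CAPABILITY_HEADER],
                   CXL_RAS_CAPABILITY_HEADER, ID, 2);

which is the same store the existing 'which' based version performs.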

> +
> +    init_cap_reg(RAS, 2, 1);
> +    ras_init_common(reg_state);
> +
> +    init_cap_reg(LINK, 4, 2);

Feels like we'll want to fill in some plausible values for the rest of these,
to at least ensure that whatever is read back isn't crazy.

> +
> +    if (caps < 3) {
> +        return;
> +    }
> +
> +    init_cap_reg(HDM, 5, 1);
> +    hdm_init_common(reg_state);
> +
> +    if (caps < 5) {
> +        return;
> +    }
> +
> +    init_cap_reg(EXTSEC, 6, 1);
> +    init_cap_reg(SNOOP, 8, 1);
> +
> +#undef init_cap_reg
> +}
> +
> +/*
> + * Helper to create a DVSEC header for a CXL entity. The caller is responsible
> + * for tracking the valid offset.
> + *
> + * This function will build the DVSEC header on behalf of the caller and then
> + * copy in the remaining data for the vendor specific bits.
> + */
> +void cxl_component_create_dvsec(CXLComponentState *cxl, uint16_t length,
> +                                uint16_t type, uint8_t rev, uint8_t *body)
> +{
> +    PCIDevice *pdev = cxl->pdev;
> +    uint16_t offset = cxl->dvsec_offset;
> +
> +    assert(offset >= PCI_CFG_SPACE_SIZE &&
> +           ((offset + length) < PCI_CFG_SPACE_EXP_SIZE));
> +    assert((length & 0xf000) == 0);
> +    assert((rev & ~0xf) == 0);
> +
> +    /* Create the DVSEC in the MCFG space */
> +    pcie_add_capability(pdev, PCI_EXT_CAP_ID_DVSEC, 1, offset, length);
> +    pci_set_long(pdev->config + offset + PCIE_DVSEC_HEADER1_OFFSET,
> +                 (length << 20) | (rev << 16) | CXL_VENDOR_ID);
> +    pci_set_word(pdev->config + offset + PCIE_DVSEC_ID_OFFSET, type);
> +    memcpy(pdev->config + offset + sizeof(struct dvsec_header),
> +           body + sizeof(struct dvsec_header),
> +           length - sizeof(struct dvsec_header));
> +
> +    /* Update state for future DVSEC additions */
> +    range_init_nofail(&cxl->dvsecs[type], cxl->dvsec_offset, length);
> +    cxl->dvsec_offset += length;
> +}
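
As a usage sketch of the above (the struct, type and rev values here are
placeholders I've made up, not taken from cxl_pci.h), a caller would do
something along the lines of:

    /*
     * Illustrative only: 'example_dvsec' and the type/rev values are
     * hypothetical; real callers would use the DVSEC layouts and
     * spec-defined type codes from cxl_pci.h.
     */
    struct example_dvsec {
        struct dvsec_header hdr;
        uint16_t some_field;
    } QEMU_PACKED;

    static void add_example_dvsec(CXLComponentState *cxl)
    {
        struct example_dvsec d = { .some_field = 0x1 };

        cxl_component_create_dvsec(cxl, sizeof(d), 0 /* type */,
                                   1 /* rev */, (uint8_t *)&d);
    }

with cxl->dvsec_offset pointing at the next free spot in extended config
space before the call.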
...


