* [PATCH 0/5] CXL Region Provisioning Fixes
@ 2022-07-23  0:55 Dan Williams
  2022-07-23  0:55 ` [PATCH 1/5] cxl/acpi: Autoload driver for 'cxl_acpi' test devices Dan Williams
                   ` (4 more replies)
  0 siblings, 5 replies; 28+ messages in thread
From: Dan Williams @ 2022-07-23  0:55 UTC (permalink / raw)
  To: linux-cxl; +Cc: Jonathan Cameron, Dave Jiang

Here is a small collection of fixes that have cropped up after cutting the
new cxl/pending baseline.

The first is just a module loading fixup to make it possible to run
cxl_test on a system without an ACPI0017 device to preload the cxl_acpi
driver.

Patch 2 cleans up an attribute that is relevant for all decoders
*except* root decoders.

Patch 3 accommodates ACPI CFMWS entries that specify an interleave
granularity other than the minimum when cross-host-bridge interleaving
is disabled.

The final patches are the meat of the fixes to constrain the interleave
granularity settings to ones that do not cause aliasing or DPA decode
gaps. See the details in patch 5. Expect more of these fixups as the
testing continues to mature.

---

Dan Williams (5):
      cxl/acpi: Autoload driver for 'cxl_acpi' test devices
      cxl/region: Delete 'region' attribute from root decoders
      cxl/acpi: Minimize granularity for x1 interleaves
      cxl/region: Stop initializing interleave granularity
      cxl/region: Constrain region granularity scaling factor


 drivers/cxl/acpi.c        |   13 +++++++++
 drivers/cxl/core/port.c   |    3 +-
 drivers/cxl/core/region.c |   65 ++++++++++++++++++++++++++++++++++++++++++---
 3 files changed, 75 insertions(+), 6 deletions(-)

base-commit: b282b26d11c50d48b336fedb5f74b2eca3f7b94c


* [PATCH 1/5] cxl/acpi: Autoload driver for 'cxl_acpi' test devices
  2022-07-23  0:55 [PATCH 0/5] CXL Region Provisioning Fixes Dan Williams
@ 2022-07-23  0:55 ` Dan Williams
  2022-08-01 19:24   ` Verma, Vishal L
  2022-07-23  0:56 ` [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders Dan Williams
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 28+ messages in thread
From: Dan Williams @ 2022-07-23  0:55 UTC (permalink / raw)
  To: linux-cxl; +Cc: Dave Jiang

In support of CXL unit tests in the ndctl project, arrange for the
cxl_acpi driver to load in response to the registration of cxl_test
devices.
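
For reference, a minimal sketch (assumed names, not part of this patch or of
cxl_test itself) of the kind of registration the new id_table is meant to
match; the MODULE_DEVICE_TABLE(platform, ...) entry below is what generates
the "platform:cxl_acpi" modalias that lets udev autoload cxl_acpi when such
a device is registered:

#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static struct platform_device *mock_cxl_acpi;

static int __init mock_init(void)
{
	/* a platform device named "cxl_acpi" matches the new cxl_test_ids */
	mock_cxl_acpi = platform_device_register_simple("cxl_acpi", 0, NULL, 0);
	return PTR_ERR_OR_ZERO(mock_cxl_acpi);
}
module_init(mock_init);

static void __exit mock_exit(void)
{
	platform_device_unregister(mock_cxl_acpi);
}
module_exit(mock_exit);

MODULE_LICENSE("GPL v2");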

Reported-by: Dave Jiang <dave.jiang@intel.com>
Tested-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/acpi.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
index 64004eb672d0..eb436268b92c 100644
--- a/drivers/cxl/acpi.c
+++ b/drivers/cxl/acpi.c
@@ -515,12 +515,19 @@ static const struct acpi_device_id cxl_acpi_ids[] = {
 };
 MODULE_DEVICE_TABLE(acpi, cxl_acpi_ids);
 
+static const struct platform_device_id cxl_test_ids[] = {
+	{ "cxl_acpi" },
+	{ },
+};
+MODULE_DEVICE_TABLE(platform, cxl_test_ids);
+
 static struct platform_driver cxl_acpi_driver = {
 	.probe = cxl_acpi_probe,
 	.driver = {
 		.name = KBUILD_MODNAME,
 		.acpi_match_table = cxl_acpi_ids,
 	},
+	.id_table = cxl_test_ids,
 };
 
 module_platform_driver(cxl_acpi_driver);



* [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders
  2022-07-23  0:55 [PATCH 0/5] CXL Region Provisioning Fixes Dan Williams
  2022-07-23  0:55 ` [PATCH 1/5] cxl/acpi: Autoload driver for 'cxl_acpi' test devices Dan Williams
@ 2022-07-23  0:56 ` Dan Williams
  2022-08-01 19:32   ` Alison Schofield
  2022-08-01 19:38   ` Verma, Vishal L
  2022-07-23  0:56 ` [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves Dan Williams
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 28+ messages in thread
From: Dan Williams @ 2022-07-23  0:56 UTC (permalink / raw)
  To: linux-cxl

For switch and endpoint decoders the relationship of decoders to regions
is 1:1. However, for root decoders the relationship is 1:N. Also,
regions are already children of root decoders, so the 1:N relationship
is observed by walking the following glob:

    /sys/bus/cxl/devices/$decoder/region*

Hide the vestigial 'region' attribute for root decoders.
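
For illustration, a userspace sketch (the decoder name is an assumed example)
of walking that glob to enumerate the regions under a root decoder:

#include <glob.h>
#include <stdio.h>

int main(void)
{
	glob_t g;

	/* decoder0.0 stands in for whichever root decoder is of interest */
	if (glob("/sys/bus/cxl/devices/decoder0.0/region*", 0, NULL, &g) == 0) {
		for (size_t i = 0; i < g.gl_pathc; i++)
			printf("%s\n", g.gl_pathv[i]);
		globfree(&g);
	}
	return 0;
}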

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/port.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index 3d2d0119cc3d..bffde862de0b 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -290,7 +290,6 @@ static struct attribute *cxl_decoder_base_attrs[] = {
 	&dev_attr_locked.attr,
 	&dev_attr_interleave_granularity.attr,
 	&dev_attr_interleave_ways.attr,
-	SET_CXL_REGION_ATTR(region)
 	NULL,
 };
 
@@ -345,6 +344,7 @@ static const struct attribute_group *cxl_decoder_root_attribute_groups[] = {
 static struct attribute *cxl_decoder_switch_attrs[] = {
 	&dev_attr_target_type.attr,
 	&dev_attr_target_list.attr,
+	SET_CXL_REGION_ATTR(region)
 	NULL,
 };
 
@@ -364,6 +364,7 @@ static struct attribute *cxl_decoder_endpoint_attrs[] = {
 	&dev_attr_mode.attr,
 	&dev_attr_dpa_size.attr,
 	&dev_attr_dpa_resource.attr,
+	SET_CXL_REGION_ATTR(region)
 	NULL,
 };
 



* [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-07-23  0:55 [PATCH 0/5] CXL Region Provisioning Fixes Dan Williams
  2022-07-23  0:55 ` [PATCH 1/5] cxl/acpi: Autoload driver for 'cxl_acpi' test devices Dan Williams
  2022-07-23  0:56 ` [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders Dan Williams
@ 2022-07-23  0:56 ` Dan Williams
  2022-08-01 19:35   ` Alison Schofield
                     ` (2 more replies)
  2022-07-23  0:56 ` [PATCH 4/5] cxl/region: Stop initializing interleave granularity Dan Williams
  2022-07-23  0:56 ` [PATCH 5/5] cxl/region: Constrain region granularity scaling factor Dan Williams
  4 siblings, 3 replies; 28+ messages in thread
From: Dan Williams @ 2022-07-23  0:56 UTC (permalink / raw)
  To: linux-cxl; +Cc: Jonathan Cameron

The kernel enforces that region granularity is >= to the top-level
interleave-granularity for the given CXL window. However, when the CXL
window interleave is x1, i.e. non-interleaved at the host bridge level,
then the specified granularity does not matter. Override the window
specified granularity to the CXL minimum so that any valid region
granularity is >= to the root granularity.
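
To make the motivation concrete, here is a toy userspace sketch (the 4096
value is just an assumed example of a large window granularity) of the check
this interacts with; without the override, an x1 window advertising a large
granularity needlessly rejects smaller region granularities:

#include <stdio.h>

/* mirrors the "region granularity must be >= root granularity" check */
static int region_granularity_ok(int root_ig, int region_ig)
{
	return region_ig >= root_ig;
}

int main(void)
{
	printf("x1 window ig=4096, region ig=1024: %s\n",
	       region_granularity_ok(4096, 1024) ? "accepted" : "rejected");
	printf("x1 window overridden to 256, region ig=1024: %s\n",
	       region_granularity_ok(256, 1024) ? "accepted" : "rejected");
	return 0;
}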

Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/acpi.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
index eb436268b92c..67137e17b8c9 100644
--- a/drivers/cxl/acpi.c
+++ b/drivers/cxl/acpi.c
@@ -140,6 +140,12 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
 		.end = res->end,
 	};
 	cxld->interleave_ways = ways;
+	/*
+	 * Minimize the x1 granularity to advertise support for any
+	 * valid region granularity
+	 */
+	if (ways == 1)
+		ig = 256;
 	cxld->interleave_granularity = ig;
 
 	rc = cxl_decoder_add(cxld, target_map);



* [PATCH 4/5] cxl/region: Stop initializing interleave granularity
  2022-07-23  0:55 [PATCH 0/5] CXL Region Provisioning Fixes Dan Williams
                   ` (2 preceding siblings ...)
  2022-07-23  0:56 ` [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves Dan Williams
@ 2022-07-23  0:56 ` Dan Williams
  2022-08-01 19:41   ` Alison Schofield
  2022-08-01 19:47   ` Verma, Vishal L
  2022-07-23  0:56 ` [PATCH 5/5] cxl/region: Constrain region granularity scaling factor Dan Williams
  4 siblings, 2 replies; 28+ messages in thread
From: Dan Williams @ 2022-07-23  0:56 UTC (permalink / raw)
  To: linux-cxl

In preparation for a patch that validates that the region ways setting
is compatible with the granularity setting, the initial granularity
setting needs to start at zero to indicate "unset".

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/region.c |    2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index df35d3c475a0..05b6212e6399 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -1524,7 +1524,6 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
 					      enum cxl_decoder_type type)
 {
 	struct cxl_port *port = to_cxl_port(cxlrd->cxlsd.cxld.dev.parent);
-	struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
 	struct cxl_region_params *p;
 	struct cxl_region *cxlr;
 	struct device *dev;
@@ -1536,7 +1535,6 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
 	p = &cxlr->params;
 	cxlr->mode = mode;
 	cxlr->type = type;
-	p->interleave_granularity = cxld->interleave_granularity;
 
 	dev = &cxlr->dev;
 	rc = dev_set_name(dev, "region%d", id);



* [PATCH 5/5] cxl/region: Constrain region granularity scaling factor
  2022-07-23  0:55 [PATCH 0/5] CXL Region Provisioning Fixes Dan Williams
                   ` (3 preceding siblings ...)
  2022-07-23  0:56 ` [PATCH 4/5] cxl/region: Stop initializing interleave granularity Dan Williams
@ 2022-07-23  0:56 ` Dan Williams
  2022-08-01 19:43   ` Alison Schofield
                     ` (2 more replies)
  4 siblings, 3 replies; 28+ messages in thread
From: Dan Williams @ 2022-07-23  0:56 UTC (permalink / raw)
  To: linux-cxl; +Cc: Jonathan Cameron

Consider the scenario where a platform provides for a x2 host-bridge
interleave at a granularity of 256 bytes. The only permitted region
granularities for that configuration are 256 or 512. Anything larger
than 512 results in unmapped capacity within a given decoder. Also, if
the region granularity is 512 then the interleave_ways for the region
must be 4 to keep the scaling matched.

Here are the translations for the first (4) 256-byte blocks where an
endpoint decoder is configured for a 512-byte granularity:

Block[0] => HB0 => DPA: 0
Block[1] => HB1 => DPA: 0
Block[2] => HB0 => DPA: 0
Block[3] => HB1 => DPA: 0

In order for those translations to not alias, the region interleave ways
must be 4, resulting in:

Block[0] => HB0 => Dev0 => DPA: 0
Block[1] => HB1 => Dev1 => DPA: 0
Block[2] => HB0 => Dev2 => DPA: 0
Block[3] => HB1 => Dev3 => DPA: 0

...not 2, resulting in:

Block[0] => HB0 => Dev0 => DPA: 0
Block[1] => HB1 => Dev1 => DPA: 0
Block[2] => HB0 => Dev0 => DPA: 0 !
Block[3] => HB1 => Dev1 => DPA: 0 !

Given that tooling is already being built around this ABI, allow
granularity and ways to be set in either order and validate the
combination once both are set.
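
As a toy illustration (a userspace sketch, not the kernel code in this
patch) of the constraint being enforced: the granularity scaling factor
between region and root must match the ways scaling factor. The values
below mirror the x2 @ 256-byte example above:

#include <stdbool.h>
#include <stdio.h>

static bool scaling_ok(int root_ways, int root_gran, int ways, int gran)
{
	if (root_ways <= 1 || gran <= root_gran)
		return true; /* nothing to cross-check in these cases */
	return gran / root_gran == ways / root_ways;
}

int main(void)
{
	/* x2 host bridge @ 256B, region granularity 512 */
	printf("region x4 @ 512: %s\n",
	       scaling_ok(2, 256, 4, 512) ? "ok" : "mismatch");
	printf("region x2 @ 512: %s\n",
	       scaling_ok(2, 256, 2, 512) ? "ok" : "mismatch");
	return 0;
}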

Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/cxl/core/region.c |   63 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 60 insertions(+), 3 deletions(-)

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 05b6212e6399..a34e537e4cb2 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -348,6 +348,25 @@ static ssize_t interleave_ways_store(struct device *dev,
 		goto out;
 	}
 
+	/*
+	 * If region granularity has been set and xHB interleave is active,
+	 * validate that granularity is compatible with specified ways.
+	 * Otherwise allow ways to be set now and depend on
+	 * interleave_granularity_store() to validate this constraint.
+	 */
+	if (cxld->interleave_ways > 1 &&
+	    p->interleave_granularity > cxld->interleave_granularity &&
+	    p->interleave_granularity / cxld->interleave_granularity !=
+		    val / cxld->interleave_ways) {
+		dev_dbg(dev,
+			"ways scaling factor %d mismatch with granularity %d\n",
+			val / cxld->interleave_ways,
+			p->interleave_granularity /
+				cxld->interleave_granularity);
+		rc = -EINVAL;
+		goto out;
+	}
+
 	save = p->interleave_ways;
 	p->interleave_ways = val;
 	rc = sysfs_update_group(&cxlr->dev.kobj, get_cxl_region_target_group());
@@ -386,7 +405,7 @@ static ssize_t interleave_granularity_store(struct device *dev,
 	struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
 	struct cxl_region *cxlr = to_cxl_region(dev);
 	struct cxl_region_params *p = &cxlr->params;
-	int rc, val;
+	int rc, val, ways;
 	u16 ig;
 
 	rc = kstrtoint(buf, 0, &val);
@@ -403,9 +422,29 @@ static ssize_t interleave_granularity_store(struct device *dev,
 	 * granularity less than the root interleave result in needing
 	 * multiple endpoints to support a single slot in the
 	 * interleave.
+	 *
+	 * When the root interleave ways is 1 then the root granularity is a
+	 * don't care.
+	 *
+	 * Limit region granularity to cxld->interleave_granularity *
+	 * rounddown_pow_of_two(cxld->interleave_ways) otherwise holes result in
+	 * the decode at each endpoint. Note that rounddown_pow_of_two()
+	 * accounts for x3, x6, and x9 root interleave.
 	 */
-	if (val < cxld->interleave_granularity)
-		return -EINVAL;
+	ways = rounddown_pow_of_two(cxld->interleave_ways);
+	if (ways > 1) {
+		if (val < cxld->interleave_granularity) {
+			dev_dbg(dev, "granularity %d must be >= %d\n", val,
+				cxld->interleave_granularity);
+			return -EINVAL;
+		}
+
+		if (val > cxld->interleave_granularity * ways) {
+			dev_dbg(dev, "granularity %d must be <= %d\n", val,
+				cxld->interleave_granularity * ways);
+			return -EINVAL;
+		}
+	}
 
 	rc = down_write_killable(&cxl_region_rwsem);
 	if (rc)
@@ -415,6 +454,24 @@ static ssize_t interleave_granularity_store(struct device *dev,
 		goto out;
 	}
 
+	/*
+	 * If region ways has been set and xHB interleave is active, validate
+	 * that ways is compatible with specified granularity.  Otherwise allow
+	 * granularity to be set now and depend on interleave_ways_store() to
+	 * validate this constraint.
+	 */
+	if (cxld->interleave_ways > 1 && p->interleave_ways &&
+	    val > cxld->interleave_granularity &&
+	    p->interleave_ways / cxld->interleave_ways !=
+		    val / cxld->interleave_granularity) {
+		dev_dbg(dev,
+			"granularity scaling factor %d mismatch with ways %d\n",
+			val / cxld->interleave_granularity,
+			p->interleave_ways / cxld->interleave_ways);
+		rc = -EINVAL;
+		goto out;
+	}
+
 	p->interleave_granularity = val;
 out:
 	up_write(&cxl_region_rwsem);



* Re: [PATCH 1/5] cxl/acpi: Autoload driver for 'cxl_acpi' test devices
  2022-07-23  0:55 ` [PATCH 1/5] cxl/acpi: Autoload driver for 'cxl_acpi' test devices Dan Williams
@ 2022-08-01 19:24   ` Verma, Vishal L
  0 siblings, 0 replies; 28+ messages in thread
From: Verma, Vishal L @ 2022-08-01 19:24 UTC (permalink / raw)
  To: Williams, Dan J, linux-cxl; +Cc: Jiang, Dave

On Fri, 2022-07-22 at 17:55 -0700, Dan Williams wrote:
> In support of CXL unit tests in the ndctl project, arrange for the
> cxl_acpi driver to load in response to the registration of cxl_test
> devices.
> 
> Reported-by: Dave Jiang <dave.jiang@intel.com>
> Tested-by: Dave Jiang <dave.jiang@intel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/acpi.c |    7 +++++++
>  1 file changed, 7 insertions(+)
> 

Looks good, 

Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>

> diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> index 64004eb672d0..eb436268b92c 100644
> --- a/drivers/cxl/acpi.c
> +++ b/drivers/cxl/acpi.c
> @@ -515,12 +515,19 @@ static const struct acpi_device_id
> cxl_acpi_ids[] = {
>  };
>  MODULE_DEVICE_TABLE(acpi, cxl_acpi_ids);
>  
> +static const struct platform_device_id cxl_test_ids[] = {
> +       { "cxl_acpi" },
> +       { },
> +};
> +MODULE_DEVICE_TABLE(platform, cxl_test_ids);
> +
>  static struct platform_driver cxl_acpi_driver = {
>         .probe = cxl_acpi_probe,
>         .driver = {
>                 .name = KBUILD_MODNAME,
>                 .acpi_match_table = cxl_acpi_ids,
>         },
> +       .id_table = cxl_test_ids,
>  };
>  
>  module_platform_driver(cxl_acpi_driver);
> 



* Re: [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders
  2022-07-23  0:56 ` [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders Dan Williams
@ 2022-08-01 19:32   ` Alison Schofield
  2022-08-01 19:38   ` Verma, Vishal L
  1 sibling, 0 replies; 28+ messages in thread
From: Alison Schofield @ 2022-08-01 19:32 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl

On Fri, Jul 22, 2022 at 05:56:03PM -0700, Dan Williams wrote:
> For switch and endpoint decoders the relationship of decoders to regions
> is 1:1. However, for root decoders the relationship is 1:N. Also,
> regions are already children of root decoders, so the 1:N relationship
> is observed by walking the following glob:
> 
>     /sys/bus/cxl/devices/$decoder/region*
> 
> Hide the vestigial 'region' attribute for root decoders.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Reviewed-by: Alison Schofield <alison.schofield@intel.com>

> ---
>  drivers/cxl/core/port.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
> index 3d2d0119cc3d..bffde862de0b 100644
> --- a/drivers/cxl/core/port.c
> +++ b/drivers/cxl/core/port.c
> @@ -290,7 +290,6 @@ static struct attribute *cxl_decoder_base_attrs[] = {
>  	&dev_attr_locked.attr,
>  	&dev_attr_interleave_granularity.attr,
>  	&dev_attr_interleave_ways.attr,
> -	SET_CXL_REGION_ATTR(region)
>  	NULL,
>  };
>  
> @@ -345,6 +344,7 @@ static const struct attribute_group *cxl_decoder_root_attribute_groups[] = {
>  static struct attribute *cxl_decoder_switch_attrs[] = {
>  	&dev_attr_target_type.attr,
>  	&dev_attr_target_list.attr,
> +	SET_CXL_REGION_ATTR(region)
>  	NULL,
>  };
>  
> @@ -364,6 +364,7 @@ static struct attribute *cxl_decoder_endpoint_attrs[] = {
>  	&dev_attr_mode.attr,
>  	&dev_attr_dpa_size.attr,
>  	&dev_attr_dpa_resource.attr,
> +	SET_CXL_REGION_ATTR(region)
>  	NULL,
>  };
>  
> 


* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-07-23  0:56 ` [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves Dan Williams
@ 2022-08-01 19:35   ` Alison Schofield
  2022-08-01 19:45   ` Verma, Vishal L
  2022-08-02 15:56   ` Jonathan Cameron
  2 siblings, 0 replies; 28+ messages in thread
From: Alison Schofield @ 2022-08-01 19:35 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl, Jonathan Cameron

On Fri, Jul 22, 2022 at 05:56:09PM -0700, Dan Williams wrote:
> The kernel enforces that region granularity is >= to the top-level
> interleave-granularity for the given CXL window. However, when the CXL
> window interleave is x1, i.e. non-interleaved at the host bridge level,
> then the specified granularity does not matter. Override the window
> specified granularity to the CXL minimum so that any valid region
> granularity is >= to the root granularity.
> 
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Reviewed-by: Alison Schofield <alison.schofield@intel.com>

> ---
>  drivers/cxl/acpi.c |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> index eb436268b92c..67137e17b8c9 100644
> --- a/drivers/cxl/acpi.c
> +++ b/drivers/cxl/acpi.c
> @@ -140,6 +140,12 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
>  		.end = res->end,
>  	};
>  	cxld->interleave_ways = ways;
> +	/*
> +	 * Minimize the x1 granularity to advertise support for any
> +	 * valid region granularity
> +	 */
> +	if (ways == 1)
> +		ig = 256;
>  	cxld->interleave_granularity = ig;
>  
>  	rc = cxl_decoder_add(cxld, target_map);
> 


* Re: [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders
  2022-07-23  0:56 ` [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders Dan Williams
  2022-08-01 19:32   ` Alison Schofield
@ 2022-08-01 19:38   ` Verma, Vishal L
  2022-08-01 19:40     ` Verma, Vishal L
  2022-08-01 21:32     ` Dan Williams
  1 sibling, 2 replies; 28+ messages in thread
From: Verma, Vishal L @ 2022-08-01 19:38 UTC (permalink / raw)
  To: Williams, Dan J, linux-cxl

On Fri, 2022-07-22 at 17:56 -0700, Dan Williams wrote:
> For switch and endpoint decoders the relationship of decoders to
> regions
> is 1:1. However, for root decoders the relationship is 1:N. Also,
> regions are already children of root decoders, so the 1:N
> relationship
> is observed by walking the following glob:
> 
>     /sys/bus/cxl/devices/$decoder/region*
> 
> Hide the vestigial 'region' attribute for root decoders.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/core/port.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
> index 3d2d0119cc3d..bffde862de0b 100644
> --- a/drivers/cxl/core/port.c
> +++ b/drivers/cxl/core/port.c
> @@ -290,7 +290,6 @@ static struct attribute *cxl_decoder_base_attrs[]
> = {
>         &dev_attr_locked.attr,
>         &dev_attr_interleave_granularity.attr,
>         &dev_attr_interleave_ways.attr,
> -       SET_CXL_REGION_ATTR(region)
>         NULL,
>  };
>  
> @@ -345,6 +344,7 @@ static const struct attribute_group
> *cxl_decoder_root_attribute_groups[] = {
>  static struct attribute *cxl_decoder_switch_attrs[] = {
>         &dev_attr_target_type.attr,
>         &dev_attr_target_list.attr,
> +       SET_CXL_REGION_ATTR(region)

This patch removes the 'region' attr from switch decoders, but isn't it
also vestigial under root decoders?

>         NULL,
>  };
>  
> @@ -364,6 +364,7 @@ static struct attribute
> *cxl_decoder_endpoint_attrs[] = {
>         &dev_attr_mode.attr,
>         &dev_attr_dpa_size.attr,
>         &dev_attr_dpa_resource.attr,
> +       SET_CXL_REGION_ATTR(region)
>         NULL,
>  };
>  
> 



* Re: [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders
  2022-08-01 19:38   ` Verma, Vishal L
@ 2022-08-01 19:40     ` Verma, Vishal L
  2022-08-01 21:32       ` Dan Williams
  2022-08-01 21:32     ` Dan Williams
  1 sibling, 1 reply; 28+ messages in thread
From: Verma, Vishal L @ 2022-08-01 19:40 UTC (permalink / raw)
  To: Williams, Dan J, linux-cxl

On Mon, 2022-08-01 at 19:38 +0000, Verma, Vishal L wrote:
> On Fri, 2022-07-22 at 17:56 -0700, Dan Williams wrote:
> > For switch and endpoint decoders the relationship of decoders to
> > regions
> > is 1:1. However, for root decoders the relationship is 1:N. Also,
> > regions are already children of root decoders, so the 1:N
> > relationship
> > is observed by walking the following glob:
> > 
> >     /sys/bus/cxl/devices/$decoder/region*
> > 
> > Hide the vestigial 'region' attribute for root decoders.
> > 
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  drivers/cxl/core/port.c |    3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
> > index 3d2d0119cc3d..bffde862de0b 100644
> > --- a/drivers/cxl/core/port.c
> > +++ b/drivers/cxl/core/port.c
> > @@ -290,7 +290,6 @@ static struct attribute *cxl_decoder_base_attrs[]
> > = {
> >         &dev_attr_locked.attr,
> >         &dev_attr_interleave_granularity.attr,
> >         &dev_attr_interleave_ways.attr,
> > -       SET_CXL_REGION_ATTR(region)
> >         NULL,
> >  };
> >  
> > @@ -345,6 +344,7 @@ static const struct attribute_group
> > *cxl_decoder_root_attribute_groups[] = {
> >  static struct attribute *cxl_decoder_switch_attrs[] = {
> >         &dev_attr_target_type.attr,
> >         &dev_attr_target_list.attr,
> > +       SET_CXL_REGION_ATTR(region)
> 
> This patch removes the 'region' attr from switch decoders, but isn't it
> also vestigial under root decoders?

Oh, never mind, I misread the patch - it adds it to switch and endpoint
decoders, removes from root.

Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>

> 
> >         NULL,
> >  };
> >  
> > @@ -364,6 +364,7 @@ static struct attribute
> > *cxl_decoder_endpoint_attrs[] = {
> >         &dev_attr_mode.attr,
> >         &dev_attr_dpa_size.attr,
> >         &dev_attr_dpa_resource.attr,
> > +       SET_CXL_REGION_ATTR(region)
> >         NULL,
> >  };
> >  
> > 
> 



* Re: [PATCH 4/5] cxl/region: Stop initializing interleave granularity
  2022-07-23  0:56 ` [PATCH 4/5] cxl/region: Stop initializing interleave granularity Dan Williams
@ 2022-08-01 19:41   ` Alison Schofield
  2022-08-01 19:47   ` Verma, Vishal L
  1 sibling, 0 replies; 28+ messages in thread
From: Alison Schofield @ 2022-08-01 19:41 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl

On Fri, Jul 22, 2022 at 05:56:14PM -0700, Dan Williams wrote:
> In preparation for a patch that validates that the region ways setting
> is compatible with the granularity setting, the initial granularity
> setting needs to start at zero to indicate "unset".
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Reviewed-by: Alison Schofield <alison.schofield@intel.com>

> ---
>  drivers/cxl/core/region.c |    2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index df35d3c475a0..05b6212e6399 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -1524,7 +1524,6 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
>  					      enum cxl_decoder_type type)
>  {
>  	struct cxl_port *port = to_cxl_port(cxlrd->cxlsd.cxld.dev.parent);
> -	struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
>  	struct cxl_region_params *p;
>  	struct cxl_region *cxlr;
>  	struct device *dev;
> @@ -1536,7 +1535,6 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
>  	p = &cxlr->params;
>  	cxlr->mode = mode;
>  	cxlr->type = type;
> -	p->interleave_granularity = cxld->interleave_granularity;
>  
>  	dev = &cxlr->dev;
>  	rc = dev_set_name(dev, "region%d", id);
> 


* Re: [PATCH 5/5] cxl/region: Constrain region granularity scaling factor
  2022-07-23  0:56 ` [PATCH 5/5] cxl/region: Constrain region granularity scaling factor Dan Williams
@ 2022-08-01 19:43   ` Alison Schofield
  2022-08-01 20:55   ` Verma, Vishal L
  2022-08-03 16:17   ` Jonathan Cameron
  2 siblings, 0 replies; 28+ messages in thread
From: Alison Schofield @ 2022-08-01 19:43 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl, Jonathan Cameron

On Fri, Jul 22, 2022 at 05:56:20PM -0700, Dan Williams wrote:
> Consider the scenario where a platform provides for a x2 host-bridge
> interleave at a granularity of 256 bytes. The only permitted region
> granularities for that configuration are 256 or 512. Anything larger
> than 512 results in unmapped capacity within a given decoder. Also, if
> the region granularity is 512 then the interleave_ways for the region
> must be 4 to keep the scaling matched.
> 
> Here are the translations for the first (4) 256-byte blocks where an
> endpoint decoder is configured for a 512-byte granularity:
> 
> Block[0] => HB0 => DPA: 0
> Block[1] => HB1 => DPA: 0
> Block[2] => HB0 => DPA: 0
> Block[3] => HB1 => DPA: 0
> 
> In order for those translations to not alias the region interleave ways
> must be 4 resulting in:
> 
> Block[0] => HB0 => Dev0 => DPA: 0
> Block[1] => HB1 => Dev1 => DPA: 0
> Block[2] => HB0 => Dev2 => DPA: 0
> Block[3] => HB1 => Dev3 => DPA: 0
> 
> ...not 2, resulting in:
> 
> Block[0] => HB0 => Dev0 => DPA: 0
> Block[1] => HB1 => Dev1 => DPA: 0
> Block[2] => HB0 => Dev0 => DPA: 0 !
> Block[3] => HB1 => Dev1 => DPA: 0 !
> 
> Given tooling is already being built around this ABI allow for
> granularity and ways to be set in either order and validate the
> combination once both are set.
> 
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

I appreciate the code comments!

Reviewed-by: Alison Schofield <alison.schofield@intel.com>

> ---
>  drivers/cxl/core/region.c |   63 +++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 60 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index 05b6212e6399..a34e537e4cb2 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -348,6 +348,25 @@ static ssize_t interleave_ways_store(struct device *dev,
>  		goto out;
>  	}
>  
> +	/*
> +	 * If region granularity has been set and xHB interleave is active,
> +	 * validate that granularity is compatible with specified ways.
> +	 * Otherwise allow ways to be set now and depend on
> +	 * interleave_granularity_store() to validate this constraint.
> +	 */
> +	if (cxld->interleave_ways > 1 &&
> +	    p->interleave_granularity > cxld->interleave_granularity &&
> +	    p->interleave_granularity / cxld->interleave_granularity !=
> +		    val / cxld->interleave_ways) {
> +		dev_dbg(dev,
> +			"ways scaling factor %d mismatch with granularity %d\n",
> +			val / cxld->interleave_ways,
> +			p->interleave_granularity /
> +				cxld->interleave_granularity);
> +		rc = -EINVAL;
> +		goto out;
> +	}
> +
>  	save = p->interleave_ways;
>  	p->interleave_ways = val;
>  	rc = sysfs_update_group(&cxlr->dev.kobj, get_cxl_region_target_group());
> @@ -386,7 +405,7 @@ static ssize_t interleave_granularity_store(struct device *dev,
>  	struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
>  	struct cxl_region *cxlr = to_cxl_region(dev);
>  	struct cxl_region_params *p = &cxlr->params;
> -	int rc, val;
> +	int rc, val, ways;
>  	u16 ig;
>  
>  	rc = kstrtoint(buf, 0, &val);
> @@ -403,9 +422,29 @@ static ssize_t interleave_granularity_store(struct device *dev,
>  	 * granularity less than the root interleave result in needing
>  	 * multiple endpoints to support a single slot in the
>  	 * interleave.
> +	 *
> +	 * When the root interleave ways is 1 then the root granularity is a
> +	 * don't care.
> +	 *
> +	 * Limit region granularity to cxld->interleave_granularity *
> +	 * rounddown_pow_of_two(cxld->interleave_ways) otherwise holes result in
> +	 * the decode at each endpoint. Note that rounddown_pow_of_two()
> +	 * accounts for x3, x6, and x9 root intereleave.
>  	 */
> -	if (val < cxld->interleave_granularity)
> -		return -EINVAL;
> +	ways = rounddown_pow_of_two(cxld->interleave_ways);
> +	if (ways > 1) {
> +		if (val < cxld->interleave_granularity) {
> +			dev_dbg(dev, "granularity %d must be >= %d\n", val,
> +				cxld->interleave_granularity);
> +			return -EINVAL;
> +		}
> +
> +		if (val > cxld->interleave_granularity * ways) {
> +			dev_dbg(dev, "granularity %d must be <= %d\n", val,
> +				cxld->interleave_granularity * ways);
> +			return -EINVAL;
> +		}
> +	}
>  
>  	rc = down_write_killable(&cxl_region_rwsem);
>  	if (rc)
> @@ -415,6 +454,24 @@ static ssize_t interleave_granularity_store(struct device *dev,
>  		goto out;
>  	}
>  
> +	/*
> +	 * If region ways has been set and xHB interleave is active, validate
> +	 * that ways is compatible with specified granularity.  Otherwise allow
> +	 * granularity to be set now and depend on interleave_ways_store() to
> +	 * validate this constraint.
> +	 */
> +	if (cxld->interleave_ways > 1 && p->interleave_ways &&
> +	    val > cxld->interleave_granularity &&
> +	    p->interleave_ways / cxld->interleave_ways !=
> +		    val / cxld->interleave_granularity) {
> +		dev_dbg(dev,
> +			"granularity scaling factor %d mismatch with ways %d\n",
> +			val / cxld->interleave_granularity,
> +			p->interleave_ways / cxld->interleave_ways);
> +		rc = -EINVAL;
> +		goto out;
> +	}
> +
>  	p->interleave_granularity = val;
>  out:
>  	up_write(&cxl_region_rwsem);
> 


* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-07-23  0:56 ` [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves Dan Williams
  2022-08-01 19:35   ` Alison Schofield
@ 2022-08-01 19:45   ` Verma, Vishal L
  2022-08-01 21:34     ` Dan Williams
  2022-08-02 15:56   ` Jonathan Cameron
  2 siblings, 1 reply; 28+ messages in thread
From: Verma, Vishal L @ 2022-08-01 19:45 UTC (permalink / raw)
  To: Williams, Dan J, linux-cxl; +Cc: Jonathan.Cameron

On Fri, 2022-07-22 at 17:56 -0700, Dan Williams wrote:
> The kernel enforces that region granularity is >= to the top-level
> interleave-granularity for the given CXL window. However, when the CXL
> window interleave is x1, i.e. non-interleaved at the host bridge level,
> then the specified granularity does not matter. Override the window
> specified granularity to the CXL minimum so that any valid region
> granularity is >= to the root granularity.
> 
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/acpi.c |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> index eb436268b92c..67137e17b8c9 100644
> --- a/drivers/cxl/acpi.c
> +++ b/drivers/cxl/acpi.c
> @@ -140,6 +140,12 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
>                 .end = res->end,
>         };
>         cxld->interleave_ways = ways;
> +       /*
> +        * Minimize the x1 granularity to advertise support for any
> +        * valid region granularity
> +        */
> +       if (ways == 1)
> +               ig = 256;

Probably not super critical, but should this be a #define somewhere?

Regardless,

Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>

>         cxld->interleave_granularity = ig;
>  
>         rc = cxl_decoder_add(cxld, target_map);
> 



* Re: [PATCH 4/5] cxl/region: Stop initializing interleave granularity
  2022-07-23  0:56 ` [PATCH 4/5] cxl/region: Stop initializing interleave granularity Dan Williams
  2022-08-01 19:41   ` Alison Schofield
@ 2022-08-01 19:47   ` Verma, Vishal L
  1 sibling, 0 replies; 28+ messages in thread
From: Verma, Vishal L @ 2022-08-01 19:47 UTC (permalink / raw)
  To: Williams, Dan J, linux-cxl

On Fri, 2022-07-22 at 17:56 -0700, Dan Williams wrote:
> In preparation for a patch that validates that the region ways setting
> is compatible with the granularity setting, the initial granularity
> setting needs to start at zero to indicate "unset".
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/core/region.c |    2 --
>  1 file changed, 2 deletions(-)

Looks good, 

Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>

> 
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index df35d3c475a0..05b6212e6399 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -1524,7 +1524,6 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
>                                               enum cxl_decoder_type type)
>  {
>         struct cxl_port *port = to_cxl_port(cxlrd->cxlsd.cxld.dev.parent);
> -       struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
>         struct cxl_region_params *p;
>         struct cxl_region *cxlr;
>         struct device *dev;
> @@ -1536,7 +1535,6 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
>         p = &cxlr->params;
>         cxlr->mode = mode;
>         cxlr->type = type;
> -       p->interleave_granularity = cxld->interleave_granularity;
>  
>         dev = &cxlr->dev;
>         rc = dev_set_name(dev, "region%d", id);
> 



* Re: [PATCH 5/5] cxl/region: Constrain region granularity scaling factor
  2022-07-23  0:56 ` [PATCH 5/5] cxl/region: Constrain region granularity scaling factor Dan Williams
  2022-08-01 19:43   ` Alison Schofield
@ 2022-08-01 20:55   ` Verma, Vishal L
  2022-08-03 16:17   ` Jonathan Cameron
  2 siblings, 0 replies; 28+ messages in thread
From: Verma, Vishal L @ 2022-08-01 20:55 UTC (permalink / raw)
  To: Williams, Dan J, linux-cxl; +Cc: Jonathan.Cameron

On Fri, 2022-07-22 at 17:56 -0700, Dan Williams wrote:
> Consider the scenario where a platform provides for a x2 host-bridge
> interleave at a granularity of 256 bytes. The only permitted region
> granularities for that configuration are 256 or 512. Anything larger
> than 512 results in unmapped capacity within a given decoder. Also, if
> the region granularity is 512 then the interleave_ways for the region
> must be 4 to keep the scaling matched.
> 
> Here are the translations for the first (4) 256-byte blocks where an
> endpoint decoder is configured for a 512-byte granularity:
> 
> Block[0] => HB0 => DPA: 0
> Block[1] => HB1 => DPA: 0
> Block[2] => HB0 => DPA: 0
> Block[3] => HB1 => DPA: 0
> 
> In order for those translations to not alias the region interleave ways
> must be 4 resulting in:
> 
> Block[0] => HB0 => Dev0 => DPA: 0
> Block[1] => HB1 => Dev1 => DPA: 0
> Block[2] => HB0 => Dev2 => DPA: 0
> Block[3] => HB1 => Dev3 => DPA: 0
> 
> ...not 2, resulting in:
> 
> Block[0] => HB0 => Dev0 => DPA: 0
> Block[1] => HB1 => Dev1 => DPA: 0
> Block[2] => HB0 => Dev0 => DPA: 0 !
> Block[3] => HB1 => Dev1 => DPA: 0 !
> 
> Given tooling is already being built around this ABI allow for
> granularity and ways to be set in either order and validate the
> combination once both are set.
> 
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/core/region.c |   63 +++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 60 insertions(+), 3 deletions(-)

Looks good, 

Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>

> 
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index 05b6212e6399..a34e537e4cb2 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -348,6 +348,25 @@ static ssize_t interleave_ways_store(struct device *dev,
>                 goto out;
>         }
>  
> +       /*
> +        * If region granularity has been set and xHB interleave is active,
> +        * validate that granularity is compatible with specified ways.
> +        * Otherwise allow ways to be set now and depend on
> +        * interleave_granularity_store() to validate this constraint.
> +        */
> +       if (cxld->interleave_ways > 1 &&
> +           p->interleave_granularity > cxld->interleave_granularity &&
> +           p->interleave_granularity / cxld->interleave_granularity !=
> +                   val / cxld->interleave_ways) {
> +               dev_dbg(dev,
> +                       "ways scaling factor %d mismatch with granularity %d\n",
> +                       val / cxld->interleave_ways,
> +                       p->interleave_granularity /
> +                               cxld->interleave_granularity);
> +               rc = -EINVAL;
> +               goto out;
> +       }
> +
>         save = p->interleave_ways;
>         p->interleave_ways = val;
>         rc = sysfs_update_group(&cxlr->dev.kobj, get_cxl_region_target_group());
> @@ -386,7 +405,7 @@ static ssize_t interleave_granularity_store(struct device *dev,
>         struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
>         struct cxl_region *cxlr = to_cxl_region(dev);
>         struct cxl_region_params *p = &cxlr->params;
> -       int rc, val;
> +       int rc, val, ways;
>         u16 ig;
>  
>         rc = kstrtoint(buf, 0, &val);
> @@ -403,9 +422,29 @@ static ssize_t interleave_granularity_store(struct device *dev,
>          * granularity less than the root interleave result in needing
>          * multiple endpoints to support a single slot in the
>          * interleave.
> +        *
> +        * When the root interleave ways is 1 then the root granularity is a
> +        * don't care.
> +        *
> +        * Limit region granularity to cxld->interleave_granularity *
> +        * rounddown_pow_of_two(cxld->interleave_ways) otherwise holes result in
> +        * the decode at each endpoint. Note that rounddown_pow_of_two()
> +        * accounts for x3, x6, and x9 root intereleave.
>          */
> -       if (val < cxld->interleave_granularity)
> -               return -EINVAL;
> +       ways = rounddown_pow_of_two(cxld->interleave_ways);
> +       if (ways > 1) {
> +               if (val < cxld->interleave_granularity) {
> +                       dev_dbg(dev, "granularity %d must be >= %d\n", val,
> +                               cxld->interleave_granularity);
> +                       return -EINVAL;
> +               }
> +
> +               if (val > cxld->interleave_granularity * ways) {
> +                       dev_dbg(dev, "granularity %d must be <= %d\n", val,
> +                               cxld->interleave_granularity * ways);
> +                       return -EINVAL;
> +               }
> +       }
>  
>         rc = down_write_killable(&cxl_region_rwsem);
>         if (rc)
> @@ -415,6 +454,24 @@ static ssize_t interleave_granularity_store(struct device *dev,
>                 goto out;
>         }
>  
> +       /*
> +        * If region ways has been set and xHB interleave is active, validate
> +        * that ways is compatible with specified granularity.  Otherwise allow
> +        * granularity to be set now and depend on interleave_ways_store() to
> +        * validate this constraint.
> +        */
> +       if (cxld->interleave_ways > 1 && p->interleave_ways &&
> +           val > cxld->interleave_granularity &&
> +           p->interleave_ways / cxld->interleave_ways !=
> +                   val / cxld->interleave_granularity) {
> +               dev_dbg(dev,
> +                       "granularity scaling factor %d mismatch with ways %d\n",
> +                       val / cxld->interleave_granularity,
> +                       p->interleave_ways / cxld->interleave_ways);
> +               rc = -EINVAL;
> +               goto out;
> +       }
> +
>         p->interleave_granularity = val;
>  out:
>         up_write(&cxl_region_rwsem);
> 



* Re: [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders
  2022-08-01 19:38   ` Verma, Vishal L
  2022-08-01 19:40     ` Verma, Vishal L
@ 2022-08-01 21:32     ` Dan Williams
  1 sibling, 0 replies; 28+ messages in thread
From: Dan Williams @ 2022-08-01 21:32 UTC (permalink / raw)
  To: Verma, Vishal L, Williams, Dan J, linux-cxl

Verma, Vishal L wrote:
> On Fri, 2022-07-22 at 17:56 -0700, Dan Williams wrote:
> > For switch and endpoint decoders the relationship of decoders to
> > regions
> > is 1:1. However, for root decoders the relationship is 1:N. Also,
> > regions are already children of root decoders, so the 1:N
> > relationship
> > is observed by walking the following glob:
> > 
> >     /sys/bus/cxl/devices/$decoder/region*
> > 
> > Hide the vestigial 'region' attribute for root decoders.
> > 
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  drivers/cxl/core/port.c |    3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
> > index 3d2d0119cc3d..bffde862de0b 100644
> > --- a/drivers/cxl/core/port.c
> > +++ b/drivers/cxl/core/port.c
> > @@ -290,7 +290,6 @@ static struct attribute *cxl_decoder_base_attrs[]
> > = {
> >         &dev_attr_locked.attr,
> >         &dev_attr_interleave_granularity.attr,
> >         &dev_attr_interleave_ways.attr,
> > -       SET_CXL_REGION_ATTR(region)
> >         NULL,
> >  };
> >  
> > @@ -345,6 +344,7 @@ static const struct attribute_group
> > *cxl_decoder_root_attribute_groups[] = {
> >  static struct attribute *cxl_decoder_switch_attrs[] = {
> >         &dev_attr_target_type.attr,
> >         &dev_attr_target_list.attr,
> > +       SET_CXL_REGION_ATTR(region)
> 
> This patch removes the 'region' attr from switch decoders, but isn't it
> also vestigial under root decoders?

Take another look. It removes it from the common cxl_decoder_base_attrs,
and adds it back to just cxl_decoder_switch_attrs and
cxl_decoder_endpoint_attrs.

Perhaps you read this context info:

> > @@ -345,6 +344,7 @@ static const struct attribute_group
> > *cxl_decoder_root_attribute_groups[] = {

...and thought it was adding it back to the root decoder attributes?


* Re: [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders
  2022-08-01 19:40     ` Verma, Vishal L
@ 2022-08-01 21:32       ` Dan Williams
  0 siblings, 0 replies; 28+ messages in thread
From: Dan Williams @ 2022-08-01 21:32 UTC (permalink / raw)
  To: Verma, Vishal L, Williams, Dan J, linux-cxl

Verma, Vishal L wrote:
> On Mon, 2022-08-01 at 19:38 +0000, Verma, Vishal L wrote:
> > On Fri, 2022-07-22 at 17:56 -0700, Dan Williams wrote:
> > > For switch and endpoint decoders the relationship of decoders to
> > > regions
> > > is 1:1. However, for root decoders the relationship is 1:N. Also,
> > > regions are already children of root decoders, so the 1:N
> > > relationship
> > > is observed by walking the following glob:
> > > 
> > >     /sys/bus/cxl/devices/$decoder/region*
> > > 
> > > Hide the vestigial 'region' attribute for root decoders.
> > > 
> > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > > ---
> > >  drivers/cxl/core/port.c |    3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
> > > index 3d2d0119cc3d..bffde862de0b 100644
> > > --- a/drivers/cxl/core/port.c
> > > +++ b/drivers/cxl/core/port.c
> > > @@ -290,7 +290,6 @@ static struct attribute *cxl_decoder_base_attrs[]
> > > = {
> > >         &dev_attr_locked.attr,
> > >         &dev_attr_interleave_granularity.attr,
> > >         &dev_attr_interleave_ways.attr,
> > > -       SET_CXL_REGION_ATTR(region)
> > >         NULL,
> > >  };
> > >  
> > > @@ -345,6 +344,7 @@ static const struct attribute_group
> > > *cxl_decoder_root_attribute_groups[] = {
> > >  static struct attribute *cxl_decoder_switch_attrs[] = {
> > >         &dev_attr_target_type.attr,
> > >         &dev_attr_target_list.attr,
> > > +       SET_CXL_REGION_ATTR(region)
> > 
> > This patch removes the 'region' attr from switch decoders, but isn't it
> > also vestigial under root decoders?
> 
> Oh, never mind, I misread the patch - it adds it to switch and endpoint
> decoders, removes from root.

Oh, and never mind my previous comment too.

> 
> Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>

Thanks!


* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-08-01 19:45   ` Verma, Vishal L
@ 2022-08-01 21:34     ` Dan Williams
  0 siblings, 0 replies; 28+ messages in thread
From: Dan Williams @ 2022-08-01 21:34 UTC (permalink / raw)
  To: Verma, Vishal L, Williams, Dan J, linux-cxl; +Cc: Jonathan.Cameron

Verma, Vishal L wrote:
> On Fri, 2022-07-22 at 17:56 -0700, Dan Williams wrote:
> > The kernel enforces that region granularity is >= to the top-level
> > interleave-granularity for the given CXL window. However, when the CXL
> > window interleave is x1, i.e. non-interleaved at the host bridge level,
> > then the specified granularity does not matter. Override the window
> > specified granularity to the CXL minimum so that any valid region
> > granularity is >= to the root granularity.
> > 
> > Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  drivers/cxl/acpi.c |    6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> > index eb436268b92c..67137e17b8c9 100644
> > --- a/drivers/cxl/acpi.c
> > +++ b/drivers/cxl/acpi.c
> > @@ -140,6 +140,12 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
> >                 .end = res->end,
> >         };
> >         cxld->interleave_ways = ways;
> > +       /*
> > +        * Minimize the x1 granularity to advertise support for any
> > +        * valid region granularity
> > +        */
> > +       if (ways == 1)
> > +               ig = 256;
> 
> Probably not super critical, but should this be a #define somewhere?

Yeah, something like CXL_MIN_GRANULARITY, will add.
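
A sketch of what that follow-up might look like (macro name taken from this
reply; where the define lands is an assumption):

/* drivers/cxl/cxl.h (placement assumed) */
#define CXL_MIN_GRANULARITY 256

/* drivers/cxl/acpi.c, replacing the open-coded 256 above */
if (ways == 1)
	ig = CXL_MIN_GRANULARITY;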

> 
> Regardless,
> 
> Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
> 
> >         cxld->interleave_granularity = ig;
> >  
> >         rc = cxl_decoder_add(cxld, target_map);
> > 
> 




* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-07-23  0:56 ` [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves Dan Williams
  2022-08-01 19:35   ` Alison Schofield
  2022-08-01 19:45   ` Verma, Vishal L
@ 2022-08-02 15:56   ` Jonathan Cameron
  2022-08-02 16:52     ` Jonathan Cameron
  2022-08-02 17:33     ` Dan Williams
  2 siblings, 2 replies; 28+ messages in thread
From: Jonathan Cameron @ 2022-08-02 15:56 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl

On Fri, 22 Jul 2022 17:56:09 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> The kernel enforces that region granularity is >= to the top-level
> interleave-granularity for the given CXL window. However, when the CXL
> window interleave is x1, i.e. non-interleaved at the host bridge level,
> then the specified granularity does not matter. Override the window
> specified granularity to the CXL minimum so that any valid region
> granularity is >= to the root granularity.
> 
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Hi Dan,

Debugging exactly why this is failing (from cxl.git/preview) for my test setup...
(1 hb, 8 rp, 8 direct connected devices)

If I set the interleave granularity of a region to 256, I end
up with 256 for the CFMWS which is fine, then 512 for the HB which
is not - EP interleave granularity is expected 256.

https://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl.git/tree/drivers/cxl/core/region.c?h=preview#n1070

Calculates the eig as address_bit - eiw + 1

iw = 8
eiw = 3
peig = 0 (pig = 256)
peiw = 0 (piw = 1)
(all as expected I think...)

So address_bit = max(peig + peiw, eiw + peig) = max(0, 3)
and eig = 3 - 3 + 1 = 1 (ig = 512) which is wrong.

I'm not 100% sure on the logic behind this maths, but would expect eig = 0 as the output for this
setup.

Even with this hacked, qemu address decode is landing at the wrong address in the backing files (but it
is at least landing in the right file!)
Curiously interleave ways = 1 for the EPs which is obviously wrong. (I'm not convinced the
qemu address logic is right but it'll never work with that value).  I'm struggling to figure
out where we actually set the interleave ways for an EP.

Also I'm not having much luck requesting a larger interleave granularity for the region (desirable perhaps
because the devices give better performance with 1024 byte sequential reads)

Clearly going to be one of those bugs all the way down days. 

Thanks,

Jonathan


> ---
>  drivers/cxl/acpi.c |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> index eb436268b92c..67137e17b8c9 100644
> --- a/drivers/cxl/acpi.c
> +++ b/drivers/cxl/acpi.c
> @@ -140,6 +140,12 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
>  		.end = res->end,
>  	};
>  	cxld->interleave_ways = ways;
> +	/*
> +	 * Minimize the x1 granularity to advertise support for any
> +	 * valid region granularity
> +	 */
> +	if (ways == 1)
> +		ig = 256;
>  	cxld->interleave_granularity = ig;
>  
>  	rc = cxl_decoder_add(cxld, target_map);
> 
> 



* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-08-02 15:56   ` Jonathan Cameron
@ 2022-08-02 16:52     ` Jonathan Cameron
  2022-08-02 17:33     ` Dan Williams
  1 sibling, 0 replies; 28+ messages in thread
From: Jonathan Cameron @ 2022-08-02 16:52 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl

On Tue, 2 Aug 2022 16:56:27 +0100
Jonathan Cameron <Jonathan.Cameron@huawei.com> wrote:

> On Fri, 22 Jul 2022 17:56:09 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > The kernel enforces that region granularity is >= to the top-level
> > interleave-granularity for the given CXL window. However, when the CXL
> > window interleave is x1, i.e. non-interleaved at the host bridge level,
> > then the specified granularity does not matter. Override the window
> > specified granularity to the CXL minimum so that any valid region
> > granularity is >= to the root granularity.
> > 
> > Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>  
> 
> Hi Dan,
> 
> Debugging exactly why this is failing (from cxl.git/preview) for my test setup...
> (1 hb, 8 rp, 8 direct connected devices)
> 
> If I set the interleave granularity of a region to 256, I end
> up with 256 for the CFMWS which is fine, then 512 for the HB which
> is not - EP interleave granularity is expected 256.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl.git/tree/drivers/cxl/core/region.c?h=preview#n1070
> 
> Calculates the eig as address_bit - eiw + 1
> 
> iw = 8
> eiw = 3
> peig = 0 (pig = 256)
> peiw = 0 (piw = 1)
> (all as expected I think...)
> 
> So address_bit = s max(peig + peiw, eiw + peig) = max(0, 3)
> and eig = 3 - 3 + 1 = 1 (ig = 512) which is wrong.
> 
> I'm not 100% sure on the logic behind this maths, but would expect eig = 0 as the output for this
> setup..
> 
> Even with this hacked, qemu address decode is landing at wrong address in the backing files (but it
> is at least landing in the right file!)
> Curiously interleave ways = 1 for the EPs which is obviously wrong. (I'm not convinced the
> qemu address logic is right but it'll never work with that value).  I'm struggling to figure
> out where we actually set the interleave ways for an EP.

FWIW I hacked qemu to default the EPs to eig = 3 (ig = 8) and decode looks better.
There is a write to eiw for the EPs when commit is set, but it seems to just write back the value
cached when we originally read the setup back from the HDM decoders at probe.

Test code is that QEMU fix I sent a few weeks back + a hack of -= 1 of eig as above for the HB, given
I haven't figured out what the right fix for that is.

Jonathan



> 
> Also I'm not having much luck requesting a larger interleave granularity for the region (desirable perhaps
> because the devices give better performance with 1024 byte sequential reads)
> 
> Clearly going to be one of those bugs all the way down days. 
> 
> Thanks,
> 
> Jonathan
> 
> 
> > ---
> >  drivers/cxl/acpi.c |    6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> > index eb436268b92c..67137e17b8c9 100644
> > --- a/drivers/cxl/acpi.c
> > +++ b/drivers/cxl/acpi.c
> > @@ -140,6 +140,12 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
> >  		.end = res->end,
> >  	};
> >  	cxld->interleave_ways = ways;
> > +	/*
> > +	 * Minimize the x1 granularity to advertise support for any
> > +	 * valid region granularity
> > +	 */
> > +	if (ways == 1)
> > +		ig = 256;
> >  	cxld->interleave_granularity = ig;
> >  
> >  	rc = cxl_decoder_add(cxld, target_map);
> > 
> >   
> 



* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-08-02 15:56   ` Jonathan Cameron
  2022-08-02 16:52     ` Jonathan Cameron
@ 2022-08-02 17:33     ` Dan Williams
  2022-08-03 16:00       ` Jonathan Cameron
  1 sibling, 1 reply; 28+ messages in thread
From: Dan Williams @ 2022-08-02 17:33 UTC (permalink / raw)
  To: Jonathan Cameron, Dan Williams; +Cc: linux-cxl

Jonathan Cameron wrote:
> On Fri, 22 Jul 2022 17:56:09 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > The kernel enforces that region granularity is >= to the top-level
> > interleave-granularity for the given CXL window. However, when the CXL
> > window interleave is x1, i.e. non-interleaved at the host bridge level,
> > then the specified granularity does not matter. Override the window
> > specified granularity to the CXL minimum so that any valid region
> > granularity is >= to the root granularity.
> > 
> > Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> 
> Hi Dan,
> 
> Debugging exactly why this is failing (from cxl.git/preview) for my test setup...
> (1 hb, 8 rp, 8 direct connected devices)
> 
> If I set the interleave granularity of a region to 256, I end
> up with 256 for the CFMWS which is fine, then 512 for the HB which
> is not - EP interleave granularity is expected 256.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl.git/tree/drivers/cxl/core/region.c?h=preview#n1070
> 
> Calculates the eig as address_bit - eiw + 1
> 
> iw = 8
> eiw = 3
> peig = 0 (pig = 256)
> peiw = 0 (piw = 1)
> (all as expected I think...)
> 
> So address_bit = max(peig + peiw, eiw + peig) = max(0, 3)
> and eig = 3 - 3 + 1 = 1 (ig = 512) which is wrong.
> 
> I'm not 100% sure on the logic behind this maths, but would expect eig = 0 as the output for this
> setup..

Yeah, the "+ 1" is not required when routing from the x1 HB level. I can
reproduce this config with cxl_test to validate.
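
(For this particular config, dropping the "+ 1" works out to
eig = address_bit - eiw = 3 - 3 = 0, i.e. ig = 256, matching the
granularity the endpoints expect.)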

> Even with this hacked, qemu address decode is landing at wrong address in the backing files (but it
> is at least landing in the right file!)
> Curiously interleave ways = 1 for the EPs which is obviously wrong. (I'm not convinced the
> qemu address logic is right but it'll never work with that value).  I'm struggling to figure
> out where we actually set the interleave ways for an EP.

Ugh, it is not set and I think I was blinded by some successful region
creation results and assumed they also ran cycles to validate the data
consistency. My expectation is that the EP interleave_ways is set here:

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index cf5d5811fe4c..dd4035d92041 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -1285,6 +1285,8 @@ static int cxl_region_attach(struct cxl_region *cxlr,
 
        p->targets[pos] = cxled;
        cxled->pos = pos;
+       cxled->cxld.interleave_ways = p->interleave_ways;
+       cxled->cxld.interleave_granularity = p->interleave_granularity;
        p->nr_targets++;
 
        if (p->nr_targets == p->interleave_ways) {


> Also I'm not having much luck requesting a larger interleave granularity for the region (desirable perhaps
> because the devices give better performance with 1024 byte sequential reads)
> 
> Clearly going to be one of those bugs all the way down days. 
> 

Yes, the hunt continues, but I think the driver has some large ones to
squash first.

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-08-02 17:33     ` Dan Williams
@ 2022-08-03 16:00       ` Jonathan Cameron
  2022-08-03 17:18         ` Dan Williams
  0 siblings, 1 reply; 28+ messages in thread
From: Jonathan Cameron @ 2022-08-03 16:00 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl

On Tue, 2 Aug 2022 10:33:19 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> Jonathan Cameron wrote:
> > On Fri, 22 Jul 2022 17:56:09 -0700
> > Dan Williams <dan.j.williams@intel.com> wrote:
> >   
> > > The kernel enforces that region granularity is >= to the top-level
> > > interleave-granularity for the given CXL window. However, when the CXL
> > > window interleave is x1, i.e. non-interleaved at the host bridge level,
> > > then the specified granularity does not matter. Override the window
> > > specified granularity to the CXL minimum so that any valid region
> > > granularity is >= to the root granularity.
> > > 
> > > Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>  
> > 
> > Hi Dan,
> > 
> > Debugging exactly why this is failing (from cxl.git/preview) for my test setup...
> > (1 hb, 8 rp, 8 direct connected devices)
> > 
> > If I set the interleave granularity of a region to 256, I end
> > up with 256 for the CFMWS which is fine, then 512 for the HB which
> > is not - EP interleave granularity is expected 256.
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl.git/tree/drivers/cxl/core/region.c?h=preview#n1070
> > 
> > Calculates the eig as address_bit - eiw + 1
> > 
> > iw = 8
> > eiw = 3
> > peig = 0 (pig = 256)
> > peiw = 0 (piw = 1)
> > (all as expected I think...)
> > 
> > So address_bit = max(peig + peiw, eiw + peig) = max(0, 3)
> > and eig = 3 - 3 + 1 = 1 (ig = 512) which is wrong.
> > 
> > I'm not 100% sure on the logic behind this maths, but would expect eig = 0 as the output for this
> > setup..  
> 
> Yeah, the "+ 1" is not required when routing from the x1 HB level. I can
> reproduce this config with cxl_test to validate.

Other than this off by one, with the other fixes you posted everything now works for me
with the particular test case above.

Thanks

Jonathan

> 
> > Even with this hacked, qemu address decode is landing at wrong address in the backing files (but it
> > is at least landing in the right file!)
> > Curiously interleave ways = 1 for the EPs which is obviously wrong. (I'm not convinced the
> > qemu address logic is right but it'll never work with that value).  I'm struggling to figure
> > out where we actually set the interleave ways for an EP.  
> 
> Ugh, it is not set and I think I was blinded by some successful region
> creation results and assumed they also ran cycles to validate the data
> consistency. My expectation is that the EP interleave_ways is set here:
> 
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index cf5d5811fe4c..dd4035d92041 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -1285,6 +1285,8 @@ static int cxl_region_attach(struct cxl_region *cxlr,
>  
>         p->targets[pos] = cxled;
>         cxled->pos = pos;
> +       cxled->cxld.interleave_ways = p->interleave_ways;
> +       cxled->cxld.interleave_granularity = p->interleave_granularity;
>         p->nr_targets++;
>  
>         if (p->nr_targets == p->interleave_ways) {
> 
> 
> > Also I'm not having much luck requesting a larger interleave granularity for the region (desirable perhaps
> > because the devices give better performance with 1024 byte sequential reads)
> > 
> > Clearly going to be one of those bugs all the way down days. 
> >   
> 
> Yes, the hunt continues, but I think the driver has some large ones to
> squash first.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 5/5] cxl/region: Constrain region granularity scaling factor
  2022-07-23  0:56 ` [PATCH 5/5] cxl/region: Constrain region granularity scaling factor Dan Williams
  2022-08-01 19:43   ` Alison Schofield
  2022-08-01 20:55   ` Verma, Vishal L
@ 2022-08-03 16:17   ` Jonathan Cameron
  2022-08-04 16:33     ` Dan Williams
  2 siblings, 1 reply; 28+ messages in thread
From: Jonathan Cameron @ 2022-08-03 16:17 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl

On Fri, 22 Jul 2022 17:56:20 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> Consider the scenario where a platform provides for a x2 host-bridge
> interleave at a granularity of 256 bytes.
Hi Dan,

Terminology not clear to me.  Is this interleave across 2 host bridges?

I'm lost in the explanation.

> The only permitted region
> granularities for that configuration are 256 or 512. Anything larger
> than 512 results in unmapped capacity within a given decoder. Also, if
> the region granularity is 512 then the interleave_ways for the region
> must be 4 to keep the scaling matched.
> 
> Here are the translations for the first (4) 256-byte blocks where an
> endpoint decoder is configured for a 512-byte granularity:
> 
> Block[0] => HB0 => DPA: 0
> Block[1] => HB1 => DPA: 0
> Block[2] => HB0 => DPA: 0
> Block[3] => HB1 => DPA: 0
> 
> In order for those translations to not alias the region interleave ways
> must be 4 resulting in:
> 
> Block[0] => HB0 => Dev0 => DPA: 0
> Block[1] => HB1 => Dev1 => DPA: 0
> Block[2] => HB0 => Dev2 => DPA: 0
> Block[3] => HB1 => Dev3 => DPA: 0

Block[4] => HB0 => Dev0 => DPA: 512 ?

which means we are only using alternate 256-byte blocks of the EP.  Does that make
sense?

> 
> ...not 2, resulting in:
> 
> Block[0] => HB0 => Dev0 => DPA: 0
> Block[1] => HB1 => Dev1 => DPA: 0
> Block[2] => HB0 => Dev0 => DPA: 0 !
> Block[3] => HB1 => Dev1 => DPA: 0 !

> 
> Given tooling is already being built around this ABI allow for
> granularity and ways to be set in either order and validate the
> combination once both are set.
> 
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Typo inline.

> ---
>  drivers/cxl/core/region.c |   63 +++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 60 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index 05b6212e6399..a34e537e4cb2 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -348,6 +348,25 @@ static ssize_t interleave_ways_store(struct device *dev,
>  		goto out;
>  	}
>  
> +	/*
> +	 * If region granularity has been set and xHB interleave is active,
> +	 * validate that granularity is compatible with specified ways.
> +	 * Otherwise allow ways to be set now and depend on
> +	 * interleave_granularity_store() to validate this constraint.
> +	 */
> +	if (cxld->interleave_ways > 1 &&
> +	    p->interleave_granularity > cxld->interleave_granularity &&
> +	    p->interleave_granularity / cxld->interleave_granularity !=
> +		    val / cxld->interleave_ways) {
> +		dev_dbg(dev,
> +			"ways scaling factor %d mismatch with granularity %d\n",
> +			val / cxld->interleave_ways,
> +			p->interleave_granularity /
> +				cxld->interleave_granularity);
> +		rc = -EINVAL;
> +		goto out;
> +	}
> +
>  	save = p->interleave_ways;
>  	p->interleave_ways = val;
>  	rc = sysfs_update_group(&cxlr->dev.kobj, get_cxl_region_target_group());
> @@ -386,7 +405,7 @@ static ssize_t interleave_granularity_store(struct device *dev,
>  	struct cxl_decoder *cxld = &cxlrd->cxlsd.cxld;
>  	struct cxl_region *cxlr = to_cxl_region(dev);
>  	struct cxl_region_params *p = &cxlr->params;
> -	int rc, val;
> +	int rc, val, ways;
>  	u16 ig;
>  
>  	rc = kstrtoint(buf, 0, &val);
> @@ -403,9 +422,29 @@ static ssize_t interleave_granularity_store(struct device *dev,
>  	 * granularity less than the root interleave result in needing
>  	 * multiple endpoints to support a single slot in the
>  	 * interleave.
> +	 *
> +	 * When the root interleave ways is 1 then the root granularity is a
> +	 * don't care.
> +	 *
> +	 * Limit region granularity to cxld->interleave_granularity *
> +	 * rounddown_pow_of_two(cxld->interleave_ways) otherwise holes result in
> +	 * the decode at each endpoint. Note that rounddown_pow_of_two()
> +	 * accounts for x3, x6, and x9 root intereleave.

interleave

>  	 */
> -	if (val < cxld->interleave_granularity)
> -		return -EINVAL;
> +	ways = rounddown_pow_of_two(cxld->interleave_ways);
> +	if (ways > 1) {
> +		if (val < cxld->interleave_granularity) {
> +			dev_dbg(dev, "granularity %d must be >= %d\n", val,
> +				cxld->interleave_granularity);
> +			return -EINVAL;
> +		}
> +
> +		if (val > cxld->interleave_granularity * ways) {
> +			dev_dbg(dev, "granularity %d must be <= %d\n", val,
> +				cxld->interleave_granularity * ways);
> +			return -EINVAL;
> +		}
> +	}
>  
>  	rc = down_write_killable(&cxl_region_rwsem);
>  	if (rc)
> @@ -415,6 +454,24 @@ static ssize_t interleave_granularity_store(struct device *dev,
>  		goto out;
>  	}
>  
> +	/*
> +	 * If region ways has been set and xHB interleave is active, validate
> +	 * that ways is compatible with specified granularity.  Otherwise allow
> +	 * granularity to be set now and depend on interleave_ways_store() to
> +	 * validate this constraint.
> +	 */
> +	if (cxld->interleave_ways > 1 && p->interleave_ways &&
> +	    val > cxld->interleave_granularity &&
> +	    p->interleave_ways / cxld->interleave_ways !=
> +		    val / cxld->interleave_granularity) {
> +		dev_dbg(dev,
> +			"granularity scaling factor %d mismatch with ways %d\n",
> +			val / cxld->interleave_granularity,
> +			p->interleave_ways / cxld->interleave_ways);
> +		rc = -EINVAL;
> +		goto out;
> +	}
> +
>  	p->interleave_granularity = val;
>  out:
>  	up_write(&cxl_region_rwsem);
> 
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-08-03 16:00       ` Jonathan Cameron
@ 2022-08-03 17:18         ` Dan Williams
  2022-08-04  9:32           ` Jonathan Cameron
  0 siblings, 1 reply; 28+ messages in thread
From: Dan Williams @ 2022-08-03 17:18 UTC (permalink / raw)
  To: Jonathan Cameron, Dan Williams; +Cc: linux-cxl

Jonathan Cameron wrote:
> On Tue, 2 Aug 2022 10:33:19 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > Jonathan Cameron wrote:
> > > On Fri, 22 Jul 2022 17:56:09 -0700
> > > Dan Williams <dan.j.williams@intel.com> wrote:
> > >   
> > > > The kernel enforces that region granularity is >= to the top-level
> > > > interleave-granularity for the given CXL window. However, when the CXL
> > > > window interleave is x1, i.e. non-interleaved at the host bridge level,
> > > > then the specified granularity does not matter. Override the window
> > > > specified granularity to the CXL minimum so that any valid region
> > > > granularity is >= to the root granularity.
> > > > 
> > > > Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>  
> > > 
> > > Hi Dan,
> > > 
> > > Debugging exactly why this is failing (from cxl.git/preview) for my test setup...
> > > (1 hb, 8 rp, 8 direct connected devices)
> > > 
> > > If I set the interleave granularity of a region to 256, I end
> > > up with 256 for the CFMWS which is fine, then 512 for the HB which
> > > is not - EP interleave granularity is expected 256.
> > > 
> > > https://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl.git/tree/drivers/cxl/core/region.c?h=preview#n1070
> > > 
> > > Calculates the eig as address_bit - eiw + 1
> > > 
> > > iw = 8
> > > eiw = 3
> > > peig = 0 (pig = 256)
> > > peiw = 0 (piw = 1)
> > > (all as expected I think...)
> > > 
> > > So address_bit = max(peig + peiw, eiw + peig) = max(0, 3)
> > > and eig = 3 - 3 + 1 = 1 (ig = 512) which is wrong.
> > > 
> > > I'm not 100% sure on the logic behind this maths, but would expect eig = 0 as the output for this
> > > setup..  
> > 
> > Yeah, the "+ 1" is not required when routing from the x1 HB level. I can
> > reproduce this config with cxl_test to validate.
> 
> Other than this off by one, with the other fixes you posted everything now works for me
> with the particular test case above.

Oh, you are hitting this path in your test? I misread the condition under
which this triggers. It should be skipping the granularity increment when
the port's parent interleave is x1, i.e. in this case, with devices
directly connected to a host bridge that is not interleaved, there is no
need for an extra address bit to route the decode.
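
Roughly, the intent is something like the following sketch (illustrative
only, reusing the names from the discussion above rather than the actual
region.c code):

	if (parent_iw > 1) {
		/* parent interleaves: keep the existing granularity bump */
		address_bit = max(peig + peiw, eiw + peig);
		eig = address_bit - eiw + 1;
	} else {
		/* x1 parent: no extra routing bit, inherit the granularity */
		eig = peig;
	}

For the direct-attach x8 case above (eiw = 3, peig = 0, peiw = 0) the
parent is x1, so this lands on eig = 0 (ig = 256) instead of eig = 1
(ig = 512).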

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves
  2022-08-03 17:18         ` Dan Williams
@ 2022-08-04  9:32           ` Jonathan Cameron
  0 siblings, 0 replies; 28+ messages in thread
From: Jonathan Cameron @ 2022-08-04  9:32 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-cxl

On Wed, 3 Aug 2022 10:18:27 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> Jonathan Cameron wrote:
> > On Tue, 2 Aug 2022 10:33:19 -0700
> > Dan Williams <dan.j.williams@intel.com> wrote:
> >   
> > > Jonathan Cameron wrote:  
> > > > On Fri, 22 Jul 2022 17:56:09 -0700
> > > > Dan Williams <dan.j.williams@intel.com> wrote:
> > > >     
> > > > > The kernel enforces that region granularity is >= to the top-level
> > > > > interleave-granularity for the given CXL window. However, when the CXL
> > > > > window interleave is x1, i.e. non-interleaved at the host bridge level,
> > > > > then the specified granularity does not matter. Override the window
> > > > > specified granularity to the CXL minimum so that any valid region
> > > > > granularity is >= to the root granularity.
> > > > > 
> > > > > Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > > > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>    
> > > > 
> > > > Hi Dan,
> > > > 
> > > > Debugging exactly why this is failing (from cxl.git/preview) for my test setup...
> > > > (1 hb, 8 rp, 8 direct connected devices)
> > > > 
> > > > If I set the interleave granularity of a region to 256, I end
> > > > up with 256 for the CFMWS which is fine, then 512 for the HB which
> > > > is not - EP interleave granularity is expected 256.
> > > > 
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl.git/tree/drivers/cxl/core/region.c?h=preview#n1070
> > > > 
> > > > Calculates the eig as address_bit - eiw + 1
> > > > 
> > > > iw = 8
> > > > eiw = 3
> > > > peig = 0 (pig = 256)
> > > > peiw = 0 (piw = 1)
> > > > (all as expected I think...)
> > > > 
> > > > So address_bit = max(peig + peiw, eiw + peig) = max(0, 3)
> > > > and eig = 3 - 3 + 1 = 1 (ig = 512) which is wrong.
> > > > 
> > > > I'm not 100% sure on the logic behind this maths, but would expect eig = 0 as the output for this
> > > > setup..    
> > > 
> > > Yeah, the "+ 1" is not required when routing from the x1 HB level. I can
> > > reproduce this config with cxl_test to validate.  
> > 
> > Other than this off by one, with the other fixes you posted everything now works for me
> > with the particular test case above.  
> 
> Oh, you are hitting this path in your test? I misread the condition when
> this triggers. It should be skipping the granularity increment when the
> port's parent interleave is x1. I.e. in this case with devices directly
> connected to a host-bridge where the host-bridge is not interleaved then
> there is no need for an extra address bit to route the decode.
> 

I haven't chased down why, but I'm definitely hitting it, with the result
that routing goes to the wrong device.
The current test when setting the interleave granularity for decoder1.0
(the HB in this case) is on the ways of decoder1.0, which is 8.
Maybe the test should be on the parent_rr nr_targets? (or parent_iw should work I think)
That makes things 'work' for this particular test, but it needs testing on
a wider range of setups.

Jonathan



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 5/5] cxl/region: Constrain region granularity scaling factor
  2022-08-03 16:17   ` Jonathan Cameron
@ 2022-08-04 16:33     ` Dan Williams
  2022-08-04 17:57       ` Dan Williams
  0 siblings, 1 reply; 28+ messages in thread
From: Dan Williams @ 2022-08-04 16:33 UTC (permalink / raw)
  To: Jonathan Cameron, Dan Williams; +Cc: linux-cxl

Jonathan Cameron wrote:
> On Fri, 22 Jul 2022 17:56:20 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > Consider the scenario where a platform provides for a x2 host-bridge
> > interleave at a granularity of 256 bytes.
> Hi Dan,
> 
> Terminology not clear to me.  Is this interleave across 2 host bridges?

Yes.

> 
> I'm lost in the explanation.
> 
> > The only permitted region
> > granularities for that configuration are 256 or 512. Anything larger
> > than 512 results in unmapped capacity within a given decoder. Also, if
> > the region granularity is 512 then the interleave_ways for the region
> > must be 4 to keep the scaling matched.
> > 
> > Here are the translations for the first (4) 256-byte blocks where an
> > endpoint decoder is configured for a 512-byte granularity:
> > 
> > Block[0] => HB0 => DPA: 0
> > Block[1] => HB1 => DPA: 0
> > Block[2] => HB0 => DPA: 0
> > Block[3] => HB1 => DPA: 0
> > 
> > In order for those translations to not alias the region interleave ways
> > must be 4 resulting in:
> > 
> > Block[0] => HB0 => Dev0 => DPA: 0
> > Block[1] => HB1 => Dev1 => DPA: 0
> > Block[2] => HB0 => Dev2 => DPA: 0
> > Block[3] => HB1 => Dev3 => DPA: 0
> 
> Block[4] => HBA0 => Dev0 => DPA: 512 ?
> 
> which means we are only using alternate 256 blocks of the EP.  Does that make
> sense?

Yes, but I found a subtle bug in my logic. Here is the output from
my spreadsheet calculating the DPA_Offset for the first 8 blocks when
the topology is "x2 host-bridge @ 256 => x2 endpoint @ 512"

Address	HB Index	HPA Offset	DPA Offset
0x0	0		0x0		0x0
0x100	1		0x100		0x0
0x200	0		0x200		0x0
0x300	1		0x300		0x0
0x400	0		0x400		0x200
0x500	1		0x500		0x200
0x600	0		0x600		0x200
0x700	1		0x700		0x200
0x800	0		0x800		0x400

So we have 4 blocks landing at DPA_Offset == 0 across the 2 endpoints, so
there is aliasing. However, the bug in my logic is that the fix for this
is not:

"...if the region granularity is 512 then the interleave_ways for the
region must be 4 to keep the scaling matched."

...it is that the number of *targets* in the interleave must be 4, with
that split handled by the host bridge, leaving the endpoint interleave
ways setting at 2. So, in general, to support region-granularity >
root-granularity, the host-bridges and / or switches in the path must
scale the interleave.

---
"x4 region @ 512 granularity under an x2 window @ 256"

 ┌───────────────────────────────────┬──┐
 │WINDOW0                            │x2│
 └─────────┬─────────────────┬───────┴──┘
           │                 │
 ┌─────────▼────┬──┐  ┌──────▼───────┬──┐
 │HB0           │x2│  │HB1           │x2│
 └──┬────────┬──┴──┘  └─┬─────────┬──┴──┘
    │        │          │         │
 ┌──▼─┬──┐ ┌─▼──┬──┐  ┌─▼──┬──┐ ┌─▼──┬──┐
 │DEV0│x2│ │DEV1│x2│  │DEV2│x2│ │DEV3│x2│
 └────┴──┘ └────┴──┘  └────┴──┘ └────┴──┘
---

---
"x4 region @ 256 granularity under an x2 window @ 256"

 ┌───────────────────────────────────┬──┐
 │WINDOW0                            │x2│
 └─────────┬─────────────────┬───────┴──┘
           │                 │
 ┌─────────▼────┬──┐  ┌──────▼───────┬──┐
 │HB0           │x2│  │HB1           │x2│
 └──┬────────┬──┴──┘  └─┬─────────┬──┴──┘
    │        │          │         │
 ┌──▼─┬──┐ ┌─▼──┬──┐  ┌─▼──┬──┐ ┌─▼──┬──┐
 │DEV0│x4│ │DEV1│x4│  │DEV2│x4│ │DEV3│x4│
 └────┴──┘ └────┴──┘  └────┴──┘ └────┴──┘
---

...i.e. it is not always the case that the endpoint interleave ways
setting matches the number of devices in the set.
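
As a quick cross-check of the aliasing in the table above, here is a small
standalone sketch using the generic power-of-2 modulo decode (HPA-offset
based; an illustration, not the driver's or the spreadsheet's exact model):

	#include <stdio.h>

	/* one level of decode: strip the ways bits just above the granularity */
	static unsigned long long dpa_offset(unsigned long long hpa_offset,
					     unsigned int ways, unsigned int gran)
	{
		return hpa_offset / (gran * ways) * gran + hpa_offset % gran;
	}

	int main(void)
	{
		unsigned long long hpa;

		/* "x2 host-bridge @ 256 => x2 endpoint @ 512" */
		for (hpa = 0; hpa <= 0x700; hpa += 0x100)
			printf("HPA 0x%03llx => HB%llu => DPA 0x%03llx\n",
			       hpa, hpa / 256 % 2, dpa_offset(hpa, 2, 512));
		return 0;
	}

With an x2 endpoint decode at 512-byte granularity this sends HPA 0x000 and
0x200 (both HB0) to the same DPA offset, and likewise 0x100 and 0x300 (both
HB1), i.e. two host blocks alias per endpoint block as described above.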

In the interests of settling the code for the v6.0 merge, the smallest fix is
to just not allow region granularity != window granularity
configurations for now. Then for v6.1 the algorithm can be expanded to
take switching into account for endpoint interleave geometry.

> > ...not 2, resulting in:
> > 
> > Block[0] => HB0 => Dev0 => DPA: 0
> > Block[1] => HB1 => Dev1 => DPA: 0
> > Block[2] => HB0 => Dev0 => DPA: 0 !
> > Block[3] => HB1 => Dev1 => DPA: 0 !
> 
> > 
> > Given tooling is already being built around this ABI allow for
> > granularity and ways to be set in either order and validate the
> > combination once both are set.
> > 
> > Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> 
> Typo inline.

Thanks, but moot now that I'm dropping this in favor of the safe fix,
and pushing ongoing improvements to the next merge window.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 5/5] cxl/region: Constrain region granularity scaling factor
  2022-08-04 16:33     ` Dan Williams
@ 2022-08-04 17:57       ` Dan Williams
  0 siblings, 0 replies; 28+ messages in thread
From: Dan Williams @ 2022-08-04 17:57 UTC (permalink / raw)
  To: Dan Williams, Jonathan Cameron; +Cc: linux-cxl

Dan Williams wrote:
> Jonathan Cameron wrote:
> > On Fri, 22 Jul 2022 17:56:20 -0700
> > Dan Williams <dan.j.williams@intel.com> wrote:
> > 
> > > Consider the scenario where a platform provides for a x2 host-bridge
> > > interleave at a granularity of 256 bytes.
> > Hi Dan,
> > 
> > Terminology not clear to me.  Is this interleave across 2 host bridges?
> 
> Yes.
> 
> > 
> > I'm lost in the explanation.
> > 
> > > The only permitted region
> > > granularities for that configuration are 256 or 512. Anything larger
> > > than 512 results in unmapped capacity within a given decoder. Also, if
> > > the region granularity is 512 then the interleave_ways for the region
> > > must be 4 to keep the scaling matched.
> > > 
> > > Here are the translations for the first (4) 256-byte blocks where an
> > > endpoint decoder is configured for a 512-byte granularity:
> > > 
> > > Block[0] => HB0 => DPA: 0
> > > Block[1] => HB1 => DPA: 0
> > > Block[2] => HB0 => DPA: 0
> > > Block[3] => HB1 => DPA: 0
> > > 
> > > In order for those translations to not alias the region interleave ways
> > > must be 4 resulting in:
> > > 
> > > Block[0] => HB0 => Dev0 => DPA: 0
> > > Block[1] => HB1 => Dev1 => DPA: 0
> > > Block[2] => HB0 => Dev2 => DPA: 0
> > > Block[3] => HB1 => Dev3 => DPA: 0
> > 
> > Block[4] => HBA0 => Dev0 => DPA: 512 ?
> > 
> > which means we are only using alternate 256 blocks of the EP.  Does that make
> > sense?
> 
> Yes, but I found a subtle bug in my logic. Here is the output from
> my spreadsheet calculating the DPA_Offset for the first 8 blocks when
> the topology is "x2 host-bridge @ 256 => x2 endpoint @ 512"
> 
> Address	HB Index	HPA Offset	DPA Offset
> 0x0	0		0x0		0x0
> 0x100	1		0x100		0x0
> 0x200	0		0x200		0x0
> 0x300	1		0x300		0x0
> 0x400	0		0x400		0x200
> 0x500	1		0x500		0x200
> 0x600	0		0x600		0x200
> 0x700	1		0x700		0x200
> 0x800	0		0x800		0x400
> 
> So we have 4 blocks landing at DPA_Offset == 0 across the 2 endpoints, so
> there is aliasing. However, the bug in my logic is that the fix for this
> is not:
> 
> "...if the region granularity is 512 then the interleave_ways for the
> region must be 4 to keep the scaling matched."
> 
> ...it is that the number of *targets* in the interleave must be 4 with
> that split handled by the host bridge, and leave the endpoint interleave
> ways setting at 2. So, in general to support region-granularity >
> root-granularity, the host-bridges and / or switches in the path must
> scale the interleave.
> 
> ---
> "x4 region @ 512 granularity under an x2 window @ 256"
> 
>  ┌───────────────────────────────────┬──┐
>  │WINDOW0                            │x2│
>  └─────────┬─────────────────┬───────┴──┘
>            │                 │
>  ┌─────────▼────┬──┐  ┌──────▼───────┬──┐
>  │HB0           │x2│  │HB1           │x2│
>  └──┬────────┬──┴──┘  └─┬─────────┬──┴──┘
>     │        │          │         │
>  ┌──▼─┬──┐ ┌─▼──┬──┐  ┌─▼──┬──┐ ┌─▼──┬──┐
>  │DEV0│x2│ │DEV1│x2│  │DEV2│x2│ │DEV3│x2│
>  └────┴──┘ └────┴──┘  └────┴──┘ └────┴──┘
> ---
> 
> ---
> "x4 region @ 256 granularity under an x2 window @ 256"
> 
>  ┌───────────────────────────────────┬──┐
>  │WINDOW0                            │x2│
>  └─────────┬─────────────────┬───────┴──┘
>            │                 │
>  ┌─────────▼────┬──┐  ┌──────▼───────┬──┐
>  │HB0           │x2│  │HB1           │x2│
>  └──┬────────┬──┴──┘  └─┬─────────┬──┴──┘
>     │        │          │         │
>  ┌──▼─┬──┐ ┌─▼──┬──┐  ┌─▼──┬──┐ ┌─▼──┬──┐
>  │DEV0│x4│ │DEV1│x4│  │DEV2│x4│ │DEV3│x4│
>  └────┴──┘ └────┴──┘  └────┴──┘ └────┴──┘
> ---
> 
> ...i.e. it is not always the case that the endpoint interleave ways
> setting matches the number of devices in the set.

No, that's still broken. Even with the host-bridge scaling the interleave it
still results in stranded capacity at the endpoints. So, in general region IG must
be <= Window IG. The current implementation is already blocking the region IG <
Window IG case, so forcing region IG to be equal to Window IG is the
solution for now.
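
A minimal sketch of that direction (illustrative only, not necessarily the
shape of the actual follow-up patch) would be to tighten the check in
interleave_granularity_store() to something like:

	/*
	 * Hypothetical simplification: when the root decoder interleaves,
	 * only accept a region granularity equal to the window granularity.
	 */
	if (cxld->interleave_ways > 1 && val != cxld->interleave_granularity) {
		dev_dbg(dev, "granularity %d must match window granularity %d\n",
			val, cxld->interleave_granularity);
		return -EINVAL;
	}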

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2022-08-04 17:58 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-23  0:55 [PATCH 0/5] CXL Region Provisioning Fixes Dan Williams
2022-07-23  0:55 ` [PATCH 1/5] cxl/acpi: Autoload driver for 'cxl_acpi' test devices Dan Williams
2022-08-01 19:24   ` Verma, Vishal L
2022-07-23  0:56 ` [PATCH 2/5] cxl/region: Delete 'region' attribute from root decoders Dan Williams
2022-08-01 19:32   ` Alison Schofield
2022-08-01 19:38   ` Verma, Vishal L
2022-08-01 19:40     ` Verma, Vishal L
2022-08-01 21:32       ` Dan Williams
2022-08-01 21:32     ` Dan Williams
2022-07-23  0:56 ` [PATCH 3/5] cxl/acpi: Minimize granularity for x1 interleaves Dan Williams
2022-08-01 19:35   ` Alison Schofield
2022-08-01 19:45   ` Verma, Vishal L
2022-08-01 21:34     ` Dan Williams
2022-08-02 15:56   ` Jonathan Cameron
2022-08-02 16:52     ` Jonathan Cameron
2022-08-02 17:33     ` Dan Williams
2022-08-03 16:00       ` Jonathan Cameron
2022-08-03 17:18         ` Dan Williams
2022-08-04  9:32           ` Jonathan Cameron
2022-07-23  0:56 ` [PATCH 4/5] cxl/region: Stop initializing interleave granularity Dan Williams
2022-08-01 19:41   ` Alison Schofield
2022-08-01 19:47   ` Verma, Vishal L
2022-07-23  0:56 ` [PATCH 5/5] cxl/region: Constrain region granularity scaling factor Dan Williams
2022-08-01 19:43   ` Alison Schofield
2022-08-01 20:55   ` Verma, Vishal L
2022-08-03 16:17   ` Jonathan Cameron
2022-08-04 16:33     ` Dan Williams
2022-08-04 17:57       ` Dan Williams
