From: Dan Williams <dan.j.williams@intel.com>
To: Ben Widawsky <ben.widawsky@intel.com>
Cc: linux-cxl@vger.kernel.org, Linux NVDIMM <nvdimm@lists.linux.dev>,
patches@lists.linux.dev,
Alison Schofield <alison.schofield@intel.com>,
Ira Weiny <ira.weiny@intel.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Vishal Verma <vishal.l.verma@intel.com>
Subject: Re: [RFC PATCH 06/15] cxl/acpi: Manage root decoder's address space
Date: Mon, 18 Apr 2022 15:15:47 -0700 [thread overview]
Message-ID: <CAPcyv4hD93d20Sq25tPNMQ1T68uQmTTQo7aDXMKN36wrCTa1-Q@mail.gmail.com> (raw)
In-Reply-To: <20220413183720.2444089-7-ben.widawsky@intel.com>
On Wed, Apr 13, 2022 at 11:38 AM Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> Use a gen_pool to manage the physical address space that is routed by
> the platform decoder (root decoder). As described in 'cxl/acpi: Reserve
> CXL resources from request_free_mem_region' the address space does not
> coexist well if part or all of it is conveyed in the memory map to the
> kernel.
>
> Since the existing resource APIs of interest all rely on the root
> decoder's address space being in iomem_resource,
I do not understand what this is trying to convey. Nothing requires
that a given 'struct resource' be managed under iomem_resource.
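For example, nothing stops cxl_acpi from maintaining its own private
resource tree and allocating from that (untested sketch, names made up):

```c
/* a driver-private resource tree, never inserted into iomem_resource */
static struct resource cfmws_space = {
	.name  = "CXL CFMWS",
	.flags = IORESOURCE_MEM,
};

static int cfmws_alloc(struct resource *new, resource_size_t size)
{
	/* carve @size out of the private tree, not the global one */
	return allocate_resource(&cfmws_space, new, size, cfmws_space.start,
				 cfmws_space.end, SZ_256M, NULL, NULL);
}
```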
> the choices are to roll
> a new allocator based on struct resource, or use gen_pool. gen_pool is
> a good choice because it already has all the capabilities needed to
> satisfy CXL programming.
Not sure what comparison to 'struct resource' is being made here, what
is the tradeoff as you see it? In other words, why mention 'struct
resource' as a consideration?
>
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
> ---
> drivers/cxl/acpi.c | 36 ++++++++++++++++++++++++++++++++++++
> drivers/cxl/cxl.h | 2 ++
> 2 files changed, 38 insertions(+)
>
> diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> index 0870904fe4b5..a6b0c3181d0e 100644
> --- a/drivers/cxl/acpi.c
> +++ b/drivers/cxl/acpi.c
> @@ -1,6 +1,7 @@
> // SPDX-License-Identifier: GPL-2.0-only
> /* Copyright(c) 2021 Intel Corporation. All rights reserved. */
> #include <linux/platform_device.h>
> +#include <linux/genalloc.h>
> #include <linux/module.h>
> #include <linux/device.h>
> #include <linux/kernel.h>
> @@ -79,6 +80,25 @@ struct cxl_cfmws_context {
> struct acpi_cedt_cfmws *high_cfmws;
> };
>
> +static int cfmws_cookie;
> +
> +static int fill_busy_mem(struct resource *res, void *_window)
> +{
> + struct gen_pool *window = _window;
> + struct genpool_data_fixed gpdf;
> + unsigned long addr;
> + void *type;
> +
> + gpdf.offset = res->start;
> + addr = gen_pool_alloc_algo_owner(window, resource_size(res),
> + gen_pool_fixed_alloc, &gpdf, &type);
The "_owner" variant of gen_pool was only added for p2pdma as a way to
coordinate reference counts across p2pdma space allocation and a
'struct dev_pagemap' instance. The use here seems completely
vestigial and can just move to gen_pool_alloc_algo.
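I.e. something like (untested):

```c
static int fill_busy_mem(struct resource *res, void *_window)
{
	struct gen_pool *window = _window;
	struct genpool_data_fixed gpdf = { .offset = res->start };
	unsigned long addr;

	/* claim the busy range at its fixed offset, no owner tracking */
	addr = gen_pool_alloc_algo(window, resource_size(res),
				   gen_pool_fixed_alloc, &gpdf);
	if (addr != res->start)
		return -ENXIO;

	pr_devel("%pR removed from CFMWS\n", res);
	return 0;
}
```

...with the gen_pool_add_owner() below similarly dropping to plain
gen_pool_add().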
> + if (addr != res->start || (res->start == 0 && type != &cfmws_cookie))
> + return -ENXIO;
How can the second condition ever be true?
> +
> + pr_devel("%pR removed from CFMWS\n", res);
> + return 0;
> +}
> +
> static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
> const unsigned long end)
> {
> @@ -88,6 +108,8 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
> struct device *dev = ctx->dev;
> struct acpi_cedt_cfmws *cfmws;
> struct cxl_decoder *cxld;
> + struct gen_pool *window;
> + char name[64];
> int rc, i;
>
> cfmws = (struct acpi_cedt_cfmws *) header;
> @@ -116,6 +138,20 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
> cxld->interleave_ways = CFMWS_INTERLEAVE_WAYS(cfmws);
> cxld->interleave_granularity = CFMWS_INTERLEAVE_GRANULARITY(cfmws);
>
> + sprintf(name, "cfmws@%#llx", cfmws->base_hpa);
> + window = devm_gen_pool_create(dev, ilog2(SZ_256M), NUMA_NO_NODE, name);
> + if (IS_ERR(window))
> + return 0;
> +
> + gen_pool_add_owner(window, cfmws->base_hpa, -1, cfmws->window_size,
> + NUMA_NO_NODE, &cfmws_cookie);
Similar comment about the "_owner" variant serving no visible purpose.
This seems to presuppose that only the allocator will ever want to
interrogate the state of free space, it might be worth registering
objects for each intersection that are not cxl_regions so that
userspace explicitly sees what the cxl_acpi driver sees in terms of
available resources.
> +
> + /* Area claimed by other resources, remove those from the gen_pool. */
> + walk_iomem_res_desc(IORES_DESC_NONE, 0, cfmws->base_hpa,
> + cfmws->base_hpa + cfmws->window_size - 1, window,
> + fill_busy_mem);
> + to_cxl_root_decoder(cxld)->window = window;
> +
> rc = cxl_decoder_add(cxld, target_map);
> if (rc)
> put_device(&cxld->dev);
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index 85fd5e84f978..0e1c65761ead 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -246,10 +246,12 @@ struct cxl_switch_decoder {
> /**
> * struct cxl_root_decoder - A toplevel/platform decoder
> * @base: Base class decoder
> + * @window: host address space allocator
> * @targets: Downstream targets (ie. hostbridges).
> */
> struct cxl_root_decoder {
> struct cxl_decoder base;
> + struct gen_pool *window;
> struct cxl_decoder_targets *targets;
> };
>
> --
> 2.35.1
>