From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
To: <ira.weiny@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>,
Alison Schofield <alison.schofield@intel.com>,
Vishal Verma <vishal.l.verma@intel.com>,
"Ben Widawsky" <ben.widawsky@intel.com>,
Bjorn Helgaas <bhelgaas@google.com>, <linux-cxl@vger.kernel.org>,
<linux-pci@vger.kernel.org>
Subject: Re: [PATCH 5/5] cxl/cdat: Parse out DSMAS data from CDAT table
Date: Fri, 19 Nov 2021 14:55:20 +0000 [thread overview]
Message-ID: <20211119145451.0000682f@huawei.com> (raw)
In-Reply-To: <20211105235056.3711389-6-ira.weiny@intel.com>
On Fri, 5 Nov 2021 16:50:56 -0700
<ira.weiny@intel.com> wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> Parse and cache the DSMAS data from the CDAT table. Store this data in
> unmarshaled data structures for later use.
>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
More fun from clashing patch sets below.
I think this is wrong rather than the other patch, but I'm prepared to
be persuaded otherwise!
Ben, this is related to your mega RFC for regions etc.
Jonathan
> +static int parse_dsmas(struct cxl_memdev *cxlmd)
> +{
> + struct cxl_dsmas *dsmas_ary = NULL;
> + u32 *data = cxlmd->cdat_table;
> + int bytes_left = cxlmd->cdat_length;
> + int nr_dsmas = 0;
> + size_t dsmas_byte_size;
> + int rc = 0;
> +
> + if (!data || !cdat_hdr_valid(cxlmd))
> + return -ENXIO;
> +
> + /* Skip header */
> + data += CDAT_HEADER_LENGTH_DW;
> + bytes_left -= CDAT_HEADER_LENGTH_BYTES;
> +
> + while (bytes_left > 0) {
> + u32 *cur_rec = data;
> + u8 type = FIELD_GET(CDAT_STRUCTURE_DW0_TYPE, cur_rec[0]);
> + u16 length = FIELD_GET(CDAT_STRUCTURE_DW0_LENGTH, cur_rec[0]);
> +
> + if (type == CDAT_STRUCTURE_DW0_TYPE_DSMAS) {
> + struct cxl_dsmas *new_ary;
> + u8 flags;
> +
> + new_ary = krealloc(dsmas_ary,
> + sizeof(*dsmas_ary) * (nr_dsmas+1),
> + GFP_KERNEL);
> + if (!new_ary) {
> + dev_err(&cxlmd->dev,
> + "Failed to allocate memory for DSMAS data\n");
> + rc = -ENOMEM;
> + goto free_dsmas;
> + }
> + dsmas_ary = new_ary;
> +
> + flags = FIELD_GET(CDAT_DSMAS_DW1_FLAGS, cur_rec[1]);
> +
> + dsmas_ary[nr_dsmas].dpa_base = CDAT_DSMAS_DPA_OFFSET(cur_rec);
> + dsmas_ary[nr_dsmas].dpa_length = CDAT_DSMAS_DPA_LEN(cur_rec);
> + dsmas_ary[nr_dsmas].non_volatile = CDAT_DSMAS_NON_VOLATILE(flags);
> +
> + dev_dbg(&cxlmd->dev, "DSMAS %d: %llx:%llx %s\n",
> + nr_dsmas,
> + dsmas_ary[nr_dsmas].dpa_base,
> + dsmas_ary[nr_dsmas].dpa_base +
> + dsmas_ary[nr_dsmas].dpa_length,
> + (dsmas_ary[nr_dsmas].non_volatile ?
> + "Persistent" : "Volatile")
> + );
> +
> + nr_dsmas++;
> + }
> +
> + data += (length/sizeof(u32));
> + bytes_left -= length;
> + }
> +
> + if (nr_dsmas == 0) {
> + rc = -ENXIO;
> + goto free_dsmas;
> + }
> +
> + dev_dbg(&cxlmd->dev, "Found %d DSMAS entries\n", nr_dsmas);
> +
> + dsmas_byte_size = sizeof(*dsmas_ary) * nr_dsmas;
> + cxlmd->dsmas_ary = devm_kzalloc(&cxlmd->dev, dsmas_byte_size, GFP_KERNEL);
Here is another place where this needs to hang off cxlds->dev rather than
the memdev's device, to avoid breaking Ben's code.
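Something along these lines is what I had in mind (untested sketch; it assumes the memdev keeps its usual cxlds back-pointer and that dsmas_ary moves to the dev_state accordingly):

```c
	/* Sketch only: tie the allocation lifetime to the cxl_dev_state
	 * device instead of the memdev. */
	struct cxl_dev_state *cxlds = cxlmd->cxlds;

	cxlds->dsmas_ary = devm_kzalloc(cxlds->dev, dsmas_byte_size,
					GFP_KERNEL);
```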
> + if (!cxlmd->dsmas_ary) {
> + rc = -ENOMEM;
> + goto free_dsmas;
> + }
> +
> + memcpy(cxlmd->dsmas_ary, dsmas_ary, dsmas_byte_size);
> + cxlmd->nr_dsmas = nr_dsmas;
> +
> +free_dsmas:
> + kfree(dsmas_ary);
> + return rc;
> +}
> +
Thread overview: 37+ messages
2021-11-05 23:50 [PATCH 0/5] CXL: Read CDAT and DSMAS data from the device ira.weiny
2021-11-05 23:50 ` [PATCH 1/5] PCI: Add vendor ID for the PCI SIG ira.weiny
2021-11-17 21:50 ` Bjorn Helgaas
2021-11-05 23:50 ` [PATCH 2/5] PCI/DOE: Add Data Object Exchange Aux Driver ira.weiny
2021-11-08 12:15 ` Jonathan Cameron
2021-11-10 5:45 ` Ira Weiny
2021-11-18 18:48 ` Jonathan Cameron
2021-11-16 23:48 ` Bjorn Helgaas
2021-12-03 20:48 ` Dan Williams
2021-12-03 23:56 ` Bjorn Helgaas
2021-12-04 15:47 ` Dan Williams
2021-12-06 12:27 ` Jonathan Cameron
2021-11-05 23:50 ` [PATCH 3/5] cxl/pci: Add DOE Auxiliary Devices ira.weiny
2021-11-08 13:09 ` Jonathan Cameron
2021-11-11 1:31 ` Ira Weiny
2021-11-11 11:53 ` Jonathan Cameron
2021-11-16 23:48 ` Bjorn Helgaas
2021-11-17 12:23 ` Jonathan Cameron
2021-11-17 22:15 ` Bjorn Helgaas
2021-11-18 10:51 ` Jonathan Cameron
2021-11-19 6:48 ` Christoph Hellwig
2021-11-29 23:37 ` Dan Williams
2021-11-29 23:59 ` Dan Williams
2021-11-30 6:42 ` Christoph Hellwig
2021-11-05 23:50 ` [PATCH 4/5] cxl/mem: Add CDAT table reading from DOE ira.weiny
2021-11-08 13:21 ` Jonathan Cameron
2021-11-08 23:19 ` Ira Weiny
2021-11-08 15:02 ` Jonathan Cameron
2021-11-08 22:25 ` Ira Weiny
2021-11-09 11:09 ` Jonathan Cameron
2021-11-19 14:40 ` Jonathan Cameron
2021-11-05 23:50 ` [PATCH 5/5] cxl/cdat: Parse out DSMAS data from CDAT table ira.weiny
2021-11-08 14:52 ` Jonathan Cameron
2021-11-11 3:58 ` Ira Weiny
2021-11-11 11:58 ` Jonathan Cameron
2021-11-18 17:02 ` Jonathan Cameron
2021-11-19 14:55 ` Jonathan Cameron [this message]