From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Ben Widawsky <ben.widawsky@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>,
	<linux-cxl@vger.kernel.org>,
	Vishal L Verma <vishal.l.verma@intel.com>,
	Linux NVDIMM <nvdimm@lists.linux.dev>,
	"Schofield, Alison" <alison.schofield@intel.com>,
	"Weiny, Ira" <ira.weiny@intel.com>
Subject: Re: [PATCH v4 15/21] cxl/pmem: Translate NVDIMM label commands to CXL label commands
Date: Fri, 10 Sep 2021 10:39:18 +0100
Message-ID: <20210910103918.00003648@Huawei.com>
In-Reply-To: <20210909203214.ldl5gtv7myxcfacf@intel.com>

On Thu, 9 Sep 2021 13:32:14 -0700
Ben Widawsky <ben.widawsky@intel.com> wrote:

> On 21-09-09 12:03:49, Dan Williams wrote:
> > On Thu, Sep 9, 2021 at 10:22 AM Ben Widawsky <ben.widawsky@intel.com> wrote:  
> > >
> > > On 21-09-08 22:12:54, Dan Williams wrote:  
> > > > The LIBNVDIMM IOCTL UAPI calls back to the nvdimm-bus-provider to
> > > > translate the Linux command payload into the device-native command format.
> > > > The LIBNVDIMM commands get-config-size, get-config-data, and
> > > > set-config-data map to the CXL memory device commands device-identify,
> > > > get-lsa, and set-lsa. Recall that the label-storage-area (LSA) on an
> > > > NVDIMM device arranges for the provisioning of namespaces; for CXL the
> > > > LSA additionally provisions regions.
> > > >
> > > > The data from device-identify is already cached in the 'struct cxl_mem'
> > > > instance associated with @cxl_nvd, so the return payload is simply
> > > > crafted from that cache and no CXL command is issued. The conversion for
> > > > get-lsa is straightforward, but the conversion for set-lsa requires an
> > > > allocation to prepend the set-lsa header to the payload.
> > > >
> > > > Acked-by: Ben Widawsky <ben.widawsky@intel.com>
> > > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > > > ---
> > > >  drivers/cxl/pmem.c |  125 ++++++++++++++++++++++++++++++++++++++++++++++++++--
> > > >  1 file changed, 121 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> > > > index a972af7a6e0b..29d24f13aa73 100644
> > > > --- a/drivers/cxl/pmem.c
> > > > +++ b/drivers/cxl/pmem.c
> > > > @@ -1,6 +1,7 @@
> > > >  // SPDX-License-Identifier: GPL-2.0-only
> > > >  /* Copyright(c) 2021 Intel Corporation. All rights reserved. */
> > > >  #include <linux/libnvdimm.h>
> > > > +#include <asm/unaligned.h>
> > > >  #include <linux/device.h>
> > > >  #include <linux/module.h>
> > > >  #include <linux/ndctl.h>
> > > > @@ -48,10 +49,10 @@ static int cxl_nvdimm_probe(struct device *dev)
> > > >  {
> > > >       struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
> > > >       struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> > > > +     unsigned long flags = 0, cmd_mask = 0;
> > > >       struct cxl_mem *cxlm = cxlmd->cxlm;
> > > >       struct cxl_nvdimm_bridge *cxl_nvb;
> > > >       struct nvdimm *nvdimm = NULL;
> > > > -     unsigned long flags = 0;
> > > >       int rc = -ENXIO;
> > > >
> > > >       cxl_nvb = cxl_find_nvdimm_bridge();
> > > > @@ -66,8 +67,11 @@ static int cxl_nvdimm_probe(struct device *dev)
> > > >
> > > >       set_bit(NDD_LABELING, &flags);
> > > >       rc = -ENOMEM;
> > > > -     nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags, 0, 0,
> > > > -                            NULL);
> > > > +     set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
> > > > +     set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
> > > > +     set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
> > > > +     nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags,
> > > > +                            cmd_mask, 0, NULL);
> > > >       dev_set_drvdata(dev, nvdimm);
> > > >
> > > >  out_unlock:
> > > > @@ -89,11 +93,124 @@ static struct cxl_driver cxl_nvdimm_driver = {
> > > >       .id = CXL_DEVICE_NVDIMM,
> > > >  };
> > > >
> > > > +static int cxl_pmem_get_config_size(struct cxl_mem *cxlm,
> > > > +                                 struct nd_cmd_get_config_size *cmd,
> > > > +                                 unsigned int buf_len, int *cmd_rc)
> > > > +{
> > > > +     if (sizeof(*cmd) > buf_len)
> > > > +             return -EINVAL;
> > > > +
> > > > +     *cmd = (struct nd_cmd_get_config_size) {
> > > > +              .config_size = cxlm->lsa_size,
> > > > +              .max_xfer = cxlm->payload_size,
> > > > +     };
> > > > +     *cmd_rc = 0;
> > > > +
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +static int cxl_pmem_get_config_data(struct cxl_mem *cxlm,
> > > > +                                 struct nd_cmd_get_config_data_hdr *cmd,
> > > > +                                 unsigned int buf_len, int *cmd_rc)
> > > > +{
> > > > +     struct cxl_mbox_get_lsa {
> > > > +             u32 offset;
> > > > +             u32 length;
> > > > +     } get_lsa;
> > > > +     int rc;
> > > > +
> > > > +     if (sizeof(*cmd) > buf_len)
> > > > +             return -EINVAL;
> > > > +     if (struct_size(cmd, out_buf, cmd->in_length) > buf_len)
> > > > +             return -EINVAL;
> > > > +
> > > > +     get_lsa = (struct cxl_mbox_get_lsa) {
> > > > +             .offset = cmd->in_offset,
> > > > +             .length = cmd->in_length,
> > > > +     };
> > > > +
> > > > +     rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_LSA, &get_lsa,
> > > > +                                sizeof(get_lsa), cmd->out_buf,
> > > > +                                cmd->in_length);
> > > > +     cmd->status = 0;
> > > > +     *cmd_rc = 0;
> > > > +
> > > > +     return rc;
> > > > +}
> > > > +
> > > > +static int cxl_pmem_set_config_data(struct cxl_mem *cxlm,
> > > > +                                 struct nd_cmd_set_config_hdr *cmd,
> > > > +                                 unsigned int buf_len, int *cmd_rc)
> > > > +{
> > > > +     struct cxl_mbox_set_lsa {
> > > > +             u32 offset;
> > > > +             u32 reserved;
> > > > +             u8 data[];
> > > > +     } *set_lsa;
> > > > +     int rc;
> > > > +
> > > > +     if (sizeof(*cmd) > buf_len)
> > > > +             return -EINVAL;
> > > > +
> > > > +     /* 4-byte status follows the input data in the payload */
> > > > +     if (struct_size(cmd, in_buf, cmd->in_length) + 4 > buf_len)
> > > > +             return -EINVAL;
> > > > +
> > > > +     set_lsa =
> > > > +             kvzalloc(struct_size(set_lsa, data, cmd->in_length), GFP_KERNEL);
> > > > +     if (!set_lsa)
> > > > +             return -ENOMEM;
> > > > +
> > > > +     *set_lsa = (struct cxl_mbox_set_lsa) {
> > > > +             .offset = cmd->in_offset,
> > > > +     };
> > > > +     memcpy(set_lsa->data, cmd->in_buf, cmd->in_length);
> > > > +
> > > > +     rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_SET_LSA, set_lsa,
> > > > +                                struct_size(set_lsa, data, cmd->in_length),
> > > > +                                NULL, 0);
> > > > +
> > > > +     /*
> > > > +      * Set "firmware" status (4 packed bytes at the end of the input
> > > > +      * payload).
> > > > +      */
> > > > +     put_unaligned(0, (u32 *) &cmd->in_buf[cmd->in_length]);
> > > > +     *cmd_rc = 0;
> > > > +     kvfree(set_lsa);
> > > > +
> > > > +     return rc;
> > > > +}
> > > > +
> > > > +static int cxl_pmem_nvdimm_ctl(struct nvdimm *nvdimm, unsigned int cmd,
> > > > +                            void *buf, unsigned int buf_len, int *cmd_rc)
> > > > +{
> > > > +     struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> > > > +     unsigned long cmd_mask = nvdimm_cmd_mask(nvdimm);
> > > > +     struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> > > > +     struct cxl_mem *cxlm = cxlmd->cxlm;
> > > > +
> > > > +     if (!test_bit(cmd, &cmd_mask))
> > > > +             return -ENOTTY;
> > > > +
> > > > +     switch (cmd) {
> > > > +     case ND_CMD_GET_CONFIG_SIZE:
> > > > +             return cxl_pmem_get_config_size(cxlm, buf, buf_len, cmd_rc);
> > > > +     case ND_CMD_GET_CONFIG_DATA:
> > > > +             return cxl_pmem_get_config_data(cxlm, buf, buf_len, cmd_rc);
> > > > +     case ND_CMD_SET_CONFIG_DATA:
> > > > +             return cxl_pmem_set_config_data(cxlm, buf, buf_len, cmd_rc);
> > > > +     default:
> > > > +             return -ENOTTY;
> > > > +     }
> > > > +}
> > > > +  
> > >
> > > Is there some intended purpose for passing cmd_rc down, if it isn't actually
> > > ever used? Perhaps add it when needed later?  
> > 
> > Ah true, copy-pasta leftovers from other similar routines. I'll clean this up.  
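
For reference, a minimal sketch of what that cleanup might look like, assuming
the fix is simply to drop the unused *cmd_rc parameter from the dispatcher and
the three helpers (a sketch only, not the final version of the patch; names
follow the diff above):

static int cxl_pmem_nvdimm_ctl(struct nvdimm *nvdimm, unsigned int cmd,
			       void *buf, unsigned int buf_len)
{
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	unsigned long cmd_mask = nvdimm_cmd_mask(nvdimm);
	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
	struct cxl_mem *cxlm = cxlmd->cxlm;

	if (!test_bit(cmd, &cmd_mask))
		return -ENOTTY;

	/* each helper keeps its current body, minus the *cmd_rc assignment */
	switch (cmd) {
	case ND_CMD_GET_CONFIG_SIZE:
		return cxl_pmem_get_config_size(cxlm, buf, buf_len);
	case ND_CMD_GET_CONFIG_DATA:
		return cxl_pmem_get_config_data(cxlm, buf, buf_len);
	case ND_CMD_SET_CONFIG_DATA:
		return cxl_pmem_set_config_data(cxlm, buf, buf_len);
	default:
		return -ENOTTY;
	}
}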
> 
> With that,
> Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
on the basis that you fixed the one thing I moaned about in v3 :)
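
As background on how this translation gets exercised, here is a rough sketch of
the userspace side via the existing LIBNVDIMM UAPI in <linux/ndctl.h> (purely
illustrative: the /dev/nmem0 node name is an example and error handling is
trimmed); the point of the patch is that this same ioctl path now lands on a
CXL memdev-backed nvdimm:

/* hypothetical example: query the label area of an nvdimm registered by
 * cxl_nvdimm_probe(), e.g. /dev/nmem0 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ndctl.h>

int main(void)
{
	struct nd_cmd_get_config_size cmd = { 0 };
	int fd = open("/dev/nmem0", O_RDWR);

	if (fd < 0)
		return 1;

	/* routed through cxl_pmem_nvdimm_ctl() -> cxl_pmem_get_config_size() */
	if (ioctl(fd, ND_IOCTL_GET_CONFIG_SIZE, &cmd) == 0)
		printf("LSA size: %u bytes, max transfer: %u bytes\n",
		       cmd.config_size, cmd.max_xfer);

	close(fd);
	return 0;
}

ND_CMD_GET_CONFIG_DATA and ND_CMD_SET_CONFIG_DATA follow the same route, which
is how existing label tooling (e.g. ndctl) can read and write the CXL LSA.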


Thread overview: 78+ messages
2021-09-09  5:11 [PATCH v4 00/21] cxl_test: Enable CXL Topology and UAPI regression tests Dan Williams
2021-09-09  5:11 ` [PATCH v4 01/21] libnvdimm/labels: Add uuid helpers Dan Williams
2021-09-09  5:11 ` [PATCH v4 02/21] libnvdimm/label: Add a helper for nlabel validation Dan Williams
2021-09-09  5:11 ` [PATCH v4 03/21] libnvdimm/labels: Introduce the concept of multi-range namespace labels Dan Williams
2021-09-09 13:09   ` Jonathan Cameron
2021-09-09 15:16     ` Dan Williams
2021-09-09  5:11 ` [PATCH v4 04/21] libnvdimm/labels: Fix kernel-doc for label.h Dan Williams
2021-09-10  8:38   ` Jonathan Cameron
2021-09-09  5:11 ` [PATCH v4 05/21] libnvdimm/label: Define CXL region labels Dan Williams
2021-09-09 15:58   ` Ben Widawsky
2021-09-09 18:38     ` Dan Williams
2021-09-09  5:12 ` [PATCH v4 06/21] libnvdimm/labels: Introduce CXL labels Dan Williams
2021-09-09  5:12 ` [PATCH v4 07/21] cxl/pci: Make 'struct cxl_mem' device type generic Dan Williams
2021-09-09 16:12   ` Ben Widawsky
2021-09-10  8:43   ` Jonathan Cameron
2021-09-09  5:12 ` [PATCH v4 08/21] cxl/pci: Clean up cxl_mem_get_partition_info() Dan Williams
2021-09-09 16:20   ` Ben Widawsky
2021-09-09 18:06     ` Dan Williams
2021-09-09 21:05       ` Ben Widawsky
2021-09-09 21:10         ` Dan Williams
2021-09-10  8:56         ` Jonathan Cameron
2021-09-13 22:19   ` [PATCH v5 " Dan Williams
2021-09-13 22:21     ` Dan Williams
2021-09-13 22:24   ` [PATCH v6 " Dan Williams
2021-09-09  5:12 ` [PATCH v4 09/21] cxl/mbox: Introduce the mbox_send operation Dan Williams
2021-09-09 16:34   ` Ben Widawsky
2021-09-10  8:58   ` Jonathan Cameron
2021-09-09  5:12 ` [PATCH v4 10/21] cxl/pci: Drop idr.h Dan Williams
2021-09-09 16:34   ` Ben Widawsky
2021-09-10  8:46     ` Jonathan Cameron
2021-09-09  5:12 ` [PATCH v4 11/21] cxl/mbox: Move mailbox and other non-PCI specific infrastructure to the core Dan Williams
2021-09-09 16:41   ` Ben Widawsky
2021-09-09 18:50     ` Dan Williams
2021-09-09 20:35       ` Ben Widawsky
2021-09-09 21:05         ` Dan Williams
2021-09-10  9:13   ` Jonathan Cameron
2021-09-09  5:12 ` [PATCH v4 12/21] cxl/pci: Use module_pci_driver Dan Williams
2021-09-09  5:12 ` [PATCH v4 13/21] cxl/mbox: Convert 'enabled_cmds' to DECLARE_BITMAP Dan Williams
2021-09-09  5:12 ` [PATCH v4 14/21] cxl/mbox: Add exclusive kernel command support Dan Williams
2021-09-09 17:02   ` Ben Widawsky
2021-09-10  9:33   ` Jonathan Cameron
2021-09-13 23:46     ` Dan Williams
2021-09-14  9:01       ` Jonathan Cameron
2021-09-14 12:22       ` Konstantin Ryabitsev
2021-09-14 14:39         ` Dan Williams
2021-09-14 15:51           ` Konstantin Ryabitsev
2021-09-14 19:03   ` [PATCH v5 " Dan Williams
2021-09-09  5:12 ` [PATCH v4 15/21] cxl/pmem: Translate NVDIMM label commands to CXL label commands Dan Williams
2021-09-09 17:22   ` Ben Widawsky
2021-09-09 19:03     ` Dan Williams
2021-09-09 20:32       ` Ben Widawsky
2021-09-10  9:39         ` Jonathan Cameron [this message]
2021-09-09 22:08   ` [PATCH v5 " Dan Williams
2021-09-10  9:40     ` Jonathan Cameron
2021-09-14 19:06   ` Dan Williams
2021-09-09  5:12 ` [PATCH v4 16/21] cxl/pmem: Add support for multiple nvdimm-bridge objects Dan Williams
2021-09-09 22:03   ` Dan Williams
2021-09-14 19:08   ` [PATCH v5 " Dan Williams
2021-09-09  5:13 ` [PATCH v4 17/21] tools/testing/cxl: Introduce a mocked-up CXL port hierarchy Dan Williams
2021-09-10  9:53   ` Jonathan Cameron
2021-09-10 18:46     ` Dan Williams
2021-09-14 19:14   ` [PATCH v5 " Dan Williams
2021-09-09  5:13 ` [PATCH v4 18/21] cxl/bus: Populate the target list at decoder create Dan Williams
2021-09-10  9:57   ` Jonathan Cameron
2021-09-09  5:13 ` [PATCH v4 19/21] cxl/mbox: Move command definitions to common location Dan Williams
2021-09-09  5:13 ` [PATCH v4 20/21] tools/testing/cxl: Introduce a mock memory device + driver Dan Williams
2021-09-10 10:09   ` Jonathan Cameron
2021-09-09  5:13 ` [PATCH v4 21/21] cxl/core: Split decoder setup into alloc + add Dan Williams
2021-09-10 10:33   ` Jonathan Cameron
2021-09-10 18:36     ` Dan Williams
2021-09-11 17:15       ` Ben Widawsky
2021-09-11 20:20         ` Dan Williams
2021-09-14 19:31   ` [PATCH v5 " Dan Williams
2021-09-21 14:24     ` Ben Widawsky
2021-09-21 16:18       ` Dan Williams
2021-09-21 19:22     ` [PATCH v6 " Dan Williams
2021-12-10 19:38       ` Nathan Chancellor
2021-12-10 19:41         ` Dan Williams
