From: Dan Williams <dan.j.williams@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Ben Widawsky <ben.widawsky@intel.com>,
	linux-cxl@vger.kernel.org,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux PCI <linux-pci@vger.kernel.org>,
	Linux ACPI <linux-acpi@vger.kernel.org>,
	Ira Weiny <ira.weiny@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	"Kelley, Sean V" <sean.v.kelley@intel.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>
Subject: Re: [RFC PATCH 3/9] cxl/mem: Add a driver for the type-3 mailbox
Date: Thu, 3 Dec 2020 23:22:04 -0800	[thread overview]
Message-ID: <CAPcyv4h1NQ_ctMAUv1Sc37uh6Mqnm-VL_+woKKAATGOuLCC0Uw@mail.gmail.com> (raw)
In-Reply-To: <20201117144935.00006dee@Huawei.com>

On Tue, Nov 17, 2020 at 6:50 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Tue, 10 Nov 2020 21:43:50 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> > From: Dan Williams <dan.j.williams@intel.com>
> >
> > The CXL.mem protocol allows a device to act as a provider of "System
> > RAM" and/or "Persistent Memory" that is fully coherent as if the memory
> > was attached to the typical CPU memory controller.
> >
> > The memory range exported by the device may optionally be described by
> > the platform firmware memory map, or by infrastructure like LIBNVDIMM to
> > provision persistent memory capacity from one, or more, CXL.mem devices.
> >
> > A pre-requisite for Linux-managed memory-capacity provisioning is this
> > cxl_mem driver that can speak the "type-3 mailbox" protocol.
> >
> > For now just land the driver boiler-plate and fill it in with
> > functionality in subsequent commits.
> >
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
>
> I've tried to avoid repeats, so mostly this is me moaning about naming!
>
> Jonathan
>
> > ---
> >  drivers/cxl/Kconfig  | 20 +++++++++++
> >  drivers/cxl/Makefile |  2 ++
> >  drivers/cxl/mem.c    | 82 ++++++++++++++++++++++++++++++++++++++++++++
> >  drivers/cxl/pci.h    | 15 ++++++++
> >  4 files changed, 119 insertions(+)
> >  create mode 100644 drivers/cxl/mem.c
> >  create mode 100644 drivers/cxl/pci.h
> >
> > diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> > index dd724bd364df..15548f5c77ff 100644
> > --- a/drivers/cxl/Kconfig
> > +++ b/drivers/cxl/Kconfig
> > @@ -27,4 +27,24 @@ config CXL_ACPI
> >         resources described by the CEDT (CXL Early Discovery Table)
> >
> >         Say 'y' to enable CXL (Compute Express Link) drivers.
> > +
> > +config CXL_MEM
> > +        tristate "CXL.mem Device Support"
> > +        depends on PCI && CXL_BUS_PROVIDER != n
> > +        default m if CXL_BUS_PROVIDER
> > +        help
> > +          The CXL.mem protocol allows a device to act as a provider of
> > +          "System RAM" and/or "Persistent Memory" that is fully coherent
> > +          as if the memory was attached to the typical CPU memory
> > +          controller.
> > +
> > +          Say 'y/m' to enable a driver named "cxl_mem.ko" that will attach
> > +          to CXL.mem devices for configuration, provisioning, and health
> > +          monitoring, the so called "type-3 mailbox". Note, this driver
> > +          is required for dynamic provisioning of CXL.mem attached
> > +          memory, a pre-requisite for persistent memory support, but
> > +          devices that provide volatile memory may be fully described by
> > +          existing platform firmware memory enumeration.
> > +
> > +          If unsure say 'n'.
> >  endif
> > diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
> > index d38cd34a2582..97fdffb00f2d 100644
> > --- a/drivers/cxl/Makefile
> > +++ b/drivers/cxl/Makefile
> > @@ -1,5 +1,7 @@
> >  # SPDX-License-Identifier: GPL-2.0
> >  obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
> > +obj-$(CONFIG_CXL_MEM) += cxl_mem.o
> >
> >  ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL
> >  cxl_acpi-y := acpi.o
> > +cxl_mem-y := mem.o
> > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > new file mode 100644
> > index 000000000000..aa7d881fa47b
> > --- /dev/null
> > +++ b/drivers/cxl/mem.c
> > @@ -0,0 +1,82 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +// Copyright(c) 2020 Intel Corporation. All rights reserved.
> > +#include <linux/module.h>
> > +#include <linux/pci.h>
> > +#include <linux/io.h>
> > +#include "acpi.h"
> > +#include "pci.h"
> > +
> > +struct cxl_mem {
> > +     void __iomem *regs;
> > +};
> > +
> > +static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
> > +{
> > +     int pos;
> > +
> > +     pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);
> > +     if (!pos)
> > +             return 0;
> > +
> > +     while (pos) {
> > +             u16 vendor, id;
> > +
> > +             pci_read_config_word(pdev, pos + PCI_DVSEC_VENDOR_OFFSET, &vendor);
> > +             pci_read_config_word(pdev, pos + PCI_DVSEC_ID_OFFSET, &id);
> > +             if (vendor == PCI_DVSEC_VENDOR_CXL && dvsec == id)
> > +                     return pos;
> > +
> > +             pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_DVSEC);
>
> This is good generic code and wouldn't cause much backport effort (even if a
> backport had to carry a local copy), so perhaps make it a generic function and
> move it into the core PCI code?
>
> Mind you, I guess that can happen the 'second' time someone wants to find a DVSEC.
>
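For reference, a generic PCI-core helper along the lines Jonathan suggests
might look roughly like the sketch below. The pci_find_dvsec_capability()
name and signature are just placeholders (not an existing API), and the
PCI_EXT_CAP_ID_DVSEC / *_OFFSET defines this patch adds locally would first
need to move into include/uapi/linux/pci_regs.h:

/* Sketch only: walk the DVSEC extended capabilities, matching vendor + id */
u16 pci_find_dvsec_capability(struct pci_dev *dev, u16 vendor, u16 dvsec)
{
	int pos;

	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DVSEC);
	if (!pos)
		return 0;

	while (pos) {
		u16 v, id;

		pci_read_config_word(dev, pos + PCI_DVSEC_VENDOR_OFFSET, &v);
		pci_read_config_word(dev, pos + PCI_DVSEC_ID_OFFSET, &id);
		if (vendor == v && dvsec == id)
			return pos;

		pos = pci_find_next_ext_capability(dev, pos, PCI_EXT_CAP_ID_DVSEC);
	}

	return 0;
}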
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > +{
> > +     struct device *dev = &pdev->dev;
> > +     struct cxl_mem *cxlm;
> > +     int rc, regloc;
> > +
> > +     rc = cxl_bus_prepared(pdev);
> > +     if (rc != 0) {
> > +             dev_err(dev, "failed to acquire interface\n");
> > +             return rc;
> > +     }
> > +
> > +     regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC);
> > +     if (!regloc) {
> > +             dev_err(dev, "register location dvsec not found\n");
> > +             return -ENXIO;
> > +     }
> > +
> > +     cxlm = devm_kzalloc(dev, sizeof(*cxlm), GFP_KERNEL);
> > +     if (!cxlm)
> > +             return -ENOMEM;
> > +
> > +     return 0;
> > +}
> > +
> > +static void cxl_mem_remove(struct pci_dev *pdev)
> > +{
> > +}
>
> I'd bring this in only when needed in later patch.
>
> > +
> > +static const struct pci_device_id cxl_mem_pci_tbl[] = {
> > +     /* PCI class code for CXL.mem Type-3 Devices */
> > +     { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
> > +       PCI_CLASS_MEMORY_CXL, 0xffffff, 0 },
> > +     { /* terminate list */ },
> > +};
> > +MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);
> > +
> > +static struct pci_driver cxl_mem_driver = {
> > +     .name                   = KBUILD_MODNAME,
> > +     .id_table               = cxl_mem_pci_tbl,
> > +     .probe                  = cxl_mem_probe,
> > +     .remove                 = cxl_mem_remove,
> > +};
> > +
> > +MODULE_LICENSE("GPL v2");
> > +MODULE_AUTHOR("Intel Corporation");
> > +module_pci_driver(cxl_mem_driver);
> > +MODULE_IMPORT_NS(CXL);
> > diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> > new file mode 100644
> > index 000000000000..beb03921e6da
> > --- /dev/null
> > +++ b/drivers/cxl/pci.h
> > @@ -0,0 +1,15 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +// Copyright(c) 2020 Intel Corporation. All rights reserved.
> > +#ifndef __CXL_PCI_H__
> > +#define __CXL_PCI_H__
> > +
> > +#define PCI_CLASS_MEMORY_CXL 0x050210
> > +
> > +#define PCI_EXT_CAP_ID_DVSEC 0x23
> > +#define PCI_DVSEC_VENDOR_CXL 0x1E98
>
> Hmm. The magic question of what to call a vendor ID that isn't really a vendor
> ID, just a magic number that walks like a duck and quacks like a duck (for
> anyone wondering what I'm talking about, there is a nice bit of legal
> boilerplate on this in the CXL spec).
>
> This name is definitely not accurate, however.
>
> PCI_UNIQUE_VALUE_CXL maybe?  It is used for things other than DVSEC (VDMs etc.),
> though possibly this is the only software-visible use.

Finally working my way back through this review to make the changes.
If 0x1E98 becomes visible to software somewhere else then this can
become something like the following:

#define PCI_DVSEC_VENDOR_CXL PCI_UNIQUE_VALUE_CXL

...or whatever the generic name is, but per the specification this
field is the DVSEC Vendor ID, and calling it PCI_UNIQUE_VALUE_CXL
has no basis in the spec.

I will rename it, though, to:

PCI_DVSEC_VENDOR_ID_CXL

...since the naming convention in include/linux/pci_ids.h includes the
_ID_ part.

>
>
> > +#define PCI_DVSEC_VENDOR_OFFSET      0x4
> > +#define PCI_DVSEC_ID_OFFSET  0x8
>
> Perhaps put a line break here, and maybe a spec reference for where to find
> the various DVSEC IDs.

Ok.

>
> > +#define PCI_DVSEC_ID_CXL     0x0
>
> That's definitely a confusing name as well.

Yeah, should be PCI_DVSEC_DEVICE_ID_CXL

>
> PCI_DVSEC_ID_CXL_DEVICE maybe?
>
>
> > +#define PCI_DVSEC_ID_CXL_REGLOC      0x8
> > +
> > +#endif /* __CXL_PCI_H__ */
>
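For what it's worth, pulling the renames discussed above together, pci.h
might end up looking roughly like the sketch below (the blank line and the
comment follow Jonathan's suggestion, a real spec section reference would
still need to be filled in, and the final names are of course still open):

// SPDX-License-Identifier: GPL-2.0-only
// Copyright(c) 2020 Intel Corporation. All rights reserved.
#ifndef __CXL_PCI_H__
#define __CXL_PCI_H__

#define PCI_CLASS_MEMORY_CXL		0x050210

#define PCI_EXT_CAP_ID_DVSEC		0x23
#define PCI_DVSEC_VENDOR_ID_CXL		0x1E98
#define PCI_DVSEC_VENDOR_OFFSET		0x4
#define PCI_DVSEC_ID_OFFSET		0x8

/* CXL-defined DVSEC IDs, see the CXL specification for the full list */
#define PCI_DVSEC_DEVICE_ID_CXL		0x0
#define PCI_DVSEC_ID_CXL_REGLOC		0x8

#endif /* __CXL_PCI_H__ */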

