From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Ben Widawsky <ben.widawsky@intel.com>
Cc: <linux-cxl@vger.kernel.org>,
	Dan Williams <dan.j.williams@intel.com>,
	<linux-kernel@vger.kernel.org>, <linux-pci@vger.kernel.org>,
	"linux-acpi@vger.kernel.org, Ira Weiny" <ira.weiny@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	"Kelley, Sean V" <sean.v.kelley@intel.com>,
	Rafael Wysocki <rafael.j.wysocki@intel.com>,
	Bjorn Helgaas <helgaas@kernel.org>,
	Jon Masters <jcm@jonmasters.org>,
	Chris Browy <cbrowy@avery-design.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	"Christoph Hellwig" <hch@infradead.org>,
	<daniel.lll@alibaba-inc.com>
Subject: Re: [RFC PATCH v3 04/16] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
Date: Tue, 12 Jan 2021 19:01:03 +0000
Message-ID: <20210112190103.00004644@Huawei.com>
In-Reply-To: <20210111225121.820014-5-ben.widawsky@intel.com>

On Mon, 11 Jan 2021 14:51:08 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> From: Dan Williams <dan.j.williams@intel.com>
> 
> The CXL.mem protocol allows a device to act as a provider of "System
> RAM" and/or "Persistent Memory" that is fully coherent, as if the
> memory were attached to the typical CPU memory controller.
> 
> With the CXL 2.0 specification, a PCI endpoint can implement a
> "Type-3" device interface and give the operating system control over
> "Host Managed Device Memory". See section 2.3 Type 3 CXL Device.
> 
> The memory range exported by the device may optionally be described
> by the platform firmware memory map, or by infrastructure like
> LIBNVDIMM to provision persistent memory capacity from one or more
> CXL.mem devices.
> 
> A prerequisite for Linux-managed memory-capacity provisioning is this
> cxl_mem driver, which can speak the mailbox protocol defined in
> section 8.2.8.4 Mailbox Registers.
> 
> For now, just land the driver boilerplate and fill it in with
> functionality in subsequent commits.
> 
> Link: https://www.computeexpresslink.org/download-the-specification
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>

Just one passing comment inline.

> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> new file mode 100644
> index 000000000000..005404888942
> --- /dev/null
> +++ b/drivers/cxl/mem.c
> @@ -0,0 +1,69 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/io.h>
> +#include "acpi.h"
> +#include "pci.h"
> +
> +static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)

Is it worth pulling this out into a utility library now, given we are
going to keep needing it for CXL devices?
Arguably, with a vendor_id parameter it would make sense as a generic
PCI utility function rather than something CXL-specific; something
along the lines of the sketch below.
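
Just a sketch, not an existing core API: the function name is made up
for illustration, and the 0x4/0x8 offsets are the DVSEC header 1/2
locations from the PCIe spec (matching the PCI_DVSEC_VENDOR_ID_OFFSET
and PCI_DVSEC_ID_OFFSET defines in this patch):

static int pci_find_dvsec_capability(struct pci_dev *pdev, u16 vendor,
				     u16 dvsec)
{
	int pos;

	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);
	while (pos) {
		u16 v, id;

		/*
		 * DVSEC header 1 (offset 0x4) carries the vendor ID in
		 * its low 16 bits; header 2 (offset 0x8) carries the
		 * DVSEC ID.
		 */
		pci_read_config_word(pdev, pos + 0x4, &v);
		pci_read_config_word(pdev, pos + 0x8, &id);
		if (v == vendor && id == dvsec)
			return pos;

		pos = pci_find_next_ext_capability(pdev, pos,
						   PCI_EXT_CAP_ID_DVSEC);
	}

	return 0;
}

The CXL probe path would then collapse to a single call:

	regloc = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL,
					   PCI_DVSEC_ID_CXL_REGLOC);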

> +{
> +	int pos;
> +
> +	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);
> +	if (!pos)
> +		return 0;
> +
> +	while (pos) {
> +		u16 vendor, id;
> +
> +		pci_read_config_word(pdev, pos + PCI_DVSEC_VENDOR_ID_OFFSET,
> +				     &vendor);
> +		pci_read_config_word(pdev, pos + PCI_DVSEC_ID_OFFSET, &id);
> +		if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id)
> +			return pos;
> +
> +		pos = pci_find_next_ext_capability(pdev, pos,
> +						   PCI_EXT_CAP_ID_DVSEC);
> +	}
> +
> +	return 0;
> +}
> +
> +static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> +{
> +	struct device *dev = &pdev->dev;
> +	int rc, regloc;
> +
> +	rc = cxl_bus_acquire(pdev);
> +	if (rc != 0) {
> +		dev_err(dev, "failed to acquire interface\n");
> +		return rc;
> +	}
> +
> +	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC);
> +	if (!regloc) {
> +		dev_err(dev, "register location dvsec not found\n");
> +		return -ENXIO;
> +	}
> +
> +	return 0;
> +}
> +
> +static const struct pci_device_id cxl_mem_pci_tbl[] = {
> +	/* PCI class code for CXL.mem Type-3 Devices */
> +	{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
> +	  PCI_CLASS_MEMORY_CXL, 0xffffff, 0 },
> +	{ /* terminate list */ },
> +};
> +MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);
> +
> +static struct pci_driver cxl_mem_driver = {
> +	.name			= KBUILD_MODNAME,
> +	.id_table		= cxl_mem_pci_tbl,
> +	.probe			= cxl_mem_probe,
> +};
> +
> +MODULE_LICENSE("GPL v2");
> +module_pci_driver(cxl_mem_driver);
> +MODULE_IMPORT_NS(CXL);
> diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
> new file mode 100644
> index 000000000000..a8a9935fa90b
> --- /dev/null
> +++ b/drivers/cxl/pci.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
> +#ifndef __CXL_PCI_H__
> +#define __CXL_PCI_H__
> +
> +#define PCI_CLASS_MEMORY_CXL	0x050210
> +
> +/*
> + * See section 8.1 Configuration Space Registers in the CXL 2.0
> + * Specification
> + */
> +#define PCI_EXT_CAP_ID_DVSEC		0x23
> +#define PCI_DVSEC_VENDOR_ID_CXL		0x1E98
> +#define PCI_DVSEC_VENDOR_ID_OFFSET	0x4
> +#define PCI_DVSEC_ID_CXL		0x0
> +#define PCI_DVSEC_ID_OFFSET		0x8
> +
> +#define PCI_DVSEC_ID_CXL_REGLOC		0x8
> +
> +#endif /* __CXL_PCI_H__ */

