From: Dan Williams <dan.j.williams@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Ben Widawsky <ben.widawsky@intel.com>,
	linux-cxl@vger.kernel.org,
	Alison Schofield <alison.schofield@intel.com>,
	Ira Weiny <ira.weiny@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>
Subject: Re: [RFC PATCH 0/5] Introduce memdev driver
Date: Wed, 30 Jun 2021 10:49:32 -0700
Message-ID: <CAPcyv4jP05cCFjwWAeSfpURBWaGoRPxfzN0VBk5mASQgCWf3JQ@mail.gmail.com>
In-Reply-To: <20210618152721.00006b71@Huawei.com>

On Fri, Jun 18, 2021 at 7:27 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Thu, 17 Jun 2021 17:51:55 -0700
> Ben Widawsky <ben.widawsky@intel.com> wrote:
>
> > The concept of the memdev has existed since the initial support for CXL.io
> > landed in 5.12. Here, that support is furthered by adding a driver that is
> > capable of reporting whether or not the device is also CXL.mem capable. With
> > this, the region driver is able to consume these devices for programming
> > interleave (or x1) sets. Unlike the region driver, no explicit sysfs
> > interaction is needed to utilize this driver.
> >
> > The logic encapsulated here checks two things:
> > 1. The device itself is CXL.mem enabled.
>
> Need comments in the relevant places to say this is checking whether it is
> enabled, not just capable.
>
> > 2. The device's upstream is CXL.mem enabled [1].
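
For anyone following along, here is a rough sketch of what the per-device half
of that check might look like against the "PCIe DVSEC for CXL Devices"
capability (vendor id 0x1e98, DVSEC id 0). The pci_find_dvsec_capability()
style helper and the register offsets are from my reading of the spec, not
necessarily what this series lands, so treat it as illustrative only:

/*
 * Illustrative only: test the Mem_Enable bit in the DVSEC CXL Control
 * register of the "PCIe DVSEC for CXL Devices" capability. A real check
 * of the upstream port (point 2 above) would consult the port-flavored
 * DVSEC instead of re-reading the device one.
 */
#include <linux/bits.h>
#include <linux/pci.h>

#define CXL_DVSEC_VENDOR_ID	0x1e98
#define CXL_DVSEC_PCIE_DEVICE	0
#define CXL_DVSEC_CTRL_OFFSET	0xc
#define CXL_DVSEC_MEM_ENABLE	BIT(2)

static bool cxl_pci_mem_enabled(struct pci_dev *pdev)
{
	u16 dvsec, ctrl;

	/* Locate the CXL device DVSEC in the endpoint's config space */
	dvsec = pci_find_dvsec_capability(pdev, CXL_DVSEC_VENDOR_ID,
					  CXL_DVSEC_PCIE_DEVICE);
	if (!dvsec)
		return false;

	pci_read_config_word(pdev, dvsec + CXL_DVSEC_CTRL_OFFSET, &ctrl);
	return ctrl & CXL_DVSEC_MEM_ENABLE;
}

Bind-time enforcement would then amount to gating probe() on this test for the
device itself and on a similar test walked up via pci_upstream_bridge().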
> >
> > What's currently missing is for the cxlmem driver to add the device as an
> > upstream port (since it has HDM decoders). I'm still working out those
> > details. HDM decoder programming also remains undone, and isn't pertinent
> > to this series per se.
> >
> > The patches are based on top of my region patches [2].
> >
> > The code itself is pretty rough for now, so I'm mostly looking for feedback
> > on whether or not the memdev driver is serving its purpose and checking what
> > needs to be checked on bind. If, however, you come across something glaringly
> > bad, or feel like reviewing not-fully-tested code (I know it builds), by all
> > means...
>
> :)
>
> >
> > [1]: This series doesn't actually add real support for switches, which would
> > also need to make the same determination of CXL.mem enabling.
>
> Any plans to do the QEMU stuff needed for a switch?  I guess it's going to get
> messy if you want the HDM decoders to 'work', but it should be fairly trivial
> to make them look plausible from an interface point of view.

It's already the case that the stack is modeling host bridges as
devtype 'cxl_decoder_switch' objects. So, I was hoping we, the linux-cxl@
community, could convince ourselves that the algorithm that works for
the default 'cxl_decoder_switch' in the hierarchy can apply generally
to N layers of those. That said, it would still be nice to exercise
that case, but I don't see QEMU getting there anytime soon. I'm taking
a look at modeling a CXL hierarchy with kernel-mocked resources for
this purpose.
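
As a strawman for that last point (all names below are hypothetical, and a
real harness would additionally need to intercept the firmware-table and
config-space lookups that cxl_acpi/cxl_pci perform), even a bare platform
device is enough to stand in for a host bridge in a mocked topology:

#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static struct platform_device *mock_host_bridge;

static int __init cxl_mock_init(void)
{
	/* stand-in device for one host bridge in a mocked CXL hierarchy */
	mock_host_bridge = platform_device_register_simple("mock_cxl_host_bridge",
							   0, NULL, 0);
	return PTR_ERR_OR_ZERO(mock_host_bridge);
}

static void __exit cxl_mock_exit(void)
{
	platform_device_unregister(mock_host_bridge);
}

module_init(cxl_mock_init);
module_exit(cxl_mock_exit);
MODULE_LICENSE("GPL v2");

The interesting part is then teaching the enumeration paths to accept those
stand-ins; the sketch only shows how mocked objects would enter the device
model.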
