From: Dan Williams <dan.j.williams@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: linux-cxl@vger.kernel.org, Linux NVDIMM <nvdimm@lists.linux.dev>,
	 Ben Widawsky <ben.widawsky@intel.com>,
	"Schofield, Alison" <alison.schofield@intel.com>,
	 Vishal L Verma <vishal.l.verma@intel.com>,
	"Weiny, Ira" <ira.weiny@intel.com>,
	 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 2/5] cxl/pmem: Add initial infrastructure for pmem support
Date: Fri, 11 Jun 2021 17:07:32 -0700
Message-ID: <CAPcyv4hU-b1=5eAn=Fs65AwYMQj58txMj_D3Y_Ynq72QO-qJrQ@mail.gmail.com>
In-Reply-To: <20210611123953.000054df@Huawei.com>

On Fri, Jun 11, 2021 at 4:40 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Thu, 10 Jun 2021 15:26:08 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
> > Register an 'nvdimm-bridge' device to act as an anchor for a libnvdimm
> > bus hierarchy. Also, flesh out the cxl_bus definition to allow a
> > cxl_nvdimm_bridge_driver to attach to the bridge and trigger the
> > nvdimm-bus registration.
> >
> > The creation of the bridge is gated on the detection of a PMEM capable
> > address space registered to the root. The bridge indirection allows the
> > libnvdimm module to remain unloaded on platforms without PMEM support.
> >
> > Given that the probing of ACPI0017 is asynchronous to CXL endpoint
> > devices, and the expectation that CXL endpoint devices register other
> > PMEM resources on the 'CXL' nvdimm bus, a workqueue is added. The
> > workqueue is needed to run bus_rescan_devices() outside of the
> > device_lock() of the nvdimm-bridge device to rendezvous nvdimm resources
> > as they arrive. For now only the bus is taken online/offline in the
> > workqueue.
> >
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>
> I'm not that familiar with the nvdimm side of things, so this is mostly
> a superficial review of the patch itself.
>
> A few really minor comments inline, but otherwise looks good to me.
>
> Jonathan
>
[..]
> > +static void unregister_nvb(void *_cxl_nvb)
> > +{
> > +     struct cxl_nvdimm_bridge *cxl_nvb = _cxl_nvb;
> > +     bool flush = false;
> > +
> > +     /*
> > +      * If the bridge was ever activated then there might be in-flight state
> > +      * work to flush. Once the state has been changed to 'dead' then no new
> > +      * work can be queued by user-triggered bind.
> > +      */
> > +     device_lock(&cxl_nvb->dev);
> > +     if (cxl_nvb->state != CXL_NVB_NEW)
> > +             flush = true;
>
> flush = cxl_nvb->state != CXL_NVB_NEW;
>
> perhaps?

Oh, yeah, that's nicer.
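
Folding that in, the quoted portion would read roughly as below (only a
sketch of the suggestion applied; the remainder of unregister_nvb() stays
as-is, and 'flush' no longer needs an initializer since it is now assigned
unconditionally):

	struct cxl_nvdimm_bridge *cxl_nvb = _cxl_nvb;
	bool flush;

	/*
	 * If the bridge was ever activated then there might be in-flight state
	 * work to flush. Once the state has been changed to 'dead' then no new
	 * work can be queued by user-triggered bind.
	 */
	device_lock(&cxl_nvb->dev);
	flush = cxl_nvb->state != CXL_NVB_NEW;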

[..]
> > +static void cxl_nvb_update_state(struct work_struct *work)
> > +{
> > +     struct cxl_nvdimm_bridge *cxl_nvb =
> > +             container_of(work, typeof(*cxl_nvb), state_work);
> > +     bool release = false;
> > +
> > +     device_lock(&cxl_nvb->dev);
> > +     switch (cxl_nvb->state) {
> > +     case CXL_NVB_ONLINE:
> > +             online_nvdimm_bus(cxl_nvb);
> > +             if (!cxl_nvb->nvdimm_bus) {
>
> I'd slightly prefer a simple return code from online_nvdimm_bus()
> so the reviewer doesn't have to look up above to find out that
> this condition corresponds to failure.

Yeah, not sure why I made that so obscure.
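
i.e. have online_nvdimm_bus() return an int and check it directly at the
call site. A rough sketch of what that could look like ('rc' would be a new
local; the failure handling inside the branch stays as it is in the patch):

	case CXL_NVB_ONLINE:
		rc = online_nvdimm_bus(cxl_nvb);
		if (rc) {
			/* ...existing failure handling... */
		}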

