From: Oliver <oohall@gmail.com>
To: Rob Herring <robh@kernel.org>
Cc: Device Tree <devicetree@vger.kernel.org>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>
Subject: Re: [PATCH 6/6] doc/devicetree: NVDIMM region documentation
Date: Wed, 28 Mar 2018 01:53:30 +1100	[thread overview]
Message-ID: <CAOSf1CFpFwzLMx0xmM+JmbQCiOA=QU_S5g0uf-qx181vJ_Xc1w@mail.gmail.com> (raw)
In-Reply-To: <20180326222448.l7ukrslvccvrjnjf@rob-hp-laptop>

On Tue, Mar 27, 2018 at 9:24 AM, Rob Herring <robh@kernel.org> wrote:
> On Fri, Mar 23, 2018 at 07:12:09PM +1100, Oliver O'Halloran wrote:
>> Add device-tree binding documentation for the nvdimm region driver.
>>
>> Cc: devicetree@vger.kernel.org
>> Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
>> ---
>>  .../devicetree/bindings/nvdimm/nvdimm-region.txt   | 45 ++++++++++++++++++++++
>>  1 file changed, 45 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/nvdimm/nvdimm-region.txt
>>
>> diff --git a/Documentation/devicetree/bindings/nvdimm/nvdimm-region.txt b/Documentation/devicetree/bindings/nvdimm/nvdimm-region.txt
>> new file mode 100644
>> index 000000000000..02091117ff16
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/nvdimm/nvdimm-region.txt
>> @@ -0,0 +1,45 @@
>> +Device-tree bindings for NVDIMM memory regions
>> +-----------------------------------------------------
>> +
>> +Non-volatile DIMMs are memory modules used to provide (cacheable) main memory
>
> Are DIMMs always going to be the only form factor for NV memory?
>
> And if you have multiple DIMMs, does each DT node correspond to a DIMM?

An nvdimm-region might correspond to a single NVDIMM, a set of
interleaved NVDIMMs, or it might just be a chunk of normal memory that
you want treated as an NVDIMM for some reason. The last case is useful
for provisioning install media on servers since it allows you to
download a DVD image, turn it into an nvdimm-region, and kexec into
the installer, which can use it as a root disk. That may seem a little
esoteric, but it's handy, and since we're using a full Linux environment
as our boot loader it's easy to make use of.

> If not, then what if we want/need to provide power control to a DIMM?

That would require a DIMM-specific (and probably memory-controller-specific)
driver. I've deliberately left out how regions are mapped back to
DIMMs from the binding since it's not really clear to me how that
should work. A phandle array pointing to each DIMM device (which could
be anything) would do the trick, but I've found that a bit awkward to
plumb into the model that libnvdimm expects.
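
For illustration only, a hypothetical phandle-array property along those
lines (not part of the proposed binding; the node names, compatible
string, and "nvdimms" property name are all made up) might look like:

```dts
/ {
	#size-cells = <2>;
	#address-cells = <2>;

	/* hypothetical per-DIMM device nodes */
	nvdimm0: nvdimm@0 {
		compatible = "vendor,example-nvdimm";
	};

	nvdimm1: nvdimm@1 {
		compatible = "vendor,example-nvdimm";
	};

	platform {
		region@100000000 {
			compatible = "nvdimm-region";
			reg = <0x00000001 0x00000000 0x00000000 0x40000000>;
			/* hypothetical: the DIMMs backing this region */
			nvdimms = <&nvdimm0 &nvdimm1>;
		};
	};
};
```

The awkward part is that libnvdimm wants to enumerate DIMMs and build
regions on top of them, whereas this binding starts from the region and
would have to work backwards.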

>> +that retains its contents across power cycles. In more practical terms, they
>> +are a kind of storage device where the contents can be accessed by the CPU
>> +directly, rather than indirectly via a storage controller or similar. An
>> +nvdimm-region specifies a physical address range that is hosted on an NVDIMM
>> +device.
>> +
>> +Bindings for the region nodes:
>> +-----------------------------
>> +
>> +Required properties:
>> +     - compatible = "nvdimm-region"
>> +
>> +     - reg = <base, size>;
>> +             The system physical address range of this nvdimm region.
>> +
>> +Optional properties:
>> +     - Any relevant NUMA associativity properties for the target platform.
>> +     - A "volatile" property indicating that this region is actually in
>> +       normal DRAM and does not require cache flushes after each write.
>> +
>> +A complete example:
>> +--------------------
>> +
>> +/ {
>> +     #size-cells = <2>;
>> +     #address-cells = <2>;
>> +
>> +     platform {
>
> Perhaps we need a more well defined node here. Like we have 'memory' for
> memory nodes.

I think treating it as a platform device is fine. Memory nodes are
special since the OS needs to know where it can allocate early in boot
and I don't see non-volatile memory as being similarly significant.
Fundamentally an NVDIMM is just a memory mapped storage device so we
should be able to defer looking at them until later in boot.

That said, you might have problems with XIP kernels and the like. I
think that problem is better solved through other means, though.

>> +             region@5000 {
>> +                     compatible = "nvdimm-region;
>> +                     reg = <0x00000001 0x00000000 0x00000000 0x40000000>
>> +
>> +             };
>> +
>> +             region@6000 {
>> +                     compatible = "nvdimm-region";
>> +                     reg = <0x00000001 0x00000000 0x00000000 0x40000000>
>
> Your reg property and unit-address don't match and you have overlapping
> regions.

Yep, those are completely screwed up.
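For the record, a corrected version of the example (unit-addresses
matching the first address in reg, non-overlapping ranges; the actual
addresses are just placeholders) would look something like:

```dts
/ {
	#size-cells = <2>;
	#address-cells = <2>;

	platform {
		region@100000000 {
			compatible = "nvdimm-region";
			/* 1 GiB at 0x1_00000000 */
			reg = <0x00000001 0x00000000 0x00000000 0x40000000>;
		};

		region@140000000 {
			compatible = "nvdimm-region";
			/* 1 GiB at 0x1_40000000, backed by regular DRAM */
			reg = <0x00000001 0x40000000 0x00000000 0x40000000>;
			volatile;
		};
	};
};
```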

>> +                     volatile;
>> +             };
>> +     };
>> +};
>> --
>> 2.9.5
>>
