* [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
@ 2022-06-15  2:25 Julius Werner
  2022-06-15  2:28 ` Julius Werner
  2022-07-04  8:22 ` Dmitry Osipenko
  0 siblings, 2 replies; 21+ messages in thread
From: Julius Werner @ 2022-06-15  2:25 UTC
  To: Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su, Doug Anderson,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

We (Chromium OS) have been trying to find a way to pass LPDDR memory
chip information that is available to the firmware through the FDT
(mostly for userspace informational purposes, for now). We have been
using and expanding the existing "jedec,lpddr2" and "jedec,lpddr3"
bindings for this (e.g. [1]). The goal is to be able to identify the
memory layout of the system (what the parts look like, how they're tied
together, how much capacity there is in total) as accurately as
possible from software-probed values.

The existing bindings contain the fields "io-width" and "density",
which is terminology directly matching what an LPDDR chip can report
to firmware through the "Mode Register" interface, specifically MR8.
(The LPDDR specs describing this are not public, but you can see the
register definitions in most LPDDR chip datasheets that can be
randomly found online, e.g. [2] page 37.) The code in
drivers/memory/of_memory.c also suggests that these are supposed to
directly correspond to the MR8 values read from the chip, since when
of_lpddr2_get_info() copies the device tree values into struct
lpddr2_info, it encodes them in a format that directly matches the
mode register bit field patterns.
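
For illustration, the MR8 packing works roughly like this (field layout
taken from the register tables in public LPDDR2 datasheets like [2];
this is just a sketch, not the actual of_memory.c code):

#include <stdint.h>

/* Pack a DT "density" (in Mbits) and "io-width" (in bits) into the
 * LPDDR2 MR8 layout: type[1:0], density[5:2], io-width[7:6]. */
static uint8_t lpddr2_mr8_encode(unsigned int density_mbits,
                                 unsigned int io_width)
{
    uint8_t density_code = 0;
    unsigned int d = density_mbits / 64;  /* 64 Mbit encodes as 0 */

    while (d > 1) {                       /* density_code = log2(d) */
        d >>= 1;
        density_code++;
    }

    /* I/O width: x32 -> 0b00, x16 -> 0b01, x8 -> 0b10 */
    uint8_t width_code = (io_width == 32) ? 0 : (io_width == 16) ? 1 : 2;

    /* type field [1:0] left as 0b00 (S4 SDRAM) */
    return (uint8_t)((width_code << 6) | (density_code << 2));
}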

The problem with this is that each individual LPDDR chip has its own
set of mode registers (per rank) that only describe the density of
that particular chip (rank). The host memory controller may have
multiple channels (each of which is basically an entirely separate set
of physical LPDDR pins on the board), a single channel may be
connected to multiple LPDDR chips (e.g. if the memory controller has
an outgoing 32-bit channel, that channel could be tied to two 16-bit
LPDDR chips by tying the low 16 bits to one and the high 16 bits to
the other), and then each of those chips may offer multiple
independent ranks (which rank is being accessed at a given time is
controlled by a separate chip select pin).
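
To make those three dimensions (channel, chip, rank) concrete as data --
the names here are illustrative only, not an existing kernel structure:

struct lpddr_rank {
    unsigned int density_mbits;   /* per-chip density, as reported in MR8 */
    unsigned int io_width;        /* DQ pins per chip, as reported in MR8 */
};

struct lpddr_channel {
    unsigned int channel_io_width;  /* DQ pins coming out of the controller */
    unsigned int num_ranks;         /* one rank per chip select */
    struct lpddr_rank ranks[2];
    /* chips wired in parallel = channel_io_width / ranks[n].io_width */
};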

So if we just have one "io-width" and one "density" field in the FDT,
there's no way to figure out how much memory is actually
connected in total, because that only describes a single LPDDR chip.
Worse, there may be chips where different ranks have different
densities (e.g. a 6GB dual-rank chip with one 4GB and one 2GB rank),
and different channels could theoretically be connected to chips of
completely different manufacturers.

We need to be able to report the information that's currently encoded
in the "jedec,lpddr2" binding separately for each channel+rank
combination, and we need to be able to tell how many LPDDR chips are
combined under a single memory channel. For the former, I'd suggest
creating a separate FDT node for each channel, and then creating
subnodes under those for each rank that implement the binding. For the
latter, I would suggest adding a new property "channel-io-width" which
describes the width of the channel from the host memory controller's
point of view, so that you can divide that property by the already
existing "io-width" property to figure out how many parts are tied
together in series in a single channel. The final layout, then, would
look something like this:

lpddr2-channel0 {
    rank0 {
        compatible = "jedec,lpddr2";
        density = <2048>;
        channel-io-width = <32>;
        io-width = <16>;
    };
    rank1 {
        compatible = "jedec,lpddr2";
        density = <1024>;
        channel-io-width = <32>;
        io-width = <16>;
    };
};
lpddr2-channel0 {
    rank0 {
        compatible = "jedec,lpddr2";
        density = <2048>;
        channel-io-width = <32>;
        io-width = <16>;
    };
    rank1 {
        compatible = "jedec,lpddr2";
        density = <1024>;
        channel-io-width = <32>;
        io-width = <16>;
    };
};

This would be describing a dual-channel, dual-rank layout where each
32-bit channel is connected to two 16-bit LPDDR chips in series. The
total capacity would be (2048 Mbits * (32/16) chips + 1024 Mbits *
(32/16) chips) * 2 channels = 12Gbits.
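
To double-check that arithmetic, here is a tiny standalone C program
plugging in the example values above (purely illustrative):

#include <stdio.h>

int main(void)
{
    /* density (Mbits), channel-io-width, io-width for each rank of the
     * example above; both channels are wired identically. */
    struct { unsigned int density, channel_io_width, io_width; } ranks[] = {
        { 2048, 32, 16 },  /* channel 0, rank 0 */
        { 1024, 32, 16 },  /* channel 0, rank 1 */
        { 2048, 32, 16 },  /* channel 1, rank 0 */
        { 1024, 32, 16 },  /* channel 1, rank 1 */
    };
    unsigned long total_mbits = 0;
    unsigned int i;

    for (i = 0; i < sizeof(ranks) / sizeof(ranks[0]); i++)
        total_mbits += (unsigned long)ranks[i].density *
                       (ranks[i].channel_io_width / ranks[i].io_width);

    printf("total: %lu Mbits\n", total_mbits);  /* prints 12288 = 12 Gbits */
    return 0;
}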

Does this seem reasonable? If nobody has any objections, I can draft
up a real patch to change the respective bindings. (The two existing
uses in platform device trees would stay how they are until the
respective platform maintainers choose to update them, since only they
would know the exact configuration. They wouldn't technically violate
the changed binding since they still contain the same properties
(other than "channel-io-width" which could be declared optional), but
they wouldn't represent the total memory layout.)

(Also, btw, would it make sense to use this opportunity to combine the
"jedec,lpddr2" and "jedec,lpddr3" bindings into a single document?
They contain all the same properties and I think it makes sense to
keep them in sync, so duplicating the documentation is just
unnecessary maintenance overhead. I would also like to add a
"jedec,lpddr4" binding that has the same properties.)

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr2.yaml
[2] https://media-www.micron.com/-/media/client/global/documents/products/data-sheet/dram/mobile-dram/low-power-dram/lpddr2/2gb_automotive_lpddr2_u89n.pdf

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-15  2:25 [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings Julius Werner
@ 2022-06-15  2:28 ` Julius Werner
  2022-06-15 19:33   ` Doug Anderson
                     ` (2 more replies)
  2022-07-04  8:22 ` Dmitry Osipenko
  1 sibling, 3 replies; 21+ messages in thread
From: Julius Werner @ 2022-06-15  2:28 UTC
  To: Julius Werner, Krzysztof Kozlowski
  Cc: Dmitry Osipenko, Jian-Jia Su, Doug Anderson, Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

Sorry, wrong email for Krzysztof.

On Tue, Jun 14, 2022 at 7:25 PM Julius Werner <jwerner@chromium.org> wrote:
>
> [snip -- full quote of the original message above]

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-15  2:28 ` Julius Werner
@ 2022-06-15 19:33   ` Doug Anderson
  2022-06-15 21:27     ` Julius Werner
  2022-06-18  2:17   ` Krzysztof Kozlowski
  2022-06-24  9:45   ` Krzysztof Kozlowski
  2 siblings, 1 reply; 21+ messages in thread
From: Doug Anderson @ 2022-06-15 19:33 UTC
  To: Julius Werner
  Cc: Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su, Rob Herring,
	LKML, open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

Hi,

On Tue, Jun 14, 2022 at 7:28 PM Julius Werner <jwerner@chromium.org> wrote:
>
> Sorry, wrong email for Krzysztof.
>
> On Tue, Jun 14, 2022 at 7:25 PM Julius Werner <jwerner@chromium.org> wrote:
> >
> > We (Chromium OS) have been trying to find a way to pass LPDDR memory
> > chip information that is available to the firmware through the FDT
> > (mostly for userspace informational purposes, for now). We have been
> > using and expanding the existing "jedec,lpddr2" and "jedec,lpddr3"
> > bindings for this (e.g. [1]). The goal is to be able to identify the
> > memory layout of the system (what the parts look like, how they're tied
> > together, how much capacity there is in total) as accurately as
> > possible from software-probed values.
> >
> > The existing bindings contain the fields "io-width" and "density",
> > which is terminology directly matching what an LPDDR chip can report
> > to firmware through the "Mode Register" interface, specifically MR8.
> > (The LPDDR specs describing this are not public, but you can see the
> > register definitions in most LPDDR chip datasheets that can be
> > randomly found online, e.g. [2] page 37.) The code in
> > drivers/memory/of_memory.c also suggests that these are supposed to
> > directly correspond to the MR8 values read from the chip, since when
> > of_lpddr2_get_info() copies the device tree values into struct
> > lpddr2_info, it encodes them in a format that directly matches the
> > mode register bit field patterns.
> >
> > The problem with this is that each individual LPDDR chip has its own
> > set of mode registers (per rank) that only describe the density of
> > that particular chip (rank). The host memory controller may have
> > multiple channels (each of which is basically an entirely separate set
> > of physical LPDDR pins on the board), a single channel may be
> > connected to multiple LPDDR chips (e.g. if the memory controller has
> > an outgoing 32-bit channel, that channel could be tied to two 16-bit
> > LPDDR chips by tying the low 16 bits to one and the high 16 bits to
> > the other), and then each of those chips may offer multiple
> > independent ranks (which rank is being accessed at a given time is
> > controlled by a separate chip select pin).
> >
> > So if we just have one "io-width" and one "density" field in the FDT,
> > there's no way to figure out how much memory is actually
> > connected in total, because that only describes a single LPDDR chip.
> > Worse, there may be chips where different ranks have different
> > densities (e.g. a 6GB dual-rank chip with one 4GB and one 2GB rank),
> > and different channels could theoretically be connected to chips of
> > completely different manufacturers.
> >
> > We need to be able to report the information that's currently encoded
> > in the "jedec,lpddr2" binding separately for each channel+rank
> > combination, and we need to be able to tell how many LPDDR chips are
> > combined under a single memory channel. For the former, I'd suggest
> > creating a separate FDT node for each channel, and then creating
> > subnodes under those for each rank that implement the binding. For the
> > latter, I would suggest adding a new property "channel-io-width" which
> > describes the width of the channel from the host memory controller's
> > point of view, so that you can divide that property by the already
> > existing "io-width" property to figure out how many parts are tied
> > together in series in a single channel. The final layout, then, would
> > look something like this:
> >
> > lpddr2-channel0 {
> >     rank0 {
> >         compatible = "jedec,lpddr2";
> >         density = <2048>;
> >         channel-io-width = <32>;
> >         io-width = <16>;
> >     };
> >     rank1 {
> >         compatible = "jedec,lpddr2";
> >         density = <1024>;
> >         channel-io-width = <32>;
> >         io-width = <16>;
> >     };
> > };

Two comments about the above:

1. It seems like the top-level node should have a compatible of some
type. Without that I guess you're just relying on people to find it
based on the name of the node?

2. Why not put the `channel-io-width` property in the channel? Then
you don't need to repeat it for each rank that's under the channel?

If you do that then I think you're actually not making _any_
modifications to the existing lpddr2 bindings, right? You're just
adding a new top-level node that would include the memory nodes under
it.


Although maybe some extra things you might want to add that would
affect the nodes in it:

1. In the above the two ranks are in series, right? ...with a chip
select to select rank0 vs rank1? From how SPI works I'd expect that to
be represented using "reg", AKA:

lpddr2-channel0 {
    compatible = "jdec,lpddr2-channel";
    #address-cells = <1>;
    #size-cells = <0>;
    channel-io-width = <32>;

    rank@0 {
        reg = <0>;
        compatible = "jedec,lpddr2";
        density = <2048>;
        io-width = <16>;
    };
    rank@1 {
        reg = <1>;
        compatible = "jedec,lpddr2";
        density = <1024>;
        io-width = <16>;
    };
};

2. I guess if you had two things in parallel you'd want to know how?
Maybe if you had 4 8-bit chips connected to a 32-bit channel maybe
it'd look like this:

lpddr2-channel0 {
    compatible = "jdec,lpddr2-channel";
    #address-cells = <2>;
    #size-cells = <0>;
    channel-io-width = <32>;

    rank@0,0 {
        reg = <0 0>;
        compatible = "jedec,lpddr2";
        density = <2048>;
        io-width = <8>;
    };
    rank@0,8 {
        reg = <0 8>;
        compatible = "jedec,lpddr2";
        density = <2048>;
        io-width = <8>;
    };
    rank@0,16 {
        reg = <0 16>;
        compatible = "jedec,lpddr2";
        density = <2048>;
        io-width = <8>;
    };
    rank@0,24 {
        reg = <0 24>;
        compatible = "jedec,lpddr2";
        density = <2048>;
        io-width = <8>;
    };
};

...and I guess you could have things that include serial and parallel hookups...


> > lpddr2-channel0 {

nit that you define "channel0" twice in your example.


> >     rank0 {
> >         compatible = "jedec,lpddr2";
> >         density = <2048>;
> >         channel-io-width = <32>;
> >         io-width = <16>;
> >     };
> >     rank1 {
> >         compatible = "jedec,lpddr2";
> >         density = <1024>;
> >         channel-io-width = <32>;
> >         io-width = <16>;
> >     };
> > };
> >
> > This would be describing a dual-channel, dual-rank layout where each
> > 32-bit channel is connected to two 16-bit LPDDR chips in series. The
> > total capacity would be (2048 Mbits * (32/16) chips + 1024 Mbits *
> > (32/16) chips) * 2 channels = 12Gbits.

Just to make sure I'm understanding things: in your hypothetical
example we're effectively wasting half of the SDRAM bandwidth of the
controller, right? So while what you describe is legal you'd get a
much more performant system by hooking the two big chips in parallel
on one channel and the two small chips in parallel on the other
channel. That would effectively give you a 64-bit wide bus as opposed
to the 32-bit wide bus that you describe.


> > Does this seem reasonable? If nobody has any objections, I can draft
> > up a real patch to change the respective bindings. (The two existing
> > uses in platform device trees would stay how they are until the
> > respective platform maintainers choose to update them, since only they
> > would know the exact configuration. They wouldn't technically violate
> > the changed binding since they still contain the same properties
> > (other than "channel-io-width" which could be declared optional), but
> > they wouldn't represent the total memory layout.)
> >
> > (Also, btw, would it make sense to use this opportunity to combine the
> > "jedec,lpddr2" and "jedec,lpddr3" bindings into a single document?
> > They contain all the same properties and I think it makes sense to
> > keep them in sync, so duplicating the documentation is just
> > unnecessary maintenance overhead. I would also like to add a
> > "jedec,lpddr4" binding that has the same properties.)

I'm happy to let others chime in, but one way to do this would be to
put the super common properties (density, width, etc) in a common file
and have it "included" by everyone else. See
`bindings/spi/spi-controller.yaml` and then see how all the SPI
controllers "reference" that.

Once you have that you could create a "jedec,lpddr4" bindings that at
least contained the common bits even if it didn't contain all the
LPDDR4-specific details and later someone could add in all those extra
details.


> > [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr2.yaml
> > [2] https://media-www.micron.com/-/media/client/global/documents/products/data-sheet/dram/mobile-dram/low-power-dram/lpddr2/2gb_automotive_lpddr2_u89n.pdf

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-15 19:33   ` Doug Anderson
@ 2022-06-15 21:27     ` Julius Werner
  2022-06-15 22:33       ` Doug Anderson
  0 siblings, 1 reply; 21+ messages in thread
From: Julius Werner @ 2022-06-15 21:27 UTC
  To: Doug Anderson
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

> Two comments about the above:
>
> 1. It seems like the top-level node should have a compatible of some
> type. Without that I guess you're just relying on people to find it
> based on the name of the node?
>
> 2. Why not put the `channel-io-width` property in the channel? Then
> you don't need to repeat it for each rank that's under the channel?

Yes, we could do it that way. That seemed a bit more complicated to
me, but if there's precedent for that in other devices it's probably
the right thing.

> 1. In the above the two ranks are in series, right? ...with a chip
> select to select rank0 vs rank1? From how SPI works I'd expect that to
> be represented using "reg", AKA:

I wouldn't call it "in series" (rank is just a separate dimension of
its own, in my mental model) but yes, if you think they should also be
named with a property inside the node (and not just distinguished by
node name), we can do that. Using "reg" for this feels a bit odd to
me, but if that's common device tree practice we can do it that way.

> 2. I guess if you had two things in parallel you'd want to know how?
> Maybe if you had 4 8-bit chips connected to a 32-bit channel maybe
> it'd look like this: [...]

I think the channel-io-width mechanism is sufficient to distinguish
this (by dividing by io-width), so I don't think there's anything to
gain from listing each of these parallel chips separately. This also
more closely reflects the way the memory training firmware that's
writing these entries actually sees the system. The way I understand
it, from the memory controller's perspective there's actually no
difference between talking to a single 32-bit chip or two 16-bit chips
in parallel -- there's no difference in register settings or anything,
both software and hardware are totally unaware of this. This is all
just implemented by wiring the respective components together
correctly in the board layout (split the DQ pins between the two
chips, and short all the other pins like clock and chip select
together). When reading the mode register value, the controller only
reads the first chip's register (which is connected to DQ[0:7]). When
writing a mode register, the one Write Mode Register cycle will write
all chips at once (because the written value is transferred through
the column address pins which are shorted together between all chips).
So if we were to pretend in the FDT that we had separate density and
io-width values for each chip, that's kinda disingenuous, because the
firmware can only read one of them and just assumes that it applies to
all chips in parallel on that channel. The only way the firmware could
know how many chips there are in parallel would also be by dividing
the width of its channel by the io-width reported by the chip -- so I
think it would be more honest there to just report those two "original
source" values to the kernel / userspace and let them make that
deduction themselves if they care to.
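
In code, that deduction is just (an illustrative helper, not anything
that exists in the kernel today):

static unsigned int lpddr_chips_per_channel(unsigned int channel_io_width,
                                            unsigned int io_width)
{
    /* e.g. a 32-bit channel wired to x16 parts -> 2 chips in parallel */
    return channel_io_width / io_width;
}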

> ...and I guess you could have things that include serial and parallel hookups...

Sorry, just to avoid having more confusion here: there is no "serial"
dimension to this as far as I'm aware (in my original email I called
the "several chips per channel" thing "in series", but you are right
that it would really be more accurate to call it "in parallel").
There's only three dimensions: a) multiple channels (totally separate
sets of pins coming out of the controller), b) multiple chips per
channel (splitting e.g. 32 pins from the controller onto two 16-pin
parts), and c) multiple ranks within each chip (which chip select pin
is asserted in each access cycle).

> > > This would be describing a dual-channel, dual-rank layout where each
> > > 32-bit channel is connected to two 16-bit LPDDR chips in series. The
> > > total capacity would be (2048 Mbits * (32/16) chips + 1024 Mbits *
> > > (32/16) chips) * 2 channels = 12Gbits.
>
> Just to make sure I'm understanding things: in your hypothetical
> example we're effectively wasting half of the SDRAM bandwidth of the
> controller, right? So while what you describe is legal you'd get a
> much more performant system by hooking the two big chips in parallel
> on one channel and the two small chips in parallel on the other
> channel. That would effectively give you a 64-bit wide bus as opposed
> to the 32-bit wide bus that you describe.

No, I don't think you're wasting bandwidth. In my example the
controller has two 32-bit channels, so it always uses 64 bits of
bandwidth in total. There's no asymmetry in the "chips per channel"
dimension in my example (maybe that was a misunderstanding due to our
different use of "in series" vs "in parallel") -- in fact, there can
never be asymmetry in that dimension, when you split a channel onto
more than one chip then those chips always must be exactly equal in
geometry and timings (because, as mentioned above, they all get
initialized the same way with parallel Write Mode Register commands).
Asymmetry can only come in at the rank or channel dimension.
(Asymmetry there may have a minor performance penalty since you'd be
limiting the amount of rank or channel interleaving the controller can
do, but it would be an indirect penalty that depends on the access
pattern and not be anywhere near as bad as "half the bandwidth".)

Anyway, whether it's a good idea or not, these parts definitely do
exist and get sold that way. I can't find an example with a public
datasheet right now, but e.g. the MT53E1536M32DDNQ-046 WT:A part
offers two 16-bit channels that have two ranks each, where rank 0 is 8
Gbits and rank 1 is 16 Gbits for each channel (6 GB part in total).

> I'm happy to let others chime in, but one way to do this would be to
> put the super common properties (density, width, etc) in a common file
> and have it "included" by everyone else. See
> `bindings/spi/spi-controller.yaml` and then see how all the SPI
> controllers "reference" that.

Okay, that should work. I don't think there would be any differences
other than the compatible strings right now (and maybe which values
are valid for each property... not sure if that can be distinguished
while still including shared definitions?), but I can write them as
three dummy binding files that contain nothing but a compatible string
and an include.

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-15 21:27     ` Julius Werner
@ 2022-06-15 22:33       ` Doug Anderson
  2022-06-15 23:24         ` Julius Werner
  0 siblings, 1 reply; 21+ messages in thread
From: Doug Anderson @ 2022-06-15 22:33 UTC
  To: Julius Werner
  Cc: Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su, Rob Herring,
	LKML, open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

Hi,

On Wed, Jun 15, 2022 at 2:27 PM Julius Werner <jwerner@chromium.org> wrote:
>
> > Two comments about the above:
> >
> > 1. It seems like the top-level node should have a compatible of some
> > type. Without that I guess you're just relying on people to find it
> > based on the name of the node?
> >
> > 2. Why not put the `channel-io-width` property in the channel? Then
> > you don't need to repeat it for each rank that's under the channel?
>
> Yes, we could do it that way. That seemed a bit more complicated to
> me, but if there's precedent for that in other devices it's probably
> the right thing.
>
> > 1. In the above the two ranks are in series, right? ...with a chip
> > select to select rank0 vs rank1? From how SPI works I'd expect that to
> > be represented using "reg", AKA:
>
> I wouldn't call it "in series" (rank is just a separate dimension of
> its own, in my mental model) but yes, if you think they should also be
> named with a property inside the node (and not just distinguished by
> node name), we can do that. Using "reg" for this feels a bit odd to
> me, but if that's common device tree practice we can do it that way.

Definitely should listen to real DT maintainers and not just me--it
just sounded like it matched how SPI did things where the chip select
was the "reg".


> > 2. I guess if you had two things in parallel you'd want to know how?
> > Maybe if you had 4 8-bit chips connected to a 32-bit channel maybe
> > it'd look like this: [...]
>
> I think the channel-io-width mechanism is sufficient to distinguish
> this (by dividing by io-width), so I don't think there's anything to
> gain from listing each of these parallel chips separately. This also
> more closely reflects the way the memory training firmware that's
> writing these entries actually sees the system. The way I understand
> it, from the memory controller's perspective there's actually no
> difference between talking to a single 32-bit chip or two 16-bit chips
> in parallel -- there's no difference in register settings or anything,
> both software and hardware are totally unaware of this. This is all
> just implemented by wiring the respective components together
> correctly in the board layout (split the DQ pins between the two
> chips, and short all the other pins like clock and chip select
> together). When reading the mode register value, the controller only
> reads the first chip's register (which is connected to DQ[0:7]). When
> writing a mode register, the one Write Mode Register cycle will write
> all chips at once (because the written value is transferred through
> the column address pins which are shorted together between all chips).
> So if we were to pretend in the FDT that we had separate density and
> io-width values for each chip, that's kinda disingenuous, because the
> firmware can only read one of them and just assumes that it applies to
> all chips in parallel on that channel. The only way the firmware could
> know how many chips there are in parallel would also be by dividing
> the width of its channel by the io-width reported by the chip -- so I
> think it would be more honest there to just report those two "original
> source" values to the kernel / userspace and let them make that
> deduction themselves if they care to.
>
> > ...and I guess you could have things that include serial and parallel hookups...
>
> Sorry, just to avoid having more confusion here: there is no "serial"
> dimension to this as far as I'm aware (in my original email I called
> the "several chips per channel" thing "in series", but you are right
> that it would really be more accurate to call it "in parallel").
> There's only three dimensions: a) multiple channels (totally separate
> sets of pins coming out of the controller), b) multiple chips per
> channel (splitting e.g. 32 pins from the controller onto two 16-pin
> parts), and c) multiple ranks within each chip (which chip select pin
> is asserted in each access cycle).
>
> > > > This would be describing a dual-channel, dual-rank layout where each
> > > > 32-bit channel is connected to two 16-bit LPDDR chips in series. The
> > > > total capacity would be (2048 Mbits * (32/16) chips + 1024 Mbits *
> > > > (32/16) chips) * 2 channels = 12Gbits.
> >
> > Just to make sure I'm understanding things: in your hypothetical
> > example we're effectively wasting half of the SDRAM bandwidth of the
> > controller, right? So while what you describe is legal you'd get a
> > much more performant system by hooking the two big chips in parallel
> > on one channel and the two small chips in parallel on the other
> > channel. That would effectively give you a 64-bit wide bus as opposed
> > to the 32-bit wide bus that you describe.
>
> No, I don't think you're wasting bandwidth. In my example the
> controller has two 32-bit channels, so it always uses 64 bits of
> bandwidth in total. There's no asymmetry in the "chips per channel"
> dimension in my example (maybe that was a misunderstanding due to our
> different use of "in series" vs "in parallel") -- in fact, there can
> never be asymmetry in that dimension, when you split a channel onto
> more than one chip then those chips always must be exactly equal in
> geometry and timings (because, as mentioned above, they all get
> initialized the same way with parallel Write Mode Register commands).
> Asymmetry can only come in at the rank or channel dimension.
> (Asymmetry there may have a minor performance penalty since you'd be
> limiting the amount of rank or channel interleaving the controller can
> do, but it would be an indirect penalty that depends on the access
> pattern and not be anywhere near as bad as "half the bandwidth".)
>
> Anyway, whether it's a good idea or not, these parts definitely do
> exist and get sold that way. I can't find an example with a public
> datasheet right now, but e.g. the MT53E1536M32DDNQ-046 WT:A part
> offers two 16-bit channels that have two ranks each, where rank 0 is 8
> Gbits and rank 1 is 16 Gbits for each channel (6 GB part in total).

Ah, got it. I re-read your email and I see my confusion. I thought in
your example there were a total of 4 chips in the system, but there
were actually 8 chips. You were definitely explicit about it but I
still got confused. :( I was somehow assuming that you were saying
that each channel was 32-bits wide but that we were only connecting
16-bits of it.

OK, then what you have seems OK. Personally I guess I'd find it a
little less confusing if we described it as "num-chips" or something
like that. I guess that would make me feel like the io-width could
stay where it is and describe the full width, like maybe for your
original example:

lpddr2-channel0 {
    compatible = "jdec,lpddr2-channel";
    #address-cells = <1>;
    #size-cells = <0>;

    rank@0 {
        reg = <0>;
        compatible = "jedec,lpddr2";
        density = <2048>;
        io-width = <32>;
        num-chips = <2>;
    };
    rank@1 {
        reg = <1>;
        compatible = "jedec,lpddr2";
        density = <1024>;
        io-width = <32>;
        num-chips = <2>;
    };
};


> > I'm happy to let others chime in, but one way to do this would be to
> > put the super common properties (density, width, etc) in a common file
> > and have it "included" by everyone else. See
> > `bindings/spi/spi-controller.yaml` and then see how all the SPI
> > controllers "reference" that.
>
> Okay, that should work. I don't think there would be any differences
> other than the compatible strings right now (and maybe which values
> are valid for each property... not sure if that can be distinguished
> while still including shared definitions?), but I can write them as
> three dummy binding files that contain nothing but a compatible string
> and an include.

They do have different sets of values valid for each property. The
properties are annoyingly not sorted consistently with each other, but
I think there are also different sets of properties, aren't there? Like
I only see tRASmin-min-tck in the LPDDR2 one and not LPDDR3.

I was suggesting keeping most of the current stuff separate between
DDR2 / DDR3 / DDR4 and only putting the bare minimum "this is clearly
something you'd describe for any memory chip" in a common place...

-Doug

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-15 22:33       ` Doug Anderson
@ 2022-06-15 23:24         ` Julius Werner
  0 siblings, 0 replies; 21+ messages in thread
From: Julius Werner @ 2022-06-15 23:24 UTC
  To: Doug Anderson
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

> OK, then what you have seems OK. Personally I guess I'd find it a
> little less confusing if we described it as "num-chips" or something
> like that.

Yeah, we can do that too if people prefer it; that just means the
firmware writing the entry needs to do that math. But while it makes
the chip count more obvious, it makes it less obvious what the actual
memory channel width of the memory controller is, so I think it's
sort of a trade-off either way (I feel like reporting the channel
width would be closer to describing the raw topology as seen by the
memory training firmware, and leaving interpretations up to the
kernel/userspace).

> They do have different sets of values valid for each property. The
> properties are annoyingly not sorted consistently with each other, but
> I think there are also different sets of properties, aren't there? Like
> I only see tRASmin-min-tck in the LPDDR2 one and not LPDDR3.

Okay, I haven't looked closely into the timing part. If there are
notable differences, let's keep that separate.

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-15  2:28 ` Julius Werner
  2022-06-15 19:33   ` Doug Anderson
@ 2022-06-18  2:17   ` Krzysztof Kozlowski
  2022-06-24  9:45   ` Krzysztof Kozlowski
  2 siblings, 0 replies; 21+ messages in thread
From: Krzysztof Kozlowski @ 2022-06-18  2:17 UTC
  To: Julius Werner, Krzysztof Kozlowski
  Cc: Dmitry Osipenko, Jian-Jia Su, Doug Anderson, Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

On 14/06/2022 19:28, Julius Werner wrote:
> Sorry, wrong email for Krzysztof.

You need to base your upstream work on the upstream tree. My email
address changed like three months ago...

> 
> On Tue, Jun 14, 2022 at 7:25 PM Julius Werner <jwerner@chromium.org> wrote:
>>
>> We (Chromium OS) have been trying to find a way to pass LPDDR memory
>> chip information that is available to the firmware through the FDT
>> (mostly for userspace informational purposes, for now). We have been
>> using and expanding the existing "jedec,lpddr2" and "jedec,lpddr3"
>> bindings for this (e.g. [1]). The goal is to be able to identify the
>> memory layout of the system (what the parts look like, how they're tied
>> together, how much capacity there is in total) as accurately as
>> possible from software-probed values.
>>
>> The existing bindings contain the fields "io-width" and "density",
>> which is terminology directly matching what an LPDDR chip can report
>> to firmware through the "Mode Register" interface, specifically MR8.
>> (The LPDDR specs describing this are not public, but you can see the
>> register definitions in most LPDDR chip datasheets that can be
>> randomly found online, e.g. [2] page 37.) The code in
>> drivers/memory/of_memory.c also suggests that these are supposed to
>> directly correspond to the MR8 values read from the chip, since when
>> of_lpddr2_get_info() copies the device tree values into struct
>> lpddr2_info, it encodes them in a format that directly matches the
>> mode register bit field patterns.
>>
>> The problem with this is that each individual LPDDR chip has its own
>> set of mode registers (per rank) that only describe the density of
>> that particular chip (rank). The host memory controller may have
>> multiple channels (each of which is basically an entirely separate set
>> of physical LPDDR pins on the board), a single channel may be
>> connected to multiple LPDDR chips (e.g. if the memory controller has
>> an outgoing 32-bit channel, that channel could be tied to two 16-bit
>> LPDDR chips by tying the low 16 bits to one and the high 16 bits to
>> the other), and then each of those chips may offer multiple
>> independent ranks (which rank is being accessed at a given time is
>> controlled by a separate chip select pin).
>>
>> So if we just have one "io-width" and one "density" field in the FDT,
>> there's no way to figure out how much memory is actually
>> connected in total, because that only describes a single LPDDR chip.


Isn't the memory node used for that purpose - to identify how much
memory you have in total?

>> Worse, there may be chips where different ranks have different
>> densities (e.g. a 6GB dual-rank chip with one 4GB and one 2GB rank),
>> and different channels could theoretically be connected to chips of
>> completely different manufacturers.

>>
>> We need to be able to report the information that's currently encoded
>> in the "jedec,lpddr2" binding separately for each channel+rank
>> combination, and we need to be able to tell how many LPDDR chips are
>> combined under a single memory channel. For the former, I'd suggest
>> creating a separate FDT node for each channel, and then creating
>> subnodes under those for each rank that implement the binding. For the
>> latter, I would suggest adding a new property "channel-io-width" which

No, because io-width is a standard property, so it should be used
instead. It could be defined in the channel node.

I'll think about it later, although it would be easier if you just
bounced the message to me instead of forwarding it. Much easier to read...


Best regards,
Krzysztof

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-15  2:28 ` Julius Werner
  2022-06-15 19:33   ` Doug Anderson
  2022-06-18  2:17   ` Krzysztof Kozlowski
@ 2022-06-24  9:45   ` Krzysztof Kozlowski
  2022-06-30  1:03     ` Julius Werner
  2 siblings, 1 reply; 21+ messages in thread
From: Krzysztof Kozlowski @ 2022-06-24  9:45 UTC
  To: Julius Werner, Krzysztof Kozlowski
  Cc: Dmitry Osipenko, Jian-Jia Su, Doug Anderson, Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

On 15/06/2022 04:28, Julius Werner wrote:
> Sorry, wrong email for Krzysztof.
> 
> On Tue, Jun 14, 2022 at 7:25 PM Julius Werner <jwerner@chromium.org> wrote:
>>
>> We (Chromium OS) have been trying to find a way to pass LPDDR memory
>> chip information that is available to the firmware through the FDT
>> (mostly for userspace informational purposes, for now). We have been
>> using and expanding the existing "jedec,lpddr2" and "jedec,lpddr3"
>> bindings for this (e.g. [1]). The goal is to be able to identify the
>> memory layout of the system (what the parts look like, how they're tied
>> together, how much capacity there is in total) as accurately as
>> possible from software-probed values.
>>
>> The existing bindings contain the fields "io-width" and "density",
>> which is terminology directly matching what an LPDDR chip can report
>> to firmware through the "Mode Register" interface, specifically MR8.
>> (The LPDDR specs describing this are not public, but you can see the
>> register definitions in most LPDDR chip datasheets that can be
>> randomly found online, e.g. [2] page 37.) The code in
>> drivers/memory/of_memory.c also suggests that these are supposed to
>> directly correspond to the MR8 values read from the chip, since when
>> of_lpddr2_get_info() copies the device tree values into struct
>> lpddr2_info, it encodes them in a format that directly matches the
>> mode register bit field patterns.
>>
>> The problem with this is that each individual LPDDR chip has its own
>> set of mode registers (per rank) that only describe the density of
>> that particular chip (rank). The host memory controller may have
>> multiple channels (each of which is basically an entirely separate set
>> of physical LPDDR pins on the board), a single channel may be
>> connected to multiple LPDDR chips (e.g. if the memory controller has
>> an outgoing 32-bit channel, that channel could be tied to two 16-bit
>> LPDDR chips by tying the low 16 bits to one and the high 16 bits to
>> the other), and then each of those chips may offer multiple
>> independent ranks (which rank is being accessed at a given time is
>> controlled by a separate chip select pin).
>>
>> So if we just have one "io-width" and one "density" field in the FDT,
>> there's no way to figure out how much memory is actually
>> connected in total, because that only describes a single LPDDR chip.
>> Worse, there may be chips where different ranks have different
>> densities (e.g. a 6GB dual-rank chip with one 4GB and one 2GB rank),
>> and different channels could theoretically be connected to chips of
>> completely different manufacturers.
>>
>> We need to be able to report the information that's currently encoded
>> in the "jedec,lpddr2" binding separately for each channel+rank
>> combination, and we need to be able to tell how many LPDDR chips are
>> combined under a single memory channel. 

Why?

At the beginning of your message you kind of mixed two different usages:
1. Knowing the topology of the memory.
2. Figuring out total memory.

Implementing (1) above would probably solve your (2) use case. But if
you only need (2), do you really need to define the entire topology?

>> For the former, I'd suggest
>> creating a separate FDT node for each channel, and then creating
>> subnodes under those for each rank that implement the binding. For the
>> latter, I would suggest adding a new property "channel-io-width" which
>> describes the width of the channel from the host memory controller's
>> point of view, so that you can divide that property by the already
>> existing "io-width" property to figure out how many parts are tied
>> together in series in a single channel. The final layout, then, would
>> look something like this:
>>
>> lpddr2-channel0 {

Looks reasonable.

This should then be:
channel@0

and the children as well (so rank@0)

>>     rank0 {
>>         compatible = "jedec,lpddr2";
>>         density = <2048>;
>>         channel-io-width = <32>;
>>         io-width = <16>;
>>     };
>>     rank1 {
>>         compatible = "jedec,lpddr2";
>>         density = <1024>;
>>         channel-io-width = <32>;
>>         io-width = <16>;
>>     };

You also need a timings node. I don't think it would be different for
each of the ranks, would it?

>> };
>> lpddr2-channel0 {
>>     rank0 {
>>         compatible = "jedec,lpddr2";
>>         density = <2048>;
>>         channel-io-width = <32>;
>>         io-width = <16>;
>>     };
>>     rank1 {
>>         compatible = "jedec,lpddr2";
>>         density = <1024>;
>>         channel-io-width = <32>;
>>         io-width = <16>;
>>     };
>> };
>>
>> This would be describing a dual-channel, dual-rank layout where each
>> 32-bit channel is connected to two 16-bit LPDDR chips in series. The
>> total capacity would be (2048 Mbits * (32/16) chips + 1024 Mbits *
>> (32/16) chips) * 2 channels = 12Gbits.
>>
>> Does this seem reasonable? If nobody has any objections, I can draft
>> up a real patch to change the respective bindings. (The two existing
>> uses in platform device trees would stay how they are until the
>> respective platform maintainers choose to update them, since only they
>> would know the exact configuration. They wouldn't technically violate
>> the changed binding since they still contain the same properties
>> (other than "channel-io-width" which could be declared optional), but
>> they wouldn't represent the total memory layout.)
>>
>> (Also, btw, would it make sense to use this opportunity to combine the
>> "jedec,lpddr2" and "jedec,lpddr3" bindings into a single document?

These bindings are quite different, so combining would result in a big
allOf. I am not sure if there is benefit in that.

>> They contain all the same properties and I think it makes sense to
>> keep them in sync, so duplicating the documentation is just
>> unnecessary maintenance overhead. I would also like to add a
>> "jedec,lpddr4" binding that has the same properties.)
>>
>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr2.yaml
>> [2] https://media-www.micron.com/-/media/client/global/documents/products/data-sheet/dram/mobile-dram/low-power-dram/lpddr2/2gb_automotive_lpddr2_u89n.pdf


Best regards,
Krzysztof

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-24  9:45   ` Krzysztof Kozlowski
@ 2022-06-30  1:03     ` Julius Werner
  2022-06-30  8:05       ` Krzysztof Kozlowski
  0 siblings, 1 reply; 21+ messages in thread
From: Julius Werner @ 2022-06-30  1:03 UTC
  To: Krzysztof Kozlowski
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Doug Anderson, Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

> You need to base your upstream work on the upstream tree. My email
> address changed like three months ago...

Apologies, I just used the same email that I sent patches to last
year. Once I write an actual patch for this issue, I'll make sure to
use get_maintainer.pl.

> >> We need to be able to report the information that's currently encoded
> >> in the "jedec,lpddr2" binding separately for each channel+rank
> >> combination, and we need to be able to tell how many LPDDR chips are
> >> combined under a single memory channel.
>
> Why?
>
> At the beginning of your message you kind of mixed two different usages:
> 1. Knowing the topology of the memory.
> 2. Figuring out total memory.
>
> Implementing (1) above would probably solve your (2) use case. But if
> you only need (2), do you really need to define the entire topology?

Okay, sorry, I wasn't clear here. We are really interested in topology
(for documentation and SKU identification purposes), so "just"
figuring out total memory is not enough. The point I wanted to make is
more that we want to be able to identify the whole topology down to
the exact number of components on each layer, so the existing binding
(which just defines one LPDDR chip without explaining how many
instances of it there are and how they're hooked up together) is not
enough. Saying "I want to be able to figure out total memory from
this" is more like an easy way to verify that all the information
we're looking for is available... i.e. if all the LPDDR chips and
their amounts and relations to each other are described in a way
that's detailed enough that I can total up their density values and
come up with the same number that the /memory node says, then I know
we're not missing any layer of information. But ultimately I'm
interested in being able to read out the whole topology, not just
total capacity.

>> For the latter, I would suggest adding a new property "channel-io-width" which
>
> No, because io-width is a standard property, so it should be used
> instead. It could be defined in the channel node.

What exactly do you mean by "standard property" -- do you mean in an
LPDDR context, or for device tree bindings in general? In other device
tree bindings, the only thing I can find is `reg-io-width`, so that's
not quite the same (and wouldn't seem to preclude calling a field here
`channel-io-width`, since the width that's talking about is not the
width of a register). In LPDDR context, the term "IO width" mostly
appears specifically for the bit field in Mode Register 8 that
describes the number of DQ pins going into one individual LPDDR chip.
The field that I need to encode for the channel here is explicitly
*not* that, it's the number of DQ pins coming *out* of the LPDDR
controller, and as explained in my original email those two numbers
need not necessarily be the same when multiple LPDDR chips are hooked
up in parallel. So, yes, I could call both of these properties
`io-width` with one in the rank node and one in the channel node...
but I think giving the latter one a different name (e.g.
`channel-io-width`) would be better to avoid confusion and provide a
hint that there's an important difference between these numbers.

> You also need a timings node. I don't think it would be different for
> each of the ranks, would it?

I think it might be? I'm honestly not a memory expert so I'm not
really sure (Jian-Jia in CC might know this?), but since different
ranks can be asymmetric (even when they're on the same part), I could
imagine that, say, the larger rank might need slightly longer
precharge time or something like that. They at least all implement a
separate set of mode registers, so they could theoretically be
configured with different latency settings through those.

> >> (Also, btw, would it make sense to use this opportunity to combine the
> >> "jedec,lpddr2" and "jedec,lpddr3" bindings into a single document?
>
> These bindings are quite different, so combining would result in a big
> allOf. I am not sure if there is benefit in that.

They should basically be 100% identical outside of the timings. I can
see that jedec,lpddr2 is currently missing the manufacturer-id
property; that's probably an oversight -- Mode Register 5 with that ID
exists for LPDDR2 just as well as for LPDDR3, and we're already
passing the revision IDs, which is kinda useless without also passing
the manufacturer ID as well (because the revision IDs are
vendor-specific). So merging the bindings would fix that. The only
other difference I can see are the deprecated
`revision-id1`/`revision-id2` fields for jedec,lpddr2 -- if I use a
property inclusion mechanism like Doug suggested, those could stay
separate in jedec,lpddr2 only (since they're deprecated anyway and
replaced by `revision-id` in the combined bindings).

For the timings, I'm okay with keeping them separate.

* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-30  1:03     ` Julius Werner
@ 2022-06-30  8:05       ` Krzysztof Kozlowski
  2022-07-01  0:52         ` Julius Werner
  0 siblings, 1 reply; 21+ messages in thread
From: Krzysztof Kozlowski @ 2022-06-30  8:05 UTC
  To: Julius Werner
  Cc: Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su, Doug Anderson,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

On 30/06/2022 03:03, Julius Werner wrote:
>>> For the latter, I would suggest adding a new property "channel-io-width" which
>>
>> No, because io-width is a standard property, so it should be used
>> instead. It could be defined in the channel node.
> 
> What exactly do you mean by "standard property" -- do you mean in an
> LPDDR context, or for device tree bindings in general? In other device
> tree bindings, the only thing I can find is `reg-io-width`,

I had the impression I saw io-width outside of the LPDDR bindings, but
apparently it's only reg-io-width.

>  so that's
> not quite the same (and wouldn't seem to preclude calling a field here
> `channel-io-width`, since the width that's talking about is not the
> width of a register).

reg-io-width is not only about register width, but also about the width
of the access size or the width of the IO.


> In LPDDR context, the term "IO width" mostly
> appears specifically for the bit field in Mode Register 8 that
> describes the number of DQ pins going into one individual LPDDR chip.
> The field that I need to encode for the channel here is explicitly
> *not* that, it's the amount of DQ pins coming *out* of the LPDDR
> controller, and as explained in my original email those two numbers
> need not necessarily be the same when multiple LPDDR chips are hooked
> up in parallel. So, yes, I could call both of these properties
> `io-width` with one in the rank node and one in the channel node...
> but I think giving the latter one a different name (e.g.
> `channel-io-width`) would be better to avoid confusion and provide a
> hint that there's an important difference between these numbers.

Send the bindings, we'll see what the DT binding maintainers will say. :)

> 
>> You also need a timings node. I don't think it would be different for
>> each of ranks, would it?
> 
> I think it might be? I'm honestly not a memory expert so I'm not
> really sure (Jian-Jia in CC might know this?), but since different
> ranks can be asymmetric (even when they're on the same part), I could
> imagine that, say, the larger rank might need slightly longer
> precharge time or something like that. They at least all implement a
> separate set of mode registers, so they could theoretically be
> configured with different latency settings through those.

This feels weird... although maybe one or a few of the timing
parameters could be different.

How do asymmetric SDRAMs report density? This is a field with
fixed/enum values, so does it mean a two-rank asymmetric module has
two registers, one per rank, with the choice depending on chip select?

> 
>>>> (Also, btw, would it make sense to use this opportunity to combine the
>>>> "jedec,lpddr2" and "jedec,lpddr3" bindings into a single document?
>>
>> These bindings are quite different, so combining would result in big
>> allOf. I am not sure if there is benefit in that.
> 
> They should basically be 100% identical outside of the timings. I can
> see that jedec,lpddr2 is currently missing the manufacturer-id
> property; that's probably an oversight -- Mode Register 5 with that ID
> exists for LPDDR2 just as well as for LPDDR3, and we're already
> passing the revision IDs, which is kinda useless without also passing
> the manufacturer ID (because the revision IDs are
> vendor-specific).

Manufacturer ID is taken from compatible. LPDDR3 has it deprecated.

> So merging the bindings would fix that. 

Nothing to fix, it was by choice.

> The only
> other difference I can see is the deprecated
> `revision-id1`/`revision-id2` fields for jedec,lpddr2 -- if I use a
> property inclusion mechanism like Doug suggested, those could stay
> separate in jedec,lpddr2 only (since they're deprecated anyway and
> replaced by `revision-id` in the combined bindings).
> 
> For the timings, I'm okay with keeping them separate.


Best regards,
Krzysztof


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-30  8:05       ` Krzysztof Kozlowski
@ 2022-07-01  0:52         ` Julius Werner
  2022-07-01  6:57           ` Krzysztof Kozlowski
  0 siblings, 1 reply; 21+ messages in thread
From: Julius Werner @ 2022-07-01  0:52 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Doug Anderson, Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

> How do asymmetric SDRAMs report density? This is a field with
> fixed/enum values, so does it mean a two-rank asymmetric module has
> two registers, one per rank, with the choice depending on chip select?

Yes, each rank has a completely separate set of mode registers.

> Manufacturer ID is taken from compatible. LPDDR3 has it deprecated.

Oh! Oh no, I only just saw that. I wish you had CCed us on that patch. :/

That really doesn't work for our use case, we can't generate a
specific compatible string for each part number. This may work when
your board is only using a single memory part and you can hardcode
that in the DTB blob bundled with the kernel, but we are trying to do
runtime identification between dozens of different parts on our
boards. The whole point of us wanting to add these bindings is that we
want to have the firmware inject the raw values it can read from mode
registers into the device tree (with just the compatible string
"jedec,lpddr3"), so that we can then delegate the task of matching
those values to part numbers to a userspace process. We don't want to
hardcode long tables for ID-to-string matching that have to be updated
all the time in our constrained firmware space.
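
Concretely, the node our firmware would like to emit looks roughly
like this (raw mode-register reads; all values made up for
illustration):

    lpddr3 {
        compatible = "jedec,lpddr3";
        manufacturer-id = <255>;    /* raw MR5 */
        io-width = <16>;            /* from MR8 */
        density = <4096>;           /* from MR8 */
    };

Userspace can then map <255> to a vendor name with a table that is
trivially updatable, unlike the firmware.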

Can we please revert that deprecation and at least keep the property
around as optional?


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-01  0:52         ` Julius Werner
@ 2022-07-01  6:57           ` Krzysztof Kozlowski
  2022-07-08  1:20             ` Julius Werner
  0 siblings, 1 reply; 21+ messages in thread
From: Krzysztof Kozlowski @ 2022-07-01  6:57 UTC (permalink / raw)
  To: Julius Werner
  Cc: Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su, Doug Anderson,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

On 01/07/2022 02:52, Julius Werner wrote:
>> How do asymmetric SDRAMs report density? This is a field with
>> fixed/enum values, so does it mean a two-rank asymmetric module has
>> two registers, one per rank, with the choice depending on chip select?
> 
> Yes, each rank has a completely separate set of mode registers.

Then I would assume that all lpddr properties can differ between ranks,
including the timings. But probably some SDRAM memory expert should
clarify that.

> 
>> Manufacturer ID is taken from compatible. LPDDR3 has it deprecated.
> 
> Oh! Oh no, I only just saw that. I wish you had CCed us on that patch. :/
> 
> That really doesn't work for our use case, we can't generate a
> specific compatible string for each part number. This may work when
> your board is only using a single memory part and you can hardcode
> that in the DTB blob bundled with the kernel, but we are trying to do
> runtime identification between dozens of different parts on our
> boards. The whole point of us wanting to add these bindings is that we
> want to have the firmware inject the raw values it can read from mode
> registers into the device tree (with just the compatible string
> "jedec,lpddr3"),

You cannot have jedec,lpddr3 alone. You need a specific compatible.

> so that we can then delegate the task of matching
> those values to part numbers to a userspace process. 

Constructing a vendor string from mode registers is like 10 lines of C
code, so this is not a problem. The trouble would be with the device
part of the compatible.

> We don't want to
> hardcode long tables for ID-to-string matching that have to be updated
> all the time in our constrained firmware space.

I understand.

> Can we please revert that deprecation and at least keep the property
> around as optional?

Yes, we can. You still would need to generate the compatible according
to the current bindings. Whether we can change it I am not sure. I think
it depends how much customization is possible per vendor, according to
JEDEC spec. If we never ever have to identify specific part, because
JEDEC spec and registers tell us everything, then we could skip it,
similarly to lpddr2 and jedec,spi-nor.

Best regards,
Krzysztof


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-06-15  2:25 [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings Julius Werner
  2022-06-15  2:28 ` Julius Werner
@ 2022-07-04  8:22 ` Dmitry Osipenko
  1 sibling, 0 replies; 21+ messages in thread
From: Dmitry Osipenko @ 2022-07-04  8:22 UTC (permalink / raw)
  To: Julius Werner, Krzysztof Kozlowski, Jian-Jia Su, Doug Anderson,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

15.06.2022 05:25, Julius Werner wrote:
> We need to be able to report the information that's currently encoded
> in the "jedec,lpddr2" binding separately for each channel+rank
> combination, and we need to be able to tell how many LPDDR chips are
> combined under a single memory channel.

Who needs that information, and why?

To me it's not very useful information without knowing how memory
ranges are mapped to the chips, and even then only kernel drivers
should be able to utilize that info in a meaningful way. What driver
are we talking about?


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-01  6:57           ` Krzysztof Kozlowski
@ 2022-07-08  1:20             ` Julius Werner
  2022-07-10 15:06               ` Krzysztof Kozlowski
  0 siblings, 1 reply; 21+ messages in thread
From: Julius Werner @ 2022-07-08  1:20 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Doug Anderson, Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

> Then I would assume that all lpddr properties can differ between ranks,
> including the timings. But probably some SDRAM memory expert should
> clarify that.

Right, so that's what my proposal does -- separate timings nodes per
rank (and channel).

> > That really doesn't work for our use case, we can't generate a
> > specific compatible string for each part number. This may work when
> > your board is only using a single memory part and you can hardcode
> > that in the DTB blob bundled with the kernel, but we are trying to do
> > runtime identification between dozens of different parts on our
> > boards. The whole point of us wanting to add these bindings is that we
> > want to have the firmware inject the raw values it can read from mode
> > registers into the device tree (with just the compatible string
> > "jedec,lpddr3"),
>
> You cannot have jedec,lpddr3 alone. You need a specific compatible.

Sorry, what do you mean we cannot? Why not? That's the way we need to
do it for our use case. Why shouldn't it work that way? As far as I
understand the binding definition, this is one of the legal compatible
strings for it. (I'm not saying other platforms can't register and
provide specific compatible strings if they want to, of course, but
for our situation that really doesn't work.)

> > so that we can then delegate the task of matching
> > those values to part numbers to a userspace process.
>
> Constructing a vendor string from mode registers is like 10 lines of
> C code, so this is not a problem. The trouble would be with the
> device part of the compatible.

There's potentially 255 different manufacturer codes, and the
assignments may be different for different LPDDR versions. That's a
big string table that we don't want to have to fit in our firmware
flash. Besides, as you said, that still only gives you the vendor...
so then should we use "micron,lpddr3" or "elpida,lpddr3" instead of
"jedec,lpddr3"? Where's the advantage in that?

> > Can we please revert that deprecation and at least keep the property
> > around as optional?
>
> Yes, we can. You still would need to generate the compatible according
> to the current bindings. Whether we can change it I am not sure. I think
> it depends how much customization is possible per vendor, according to
> JEDEC spec. If we never ever have to identify specific part, because
> JEDEC spec and registers tell us everything, then we could skip it,
> similarly to lpddr2 and jedec,spi-nor.

Shouldn't that be decided per use case? In general LPDDR is a pretty
rigid set of standards and memory controllers are generally compatible
with any vendor without hardcoding vendor-specific behavior, so I
don't anticipate that this would be likely (particularly since there
is no "real" kernel device driver that needs to initialize the full
memory controller, after all, these bindings are mostly
informational). Of course there may always be mistakes and broken
devices that need custom handling, and if someone has a platform with
such a case I of course don't want to preclude them from tying special
behavior to a custom compatible string. But why would that mean we
need to make this mandatory for all platforms even if it's not
relevant (and not practically feasible) for them? Why not allow both?

> > We need to be able to report the information that's currently encoded
> > in the "jedec,lpddr2" binding separately for each channel+rank
> > combination, and we need to be able to tell how many LPDDR chips are
> > combined under a single memory channel.
>
> Who needs that information, and why?
>
> To me it's not very useful information without knowing how memory
> ranges are mapped to the chips, and even then only kernel drivers
> should be able to utilize that info in a meaningful way. What driver
> are we talking about?

We're using this for diagnostic purposes, to be able to accurately
report the installed memory configuration to the user or in automated
error reporting. We're planning to just read it from userspace (via
/proc/device-tree) and not actually add a kernel driver for it, but
since it needs to come from the firmware through the device tree it
should have a standardized binding all the same.


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-08  1:20             ` Julius Werner
@ 2022-07-10 15:06               ` Krzysztof Kozlowski
  2022-07-20 23:42                 ` Julius Werner
  0 siblings, 1 reply; 21+ messages in thread
From: Krzysztof Kozlowski @ 2022-07-10 15:06 UTC (permalink / raw)
  To: Julius Werner
  Cc: Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su, Doug Anderson,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

On 08/07/2022 03:20, Julius Werner wrote:
>> Then I would assume that all lpddr properties can differ between ranks,
>> including the timings. But probably some SDRAM memory expert should
>> clarify that.
> 
> Right, so that's what my proposal does -- separate timings nodes per
> rank (and channel).
> 
>>> That really doesn't work for our use case, we can't generate a
>>> specific compatible string for each part number. This may work when
>>> your board is only using a single memory part and you can hardcode
>>> that in the DTB blob bundled with the kernel, but we are trying to do
>>> runtime identification between dozens of different parts on our
>>> boards. The whole point of us wanting to add these bindings is that we
>>> want to have the firmware inject the raw values it can read from mode
>>> registers into the device tree (with just the compatible string
>>> "jedec,lpddr3"),
>>
>> You cannot have jedec,lpddr3 alone. You need a specific compatible.
> 
> Sorry, what do you mean we cannot? Why not? 

Currently the bindings do not allow it.

As for whether this can be changed, I responded in the last paragraph
of my previous email.

> That's the way we need to
> do it for our use case. Why shouldn't it work that way? As far as I
> understand the binding definition, this is one of the legal compatible
> strings for it. (I'm not saying other platforms can't register and
> provide specific compatible strings if they want to, of course, but
> for our situation that really doesn't work.)

Not for the LPDDR3 bindings, but yes for LPDDR2. The inconsistency is
actually a bit confusing.

> 
>>> so that we can then delegate the task of matching
>>> those values to part numbers to a userspace process.
>>
>> Constructing a vendor string from mode registers is like 10 lines of
>> C code, so this is not a problem. The trouble would be with the
>> device part of the compatible.
> 
> There's potentially 255 different manufacturer codes, and the
> assignments may be different for different LPDDR versions. That's a
> big string table that we don't want to have to fit in our firmware
> flash. Besides, as you said, that still only gives you the vendor...
> so then should we use "micron,lpddr3" or "elpida,lpddr3" instead of
> "jedec,lpddr3"? Where's the advantage in that?
> 
>>> Can we please revert that deprecation and at least keep the property
>>> around as optional?
>>
>> Yes, we can. You still would need to generate the compatible according
>> to the current bindings. Whether we can change it I am not sure. I think
>> it depends how much customization is possible per vendor, according to
>> JEDEC spec. If we never ever have to identify specific part, because
>> JEDEC spec and registers tell us everything, then we could skip it,
>> similarly to lpddr2 and jedec,spi-nor.
> 
> Shouldn't that be decided per use case? In general LPDDR is a pretty
> rigid set of standards and memory controllers are generally compatible
> with any vendor without hardcoding vendor-specific behavior, so I
> don't anticipate that this would be likely (particularly since there
> is no "real" kernel device driver that needs to initialize the full
> memory controller, after all, these bindings are mostly
> informational). 

If "decided per use case" is understood as "decided depending on how
the bindings are used", then the answer is rather no. For example, the
Linux implementation is usually not the best argument for shaping the
bindings, and the usual answer to such arguments is: "implementation
does not matter".

If by "use case" you mean actual hardware or specification
characteristics, then yes, of course. This is why I wrote "it depends".

> Of course there may always be mistakes and broken
> devices that need custom handling, and if someone has a platform with
> such a case I of course don't want to preclude them from tying special
> behavior to a custom compatible string. But why would that mean we
> need to make this mandatory for all platforms even if it's not
> relevant (and not practically feasible) for them? Why not allow both?

Depends. If a generic compatible is not enough (i.e. it works only for
your case), then it should never have been added alone.

> 
>>> We need to be able to report the information that's currently encoded
>>> in the "jedec,lpddr2" binding separately for each channel+rank
>>> combination, and we need to be able to tell how many LPDDR chips are
>>> combined under a single memory channel.
>>
>> Who needs that information, and why?
>>
>> To me it's not very useful information without knowing how memory
>> ranges are mapped to the chips, and even then only kernel drivers
>> should be able to utilize that info in a meaningful way. What driver
>> are we talking about?
> 
> We're using this for diagnostic purposes, to be able to accurately
> report the installed memory configuration to the user or in automated
> error reporting. We're planning to just read it from userspace (via
> /proc/device-tree) and not actually add a kernel driver for it, but
> since it needs to come from the firmware through the device tree it
> should have a standardized binding all the same.

Sorry, I don't understand to whom you are replying. It's a reply to
me, but you quoted a few pieces of emails which were not from me. If
you argue/comment/discuss any of my statements, please write your
reply under my quote. That allows me to easily find the relevant point
of the discussion.

Best regards,
Krzysztof


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-10 15:06               ` Krzysztof Kozlowski
@ 2022-07-20 23:42                 ` Julius Werner
  2022-07-27  8:47                   ` Krzysztof Kozlowski
  0 siblings, 1 reply; 21+ messages in thread
From: Julius Werner @ 2022-07-20 23:42 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Doug Anderson, Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

Sorry, got distracted from this for a bit. Sounds like we were pretty
much on the same page about what the updated binding should look like
here; the remaining question was just about the compatible string.

> >> Yes, we can. You still would need to generate the compatible according
> >> to the current bindings. Whether we can change it I am not sure. I think
> >> it depends how much customization is possible per vendor, according to
> >> JEDEC spec. If we never ever have to identify specific part, because
> >> JEDEC spec and registers tell us everything, then we could skip it,
> >> similarly to lpddr2 and jedec,spi-nor.
> >
> > Shouldn't that be decided per use case? In general LPDDR is a pretty
> > rigid set of standards and memory controllers are generally compatible
> > with any vendor without hardcoding vendor-specific behavior, so I
> > don't anticipate that this would be likely (particularly since there
> > is no "real" kernel device driver that needs to initialize the full
> > memory controller, after all, these bindings are mostly
> > informational).
>
> If "decided per use case" is understood as "decided depending on how
> the bindings are used", then the answer is rather no. For example, the
> Linux implementation is usually not the best argument for shaping the
> bindings, and the usual answer to such arguments is: "implementation
> does not matter".
>
> If by "use case" you mean actual hardware or specification
> characteristics, then yes, of course. This is why I wrote "it depends".

By "use case" I mean our particular platform and firmware requirements
-- or rather, the realities of building devices with widely
multi-sourced LPDDR parts. One cannot efficiently build firmware that
can pass an exact vendor-and-part-specific compatible string to Linux
for this binding for every single LPDDR part used on such a platform.
And I don't see why that should be needed, either... that's kinda the
point of having an interoperability standard, after all, that you can
just assume the devices all work according to the same spec and don't
need to hardcode details about each specific instance.

The existing bindings seem to have clearly been designed for platforms
that only ever use a single specific LPDDR part, and in those cases
these issues don't come up, so I assume this choice had just been made
without much thought. But now I would like to reuse them on platforms
where this is a problem, and that's why I would like to either expand
or change the binding to (also) allow a generic compatible string to
solve it. (Whether that means moving all platforms to only use this
generic compatible string, or allowing it side-by-side with the
existing part-specific compatible strings is up to you -- I don't want
to preclude other platforms from doing what they prefer, I just want
to make sure there's some form of workable solution for my platform.
But if you would prefer this to be an all-or-nothing thing, we could
change everything over to a generic compatible string too if you
want.)

> > Of course there may always be mistakes and broken
> > devices that need custom handling, and if someone has a platform with
> > such a case I of course don't want to preclude them from tying special
> > behavior to a custom compatible string. But why would that mean we
> > need to make this mandatory for all platforms even if it's not
> > relevant (and not practically feasible) for them? Why not allow both?
>
> Depends. If a generic compatible is not enough (i.e. it works only for
> your case), then it should never have been added alone.

No, I don't think it would work only for my case -- in fact I think
it's pretty unlikely that we'll ever find a case where this doesn't
work. LPDDR is a very rigid standard that generally needs to be able
to interoperate (at the memory controller and firmware training code
level) without any vendor-specific software quirks. I was just trying
to say that I wouldn't want to stop anybody from adding
vendor-specific quirks for a platform if they really needed to -- but
I don't know of any such case in practice and I doubt it actually
exists. The few existing uses of this binding don't use the compatible
string for anything more than to parse out the manufacturer ID, which
I think would make much more sense to model as a property.


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-20 23:42                 ` Julius Werner
@ 2022-07-27  8:47                   ` Krzysztof Kozlowski
  2022-07-27 14:07                     ` Doug Anderson
  2022-07-28  0:22                     ` Julius Werner
  0 siblings, 2 replies; 21+ messages in thread
From: Krzysztof Kozlowski @ 2022-07-27  8:47 UTC (permalink / raw)
  To: Julius Werner
  Cc: Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su, Doug Anderson,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

On 21/07/2022 01:42, Julius Werner wrote:
> Sorry, got distracted from this for a bit. Sounds like we were pretty
> much on the same page about what the updated binding should look like
> here; the remaining question was just about the compatible string.
> 
>>>> Yes, we can. You still would need to generate the compatible according
>>>> to the current bindings. Whether we can change it I am not sure. I think
>>>> it depends how much customization is possible per vendor, according to
>>>> JEDEC spec. If we never ever have to identify specific part, because
>>>> JEDEC spec and registers tell us everything, then we could skip it,
>>>> similarly to lpddr2 and jedec,spi-nor.
>>>
>>> Shouldn't that be decided per use case? In general LPDDR is a pretty
>>> rigid set of standards and memory controllers are generally compatible
>>> with any vendor without hardcoding vendor-specific behavior, so I
>>> don't anticipate that this would be likely (particularly since there
>>> is no "real" kernel device driver that needs to initialize the full
>>> memory controller, after all, these bindings are mostly
>>> informational).
>>
>> If "decided per use case" is understood as "decided depending on how
>> the bindings are used", then the answer is rather no. For example, the
>> Linux implementation is usually not the best argument for shaping the
>> bindings, and the usual answer to such arguments is: "implementation
>> does not matter".
>>
>> If by "use case" you mean actual hardware or specification
>> characteristics, then yes, of course. This is why I wrote "it depends".
> 
> By "use case" I mean our particular platform and firmware requirements
> -- or rather, the realities of building devices with widely
> multi-sourced LPDDR parts. One cannot efficiently build firmware that
> can pass an exact vendor-and-part-specific compatible string to Linux
> for this binding for every single LPDDR part used on such a platform.

Why not? You want to pass them as numerical values which directly map
to a vendor ID and some part, don't they?

> And I don't see why that should be needed, either... that's kinda the
> point of having an interoperability standard, after all, that you can
> just assume the devices all work according to the same spec and don't
> need to hardcode details about each specific instance.

If we talk about the standard, then the DT is not meant for
autodetectable pieces. These values are autodetectable, so such
properties should not be encoded in the DT.

> 
> The existing bindings seem to have clearly been designed for platforms
> that only ever use a single specific LPDDR part, and in those cases
> these issues don't come up, so I assume this choice had just been made
> without much thought. 

True.

> But now I would like to reuse them on platforms
> where this is a problem, and that's why I would like to either expand
> or change the binding to (also) allow a generic compatible string to
> solve it. (Whether that means moving all platforms to only use this
> generic compatible string, or allowing it side-by-side with the
> existing part-specific compatible strings is up to you -- I don't want
> to preclude other platforms from doing what they prefer, I just want
> to make sure there's some form of workable solution for my platform.
> But if you would prefer this to be an all-or-nothing thing, we could
> change everything over to a generic compatible string too if you
> want.)
> 
>>> Of course there may always be mistakes and broken
>>> devices that need custom handling, and if someone has a platform with
>>> such a case I of course don't want to preclude them from tying special
>>> behavior to a custom compatible string. But why would that mean we
>>> need to make this mandatory for all platforms even if it's not
>>> relevant (and not practically feasible) for them? Why not allow both?
>>
>> Depends. If a generic compatible is not enough (i.e. it works only for
>> your case), then it should never have been added alone.
> 
> No, I don't think it would work only for my case -- in fact I think
> it's pretty unlikely that we'll ever find a case where this doesn't
> work. LPDDR is a very rigid standard that generally needs to be able
> to interoperate (at the memory controller and firmware training code
> level) without any vendor-specific software quirks. I was just trying
> to say that I wouldn't want to stop anybody from adding
> vendor-specific quirks for a platform if they really needed to -- but
> I don't know of any such case in practice and I doubt it actually
> exists. The few existing uses of this binding don't use the compatible
> string for anything more than to parse out the manufacturer ID, which
> I think would make much more sense to model as a property.


Best regards,
Krzysztof


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-27  8:47                   ` Krzysztof Kozlowski
@ 2022-07-27 14:07                     ` Doug Anderson
  2022-07-28  7:35                       ` Krzysztof Kozlowski
  2022-07-28  0:22                     ` Julius Werner
  1 sibling, 1 reply; 21+ messages in thread
From: Doug Anderson @ 2022-07-27 14:07 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

Hi,

On Wed, Jul 27, 2022 at 1:47 AM Krzysztof Kozlowski
<krzysztof.kozlowski@linaro.org> wrote:
>
> On 21/07/2022 01:42, Julius Werner wrote:
> > Sorry, got distracted from this for a bit. Sounds like we were pretty
> > much on the same page about what the updated binding should look like
> > here; the remaining question was just about the compatible string.
> >
> >>>> Yes, we can. You still would need to generate the compatible according
> >>>> to the current bindings. Whether we can change it I am not sure. I think
> >>>> it depends how much customization is possible per vendor, according to
> >>>> JEDEC spec. If we never ever have to identify specific part, because
> >>>> JEDEC spec and registers tell us everything, then we could skip it,
> >>>> similarly to lpddr2 and jedec,spi-nor.
> >>>
> >>> Shouldn't that be decided per use case? In general LPDDR is a pretty
> >>> rigid set of standards and memory controllers are generally compatible
> >>> with any vendor without hardcoding vendor-specific behavior, so I
> >>> don't anticipate that this would be likely (particularly since there
> >>> is no "real" kernel device driver that needs to initialize the full
> >>> memory controller, after all, these bindings are mostly
> >>> informational).
> >>
> >> If "decided per use case" is understood as "decided depending on how
> >> the bindings are used", then the answer is rather no. For example, the
> >> Linux implementation is usually not the best argument for shaping the
> >> bindings, and the usual answer to such arguments is: "implementation
> >> does not matter".
> >>
> >> If by "use case" you mean actual hardware or specification
> >> characteristics, then yes, of course. This is why I wrote "it depends".
> >
> > By "use case" I mean our particular platform and firmware requirements
> > -- or rather, the realities of building devices with widely
> > multi-sourced LPDDR parts. One cannot efficiently build firmware that
> > can pass an exact vendor-and-part-specific compatible string to Linux
> > for this binding for every single LPDDR part used on such a platform.
>
> Why not? You want to pass them as numerical values which directly map
> to a vendor ID and some part, don't they?

If you really want this to be in the "compatible" string, maybe the
right answer is to follow the lead of USB which encodes the VID/PID in
the compatible string
(Documentation/devicetree/bindings/usb/usb-device.yaml). It's solving
this exact same problem of avoiding needing a table translating from
an ID provided by a probeable device to a human-readable string.
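
For reference, usb-device.yaml encodes the two IDs as lowercase hex in
a "usbVID,PID" compatible, so a device node looks something like this
(IDs made up):

    device@1 {
        compatible = "usb1234,5678";
        reg = <1>;
    };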


> > And I don't see why that should be needed, either... that's kinda the
> > point of having an interoperability standard, after all, that you can
> > just assume the devices all work according to the same spec and don't
> > need to hardcode details about each specific instance.
>
> If we talk about the standard, then the DT is not meant for
> autodetectable pieces. These values are autodetectable, so such
> properties should not be encoded in the DT.

In the case of DDR, I think that the firmware can auto-detect them but
not the kernel. So from the kernel's point of view the DDR info should
be in DT, right?

-Doug


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-27  8:47                   ` Krzysztof Kozlowski
  2022-07-27 14:07                     ` Doug Anderson
@ 2022-07-28  0:22                     ` Julius Werner
  2022-07-28  7:38                       ` Krzysztof Kozlowski
  1 sibling, 1 reply; 21+ messages in thread
From: Julius Werner @ 2022-07-28  0:22 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Doug Anderson, Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

> > By "use case" I mean our particular platform and firmware requirements
> > -- or rather, the realities of building devices with widely
> > multi-sourced LPDDR parts. One cannot efficiently build firmware that
> > can pass an exact vendor-and-part-specific compatible string to Linux
> > for this binding for every single LPDDR part used on such a platform.
>
> Why not? You want to pass them as numerical values which directly map
> to a vendor ID and some part, don't they?

Yes, but the current compatible string format also requires the exact
part number, of which there are many thousands and it's impossible to
build a list in advance. Even for vendors, hardcoding 255 strings in a
tight firmware space would be an unnecessary burden. There's also an
update problem -- firmware may be built and signed and burned into ROM
long before the assembly of the final mainboard. Board manufacturers
want to be able to just drop-in replace a newly-sourced LPDDR part in
their existing production line without having to worry if the existing
(and possibly no longer changeable) firmware contains a string table
entry for this part.

If you just want the compatible string to be unique, encoding the
numbers like Doug suggested (e.g. jedec,lpddr3-ff-0100) would work for
us.
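
In device tree terms, with the generic string kept as a fallback, that
would presumably look like this ("ff" being the raw MR5 manufacturer
ID and "0100" the MR6/MR7 revision -- my reading of the suggestion):

    compatible = "jedec,lpddr3-ff-0100", "jedec,lpddr3";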

>> If we talk about the standard, then the DT is not meant for
>> autodetectable pieces. These values are autodetectable, so such
>> properties should not be encoded in the DT.

But the DT is the only interface that we have to pass information from
firmware to kernel and userspace. Where else should these properties
be encoded? They are auto-detectable, but not for the kernel itself
(only for memory-training firmware running in SRAM). Maybe the usual
rules of thumb don't apply here, because unlike all other peripheral
controllers the memory controller is special in that the kernel cannot
simply reinitialize it and get the same information from the original
source again.


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-27 14:07                     ` Doug Anderson
@ 2022-07-28  7:35                       ` Krzysztof Kozlowski
  0 siblings, 0 replies; 21+ messages in thread
From: Krzysztof Kozlowski @ 2022-07-28  7:35 UTC (permalink / raw)
  To: Doug Anderson, Krzysztof Kozlowski
  Cc: Julius Werner, Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

On 27/07/2022 16:07, Doug Anderson wrote:
> Hi,
> 
> On Wed, Jul 27, 2022 at 1:47 AM Krzysztof Kozlowski
> <krzysztof.kozlowski@linaro.org> wrote:
>>
>> On 21/07/2022 01:42, Julius Werner wrote:
>>> Sorry, got distracted from this for a bit. Sounds like we were pretty
>>> much on the same page about what the updated binding should look like
>>> here; the remaining question was just about the compatible string.
>>>
>>>>>> Yes, we can. You still would need to generate the compatible according
>>>>>> to the current bindings. Whether we can change it I am not sure. I think
>>>>>> it depends how much customization is possible per vendor, according to
>>>>>> JEDEC spec. If we never ever have to identify specific part, because
>>>>>> JEDEC spec and registers tell us everything, then we could skip it,
>>>>>> similarly to lpddr2 and jedec,spi-nor.
>>>>>
>>>>> Shouldn't that be decided per use case? In general LPDDR is a pretty
>>>>> rigid set of standards and memory controllers are generally compatible
>>>>> with any vendor without hardcoding vendor-specific behavior, so I
>>>>> don't anticipate that this would be likely (particularly since there
>>>>> is no "real" kernel device driver that needs to initialize the full
>>>>> memory controller, after all, these bindings are mostly
>>>>> informational).
>>>>
>>>> If "decided per use case" is understood as "decided depending on how
>>>> the bindings are used", then the answer is rather no. For example, the
>>>> Linux implementation is usually not the best argument for shaping the
>>>> bindings, and the usual answer to such arguments is: "implementation
>>>> does not matter".
>>>>
>>>> If by "use case" you mean actual hardware or specification
>>>> characteristics, then yes, of course. This is why I wrote "it depends".
>>>
>>> By "use case" I mean our particular platform and firmware requirements
>>> -- or rather, the realities of building devices with widely
>>> multi-sourced LPDDR parts. One cannot efficiently build firmware that
>>> can pass an exact vendor-and-part-specific compatible string to Linux
>>> for this binding for every single LPDDR part used on such a platform.
>>
>> Why not? You want to pass them as numerical values which directly map
>> to a vendor ID and some part, don't they?
> 
> If you really want this to be in the "compatible" string, maybe the
> right answer is to follow the lead of USB which encodes the VID/PID in
> the compatible string
> (Documentation/devicetree/bindings/usb/usb-device.yaml). It's solving
> this exact same problem of avoiding needing a table translating from
> an ID provided by a probeable device to a human-readable string.

This makes sense. I would still argue that the number of vendors is
small, thus the strings could be translated (there are like 20 of them
in JEP166D - JC-42.6), but for the device ID this would work.

> 
> 
>>> And I don't see why that should be needed, either... that's kinda the
>>> point of having an interoperability standard, after all, that you can
>>> just assume the devices all work according to the same spec and don't
>>> need to hardcode details about each specific instance.
>>
>> If we talk about the standard, then the DT is not meant for
>> autodetectable pieces. These values are autodetectable, so such
>> properties should not be encoded in the DT.
> 
> In the case of DDR, I think that the firmware can auto-detect them but
> not the kernel. So from the kernel's point of view the DDR info should
> be in DT, right?

True, I thought memory controllers could provide such information, but
now I checked the Exynos5422 DMC and it does not expose such a
register.


Best regards,
Krzysztof


* Re: [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings
  2022-07-28  0:22                     ` Julius Werner
@ 2022-07-28  7:38                       ` Krzysztof Kozlowski
  0 siblings, 0 replies; 21+ messages in thread
From: Krzysztof Kozlowski @ 2022-07-28  7:38 UTC (permalink / raw)
  To: Julius Werner, Krzysztof Kozlowski
  Cc: Krzysztof Kozlowski, Dmitry Osipenko, Jian-Jia Su, Doug Anderson,
	Rob Herring, LKML,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Nikola Milosavljevic

On 28/07/2022 02:22, Julius Werner wrote:
>>> By "use case" I mean our particular platform and firmware requirements
>>> -- or rather, the realities of building devices with widely
>>> multi-sourced LPDDR parts. One cannot efficiently build firmware that
>>> can pass an exact vendor-and-part-specific compatible string to Linux
>>> for this binding for every single LPDDR part used on such a platform.
>>
>> Why not? You want to pass them as numerical values which directly map
>> to a vendor ID and some part, don't they?
> 
> Yes, but the current compatible string format also requires the exact
> part number, of which there are many thousands and it's impossible to
> build a list in advance. Even for vendors, hardcoding 255 strings in a
> tight firmware space would be an unnecessary burden. 

There are 25 for LPDDR2/3 and 12 for LPDDR4 (although many are
reserved, so it might grow to ~32). You will not have 255 of them,
although I actually don't insist on that - we can encode the
manufacturer ID as well.

> There's also an
> update problem -- firmware may be built and signed and burned into ROM
> long before the assembly of the final mainboard. Board manufacturers
> want to be able to just drop-in replace a newly-sourced LPDDR part in
> their existing production line without having to worry if the existing
> (and possibly no longer changeable) firmware contains a string table
> entry for this part.
> 
> If you just want the compatible string to be unique, encoding the
> numbers like Doug suggested (e.g. jedec,lpddr3-ff-0100) would work for
> us.
> 
>> If we talk about the standard, then the DT is not meant for
>> autodetectable pieces. These values are autodetectable, so such
>> properties should not be encoded in the DT.
> 
> But the DT is the only interface that we have to pass information from
> firmware to kernel and userspace. Where else should these properties
> be encoded? They are auto-detectable, but not for the kernel itself
> (only for memory-training firmware running in SRAM). Maybe the usual
> rules of thumb don't apply here, because unlike all other peripheral
> controllers the memory controller is special in that the kernel cannot
> simply reinitialize it and get the same information from the original
> source again.

True, I thought these registers were aliased or also exposed by the
memory controller, but at least for one MC I don't see it, so the
kernel cannot read them.

Best regards,
Krzysztof


end of thread

Thread overview: 21+ messages
2022-06-15  2:25 [RFC] Correct memory layout reporting for "jedec,lpddr2" and related bindings Julius Werner
2022-06-15  2:28 ` Julius Werner
2022-06-15 19:33   ` Doug Anderson
2022-06-15 21:27     ` Julius Werner
2022-06-15 22:33       ` Doug Anderson
2022-06-15 23:24         ` Julius Werner
2022-06-18  2:17   ` Krzysztof Kozlowski
2022-06-24  9:45   ` Krzysztof Kozlowski
2022-06-30  1:03     ` Julius Werner
2022-06-30  8:05       ` Krzysztof Kozlowski
2022-07-01  0:52         ` Julius Werner
2022-07-01  6:57           ` Krzysztof Kozlowski
2022-07-08  1:20             ` Julius Werner
2022-07-10 15:06               ` Krzysztof Kozlowski
2022-07-20 23:42                 ` Julius Werner
2022-07-27  8:47                   ` Krzysztof Kozlowski
2022-07-27 14:07                     ` Doug Anderson
2022-07-28  7:35                       ` Krzysztof Kozlowski
2022-07-28  0:22                     ` Julius Werner
2022-07-28  7:38                       ` Krzysztof Kozlowski
2022-07-04  8:22 ` Dmitry Osipenko
