From: Frank Rowand <firstname.lastname@example.org>
To: Segher Boessenkool <email@example.com>,
Michael Ellerman <firstname.lastname@example.org>
Cc: Sebastian Andrzej Siewior <email@example.com>,
Rob Herring <firstname.lastname@example.org>,
Benjamin Herrenschmidt <email@example.com>,
Paul Mackerras <firstname.lastname@example.org>,
Thomas Gleixner <email@example.com>
Subject: Re: [RFC] Efficiency of the phandle_cache on ppc64/SLOF
Date: Thu, 5 Dec 2019 19:37:24 -0600 [thread overview]
Message-ID: <firstname.lastname@example.org> (raw)
On 12/3/19 12:35 PM, Segher Boessenkool wrote:
> On Tue, Dec 03, 2019 at 03:03:22PM +1100, Michael Ellerman wrote:
>> Sebastian Andrzej Siewior <email@example.com> writes:
>> I've certainly heard it said that on some OF's the phandle was just ==
>> the address of the internal representation, and I guess maybe for SLOF
>> that is true.
> It is (or was). In many OFs it is just the effective address of some
> node structure. SLOF runs with translation off normally.
>> They seem to vary wildly though, eg. on an Apple G5:
> Apple OF runs with translation on usually. IIRC these are effective
> addresses as well.
> The OF they have on G5 machines is mostly 32-bit, for compatibility is my
> guess (for userland things dealing with addresses from OF, importantly).
>> $ find /proc/device-tree/ -name phandle | xargs lsprop | head -10
>> /proc/device-tree/vsp@0,f9000000/veo@f9180000/phandle ff970848
>> /proc/device-tree/vsp@0,f9000000/phandle ff970360
>> /proc/device-tree/vsp@0,f9000000/veo@f9080000/phandle ff970730
>> /proc/device-tree/nvram@0,fff04000/phandle ff967fb8
>> /proc/device-tree/xmodem/phandle ff9655e8
>> /proc/device-tree/multiboot/phandle ff9504f0
>> /proc/device-tree/diagnostics/phandle ff965550
>> /proc/device-tree/options/phandle ff893cf0
>> /proc/device-tree/openprom/client-services/phandle ff8925b8
>> /proc/device-tree/openprom/phandle ff892458
>> That machine does not have enough RAM for those to be 32-bit real
>> addresses. I think Apple OF is running in virtual mode though (?), so
>> maybe they are pointers?
> Yes, I think the default is to have 8MB ram at the top of 4GB (which is
> the physical address of the bootrom, btw) for OF.
>> And on an IBM pseries machine they're a bit all over the place:
>> /proc/device-tree/cpus/PowerPC,POWER8@40/ibm,phandle 10000040
>> /proc/device-tree/cpus/l2-cache@2005/ibm,phandle 00002005
>> /proc/device-tree/cpus/PowerPC,POWER8@30/ibm,phandle 10000030
>> /proc/device-tree/cpus/PowerPC,POWER8@20/ibm,phandle 10000020
>> /proc/device-tree/cpus/PowerPC,POWER8@10/ibm,phandle 10000010
>> /proc/device-tree/cpus/l2-cache@2003/ibm,phandle 00002003
>> /proc/device-tree/cpus/l2-cache@200a/ibm,phandle 0000200a
>> /proc/device-tree/cpus/l3-cache@3108/ibm,phandle 00003108
>> /proc/device-tree/cpus/l2-cache@2001/ibm,phandle 00002001
>> /proc/device-tree/cpus/l3-cache@3106/ibm,phandle 00003106
>> /proc/device-tree/cpus/ibm,phandle fffffff8
>> /proc/device-tree/cpus/l3-cache@3104/ibm,phandle 00003104
>> /proc/device-tree/cpus/l2-cache@2008/ibm,phandle 00002008
>> /proc/device-tree/cpus/l3-cache@3102/ibm,phandle 00003102
>> /proc/device-tree/cpus/l2-cache@2006/ibm,phandle 00002006
>> /proc/device-tree/cpus/l3-cache@3100/ibm,phandle 00003100
>> /proc/device-tree/cpus/PowerPC,POWER8@8/ibm,phandle 10000008
>> /proc/device-tree/cpus/l2-cache@2004/ibm,phandle 00002004
>> /proc/device-tree/cpus/PowerPC,POWER8@48/ibm,phandle 10000048
>> /proc/device-tree/cpus/PowerPC,POWER8@38/ibm,phandle 10000038
>> /proc/device-tree/cpus/l2-cache@2002/ibm,phandle 00002002
>> /proc/device-tree/cpus/PowerPC,POWER8@28/ibm,phandle 10000028
>> /proc/device-tree/cpus/l3-cache@3107/ibm,phandle 00003107
>> /proc/device-tree/cpus/PowerPC,POWER8@18/ibm,phandle 10000018
>> /proc/device-tree/cpus/l2-cache@2000/ibm,phandle 00002000
>> /proc/device-tree/cpus/l3-cache@3105/ibm,phandle 00003105
>> /proc/device-tree/cpus/l3-cache@3103/ibm,phandle 00003103
>> /proc/device-tree/cpus/l3-cache@310a/ibm,phandle 0000310a
>> /proc/device-tree/cpus/PowerPC,POWER8@0/ibm,phandle 10000000
>> /proc/device-tree/cpus/l2-cache@2007/ibm,phandle 00002007
>> /proc/device-tree/cpus/l3-cache@3101/ibm,phandle 00003101
>> /proc/device-tree/pci@80000002000001b/ibm,phandle 2000001b
> Some (the 1000xxxx) look like addresses as well.
>>> So the hash array has 64 entries out which only 8 are populated. Using
>>> hash_32() populates 29 entries.
>> On the G5 it's similarly inefficient:
>> [ 0.007379] OF: of_populate_phandle_cache(242) Used entries: 31, hashed: 111
>> And some output from a "real" pseries machine (IBM OF), which is
>> slightly better:
>> [ 0.129467] OF: of_populate_phandle_cache(242) Used entries: 39, hashed: 81
>> So yeah using hash_32() is quite a bit better in both cases.
> Yup, no surprise there. And hash_32 is very cheap to compute.
> And if I'm reading your patch right it would be a single line change to
>> switch, so that seems like it's worth doing to me.
> Btw. Some OFs mangle the phandles some way, to make it easier to catch
> people using it as an address (and similarly, mangle ihandles differently,
> so you catch confusion between ihandles and phandles as well). Like a
> simple xor, with some odd number preferably. You should assume *nothing*
> about phandles, they are opaque identifiers.
For arm32 machines that use dtc to generate the devicetree, which is a
very large user base, we certainly can make assumptions about phandles.
Especially because the complaints about the overhead of phandle based
lookups have been voiced by users of this specific set of machines.
For systems with a devicetree that does not follow these assumptions, the
phandle cache should not measurably increase the overhead of phandle
based lookups. Such systems may or may not see an overhead reduction
from the existence of the cache, but they should not be worse off. This
was explicitly stated during the reviews of the possible phandle cache
implementation alternatives.
If you have measurements of a system where implementing the phandle
cache increased the overhead, and the additional overhead is a concern
(such as significantly increasing boot time) then please share that
information with us. Otherwise this is just a theoretical exercise.
Thread overview: 20+ messages
2019-11-29 15:10 [RFC] Efficiency of the phandle_cache on ppc64/SLOF Sebastian Andrzej Siewior
2019-11-30 2:14 ` Frank Rowand
2019-12-02 11:07 ` Sebastian Andrzej Siewior
2019-12-03 4:12 ` Michael Ellerman
2019-12-03 4:28 ` Frank Rowand
2019-12-03 16:56 ` Rob Herring
2019-12-05 16:35 ` Sebastian Andrzej Siewior
2019-12-06 2:01 ` Frank Rowand
2019-12-09 13:35 ` Sebastian Andrzej Siewior
2019-12-10 1:51 ` Rob Herring
2019-12-10 8:17 ` Frank Rowand
2019-12-10 12:46 ` Frank Rowand
2019-12-11 14:42 ` Rob Herring
2019-12-06 1:52 ` Frank Rowand
2019-12-08 6:59 ` Frank Rowand
2019-12-03 4:03 ` Michael Ellerman
2019-12-03 18:35 ` Segher Boessenkool
2019-12-06 1:37 ` Frank Rowand [this message]
2019-12-06 23:40 ` Segher Boessenkool
2019-12-08 4:30 ` Frank Rowand