From: Jon Hunter <jonathanh@nvidia.com>
To: Rob Herring <robh@kernel.org>
Cc: <devicetree@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Michael Ellerman <mpe@ellerman.id.au>,
Segher Boessenkool <segher@kernel.crashing.org>,
Frank Rowand <frowand.list@gmail.com>,
linux-tegra <linux-tegra@vger.kernel.org>
Subject: Re: [PATCH] of: Rework and simplify phandle cache to use a fixed size
Date: Mon, 13 Jan 2020 11:12:23 +0000
Message-ID: <93314ff5-aa89-cd99-393c-f75f31d9d6e5@nvidia.com>
In-Reply-To: <CAL_JsqLmth0bYcG2VnxU-jk_VoC4TgvWD8_e6r1_8WqVwYGq0g@mail.gmail.com>

On 10/01/2020 23:50, Rob Herring wrote:
> On Tue, Jan 7, 2020 at 4:22 AM Jon Hunter <jonathanh@nvidia.com> wrote:
>>
>> Hi Rob,
>>
>> On 11/12/2019 23:23, Rob Herring wrote:
>>> The phandle cache was added to speed up of_find_node_by_phandle() by
>>> avoiding walking the whole DT to find a matching phandle. The
>>> implementation has several shortcomings:
>>>
>>> - The cache is designed to work on a linear set of phandle values.
>>> This is true for dtc-generated DTs, but not for other cases such as
>>> Power.
>>> - The cache isn't enabled until of_core_init() and a typical system
>>> may see hundreds of calls to of_find_node_by_phandle() before that
>>> point.
>>> - The cache is freed and re-allocated when the number of phandles
>>> changes.
>>> - It takes a raw spinlock around a memory allocation, which breaks
>>> on RT.
>>>
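
(As an aside for anyone following along: the allocation-under-raw-spinlock
problem is roughly the pattern sketched below. This is a simplified
paraphrase of the pre-patch drivers/of/base.c, not a verbatim copy; the
point is that kcalloc() runs with devtree_lock held, and on RT even
GFP_ATOMIC allocations may sleep.)

static struct device_node **phandle_cache;
static u32 phandle_cache_mask;

static void of_populate_phandle_cache(void)
{
        unsigned long flags;
        u32 entries = 0;
        struct device_node *np;

        raw_spin_lock_irqsave(&devtree_lock, flags);

        /* Count phandles to size the dynamically allocated cache */
        for_each_of_allnodes(np)
                if (np->phandle && np->phandle != OF_PHANDLE_ILLEGAL)
                        entries++;

        if (!entries)
                goto out;

        entries = roundup_pow_of_two(entries);
        phandle_cache_mask = entries - 1;

        /* Memory allocation under a raw spinlock: invalid on RT */
        phandle_cache = kcalloc(entries, sizeof(*phandle_cache),
                                GFP_ATOMIC);
out:
        raw_spin_unlock_irqrestore(&devtree_lock, flags);
}
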
>>> Change the implementation to a fixed size and use hash_32() as the
>>> cache index. This greatly simplifies the implementation. It avoids
>>> any need to re-allocate the cache or to take a reference on nodes
>>> in the cache. The only source of cache-entry removal is
>>> of_detach_node().
>>>
>>> Using hash_32() removes any assumption about phandle values,
>>> improving the hit rate for non-linear phandle values. For linear
>>> values, hash_32() gives roughly a 10% collision rate. The chance of
>>> thrashing on colliding values appears to be low.
>>>
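
(Another aside: if I read the patch correctly, the new lookup path boils
down to something like the sketch below, with error handling trimmed. A
collision just means the slot gets overwritten on the next miss, and
of_detach_node() clears the slot of a node being detached, so no
reference needs to be held on cached entries.)

#define OF_PHANDLE_CACHE_BITS   7
#define OF_PHANDLE_CACHE_SZ     BIT(OF_PHANDLE_CACHE_BITS)

static struct device_node *phandle_cache[OF_PHANDLE_CACHE_SZ];

struct device_node *of_find_node_by_phandle(phandle handle)
{
        struct device_node *np = NULL;
        unsigned long flags;
        u32 idx;

        if (!handle)
                return NULL;

        /* hash_32() spreads linear and sparse phandle values alike */
        idx = hash_32(handle, OF_PHANDLE_CACHE_BITS);

        raw_spin_lock_irqsave(&devtree_lock, flags);

        /* Fast path: the hashed slot may already hold this phandle */
        if (phandle_cache[idx] && handle == phandle_cache[idx]->phandle)
                np = phandle_cache[idx];

        /* Slow path: walk all nodes and repopulate the slot */
        if (!np) {
                for_each_of_allnodes(np)
                        if (np->phandle == handle &&
                            !of_node_check_flag(np, OF_DETACHED)) {
                                phandle_cache[idx] = np;
                                break;
                        }
        }

        of_node_get(np);
        raw_spin_unlock_irqrestore(&devtree_lock, flags);

        return np;
}
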
>>> To compare performance, I used an RK3399 board, which is a pretty
>>> typical system. I found that just measuring boot time, as done
>>> previously, is noisy and may be impacted by other things. Bringing up
>>> secondary cores also interferes with the measurement, so I booted
>>> with 'nr_cpus=1'. With no caching, calls to of_find_node_by_phandle()
>>> take about 20124 us for 1248 calls. There are an additional 288 calls
>>> before timekeeping is up. Using the average time per hit/miss with
>>> the cache, these 288 calls work out to 690 us (277 hits / 11 misses)
>>> with a 128-entry cache and 13319 us with no cache or an uninitialized
>>> cache.
>>>
>>> Comparing the three implementations, the time spent in
>>> of_find_node_by_phandle() is:
>>>
>>>   no cache:        20124 us (+ 13319 us)
>>>   128-entry cache:  5134 us (+   690 us)
>>>   current cache:     819 us (+ 13319 us)
>>>
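
(The commit message doesn't say how these times were gathered. One
simple way to reproduce such numbers would be a hypothetical
accumulating wrapper like the one below, dumping the totals once the
console is up. Note that ktime is unusable before timekeeping is
initialized, which is presumably why the 288 early calls had to be
estimated rather than measured.)

static u64 phandle_lookup_us;
static u32 phandle_lookups;

static struct device_node *timed_find_node_by_phandle(phandle handle)
{
        ktime_t start = ktime_get();
        struct device_node *np = of_find_node_by_phandle(handle);

        /* Accumulate per-call cost; read the totals from a late initcall */
        phandle_lookup_us += ktime_us_delta(ktime_get(), start);
        phandle_lookups++;

        return np;
}
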
>>> We could move the allocation of the cache earlier to improve the
>>> current cache, but that just further complicates things, as the
>>> allocation needs to happen after slab is up, so we can't do it while
>>> unflattening (which uses memblock).
>>>
>>> Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>>> Cc: Segher Boessenkool <segher@kernel.crashing.org>
>>> Cc: Frank Rowand <frowand.list@gmail.com>
>>> Signed-off-by: Rob Herring <robh@kernel.org>
>>
>> With next-20200106 I have noticed a regression on Tegra210 where it
>> appears that only one of the eMMC devices is being registered. Bisect
>> points to this patch, and reverting it on top of -next fixes the
>> problem. That is as far as I have got so far, so if you have any
>> ideas, please let me know. Unfortunately, there do not appear to be
>> any obvious errors in the boot log.
>
> I guess that's tegra210-p2371-2180.dts because none of the others have
> 2 SD hosts enabled. I don't see anything obvious though. Are you doing
> any runtime mods to the DT?
I have noticed that the bootloader is doing some runtime mods, so I am
checking whether they are the cause. I will let you know, but that
seems most likely, seeing as I cannot find anything wrong with this
change itself.
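
In case it is useful: the runtime tree can be dumped back to source and
diffed against the compiled DTB with dtc. Something like the following
(the DTB name is assumed from the .dts you guessed above):

  dtc -I fs -O dts -o runtime.dts /sys/firmware/devicetree/base
  dtc -I dtb -O dts -o original.dts tegra210-p2371-2180.dtb
  diff -u original.dts runtime.dts
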
Cheers
Jon
--
nvpublic