devicetree.vger.kernel.org archive mirror
From: Frank Rowand <frowand.list@gmail.com>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Rob Herring <robh+dt@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
	devicetree@vger.kernel.org,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [RFC] Efficiency of the phandle_cache on ppc64/SLOF
Date: Thu, 5 Dec 2019 20:01:41 -0600	[thread overview]
Message-ID: <084ed924-eaed-5232-a9f6-fe60128fe11a@gmail.com> (raw)
In-Reply-To: <20191205163538.mzunfrpox7jbrssl@linutronix.de>

On 12/5/19 10:35 AM, Sebastian Andrzej Siewior wrote:
> On 2019-12-03 10:56:35 [-0600], Rob Herring wrote:
>>> Another possibility would be to make the cache be dependent
>>> upon not CONFIG_PPC.  It might be possible to disable the
>>> cache with a minimal code change.
>>
>> I'd rather not do that.
>>
>> And yes, as mentioned earlier I don't like the complexity. I didn't
>> from the start, and I'm still of the opinion we should have a
>> fixed-size or one-time-sized true cache (i.e. smaller than the total
>> number of phandles). That would solve the RT memory allocation and
>> locking issues too.
>>
>> For reference, the improvement from the current implementation
>> (assuming fixes haven't regressed it) was ~400ms, vs. a ~340ms
>> improvement with a 64-entry cache (using a mask, not a hash).
>> IMO, the 340ms improvement was good enough.
> 
> Okay. So the 814 phandles would result in an array with 1024 slots. That
> would need 4KiB of memory.
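
(For the arithmetic: 814 phandles round up to the next power of two,
1024 slots.  4KiB assumes 4 bytes per slot, i.e. one 32-bit entry
each; if each slot instead held an 8-byte pointer, the array would be
8KiB on a 64-bit system.)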

Is this amount of memory an issue for this system?

If module support is not configured into the kernel, then the cache is
removed and its memory freed in a late initcall.  I don't know whether
that helps your use case or not.
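
Roughly, the shape of that teardown, as a sketch; the array name and
the locking here are assumptions for illustration, not quoted from
the tree:

#include <linux/init.h>
#include <linux/of.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* assumed name for the dynamically allocated cache */
static struct device_node **phandle_cache;
extern raw_spinlock_t devtree_lock;	/* the existing DT lock */

#if !defined(CONFIG_MODULES)
static int __init of_free_phandle_cache(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&devtree_lock, flags);
	kfree(phandle_cache);	/* give the array back to the allocator */
	phandle_cache = NULL;	/* lookups fall back to a full tree walk */
	raw_spin_unlock_irqrestore(&devtree_lock, flags);

	return 0;
}
late_initcall_sync(of_free_phandle_cache);
#endif

So on a kernel without module support, the memory is only held until
late in boot.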


> What about we go back to the fixed 64-slot array, but with hash32 for
> the lookup? Without the hash we would be 60ms slower during boot
> (compared to now, based on ancient data), but the hash isn't expensive,
> so we end up with better coverage of the memory on systems which don't
> have a plain enumeration of their phandles.
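
For concreteness, a minimal sketch of that idea; hash_32() is from
<linux/hash.h>, and of_find_node_by_phandle_slow() is a hypothetical
stand-in for the existing full tree walk:

#include <linux/hash.h>
#include <linux/of.h>

#define PHANDLE_CACHE_BITS	6	/* 2^6 = 64 slots */
#define PHANDLE_CACHE_SZ	(1 << PHANDLE_CACHE_BITS)

static struct device_node *phandle_cache[PHANDLE_CACHE_SZ];

struct device_node *of_find_node_by_phandle_cached(phandle handle)
{
	/*
	 * hash_32() spreads sparse or clustered phandle values over
	 * all 64 slots; a plain (handle & 63) mask only does that when
	 * phandles happen to be a dense enumeration from around zero.
	 */
	u32 slot = hash_32(handle, PHANDLE_CACHE_BITS);
	struct device_node *np = phandle_cache[slot];

	if (np && np->phandle == handle)
		return np;			/* cache hit */

	np = of_find_node_by_phandle_slow(handle);	/* hypothetical */
	if (np)
		phandle_cache[slot] = np;	/* remember for next time */
	return np;
}

Since the array is fixed-size and static, there is no runtime
allocation, which is also what would address the RT allocation and
locking concern above.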

That performance data is specific to one particular system; it does
not generalize to all devicetree-based systems, so don't draw too many
conclusions from it.  If you want to understand the boot performance
impact for your system, you need to measure the alternatives on
your system.

Is there a memory usage issue on the systems that led to this thread?
Unless there is a documented memory issue, I do not want to widen an
issue about poor cache-bucket utilization into the separate issue of
cache size.

-Frank

> 
>> Rob
> 
> Sebastian
> 



Thread overview: 20+ messages
2019-11-29 15:10 [RFC] Efficiency of the phandle_cache on ppc64/SLOF Sebastian Andrzej Siewior
2019-11-30  2:14 ` Frank Rowand
2019-12-02 11:07   ` Sebastian Andrzej Siewior
2019-12-03  4:12   ` Michael Ellerman
2019-12-03  4:28     ` Frank Rowand
2019-12-03 16:56       ` Rob Herring
2019-12-05 16:35         ` Sebastian Andrzej Siewior
2019-12-06  2:01           ` Frank Rowand [this message]
2019-12-09 13:35             ` Sebastian Andrzej Siewior
2019-12-10  1:51               ` Rob Herring
2019-12-10  8:17                 ` Frank Rowand
2019-12-10 12:46                   ` Frank Rowand
2019-12-11 14:42                   ` Rob Herring
2019-12-06  1:52         ` Frank Rowand
2019-12-08  6:59           ` Frank Rowand
2019-12-03  4:03 ` Michael Ellerman
2019-12-03 18:35   ` Segher Boessenkool
2019-12-06  1:37     ` Frank Rowand
2019-12-06 23:40       ` Segher Boessenkool
2019-12-08  4:30         ` Frank Rowand
