From: Krzysztof Kozlowski <krzk@kernel.org>
To: Willy Wolff <willy.mh.wolff.ml@gmail.com>
Cc: Chanwoo Choi <cw00.choi@samsung.com>,
MyungJoo Ham <myungjoo.ham@samsung.com>,
Kyungmin Park <kyungmin.park@samsung.com>,
Kukjin Kim <kgene@kernel.org>,
linux-pm@vger.kernel.org,
"linux-samsung-soc@vger.kernel.org"
<linux-samsung-soc@vger.kernel.org>,
linux-arm-kernel@lists.infradead.org,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Lukasz Luba <lukasz.luba@arm.com>
Subject: Re: broken devfreq simple_ondemand for Odroid XU3/4?
Date: Tue, 23 Jun 2020 21:11:29 +0200
Message-ID: <20200623191129.GA4171@kozik-lap>
In-Reply-To: <CAJKOXPeLuq81NC2xZh3y32EB-_APbDAchZD4OW_eCgQKKO+p8w@mail.gmail.com>
On Tue, Jun 23, 2020 at 09:02:38PM +0200, Krzysztof Kozlowski wrote:
> On Tue, 23 Jun 2020 at 18:47, Willy Wolff <willy.mh.wolff.ml@gmail.com> wrote:
> >
> > Hi everybody,
> >
> > Is DVFS for the memory bus really working on the Odroid XU3/4 board?
> > Using a simple microbenchmark that does only memory accesses, memory DVFS
> > does not seem to be working properly:
> >
> > The microbenchmark does pointer chasing by following indices through an
> > array. The indices are arranged in a random pattern (defeating the
> > prefetcher), forcing every access to go to RAM.
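> >
> > The heart of such a chaser is roughly the following (an illustrative
> > sketch, not the actual benchmark code; the names are made up):
> >
> >     #include <stddef.h>
> >     #include <stdlib.h>
> >
> >     /* Build one random cycle over the array (Sattolo's algorithm,
> >      * n >= 1), so the chase visits every slot in an unpredictable
> >      * order. */
> >     static void init_cycle(size_t *next, size_t n)
> >     {
> >             for (size_t i = 0; i < n; i++)
> >                     next[i] = i;
> >             for (size_t i = n - 1; i > 0; i--) {
> >                     size_t j = rand() % i;  /* 0 <= j < i */
> >                     size_t tmp = next[i];
> >                     next[i] = next[j];
> >                     next[j] = tmp;
> >             }
> >     }
> >
> >     /* Every load depends on the previous one, so the loads serialize
> >      * and, with an array much larger than the last-level cache, each
> >      * one misses to DRAM. */
> >     static size_t chase(const size_t *next, size_t iters)
> >     {
> >             size_t i = 0;
> >             while (iters--)
> >                     i = next[i];
> >             return i;  /* returned so the chain is not optimized away */
> >     }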
> >
> > git clone https://github.com/wwilly/benchmark.git \
> > && cd benchmark \
> > && source env.sh \
> > && ./bench_build.sh \
> > && bash source/scripts/test_dvfs_mem.sh
> >
> > Python 3, CMake, and sudo rights are required.
> >
> > Results:
> > With CPU DVFS using the performance governor, and
> > mem_gov = simple_ondemand at 165000000 Hz when idle (it should be bumped
> > up while the benchmark is running):
> > - on the LITTLE cluster it takes 4.74308 s to run (683.004 cycles per memory access),
> > - on the big cluster it takes 4.76556 s to run (980.343 cycles per memory access).
> >
> > While forcing the memory bus DVFS to use the performance governor, i.e.
> > mem_gov = performance at 825000000 Hz even when idle:
> > - on the LITTLE cluster it takes 1.1451 s to run (164.894 cycles per memory access),
> > - on the big cluster it takes 1.18448 s to run (243.664 cycles per memory access).
> >
> > The kernel used is the latest stable, 5.7.5, with the default exynos_defconfig.
>
> Thanks for the report. A few thoughts:
> 1. What does trans_stat say? Besides the DMC driver, you can also check
> all the other devfreq devices (e.g. wcore) - maybe the devfreq events
> (nocp) are not properly assigned? (See the loop sketched below.)
> 2. Try running the measurement for ~1 minute or longer. The counters
> might have some delay (which would probably require fixing, but the
> point here is to narrow down the problem).
> 3. What do you understand by "mem_gov"? Which device is it?
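(For point 1, trans_stat for every devfreq device can be dumped in one go
with something like the following, assuming the usual sysfs layout:)

$ for d in /sys/class/devfreq/*; do echo "== $d"; cat "$d/trans_stat"; done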
+Cc Lukasz who was working on this.
I just ran memtester and ondemand more or less works (at least it ramps
up):
Before:
/sys/class/devfreq/10c20000.memory-controller$ cat trans_stat
    From   :   To
           : 165000000 206000000 275000000 413000000 543000000 633000000 728000000 825000000  time(ms)
* 165000000:         0         0         0         0         0         0         0         0   1795950
  206000000:         1         0         0         0         0         0         0         0      4770
  275000000:         0         1         0         0         0         0         0         0     15540
  413000000:         0         0         1         0         0         0         0         0     20780
  543000000:         0         0         0         1         0         0         0         1     10760
  633000000:         0         0         0         0         2         0         0         0     10310
  728000000:         0         0         0         0         0         0         0         0         0
  825000000:         0         0         0         0         0         2         0         0     25920
Total transition : 9
$ sudo memtester 1G
During memtester:
/sys/class/devfreq/10c20000.memory-controller$ cat trans_stat
    From   :   To
           : 165000000 206000000 275000000 413000000 543000000 633000000 728000000 825000000  time(ms)
  165000000:         0         0         0         0         0         0         0         1   1801490
  206000000:         1         0         0         0         0         0         0         0      4770
  275000000:         0         1         0         0         0         0         0         0     15540
  413000000:         0         0         1         0         0         0         0         0     20780
  543000000:         0         0         0         1         0         0         0         2     11090
  633000000:         0         0         0         0         3         0         0         0     17210
  728000000:         0         0         0         0         0         0         0         0         0
* 825000000:         0         0         0         0         0         3         0         0    169020
Total transition : 13
However, after killing memtester it stays at 633 MHz for a very long time
and does not slow down. This is indeed weird...
Best regards,
Krzysztof