From: Joel Fernandes <joelaf@google.com>
To: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
LKML <linux-kernel@vger.kernel.org>,
Juri Lelli <Juri.Lelli@arm.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Patrick Bellasi <patrick.bellasi@arm.com>,
Brendan Jackman <brendan.jackman@arm.com>,
Chris Redpath <Chris.Redpath@arm.com>,
Josef Bacik <jbacik@fb.com>
Subject: wake_wide mechanism clarification
Date: Thu, 29 Jun 2017 17:19:14 -0700
Message-ID: <CAJWu+oo0_zAWLjObRZp-r3VH5kKKb4aFVnRevtbYh+uH_tCNag@mail.gmail.com>

Dear Mike,
I wanted your kind help to understand your patch "sched: beef up
wake_wide()" [1], which is a modification of the original patch from
Michael Wang [2].
In particular, I couldn't follow this comment:
" to shared cache, we look for a minimum 'flip' frequency of llc_size
in one partner, and a factor of lls_size higher frequency in the
other."
Why do we want the master's flip frequency to be higher than the
slave's by that factor?
The code here is written as:
	if (slave < factor || master < slave * factor)
		return 0;
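For context, my reading of the full wake_wide() from [1] is roughly the
following (paraphrasing from the patch, so the details may be slightly
off):

	static int wake_wide(struct task_struct *p)
	{
		unsigned int master = current->wakee_flips;
		unsigned int slave = p->wakee_flips;
		int factor = this_cpu_read(sd_llc_size);

		/* treat whichever partner flips more as the 'master' */
		if (master < slave)
			swap(master, slave);
		/*
		 * Require at least llc_size flips in the slave, and
		 * llc_size times the slave's flips in the master,
		 * before deciding to wake wide.
		 */
		if (slave < factor || master < slave * factor)
			return 0;
		return 1;
	}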
However, I think we should just do the following (with my current, and
probably wrong, understanding):
	if (slave < factor || master < factor)
		return 0;
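To make the question concrete with made-up numbers (purely for
illustration): suppose factor (llc_size) is 4, the slave has 5 flips
and the master has 15. Both are above the factor, so my version would
return 1 (wake wide), but the existing check still returns 0, because
master (15) < slave * factor (20), i.e. the master is not flipping at
least llc_size times as often as the slave.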
Basically, I didn't follow why we multiply the slave's flips by
llc_size. That makes it sound like the master has to have far more
flips than the slave before wake_wide() returns 1. Could you maybe give
an example to clarify? Thanks a lot for your help.
I am also CC'ing Peter and some ARM folks for the discussion (and also
Josef, who discussed it with Mike on the mailing list a few years ago).
Thanks,
Joel
[1] https://patchwork.kernel.org/patch/6787941/
[2] https://lkml.org/lkml/2013/7/4/20
Thread overview: 29+ messages
2017-06-30 0:19 Joel Fernandes [this message]
2017-06-30 0:49 ` wake_wide mechanism clarification Josef Bacik
2017-06-30 3:04 ` Joel Fernandes
2017-06-30 14:28 ` Josef Bacik
2017-06-30 17:02 ` Mike Galbraith
2017-06-30 17:55 ` Josef Bacik
2017-08-03 10:53 ` Brendan Jackman
2017-08-03 13:15 ` Josef Bacik
2017-08-03 15:05 ` Brendan Jackman
2017-08-09 21:22 ` Atish Patra
2017-08-10 9:48 ` Brendan Jackman
2017-08-10 17:41 ` Atish Patra
2017-07-29 8:01 ` Joel Fernandes
2017-07-29 8:13 ` Joel Fernandes
2017-08-02 8:26 ` Michael Wang
2017-08-03 23:48 ` Joel Fernandes
2017-07-29 15:07 ` Mike Galbraith
2017-07-29 20:19 ` Joel Fernandes
2017-07-29 22:28 ` Joel Fernandes
2017-07-29 22:41 ` Joel Fernandes
2017-07-31 12:21 ` Josef Bacik
2017-07-31 13:42 ` Mike Galbraith
2017-07-31 14:48 ` Josef Bacik
2017-07-31 17:23 ` Mike Galbraith
2017-07-31 16:21 ` Joel Fernandes
2017-07-31 16:42 ` Josef Bacik
2017-07-31 17:55 ` Joel Fernandes
2017-06-30 3:11 ` Mike Galbraith
2017-06-30 13:11 ` Matt Fleming