From: Viresh Kumar <viresh.kumar@linaro.org>
To: Saravana Kannan <saravanak@google.com>
Cc: Georgi Djakov <georgi.djakov@linaro.org>,
	Rob Herring <robh+dt@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Viresh Kumar <vireshk@kernel.org>, Nishanth Menon <nm@ti.com>,
	Stephen Boyd <sboyd@kernel.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	"Sweeney, Sean" <seansw@qti.qualcomm.com>,
	daidavid1@codeaurora.org, Rajendra Nayak <rnayak@codeaurora.org>,
	Sibi Sankar <sibis@codeaurora.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	Evan Green <evgreen@chromium.org>,
	Android Kernel Team <kernel-team@android.com>,
	Linux PM <linux-pm@vger.kernel.org>,
	"open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS" 
	<devicetree@vger.kernel.org>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3 0/6] Introduce Bandwidth OPPs for interconnect paths
Date: Thu, 18 Jul 2019 11:07:46 +0530	[thread overview]
Message-ID: <20190718053746.64drmonk72vwnt4s@vireshk-i7> (raw)
In-Reply-To: <CAGETcx-tbjVzRKW8D-564zgNOhrA_z-NC1q5U70bhoUDBhp6VA@mail.gmail.com>

I know you have explained lots of things earlier as well, but they are
spread over multiple threads and I don't know where to reply now :)

Let's have a proper discussion (once again) here and be done with it.
Sorry for the trouble of explaining things again.

On 17-07-19, 13:34, Saravana Kannan wrote:
> On Wed, Jul 17, 2019 at 3:32 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:
> > On 02-07-19, 18:10, Saravana Kannan wrote:
> > > gpu_cache_opp_table: gpu_cache_opp_table {
> > >       compatible = "operating-points-v2";
> > >
> > >       gpu_cache_3000: opp-3000 {
> > >               opp-peak-KBps = <3000>;
> > >               opp-avg-KBps = <1000>;
> > >       };
> > >       gpu_cache_6000: opp-6000 {
> > >               opp-peak-KBps = <6000>;
> > >               opp-avg-KBps = <2000>;
> > >       };
> > >       gpu_cache_9000: opp-9000 {
> > >               opp-peak-KBps = <9000>;
> > >               opp-avg-KBps = <9000>;
> > >       };
> > > };
> > >
> > > gpu_ddr_opp_table: gpu_ddr_opp_table {
> > >       compatible = "operating-points-v2";
> > >
> > >       gpu_ddr_1525: opp-1525 {
> > >               opp-peak-KBps = <1525>;
> > >               opp-avg-KBps = <452>;
> > >       };
> > >       gpu_ddr_3051: opp-3051 {
> > >               opp-peak-KBps = <3051>;
> > >               opp-avg-KBps = <915>;
> > >       };
> > >       gpu_ddr_7500: opp-7500 {
> > >               opp-peak-KBps = <7500>;
> > >               opp-avg-KBps = <3000>;
> > >       };
> > > };
> >
> > Who is going to use the above tables and how ?
> 
> In this example the GPU driver would use these. It'll go through these
> and then decide what peak and average bw to pick based on whatever
> criteria.

Are you saying that the GPU driver will decide which bandwidth to
choose while running at a particular frequency (say 2 GHz)? And that
it can choose 1525, 3051 or 7500 for the ddr path?

Will it be possible to publicly share how we arrive at these
decisions?

The thing is, I don't like these separate OPP tables which will not be
used by anyone else but the GPU (or a single device). I would like
to put this data in the GPU OPP table only. What about putting a
range for the bandwidth in the GPU OPP table, if it can change so
much for the same frequency?
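For example, something along these lines (only a rough sketch; the
two-cell <min max> form of the bandwidth properties below is
hypothetical and not an existing binding, and the numbers are simply
taken from your ddr table):

gpu_opp_table: gpu_opp_table {
	compatible = "operating-points-v2";

	opp-200000000 {
		opp-hz = /bits/ 64 <200000000>;
		/* hypothetical <min max> bandwidth range for the ddr path */
		opp-peak-KBps = <1525 7500>;
		opp-avg-KBps = <452 3000>;
	};
};

That would keep all the data for the device in a single table.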

> > These are the maximum
> > BW available over these paths, right ?
> 
> I wouldn't call them "maximum" because there can't be multiple
> maximums :) But yes, these are the meaningful bandwidth from the GPU's
> perspective to use over these paths.
> 
> >
> > > gpu_opp_table: gpu_opp_table {
> > >       compatible = "operating-points-v2";
> > >       opp-shared;
> > >
> > >       opp-200000000 {
> > >               opp-hz = /bits/ 64 <200000000>;
> > >       };
> > >       opp-400000000 {
> > >               opp-hz = /bits/ 64 <400000000>;
> > >       };
> > > };
> >
> > Shouldn't this link back to the above tables via required-opp, etc ?
> > How will we know how much BW is required by the GPU device for all the
> > paths ?
> 
> If that's what the GPU driver wants to do, then yes. But the GPU
> driver could also choose to scale the bandwidth for these paths based
> on multiple other signals. Eg: bus port busy percentage, measure
> bandwidth, etc.

Let's say the GPU is running at 2 GHz right now and, based on the
above inputs, it wants to increase the bandwidth to 7500 for the ddr
path. Does it then make sense to run at 4 GHz instead of 2, so we
utilize the bandwidth to the best of our ability and waste less power?

If something like that is acceptable, then what about keeping the
bandwidth fixed per frequency and instead scaling the frequency of the
GPU based on the inputs you provided (like bus port busy percentage,
etc.)?
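Roughly like this (again just a sketch, reusing the phandles from your
example; the point is that each GPU frequency then links to a fixed
bandwidth level via required-opps):

gpu_opp_table: gpu_opp_table {
	compatible = "operating-points-v2";

	opp-200000000 {
		opp-hz = /bits/ 64 <200000000>;
		/* fixed bandwidth per frequency, one entry per path */
		required-opps = <&gpu_cache_3000>, <&gpu_ddr_1525>;
	};
	opp-400000000 {
		opp-hz = /bits/ 64 <400000000>;
		required-opps = <&gpu_cache_6000>, <&gpu_ddr_3051>;
	};
};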

The current proposal makes me wonder why we should try to reuse OPP
tables for providing these bandwidth values at all: the OPP tables for
interconnect paths don't really carry a lot of data (just bandwidth
values every time), and there is no linking from the device's OPP
table either.

-- 
viresh


Thread overview: 47+ messages
2019-07-03  1:10 [PATCH v3 0/6] Introduce Bandwidth OPPs for interconnect paths Saravana Kannan
2019-07-03  1:10 ` [PATCH v3 1/6] dt-bindings: opp: Introduce opp-peak-KBps and opp-avg-KBps bindings Saravana Kannan
2019-07-16 17:25   ` Sibi Sankar
2019-07-16 18:58     ` Saravana Kannan
2019-07-22 23:35       ` Rob Herring
2019-07-22 23:40         ` Saravana Kannan
2019-07-23 14:26           ` Rob Herring
2019-07-24  0:18             ` Saravana Kannan
2019-07-17  7:54   ` Viresh Kumar
2019-07-17 20:29     ` Saravana Kannan
2019-07-18  4:35       ` Viresh Kumar
2019-07-18 17:26         ` Saravana Kannan
2019-07-26 16:24   ` Georgi Djakov
2019-07-26 19:08     ` Saravana Kannan
2019-07-03  1:10 ` [PATCH v3 2/6] OPP: Add support for bandwidth OPP tables Saravana Kannan
2019-07-16 17:33   ` Sibi Sankar
2019-07-16 19:10     ` Saravana Kannan
2019-07-30 10:57   ` Amit Kucheria
2019-07-30 23:20     ` Saravana Kannan
2019-07-03  1:10 ` [PATCH v3 3/6] OPP: Add helper function " Saravana Kannan
2019-07-03  1:10 ` [PATCH v3 4/6] OPP: Add API to find an OPP table from its DT node Saravana Kannan
2019-07-03  1:10 ` [PATCH v3 5/6] dt-bindings: interconnect: Add interconnect-opp-table property Saravana Kannan
2019-07-22 23:39   ` Rob Herring
2019-07-22 23:43     ` Saravana Kannan
2019-07-23  2:21     ` Viresh Kumar
2019-07-03  1:10 ` [PATCH v3 6/6] interconnect: Add OPP table support for interconnects Saravana Kannan
2019-07-03  6:45   ` Vincent Guittot
2019-07-03 21:33     ` Saravana Kannan
2019-07-04  7:12       ` Vincent Guittot
2019-07-07 21:48         ` Saravana Kannan
2019-07-09  7:25           ` Vincent Guittot
2019-07-09 19:02             ` Saravana Kannan
2019-07-15  8:16               ` Vincent Guittot
2019-07-16  0:55                 ` Saravana Kannan
2019-07-24  7:16                   ` Vincent Guittot
2019-07-26 16:25   ` Georgi Djakov
2019-07-26 19:08     ` Saravana Kannan
2019-07-03  6:36 ` [PATCH v3 0/6] Introduce Bandwidth OPPs for interconnect paths Viresh Kumar
2019-07-03 20:35   ` Saravana Kannan
2019-07-17 10:32 ` Viresh Kumar
2019-07-17 20:34   ` Saravana Kannan
2019-07-18  5:37     ` Viresh Kumar [this message]
2019-07-19  4:12       ` Saravana Kannan
2019-07-29  9:24         ` Viresh Kumar
2019-07-29 20:12           ` Saravana Kannan
2019-07-30  3:01             ` Viresh Kumar
2019-08-03  7:36               ` Saravana Kannan
