From: Jordan Crouse <jcrouse@codeaurora.org>
To: Saravana Kannan <saravanak@google.com>
Cc: georgi.djakov@linaro.org, amit.kucheria@linaro.org,
	bjorn.andersson@linaro.org, daidavid1@codeaurora.org,
	devicetree@vger.kernel.org, evgreen@chromium.org,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-pm@vger.kernel.org, mark.rutland@arm.com, nm@ti.com,
	rjw@rjwysocki.net, robh+dt@kernel.org, sboyd@kernel.org,
	seansw@qti.qualcomm.com, sibis@codeaurora.org,
	vincent.guittot@linaro.org, vireshk@kernel.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 0/5] Introduce OPP bandwidth bindings
Date: Mon, 3 Jun 2019 09:56:35 -0600
Message-ID: <20190603155634.GA10741@jcrouse1-lnx.qualcomm.com>
In-Reply-To: <20190601021228.210574-1-saravanak@google.com>

On Fri, May 31, 2019 at 07:12:28PM -0700, Saravana Kannan wrote:
> I'll have to Nack this series because it's making a couple of wrong assumptions
> about bandwidth voting.
> 
> Firstly, it's mixing up OPP to bandwidth mapping (Eg: CPU freq to CPU<->DDR
> bandwidth mapping) with the bandwidth levels that are actually supported by an
> interconnect path (Eg: CPU<->DDR bandwidth levels). For example, CPU0 might
> decide to vote for a max of 10 GB/s because it's a little CPU and never needs
> anything higher than 10 GB/s even at CPU0's max frequency. But that has no
> bearing on bandwidth level available between CPU<->DDR.

I'm going to just quote this part of the email to avoid forcing people to
scroll too much.

I agree that there is an enormous universe of new and innovative things that can
be done for bandwidth voting. I would love to have smart governors and expansive
connections between different components that are all aware of each other. I
don't think that anybody is discounting that these things are possible.

But as it stands today, as a leaf driver developer my primary concern is that I
need to vote something for the GPU->DDR path. Right now I'm voting the maximum
because that is the bare minimum we need to get a working GPU.
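
To be concrete, something like this rough sketch is all the GPU driver does
today (the path name and the bandwidth number are made up for illustration;
only the interconnect calls themselves are real):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/* Illustrative only: the path name and bandwidth are not from a real DT. */
static struct icc_path *gpu_ddr_path;

static int gpu_icc_init(struct device *dev)
{
	gpu_ddr_path = of_icc_get(dev, "gfx-mem");
	if (IS_ERR(gpu_ddr_path))
		return PTR_ERR(gpu_ddr_path);

	/* Crank the GPU->DDR vote to the top; units are kBps. */
	return icc_set_bw(gpu_ddr_path, 0, 7216000);
}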

Then the next incremental baby step is to let us select a minimum vote based
on the current GPU frequency level, for some sort of very coarse power savings.
It isn't perfect, but it's better than cranking everything to 11. This is why
we need the OPP bandwidth bindings: they let us make that association and tune
down the vote. I fully agree that this isn't the optimal solution, but it is
the only knob we have right now.
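
Until the bindings exist, that association has to live in the driver as a
hand-rolled table, roughly like this (frequencies and bandwidths here are
invented for the example):

#include <linux/interconnect.h>
#include <linux/kernel.h>

/* Hypothetical GPU frequency -> GPU->DDR bandwidth association. */
struct gpu_bw_level {
	unsigned long freq_hz;	/* GPU core clock */
	u32 peak_kbps;		/* peak bandwidth vote, kBps */
};

static const struct gpu_bw_level gpu_bw_table[] = {
	{ 257000000, 1804000 },
	{ 430000000, 3072000 },
	{ 710000000, 7216000 },
};

/* On a frequency change, vote the smallest level that covers the clock. */
static int gpu_set_freq_bw(struct icc_path *path, unsigned long freq_hz)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(gpu_bw_table) - 1; i++)
		if (freq_hz <= gpu_bw_table[i].freq_hz)
			break;

	return icc_set_bw(path, 0, gpu_bw_table[i].peak_kbps);
}

The proposed bindings essentially move that table into the OPP tables in DT so
each platform doesn't have to duplicate it in the driver.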

And after that we should go nuts. I'll gladly put the OPP bindings in the
rear-view mirror and turn over all bandwidth decisions to a governor or two or
three. I'll be happy to have nothing to do with it again. But until then we
need a solution for the leaf drivers that lets us provide some modicum of
power control.

Jordan
-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

Thread overview: 22+ messages
2019-04-23 13:28 [PATCH v2 0/5] Introduce OPP bandwidth bindings Georgi Djakov
2019-04-23 13:28 ` [PATCH v2 1/5] dt-bindings: opp: Introduce bandwidth-MBps bindings Georgi Djakov
2019-04-24  5:33   ` Viresh Kumar
2019-04-24  6:46   ` Rajendra Nayak
2019-04-24  6:49     ` Viresh Kumar
2019-04-24  9:00       ` Sibi Sankar
2019-04-24  9:05         ` Viresh Kumar
2019-04-25  4:24         ` Bjorn Andersson
2019-04-24  8:44   ` Sibi Sankar
2019-04-23 13:28 ` [PATCH v2 2/5] interconnect: Add of_icc_get_by_index() helper function Georgi Djakov
2019-05-07 11:59   ` Sibi Sankar
2019-06-27  5:56     ` Sibi Sankar
2019-04-23 13:28 ` [PATCH v2 3/5] OPP: Add support for parsing the interconnect bandwidth Georgi Djakov
2019-04-24  5:52   ` Viresh Kumar
2019-06-27  6:27     ` Sibi Sankar
2019-04-23 13:28 ` [PATCH v2 4/5] OPP: Update the bandwidth on OPP frequency changes Georgi Djakov
2019-04-24  5:55   ` Viresh Kumar
2019-04-24 10:05   ` Sibi Sankar
2019-04-23 13:28 ` [PATCH v2 5/5] cpufreq: dt: Add support for interconnect bandwidth scaling Georgi Djakov
2019-06-01  2:12 ` [PATCH v2 0/5] Introduce OPP bandwidth bindings Saravana Kannan
2019-06-03 15:56   ` Jordan Crouse [this message]
2019-06-03 19:12     ` Saravana Kannan
