From: Viresh Kumar <viresh.kumar@linaro.org>
To: Saravana Kannan <saravanak@google.com>
Cc: Georgi Djakov <georgi.djakov@linaro.org>,
	Rob Herring <robh+dt@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Viresh Kumar <vireshk@kernel.org>, Nishanth Menon <nm@ti.com>,
	Stephen Boyd <sboyd@kernel.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	vincent.guittot@linaro.org, seansw@qti.qualcomm.com,
	daidavid1@codeaurora.org, Rajendra Nayak <rnayak@codeaurora.org>,
	sibis@codeaurora.org, bjorn.andersson@linaro.org,
	evgreen@chromium.org, kernel-team@android.com,
	linux-pm@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 0/6] Introduce Bandwidth OPPs for interconnect paths
Date: Wed, 17 Jul 2019 16:02:20 +0530
Message-ID: <20190717103220.f7cys267hq23fbsb@vireshk-i7>
In-Reply-To: <20190703011020.151615-1-saravanak@google.com>

On 02-07-19, 18:10, Saravana Kannan wrote:
> Interconnects and interconnect paths quantify their performance levels in
> terms of bandwidth, not frequency. So, just as we have frequency-based OPP
> tables in DT and in the OPP framework, we need bandwidth OPP table support
> in the OPP framework and in DT. Since a device can use more than one
> interconnect path, we also need a way to assign a bandwidth OPP table to an
> interconnect path.
> 
> This patch series:
> - Adds opp-peak-KBps and opp-avg-KBps properties to OPP DT bindings
> - Adds interconnect-opp-table property to interconnect DT bindings
> - Adds OPP helper functions for bandwidth OPP tables
> - Adds icc_get_opp_table() to get the OPP table for an interconnect path
> 
> So with the DT bindings added in this patch series, the DT for a GPU
> that does bandwidth voting on the GPU-to-cache and GPU-to-DDR paths
> would look something like this:
> 
> gpu_cache_opp_table: gpu_cache_opp_table {
> 	compatible = "operating-points-v2";
> 
> 	gpu_cache_3000: opp-3000 {
> 		opp-peak-KBps = <3000>;
> 		opp-avg-KBps = <1000>;
> 	};
> 	gpu_cache_6000: opp-6000 {
> 		opp-peak-KBps = <6000>;
> 		opp-avg-KBps = <2000>;
> 	};
> 	gpu_cache_9000: opp-9000 {
> 		opp-peak-KBps = <9000>;
> 		opp-avg-KBps = <9000>;
> 	};
> };
> 
> gpu_ddr_opp_table: gpu_ddr_opp_table {
> 	compatible = "operating-points-v2";
> 
> 	gpu_ddr_1525: opp-1525 {
> 		opp-peak-KBps = <1525>;
> 		opp-avg-KBps = <452>;
> 	};
> 	gpu_ddr_3051: opp-3051 {
> 		opp-peak-KBps = <3051>;
> 		opp-avg-KBps = <915>;
> 	};
> 	gpu_ddr_7500: opp-7500 {
> 		opp-peak-KBps = <7500>;
> 		opp-avg-KBps = <3000>;
> 	};
> };

Who is going to use the above tables, and how? These are the maximum
bandwidths available over these paths, right?

> gpu_opp_table: gpu_opp_table {
> 	compatible = "operating-points-v2";
> 	opp-shared;
> 
> 	opp-200000000 {
> 		opp-hz = /bits/ 64 <200000000>;
> 	};
> 	opp-400000000 {
> 		opp-hz = /bits/ 64 <400000000>;
> 	};
> };

Shouldn't this link back to the above tables via required-opps, etc.?
How will we know how much bandwidth the GPU device requires for each of
these paths?

> gpu@7864000 {
> 	...
> 	operating-points-v2 = <&gpu_opp_table>, <&gpu_cache_opp_table>, <&gpu_ddr_opp_table>;
> 	interconnects = <&mmnoc MASTER_GPU_1 &bimc SLAVE_SYSTEM_CACHE>,
> 			<&mmnoc MASTER_GPU_1 &bimc SLAVE_DDR>;
> 	interconnect-names = "gpu-cache", "gpu-mem";
> 	interconnect-opp-table = <&gpu_cache_opp_table>, <&gpu_ddr_opp_table>;
> };

-- 
viresh
