From: Georgi Djakov <georgi.djakov@linaro.org>
To: Rob Herring <robh@kernel.org>
Cc: linux-pm@vger.kernel.org, rjw@rjwysocki.net,
	gregkh@linuxfoundation.org, khilman@baylibre.com,
	mturquette@baylibre.com, vincent.guittot@linaro.org,
	skannan@codeaurora.org, sboyd@codeaurora.org,
	andy.gross@linaro.org, seansw@qti.qualcomm.com,
	davidai@quicinc.com, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-arm-msm@vger.kernel.org
Subject: Re: [RFC v0 0/2] Introduce on-chip interconnect API
Date: Tue, 14 Mar 2017 17:41:54 +0200	[thread overview]
Message-ID: <7e7c29a7-af04-04a8-cb76-0c406f8f855c@linaro.org> (raw)
In-Reply-To: <20170303062145.zpa4oblwgx2ecgv7@rob-hp-laptop>

On 03/03/2017 08:21 AM, Rob Herring wrote:
> On Wed, Mar 01, 2017 at 08:22:33PM +0200, Georgi Djakov wrote:
>> Modern SoCs have multiple processors and various dedicated cores
>> (video, GPU, graphics, modem). These cores talk to each other and can
>> generate a lot of data flowing through the on-chip interconnects.
>> These interconnect buses may form different topologies such as
>> crossbars, point-to-point links, hierarchical buses, or
>> networks-on-chip.
>>
>> These buses are usually sized to handle use cases with high data
>> throughput, but such capacity is not needed all the time and consumes
>> a lot of power. Furthermore, the priority between masters can vary
>> depending on the running use case, such as video playback or
>> CPU-intensive tasks.
>>
>> An API to express the system's requirements in terms of bandwidth and
>> QoS lets us adapt the interconnect configuration to match them by
>> scaling frequencies, setting link priorities and tuning QoS
>> parameters. This configuration can be a static, one-time operation
>> done at boot on some platforms, or a dynamic set of operations that
>> happen at run-time.
>>
>> This patchset introduces a new API to gather the requirements and
>> configure the interconnect buses across the entire chipset to fit the
>> current demand. The API is NOT for changing the performance of the
>> endpoint devices, only of the interconnect paths between them.
>>
>> The API uses a consumer/provider model, where the providers are the
>> interconnect controllers and the consumers can be various drivers.
>> A consumer requests an interconnect resource (path) to an endpoint
>> and sets the desired constraints on this data flow path. The
>> providers receive requests from consumers and aggregate them for
>> every master-slave pair on that path. The providers then configure
>> each node participating in the topology according to the requested
>> data flow path, physical links and constraints. The topology can be
>> complex, multi-tiered and SoC-specific.
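
For illustration, a minimal consumer-side sketch of this model could
look as follows. The function and structure names here are
hypothetical, not taken from the actual patches:

#include <linux/err.h>
#include <linux/types.h>

struct interconnect_path;

/* Hypothetical API: look up a path between two endpoints. */
struct interconnect_path *interconnect_get(const char *src, const char *dst);

/* Hypothetical API: request average/peak bandwidth (kbps) on a path. */
int interconnect_set(struct interconnect_path *path, u32 avg_bw, u32 peak_bw);

static int video_decoder_start_streaming(void)
{
	struct interconnect_path *path;

	/* The consumer asks for a path from its master port to memory. */
	path = interconnect_get("video-decoder", "memory");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/*
	 * The provider(s) aggregate this request with all other requests
	 * on every master-slave link along the path, then reconfigure the
	 * participating nodes (frequencies, priorities, QoS parameters).
	 */
	return interconnect_set(path, 1000000, 2000000);
}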
>>
>> Below is a simplified diagram of a real-world SoC topology. The interconnect
>> providers are the memory front-end and the NoCs.
>>
>> +----------------+    +----------------+
>> | HW Accelerator |--->|      M NoC     |<---------------+
>> +----------------+    +----------------+                |
>>                         |      |                    +------------+
>>           +-------------+      V       +------+     |            |
>>           |                +--------+  | PCIe |     |            |
>>           |                | Slaves |  +------+     |            |
>>           |                +--------+     |         |   C NoC    |
>>           V                               V         |            |
>> +------------------+   +------------------------+   |            |   +-----+
>> |                  |-->|                        |-->|            |-->| CPU |
>> |                  |-->|                        |<--|            |   +-----+
>> |      Memory      |   |         S NoC          |   +------------+
>> |                  |<--|                        |---------+    |
>> |                  |<--|                        |<------+ |    |   +--------+
>> +------------------+   +------------------------+       | |    +-->| Slaves |
>>    ^     ^    ^           ^                             | |        +--------+
>>    |     |    |           |                             | V
>> +-----+  |  +-----+    +-----+  +---------+   +----------------+   +--------+
>> | CPU |  |  | GPU |    | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
>> +-----+  |  +-----+    +-----+  +---------+   +----------------+   +--------+
>>          |
>>      +-------+
>>      | Modem |
>>      +-------+
>>
>> This RFC does not implement all features, only the main skeleton, in
>> order to check the validity of the proposal. Currently it works only
>> with device tree and platform devices.
>>
>> TODO:
>>  * Constraints are currently stored in an internal data structure.
>>  Should PM QoS be used instead?
>>  * Rework the framework so that it does not depend on DT, as
>>  frameworks cannot be tied directly to firmware interfaces. Add
>>  support for ACPI?
>
> I would even start without DT. You can always have the data you need
> in the kernel. This will be more flexible, as you're not defining an
> ABI while this evolves. I think it will take some time to reach
> consensus on how to represent the bus-master view of
> buses/interconnects (it's been attempted before).
>
> Rob
>

Thanks for the comment and for discussing this offline! As the main
concern here is to see a list of multiple platforms before we come up
with a common binding, I will convert this to use platform data
initially. Later we will figure out what exactly to pull into DT.
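
For example, the per-SoC topology could then be described with static
tables along these lines (a rough sketch only; the structure layouts
and node names are hypothetical, not code from the patches):

/* Hypothetical platform data describing nodes and links of a NoC. */
struct interconnect_node_data {
	const char *name;	/* e.g. "snoc", "memory" (illustrative) */
};

struct interconnect_link_data {
	const char *src;	/* master side of the link */
	const char *dst;	/* slave side of the link */
};

static const struct interconnect_node_data msm8916_nodes[] = {
	{ .name = "mnoc" },
	{ .name = "snoc" },
	{ .name = "memory" },
};

static const struct interconnect_link_data msm8916_links[] = {
	{ .src = "mnoc", .dst = "snoc" },
	{ .src = "snoc", .dst = "memory" },
};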

BR,
Georgi

  reply	other threads:[~2017-03-14 15:41 UTC|newest]

Thread overview: 37+ messages
2017-03-01 18:22 [RFC v0 0/2] Introduce on-chip interconnect API Georgi Djakov
2017-03-01 18:22 ` [RFC v0 1/2] interconnect: Add generic interconnect controller API Georgi Djakov
2017-03-02 23:53   ` Randy Dunlap
2017-03-14 15:35     ` Georgi Djakov
2017-03-03  2:07   ` Saravana Kannan
2017-03-14 15:39     ` Georgi Djakov
2017-03-09  9:56   ` Lucas Stach
2017-03-14 15:45     ` Georgi Djakov
2017-03-23  1:21   ` Michael Turquette
2017-03-23 15:21     ` Georgi Djakov
2017-03-01 18:22 ` [RFC v0 2/2] interconnect: Add Qualcomm msm8916 interconnect provider driver Georgi Djakov
2017-03-03  6:21 ` [RFC v0 0/2] Introduce on-chip interconnect API Rob Herring
2017-03-14 15:41   ` Georgi Djakov [this message]
2017-03-23  3:32     ` Moritz Fischer
