From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bjorn Andersson <bjorn@kryo.se>
Date: Mon, 16 Feb 2015 10:06:26 -0800
Subject: Re: [PATCH v7 1/4] Documentation: dt: add common bindings for hwspinlock
To: Mark Rutland
Cc: Suman Anna, Ohad Ben-Cohen, Tony Lindgren, Rob Herring, Kumar Gala,
 Josh Cartwright, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-omap@vger.kernel.org, linux-arm-kernel@lists.infradead.org
In-Reply-To: <20150211112907.GD9154@leverpostej>
References: <20150115135556.GH16217@leverpostej>
 <20150116101746.GA21809@leverpostej>
 <20150120180548.GK7718@atomide.com>
 <54BFE855.3090200@ti.com>
 <20150122185622.GE12911@leverpostej>
 <54C9AFF2.6000108@ti.com>
 <20150211112907.GD9154@leverpostej>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

On Wed, Feb 11, 2015 at 3:29 AM, Mark Rutland wrote:
> On Thu, Jan 29, 2015 at 03:58:42AM +0000, Suman Anna wrote:
>> On 01/22/2015 12:56 PM, Mark Rutland wrote:
[..]
>> > That's the only way I would expect this to possibly remain stable
>> > over time, and it's the entire reason for #hwlock-cells, no?
>> >
>> > How do you expect the other components sharing the hwspinlocks to be
>> > described?
>>
>> Yes indeed, this is what any of the clients will use on Linux. But
>> this is not necessarily the semantics for exchanging hwlocks with the
>> other processor(s), which is where the global id space comes into the
>> picture.
>
> I did try to consider that above. Rather than thinking about the
> numbering as "global", think of it as unique within a given pool
> shared between processors. That's what the "poolN" names are about
> above.
>
> That way you can dynamically allocate within the pool and know that
> Linux and the SW on the other processors will use the same ID. You can
> have pools that span multiple hwlock hardware blocks, and you can have
> multiple separate pools in operation at once.
>
> Surely that covers the cases you care about?
>
> If using names is clunky, we could instead have a pool-hwlocks property
> for that purpose.

Just to make sure I understand your suggestion: we would have the
communicating entity list all the potential hwlocks (and gpios etc.)
that it can share, and the key to be communicated would then basically
be the index into that list? Like:

awesome-hub {
        pool-hwlocks = <&a 1>, <&a 3>, <&b 5>;
};

And a communicated "lock 2" would mean lock 3 from block a?

This would make it possible to describe which locks are available in
this "allocation pool", and it would keep the allocation logic out of
the hwlock core, as the awesome-hub driver could simply trial-and-error
(with some logic) through the list.

Is this understanding correct?
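For concreteness, here is a fuller sketch of how I picture this being
described. The bank node names, compatible strings and addresses below
are invented for illustration; only the pool-hwlocks line comes from
the example above:

/* Two hypothetical hwlock banks backing the pool */
a: hwlock@1f40000 {
        compatible = "vendor,hwlock";
        reg = <0x1f40000 0x1000>;
        #hwlock-cells = <1>;
};

b: hwlock@1f50000 {
        compatible = "vendor,hwlock";
        reg = <0x1f50000 0x1000>;
        #hwlock-cells = <1>;
};

awesome-hub {
        /* The exchanged key is the entry's position in this list;
         * each entry names a bank and a lock number within it. */
        pool-hwlocks = <&a 1>, <&a 3>, <&b 5>;
};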
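And a rough sketch of how the awesome-hub driver could then resolve a
communicated key on its own, with no allocation logic in the hwlock
core. of_parse_phandle_with_args() and hwspin_lock_request_specific()
are existing kernel APIs; pool_args_to_hwlock_id() is a purely
hypothetical helper standing in for whatever (bank, lock-number) to
lock-id translation the core would have to expose:

#include <linux/of.h>
#include <linux/hwspinlock.h>

static struct hwspinlock *awesome_hub_get_lock(struct device_node *np,
                                               int pool_index)
{
        struct of_phandle_args args;
        int id, ret;

        /* Look up the pool entry the remote side told us about;
         * pool_index is the zero-based position in pool-hwlocks. */
        ret = of_parse_phandle_with_args(np, "pool-hwlocks",
                                         "#hwlock-cells", pool_index,
                                         &args);
        if (ret)
                return NULL;

        /* Hypothetical: map (bank node, lock number) to a lock id */
        id = pool_args_to_hwlock_id(args.np, args.args[0]);
        of_node_put(args.np);
        if (id < 0)
                return NULL;

        return hwspin_lock_request_specific(id);
}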
Regards,
Bjorn