From: Fabien DESSENNE <fabien.dessenne@st.com>
To: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Ohad Ben-Cohen <ohad@wizery.com>,
Rob Herring <robh+dt@kernel.org>,
"Mark Rutland" <mark.rutland@arm.com>,
Maxime Coquelin <mcoquelin.stm32@gmail.com>,
Alexandre TORGUE <alexandre.torgue@st.com>,
Jonathan Corbet <corbet@lwn.net>,
"linux-remoteproc@vger.kernel.org"
<linux-remoteproc@vger.kernel.org>,
"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-stm32@st-md-mailman.stormreply.com"
<linux-stm32@st-md-mailman.stormreply.com>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
Benjamin GAIGNARD <benjamin.gaignard@st.com>
Subject: Re: [PATCH 0/6] hwspinlock: allow sharing of hwspinlocks
Date: Mon, 5 Aug 2019 08:48:44 +0000 [thread overview]
Message-ID: <1a057176-81ab-e302-4375-2717ceef6924@st.com> (raw)
In-Reply-To: <20190801191403.GA7234@tuxbook-pro>
On 01/08/2019 9:14 PM, Bjorn Andersson wrote:
> On Wed 13 Mar 08:50 PDT 2019, Fabien Dessenne wrote:
>
>> The current implementation does not allow two different devices to use
>> a common hwspinlock. This patch set proposes to have, as an option, some
>> hwspinlocks shared between several users.
>>
>> Below is an example that explains the need for this:
>>     exti: interrupt-controller@5000d000 {
>>         compatible = "st,stm32mp1-exti", "syscon";
>>         interrupt-controller;
>>         #interrupt-cells = <2>;
>>         reg = <0x5000d000 0x400>;
>>         hwlocks = <&hsem 1>;
>>     };
>> The two drivers (stm32mp1-exti and syscon) refer to the same hwlock.
>> With the current hwspinlock implementation, only the first driver succeeds
>> in requesting (hwspin_lock_request_specific) the hwlock. The second request
>> fails.
>>
>>
>> The proposed approach does not modify the API, but extends the DT 'hwlocks'
>> property with a second optional parameter (the first one identifies an
>> hwlock) that specifies whether an hwlock is requested for exclusive usage
>> (current behavior) or can be shared between several users.
>> Examples:
>>     hwlocks = <&hsem 8>;     Ref to hwlock #8 for exclusive usage
>>     hwlocks = <&hsem 8 0>;   Ref to hwlock #8 for exclusive (0) usage
>>     hwlocks = <&hsem 8 1>;   Ref to hwlock #8 for shared (1) usage
>>
>> As a constraint, the #hwlock-cells value must be 1 or 2.
>> In the current implementation, this can theoretically have any value, but:
>> - all of the existing drivers use the same value: 1.
>> - the framework supports only one value: 1 (see the implementation of
>>   of_hwspin_lock_simple_xlate())
>> Hence, it shall not be a problem to restrict this value to 1 or 2, since
>> it won't break any driver.
>>
> Hi Fabien,
>
> Your series looks good, but it makes me wonder why the hardware locks
> should be an exclusive resource.
>
> How about just making all (specific) locks shared?
Hi Bjorn,
Making all locks shared is a possible implementation (my first
implementation went this way), but there are some drawbacks we must be
aware of:
A/ This theoretically breaks the legacy behavior (the legacy works with
exclusive (UNUSED radix tag) usage). As a consequence, an existing driver
that currently fails to request a lock (already claimed by another user)
would now succeed. I am not sure any such driver exists, so this point is
probably not a real issue.
B/ This would introduce some inconsistency between the two 'request' APIs,
hwspin_lock_request() and hwspin_lock_request_specific().
hwspin_lock_request() looks for an unused lock, so it requests exclusive
usage. On the other side, request_specific() would request shared locks.
Worse, the following sequence can turn an exclusive usage into a shared
one:
- hwspin_lock_request() -> returns Id#0 (exclusive)
- hwspin_lock_request() -> returns Id#1 (exclusive)
- hwspin_lock_request_specific(0) -> returns Id#0 and makes Id#0 shared
Honestly, I am not sure that this is a real issue, but it is better to
keep it in mind before we take any decision.
I could not find any driver using the hwspin_lock_request() API, so we
may decide to remove (or deprecate) this API and make everything 'shared
without any conditions'.
I can see three options:
1- Keep my initial proposition
2- Have hwspin_lock_request_specific() using shared locks and
hwspin_lock_request() using unused (so 'initially' exclusive) locks.
3- Have hwspin_lock_request_specific() using shared locks and
remove/make deprecated hwspin_lock_request().
Just let me know which option you prefer.
BR
Fabien
>
> Regards,
> Bjorn
>
>> Fabien Dessenne (6):
>> dt-bindings: hwlock: add support of shared locks
>> hwspinlock: allow sharing of hwspinlocks
>> dt-bindings: hwlock: update STM32 #hwlock-cells value
>> ARM: dts: stm32: Add hwspinlock node for stm32mp157 SoC
>> ARM: dts: stm32: Add hwlock for irqchip on stm32mp157
>> ARM: dts: stm32: hwlocks for GPIO for stm32mp157
>>
>> .../devicetree/bindings/hwlock/hwlock.txt | 27 +++++--
>> .../bindings/hwlock/st,stm32-hwspinlock.txt | 6 +-
>> Documentation/hwspinlock.txt | 10 ++-
>> arch/arm/boot/dts/stm32mp157-pinctrl.dtsi | 2 +
>> arch/arm/boot/dts/stm32mp157c.dtsi | 10 +++
>> drivers/hwspinlock/hwspinlock_core.c | 82 +++++++++++++++++-----
>> drivers/hwspinlock/hwspinlock_internal.h | 2 +
>> 7 files changed, 108 insertions(+), 31 deletions(-)
>>
>> --
>> 2.7.4
>>
Thread overview: 23+ messages
2019-03-13 15:50 [PATCH 0/6] hwspinlock: allow sharing of hwspinlocks Fabien Dessenne
2019-03-13 15:50 ` [PATCH 1/6] dt-bindings: hwlock: add support of shared locks Fabien Dessenne
2019-03-28 15:24 ` Rob Herring
2019-03-13 15:50 ` [PATCH 2/6] hwspinlock: allow sharing of hwspinlocks Fabien Dessenne
2019-07-31 9:22 ` Loic PALLARDY
2019-03-13 15:50 ` [PATCH 3/6] dt-bindings: hwlock: update STM32 #hwlock-cells value Fabien Dessenne
2019-03-28 15:26 ` Rob Herring
2019-03-13 15:50 ` [PATCH 4/6] ARM: dts: stm32: Add hwspinlock node for stm32mp157 SoC Fabien Dessenne
2019-03-28 15:26 ` Rob Herring
2019-03-13 15:50 ` [PATCH 5/6] ARM: dts: stm32: Add hwlock for irqchip on stm32mp157 Fabien Dessenne
2019-03-13 15:50 ` [PATCH 6/6] ARM: dts: stm32: hwlocks for GPIO for stm32mp157 Fabien Dessenne
2019-08-01 19:14 ` [PATCH 0/6] hwspinlock: allow sharing of hwspinlocks Bjorn Andersson
2019-08-05 8:48 ` Fabien DESSENNE [this message]
2019-08-05 17:46 ` Bjorn Andersson
2019-08-06 7:43 ` Fabien DESSENNE
2019-08-06 17:38 ` Suman Anna
2019-08-06 18:21 ` Bjorn Andersson
2019-08-06 21:30 ` Suman Anna
2019-08-07 8:39 ` Fabien DESSENNE
2019-08-07 16:19 ` Suman Anna
2019-08-08 12:52 ` Fabien DESSENNE
2019-08-08 15:37 ` Bjorn Andersson
2019-08-26 13:30 ` Fabien DESSENNE