From: Lina Iyer <ilina@codeaurora.org>
To: Stephen Boyd <swboyd@chromium.org>
Cc: agross@kernel.org, bjorn.andersson@linaro.org,
	linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
	rnayak@codeaurora.org, linux-kernel@vger.kernel.org,
	linux-pm@vger.kernel.org, dianders@chromium.org,
	mkshah@codeaurora.org,
	"Raju P.L.S.S.S.N" <rplsssn@codeaurora.org>
Subject: Re: [PATCH V2 1/4] drivers: qcom: rpmh-rsc: simplify TCS locking
Date: Tue, 23 Jul 2019 13:21:59 -0600	[thread overview]
Message-ID: <20190723192159.GA18620@codeaurora.org> (raw)
In-Reply-To: <5d375054.1c69fb81.7ce3f.3591@mx.google.com>

On Tue, Jul 23 2019 at 12:22 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2019-07-22 14:53:37)
>> From: "Raju P.L.S.S.S.N" <rplsssn@codeaurora.org>
>>
>> The tcs->lock was introduced to serialize access within a TCS group. But
>> drv->lock is still needed to synchronize core aspects of the
>> communication. This puts drv->lock in the critical, high-latency path
>> of sending a request. drv->lock provides all the necessary
>> synchronization, so remove the locking around the TCS group and simply
>> use drv->lock instead.
>
>This doesn't talk about removing the irq saving and restoring though.
You mean for drv->lock? It was not an _irqsave/_irqrestore variant anyway,
and this patch only removes the tcs->lock.

>Can you keep irq saving and restoring in this patch and then remove that
>in the next patch with reasoning? It probably isn't safe if the lock is
>taken in interrupt context anyway.
>
Yes, drv->lock should have been using irqsave/irqrestore, but that is not
changed by this patch.
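
For illustration, a minimal sketch of what keeping the irq save/restore in
this patch could look like for tcs_write(), taking drv->lock with the
flags-based variants (all names follow the hunk below; the rest of the
function is assumed unchanged):

static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
{
	struct tcs_group *tcs;
	int tcs_id;
	unsigned long flags;
	int ret;

	tcs = get_tcs_for_msg(drv, msg);
	if (IS_ERR(tcs))
		return PTR_ERR(tcs);

	/* Keep hard irqs masked for the whole critical section, as before */
	spin_lock_irqsave(&drv->lock, flags);
	ret = check_for_req_inflight(drv, tcs, msg);
	if (ret)
		goto done_write;

	tcs_id = find_free_tcs(tcs);
	if (tcs_id < 0) {
		ret = tcs_id;
		goto done_write;
	}

	tcs->req[tcs_id - tcs->offset] = msg;
	set_bit(tcs_id, drv->tcs_in_use);

	__tcs_buffer_write(drv, tcs_id, 0, msg);
	__tcs_trigger(drv, tcs_id);

done_write:
	spin_unlock_irqrestore(&drv->lock, flags);
	return ret;
}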

>>
>> Signed-off-by: Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
>> [ilina: split patch into multiple files, update commit text]
>> Signed-off-by: Lina Iyer <ilina@codeaurora.org>
>
>> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
>> index a7bbbb67991c..969d5030860e 100644
>> --- a/drivers/soc/qcom/rpmh-internal.h
>> +++ b/drivers/soc/qcom/rpmh-internal.h
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index e278fc11fe5c..5ede8d6de3ad 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>> @@ -106,26 +106,26 @@ static int tcs_invalidate(struct rsc_drv *drv, int type)
>>  {
>>         int m;
>>         struct tcs_group *tcs;
>> +       int ret = 0;
>>
>>         tcs = get_tcs_of_type(drv, type);
>>
>> -       spin_lock(&tcs->lock);
>> -       if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) {
>> -               spin_unlock(&tcs->lock);
>> -               return 0;
>> -       }
>> +       spin_lock(&drv->lock);
>> +       if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS))
>> +               goto done_invalidate;
>>
>>         for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
>>                 if (!tcs_is_free(drv, m)) {
>> -                       spin_unlock(&tcs->lock);
>> -                       return -EAGAIN;
>> +                       ret = -EAGAIN;
>> +                       goto done_invalidate;
>>                 }
>>                 write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0);
>>                 write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
>>         }
>>         bitmap_zero(tcs->slots, MAX_TCS_SLOTS);
>> -       spin_unlock(&tcs->lock);
>>
>> +done_invalidate:
>> +       spin_unlock(&drv->lock);
>>         return 0;
>
>return ret now?
>
Yes, will do.
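
For reference, a minimal sketch of tcs_invalidate() with that fix applied,
returning ret so the -EAGAIN case propagates (everything else as in the
hunk above):

static int tcs_invalidate(struct rsc_drv *drv, int type)
{
	int m;
	struct tcs_group *tcs;
	int ret = 0;

	tcs = get_tcs_of_type(drv, type);

	spin_lock(&drv->lock);
	if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS))
		goto done_invalidate;

	for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
		if (!tcs_is_free(drv, m)) {
			ret = -EAGAIN;
			goto done_invalidate;
		}
		write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0);
		write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0);
	}
	bitmap_zero(tcs->slots, MAX_TCS_SLOTS);

done_invalidate:
	spin_unlock(&drv->lock);
	return ret;	/* was "return 0", which hid the -EAGAIN result */
}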
>>  }
>>
>> @@ -349,41 +349,35 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
>>  {
>>         struct tcs_group *tcs;
>>         int tcs_id;
>> -       unsigned long flags;
>>         int ret;
>>
>>         tcs = get_tcs_for_msg(drv, msg);
>>         if (IS_ERR(tcs))
>>                 return PTR_ERR(tcs);
>>
>> -       spin_lock_irqsave(&tcs->lock, flags);
>>         spin_lock(&drv->lock);
>>         /*
>>          * The h/w does not like if we send a request to the same address,
>>          * when one is already in-flight or being processed.
>>          */
>>         ret = check_for_req_inflight(drv, tcs, msg);
>> -       if (ret) {
>> -               spin_unlock(&drv->lock);
>> +       if (ret)
>>                 goto done_write;
>> -       }
>>
>>         tcs_id = find_free_tcs(tcs);
>>         if (tcs_id < 0) {
>>                 ret = tcs_id;
>> -               spin_unlock(&drv->lock);
>>                 goto done_write;
>>         }
>>
>>         tcs->req[tcs_id - tcs->offset] = msg;
>>         set_bit(tcs_id, drv->tcs_in_use);
>> -       spin_unlock(&drv->lock);
>>
>>         __tcs_buffer_write(drv, tcs_id, 0, msg);
>>         __tcs_trigger(drv, tcs_id);
>>
>>  done_write:
>> -       spin_unlock_irqrestore(&tcs->lock, flags);
>> +       spin_unlock(&drv->lock);
>>         return ret;
>>  }
>>
>> @@ -481,19 +475,18 @@ static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg)
>>  {
>>         struct tcs_group *tcs;
>>         int tcs_id = 0, cmd_id = 0;
>> -       unsigned long flags;
>>         int ret;
>>
>>         tcs = get_tcs_for_msg(drv, msg);
>>         if (IS_ERR(tcs))
>>                 return PTR_ERR(tcs);
>>
>> -       spin_lock_irqsave(&tcs->lock, flags);
>> +       spin_lock(&drv->lock);
>>         /* find the TCS id and the command in the TCS to write to */
>>         ret = find_slots(tcs, msg, &tcs_id, &cmd_id);
>>         if (!ret)
>>                 __tcs_buffer_write(drv, tcs_id, cmd_id, msg);
>> -       spin_unlock_irqrestore(&tcs->lock, flags);
>> +       spin_unlock(&drv->lock);
>>
>
>These ones, just leave them doing the irq save restore for now?
>
You mean drv->lock?
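
If the suggestion is to keep the flags-based locking here as well for now, a
minimal sketch of tcs_ctrl_write() under that assumption (names taken from
the hunk above; everything else, including the final return, is assumed
unchanged):

static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg)
{
	struct tcs_group *tcs;
	int tcs_id = 0, cmd_id = 0;
	unsigned long flags;
	int ret;

	tcs = get_tcs_for_msg(drv, msg);
	if (IS_ERR(tcs))
		return PTR_ERR(tcs);

	/* Same critical section, but with hard irqs masked as before */
	spin_lock_irqsave(&drv->lock, flags);
	/* find the TCS id and the command in the TCS to write to */
	ret = find_slots(tcs, msg, &tcs_id, &cmd_id);
	if (!ret)
		__tcs_buffer_write(drv, tcs_id, cmd_id, msg);
	spin_unlock_irqrestore(&drv->lock, flags);

	return ret;
}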

--Lina


