From: Maulik Shah <mkshah@codeaurora.org>
To: Evan Green <evgreen@chromium.org>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>,
Andy Gross <agross@kernel.org>,
linux-arm-msm <linux-arm-msm@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Stephen Boyd <swboyd@chromium.org>,
Doug Anderson <dianders@chromium.org>,
Rajendra Nayak <rnayak@codeaurora.org>,
Lina Iyer <ilina@codeaurora.org>,
lsrao@codeaurora.org
Subject: Re: [PATCH 2/3] soc: qcom: rpmh: Update rpm_msgs offset address and add list_del
Date: Wed, 5 Feb 2020 10:41:57 +0530
Message-ID: <7db81eed-d46d-8131-f471-6f57c0335ace@codeaurora.org>
In-Reply-To: <CAE=gft7gPS+hhnDP+uTn3is6s9=Nspbb4PL0bZ025Tq1Zpth8Q@mail.gmail.com>
On 2/5/2020 6:01 AM, Evan Green wrote:
> On Mon, Feb 3, 2020 at 10:14 PM Maulik Shah <mkshah@codeaurora.org> wrote:
>> rpm_msgs are copied into contiguously allocated memory during write_batch.
>> Update the request pointer to correctly point to the designated area for rpm_msgs.
>>
>> While at it, also add the missing list_del before freeing rpm_msgs.
>>
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> ---
>> drivers/soc/qcom/rpmh.c | 9 ++++++---
>> 1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index c3d6f00..04c7805 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -65,7 +65,7 @@ struct cache_req {
>> struct batch_cache_req {
>> struct list_head list;
>> int count;
>> - struct rpmh_request rpm_msgs[];
>> + struct rpmh_request *rpm_msgs;
>> };
>>
>> static struct rpmh_ctrlr *get_rpmh_ctrlr(const struct device *dev)
>> @@ -327,8 +327,10 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>> unsigned long flags;
>>
>> spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> - list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>> + list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) {
>> + list_del(&req->list);
>> kfree(req);
>> + }
>> INIT_LIST_HEAD(&ctrlr->batch_cache);
> Hm, I don't get it. list_for_each_entry_safe ensures you can traverse
> the list while freeing it behind you. ctrlr->batch_cache is now a
> bogus list, but is re-inited with the lock held. From my reading,
> there doesn't seem to be anything wrong with the current code. Can you
> elaborate on the bug you found?
Hi Evan,
Without list_del there can be an access to already freed memory: even
after the current item is freed via kfree(req), the next and prev items'
pointers still point into this freed region. It seems best to call
list_del to ensure that, before this area is freed, no other item in the
list refers to it.
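As a rough sketch of the pattern (illustrative only; free_batch_list()
is a hypothetical helper, assuming <linux/list.h> and <linux/slab.h>):

  static void free_batch_list(struct rpmh_ctrlr *ctrlr)
  {
          struct batch_cache_req *req, *tmp;

          list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) {
                  /* _safe caches the next node in tmp, so the walk
                   * itself is fine; list_del() additionally unlinks the
                   * node so that neither neighbour's next/prev still
                   * points into the allocation we are about to free.
                   */
                  list_del(&req->list);
                  kfree(req);
          }
  }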
>
>> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> }
>> @@ -377,10 +379,11 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>> return -ENOMEM;
>>
>> req = ptr;
>> + rpm_msgs = ptr + sizeof(*req);
>> compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
>>
>> req->count = count;
>> - rpm_msgs = req->rpm_msgs;
>> + req->rpm_msgs = rpm_msgs;
> I don't really understand what this is fixing either, can you explain?
The contiguous memory allocated below holds three items:

  ptr = kzalloc(sizeof(*req) + count * (sizeof(req->rpm_msgs[0]) +
                sizeof(*compls)), GFP_ATOMIC);

1. the batch_cache_req struct, followed by
2. count rpmh_request entries, followed by
3. count compls entries.

The current code picks up (3), the compls, at the proper offset in
memory:

  compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);

however for (2), the rpmh_request entries, it does

  rpm_msgs = req->rpm_msgs;

Because of this they start 8 bytes before their designated area and
overlap with the last member of (1), the batch_cache_req struct.

This patch corrects it as below, to ensure the rpmh_request entries use
the correct start address in memory:

  rpm_msgs = ptr + sizeof(*req);
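A sketch of the intended layout of the single kzalloc'd buffer (box
sizes are illustrative; only the offsets matter):

  ptr
  +------------------------------+  offset 0
  | struct batch_cache_req       |  <- req
  +------------------------------+  sizeof(*req)
  | count * struct rpmh_request  |  <- rpm_msgs
  +------------------------------+  sizeof(*req) + count * sizeof(*rpm_msgs)
  | count * struct completion    |  <- compls
  +------------------------------+

  req      = ptr;
  rpm_msgs = ptr + sizeof(*req);
  compls   = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);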
Hope this explains.
Thanks,
Maulik
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation