From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, kuba@kernel.org
Cc: bjorn.andersson@linaro.org, evgreen@chromium.org,
cpratapa@codeaurora.org, subashab@codeaurora.org,
elder@kernel.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: [PATCH net-next 1/8] net: ipa: don't assume mem array indexed by ID
Date: Thu, 10 Jun 2021 14:23:01 -0500
Message-ID: <20210610192308.2739540-2-elder@linaro.org>
In-Reply-To: <20210610192308.2739540-1-elder@linaro.org>
Change ipa_mem_valid() to iterate over the entries using a u32 index
variable rather than using a memory region ID. Use the ID found
inside the memory descriptor rather than the loop index.
Change ipa_mem_size_valid() to iterate over the entries without
assuming the array index is the memory region ID. "Empty" entries
will have zero size; we'll temporarily assume such entries have
zero offset as well (they all do, currently).
Similarly, don't assume the mem[] array is indexed by ID in
ipa_mem_config(). There, "empty" entries will have a zero canary
count, so no special assumptions are needed to handle them correctly.
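
For reference, the change relies on each region descriptor carrying
its own ID. A rough sketch of the descriptor (illustrative only; see
ipa_mem.h for the authoritative definition):

	/* A region of IPA-local memory.  The id field, not the
	 * descriptor's position in the mem[] array, identifies
	 * the region.
	 */
	struct ipa_mem {
		enum ipa_mem_id id;	/* region identifier */
		u32 offset;		/* offset in IPA shared memory */
		u16 size;		/* region size, in bytes */
		u16 canary_count;	/* # of 32-bit "canary" values */
	};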
Signed-off-by: Alex Elder <elder@linaro.org>
---
drivers/net/ipa/ipa_mem.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
index ef9fdd3b88750..9e504ec278179 100644
--- a/drivers/net/ipa/ipa_mem.c
+++ b/drivers/net/ipa/ipa_mem.c
@@ -220,6 +220,7 @@ static bool ipa_mem_valid(struct ipa *ipa, const struct ipa_mem_data *mem_data)
 	DECLARE_BITMAP(regions, IPA_MEM_COUNT) = { };
 	struct device *dev = &ipa->pdev->dev;
 	enum ipa_mem_id mem_id;
+	u32 i;
 
 	if (mem_data->local_count > IPA_MEM_COUNT) {
 		dev_err(dev, "too many memory regions (%u > %u)\n",
@@ -227,10 +228,10 @@ static bool ipa_mem_valid(struct ipa *ipa, const struct ipa_mem_data *mem_data)
 		return false;
 	}
 
-	for (mem_id = 0; mem_id < mem_data->local_count; mem_id++) {
-		const struct ipa_mem *mem = &mem_data->local[mem_id];
+	for (i = 0; i < mem_data->local_count; i++) {
+		const struct ipa_mem *mem = &mem_data->local[i];
 
-		if (mem_id == IPA_MEM_UNDEFINED)
+		if (mem->id == IPA_MEM_UNDEFINED)
 			continue;
 
 		if (__test_and_set_bit(mem->id, regions)) {
@@ -248,7 +249,7 @@ static bool ipa_mem_valid(struct ipa *ipa, const struct ipa_mem_data *mem_data)
 		/* It's harmless, but warn if an offset is provided */
 		if (mem->offset)
 			dev_warn(dev, "empty region %u has non-zero offset\n",
-				 mem_id);
+				 mem->id);
 	}
 
 	/* Now see if any required regions are not defined */
@@ -268,16 +269,16 @@ static bool ipa_mem_size_valid(struct ipa *ipa)
 {
 	struct device *dev = &ipa->pdev->dev;
 	u32 limit = ipa->mem_size;
-	enum ipa_mem_id mem_id;
+	u32 i;
 
-	for (mem_id = 0; mem_id < ipa->mem_count; mem_id++) {
-		const struct ipa_mem *mem = &ipa->mem[mem_id];
+	for (i = 0; i < ipa->mem_count; i++) {
+		const struct ipa_mem *mem = &ipa->mem[i];
 
 		if (mem->offset + mem->size <= limit)
 			continue;
 
 		dev_err(dev, "region %u ends beyond memory limit (0x%08x)\n",
-			mem_id, limit);
+			mem->id, limit);
 
 		return false;
 	}
@@ -294,11 +295,11 @@ static bool ipa_mem_size_valid(struct ipa *ipa)
 int ipa_mem_config(struct ipa *ipa)
 {
 	struct device *dev = &ipa->pdev->dev;
-	enum ipa_mem_id mem_id;
 	dma_addr_t addr;
 	u32 mem_size;
 	void *virt;
 	u32 val;
+	u32 i;
 
 	/* Check the advertised location and size of the shared memory area */
 	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
@@ -330,11 +331,11 @@ int ipa_mem_config(struct ipa *ipa)
 	ipa->zero_virt = virt;
 	ipa->zero_size = IPA_MEM_MAX;
 
-	/* For each region, write "canary" values in the space prior to
-	 * the region's base address if indicated.
+	/* For each defined region, write "canary" values in the
+	 * space prior to the region's base address if indicated.
 	 */
-	for (mem_id = 0; mem_id < ipa->mem_count; mem_id++) {
-		const struct ipa_mem *mem = &ipa->mem[mem_id];
+	for (i = 0; i < ipa->mem_count; i++) {
+		const struct ipa_mem *mem = &ipa->mem[i];
 		u16 canary_count;
 		__le32 *canary;
 
--
2.27.0