* [PATCH AUTOSEL 5.10 2/7] powerpc/64s: Disable stack variable initialisation for prom_init
From: Sasha Levin @ 2022-08-01 19:02 UTC
To: linux-kernel, stable
Cc: Sasha Levin, peterz, masahiroy, ast, npiggin, aneesh.kumar,
linuxppc-dev, Sudip Mukherjee, dja
From: Michael Ellerman <mpe@ellerman.id.au>
[ Upstream commit be640317a1d0b9cf42fedb2debc2887a7cfa38de ]
With GCC 12, allmodconfig prom_init fails to build:
Error: External symbol 'memset' referenced from prom_init.c
make[2]: *** [arch/powerpc/kernel/Makefile:204: arch/powerpc/kernel/prom_init_check] Error 1
The allmodconfig build enables KASAN, so all calls to memset in
prom_init should be converted to __memset by the #ifdefs in
asm/string.h, because prom_init must use the non-KASAN-instrumented
versions.
The build failure happens because there's a call to memset that hasn't
been caught by the pre-processor and converted to __memset. Typically
that's because it's a memset generated by the compiler itself, and that
is the case here.
With GCC 12, allmodconfig enables CONFIG_INIT_STACK_ALL_PATTERN, which
causes the compiler to emit memset calls to initialise on-stack
variables with a pattern.
Because prom_init is non-user-facing, boot-time-only code, just
disable stack variable initialisation for it as a workaround to
unbreak the build.
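As an illustration of the failure mode (a sketch, not code from this
patch; the file and function names are invented), a trivial
freestanding file built with GCC 12's pattern initialisation shows the
compiler materialising the stack fill as a memset() call even though
the source never mentions memset:

/* sketch.c -- illustrative only, not kernel code.
 * Build: gcc-12 -O2 -ffreestanding -ftrivial-auto-var-init=pattern -S sketch.c
 * The generated assembly typically contains a call to memset() that
 * pattern-fills "buf"; prom_init_check flags exactly this kind of
 * compiler-generated external reference.
 */
void consume(char *p);

void demo(void)
{
	char buf[1024];	/* trivial auto variable: pattern-initialised */

	consume(buf);
}

The $(call cc-option, ...) wrapper in the hunk below keeps the build
working on compilers that do not understand -ftrivial-auto-var-init=.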
Reported-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220718134418.354114-1-mpe@ellerman.id.au
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/powerpc/kernel/Makefile | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 376104c166fc..db2bdc4cec64 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -20,6 +20,7 @@ CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_prom_init.o += -fno-stack-protector
CFLAGS_prom_init.o += -DDISABLE_BRANCH_PROFILING
CFLAGS_prom_init.o += -ffreestanding
+CFLAGS_prom_init.o += $(call cc-option, -ftrivial-auto-var-init=uninitialized)
ifdef CONFIG_FUNCTION_TRACER
# Do not trace early boot code
--
2.35.1
* [PATCH AUTOSEL 5.10 3/7] spi: spi-rspi: Fix PIO fallback on RZ platforms
From: Sasha Levin @ 2022-08-01 19:02 UTC
To: linux-kernel, stable
Cc: Biju Das, Geert Uytterhoeven, Mark Brown, Sasha Levin, linux-spi
From: Biju Das <biju.das.jz@bp.renesas.com>
[ Upstream commit b620aa3a7be346f04ae7789b165937615c6ee8d3 ]
The RSPI IP on RZ/{A, G2L} SoCs uses the same signal for both the
interrupt and the DMA transfer request. Setting the DMARS register for
a DMA transfer makes the signal act as a DMA transfer request, and
subsequent interrupt requests to the interrupt controller are masked.
The PIO fallback then does not work, because the interrupt signal is
disabled.
This patch fixes the issue by re-enabling the interrupts with a call
to dmaengine_synchronize().
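The shape of the fix, sketched outside the driver (the helper name
here is invented; the real change is in the hunk below): after a
completed DMA transfer, synchronize each channel so the shared request
signal can raise interrupts again and a later PIO fallback still works.

#include <linux/dmaengine.h>

/* Sketch only -- the actual fix lives in rspi_dma_transfer() below.
 * dmaengine_synchronize() lets the dmaengine quiesce the channel,
 * which, per the commit message above, re-enables the interrupts.
 */
static void rspi_resync_dma(struct dma_chan *tx, struct dma_chan *rx)
{
	if (tx)
		dmaengine_synchronize(tx);
	if (rx)
		dmaengine_synchronize(rx);
}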
Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20220721143449.879257-1-biju.das.jz@bp.renesas.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
drivers/spi/spi-rspi.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
index ea03cc589e61..4600e3c9e49e 100644
--- a/drivers/spi/spi-rspi.c
+++ b/drivers/spi/spi-rspi.c
@@ -612,6 +612,10 @@ static int rspi_dma_transfer(struct rspi_data *rspi, struct sg_table *tx,
rspi->dma_callbacked, HZ);
if (ret > 0 && rspi->dma_callbacked) {
ret = 0;
+ if (tx)
+ dmaengine_synchronize(rspi->ctlr->dma_tx);
+ if (rx)
+ dmaengine_synchronize(rspi->ctlr->dma_rx);
} else {
if (!ret) {
dev_err(&rspi->ctlr->dev, "DMA timeout\n");
--
2.35.1
* [PATCH AUTOSEL 5.10 4/7] mmu_gather: Let there be one tlb_{start,end}_vma() implementation
From: Sasha Levin @ 2022-08-01 19:02 UTC
To: linux-kernel, stable
Cc: Peter Zijlstra, Will Deacon, Linus Torvalds, Sasha Levin,
aneesh.kumar, npiggin, linux-arch, linux-mm
From: Peter Zijlstra <peterz@infradead.org>
[ Upstream commit 18ba064e42df3661e196ab58a23931fc732a420b ]
Now that architectures are no longer allowed to override
tlb_{start,end}_vma(), rearrange the code so that there is only one
implementation of each of these functions.
This makes it much simpler to figure out what they actually do.
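For reference, the single remaining generic tlb_start_vma(),
reconstructed from the hunks below (elided context lines are an
assumption); after this change it is no longer wrapped in an #ifndef
that let architectures substitute their own:

static inline void tlb_start_vma(struct mmu_gather *tlb,
				 struct vm_area_struct *vma)
{
	if (tlb->fullmm)
		return;

	tlb_update_vma_flags(tlb, vma);
	flush_cache_range(vma, vma->vm_start, vma->vm_end);
}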
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
include/asm-generic/tlb.h | 15 ++-------------
1 file changed, 2 insertions(+), 13 deletions(-)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index a0c4b99d2899..a8112510522b 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -332,8 +332,8 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
#ifdef CONFIG_MMU_GATHER_NO_RANGE
-#if defined(tlb_flush) || defined(tlb_start_vma) || defined(tlb_end_vma)
-#error MMU_GATHER_NO_RANGE relies on default tlb_flush(), tlb_start_vma() and tlb_end_vma()
+#if defined(tlb_flush)
+#error MMU_GATHER_NO_RANGE relies on default tlb_flush()
#endif
/*
@@ -353,17 +353,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
static inline void
tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-#define tlb_end_vma tlb_end_vma
-static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
#else /* CONFIG_MMU_GATHER_NO_RANGE */
#ifndef tlb_flush
-#if defined(tlb_start_vma) || defined(tlb_end_vma)
-#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
-#endif
-
/*
* When an architecture does not provide its own tlb_flush() implementation
* but does have a reasonably efficient flush_vma_range() implementation
@@ -484,7 +477,6 @@ static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
* case where we're doing a full MM flush. When we're doing a munmap,
* the vmas are adjusted to only cover the region to be torn down.
*/
-#ifndef tlb_start_vma
static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
if (tlb->fullmm)
@@ -493,9 +485,7 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
tlb_update_vma_flags(tlb, vma);
flush_cache_range(vma, vma->vm_start, vma->vm_end);
}
-#endif
-#ifndef tlb_end_vma
static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
if (tlb->fullmm)
@@ -509,7 +499,6 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
*/
tlb_flush_mmu_tlbonly(tlb);
}
-#endif
/*
* tlb_flush_{pte|pmd|pud|p4d}_range() adjust the tlb->start and tlb->end,
--
2.35.1
* [PATCH AUTOSEL 5.10 5/7] mmu_gather: Force tlb-flush VM_PFNMAP vmas
From: Sasha Levin @ 2022-08-01 19:02 UTC
To: linux-kernel, stable
Cc: Peter Zijlstra, Will Deacon, Linus Torvalds, Sasha Levin,
aneesh.kumar, npiggin, linux-arch, linux-mm
From: Peter Zijlstra <peterz@infradead.org>
[ Upstream commit b67fbebd4cf980aecbcc750e1462128bffe8ae15 ]
Jann reported a race between munmap() and unmap_mapping_range(), where
unmap_mapping_range() becomes a no-op once unmap_vmas() has unlinked
the VMA, even though munmap() has not yet invalidated the TLBs.
unmap_mapping_range() can therefore complete while there are still
(stale) TLB entries for the specified range.
Mitigate this by force-flushing TLBs for VM_PFNMAP ranges.
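The resulting boundary flush, reconstructed from the hunks below
(elided context is an assumption): tlb_update_vma_flags() now records
whether the VMA maps raw PFNs, and tlb_end_vma() uses that bit to
force the flush.

static inline void tlb_end_vma(struct mmu_gather *tlb,
			       struct vm_area_struct *vma)
{
	if (tlb->fullmm)
		return;

	/*
	 * VM_PFNMAP ranges are fragile: the core mm does not track their
	 * page mapcount, so flush at the VMA boundary to close the
	 * munmap() vs unmap_mapping_range() race described above.
	 */
	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
		tlb_flush_mmu_tlbonly(tlb);
}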
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
include/asm-generic/tlb.h | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index a8112510522b..171bd6347f74 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -286,6 +286,7 @@ struct mmu_gather {
*/
unsigned int vma_exec : 1;
unsigned int vma_huge : 1;
+ unsigned int vma_pfn : 1;
unsigned int batch_count;
@@ -356,7 +357,6 @@ tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
#else /* CONFIG_MMU_GATHER_NO_RANGE */
#ifndef tlb_flush
-
/*
* When an architecture does not provide its own tlb_flush() implementation
* but does have a reasonably efficient flush_vma_range() implementation
@@ -376,6 +376,9 @@ static inline void tlb_flush(struct mmu_gather *tlb)
flush_tlb_range(&vma, tlb->start, tlb->end);
}
}
+#endif
+
+#endif /* CONFIG_MMU_GATHER_NO_RANGE */
static inline void
tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
@@ -393,17 +396,9 @@ tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
*/
tlb->vma_huge = is_vm_hugetlb_page(vma);
tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
+ tlb->vma_pfn = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));
}
-#else
-
-static inline void
-tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
-#endif
-
-#endif /* CONFIG_MMU_GATHER_NO_RANGE */
-
static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
{
/*
@@ -492,12 +487,18 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
return;
/*
- * Do a TLB flush and reset the range at VMA boundaries; this avoids
- * the ranges growing with the unused space between consecutive VMAs,
- * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
- * this.
+ * VM_PFNMAP is more fragile because the core mm will not track the
+ * page mapcount -- there might not be page-frames for these PFNs after
+ * all. Force flush TLBs for such ranges to avoid munmap() vs
+ * unmap_mapping_range() races.
*/
- tlb_flush_mmu_tlbonly(tlb);
+ if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
+ /*
+ * Do a TLB flush and reset the range at VMA boundaries; this avoids
+ * the ranges growing with the unused space between consecutive VMAs.
+ */
+ tlb_flush_mmu_tlbonly(tlb);
+ }
}
/*
--
2.35.1
* [PATCH AUTOSEL 5.10 6/7] netfilter: nf_tables: add rescheduling points during loop detection walks
From: Sasha Levin @ 2022-08-01 19:03 UTC
To: linux-kernel, stable
Cc: Florian Westphal, Pablo Neira Ayuso, Sasha Levin, kadlec, davem,
edumazet, kuba, pabeni, netfilter-devel, coreteam, netdev
From: Florian Westphal <fw@strlen.de>
[ Upstream commit 81ea010667417ef3f218dfd99b69769fe66c2b67 ]
Add explicit rescheduling points during the ruleset walk.
Switching to a faster algorithm is possible, but this is a much
smaller change, suitable for the nf tree.
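The pattern being applied, sketched generically (the types and names
here are invented; the real call sites are in the hunks below):

#include <linux/list.h>
#include <linux/sched.h>

struct rule {			/* hypothetical element type */
	struct list_head list;
};

/* Yield between iterations of a potentially very long walk so a huge
 * ruleset cannot monopolise the CPU during loop detection.
 */
static void walk_rules(struct list_head *rules)
{
	struct rule *r;

	list_for_each_entry(r, rules, list) {
		/* ... validate r ... */
		cond_resched();
	}
}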
Link: https://bugzilla.netfilter.org/show_bug.cgi?id=1460
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
net/netfilter/nf_tables_api.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index e5622e925ea9..1cc75ac2d9cc 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -3125,6 +3125,8 @@ int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain)
if (err < 0)
return err;
}
+
+ cond_resched();
}
return 0;
@@ -8419,9 +8421,13 @@ static int nf_tables_check_loops(const struct nft_ctx *ctx,
break;
}
}
+
+ cond_resched();
}
list_for_each_entry(set, &ctx->table->sets, list) {
+ cond_resched();
+
if (!nft_is_active_next(ctx->net, set))
continue;
if (!(set->flags & NFT_SET_MAP) ||
--
2.35.1
* [PATCH AUTOSEL 5.10 7/7] ARM: findbit: fix overflowing offset
From: Sasha Levin @ 2022-08-01 19:03 UTC
To: linux-kernel, stable
Cc: Russell King (Oracle),
Guenter Roeck, Sasha Levin, linux, linux-arm-kernel
From: "Russell King (Oracle)" <rmk+kernel@armlinux.org.uk>
[ Upstream commit ec85bd369fd2bfaed6f45dd678706429d4f75b48 ]
When offset is larger than the size of the bit array, we must not
access the array, as doing so reads beyond its end. Fix this by
changing the pre-condition.
Using "cmp r2, r1; bhs ..." also covers the size == 0 case, since the
branch is always taken when r1 is zero, irrespective of the value of
r2. This means the bug can be fixed without adding any additional
code!
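A C analogue of the new pre-condition (a sketch to show why one
unsigned comparison is enough; the function and names are invented):

/* offset and maxbit mirror r2 and r1.  Because both are unsigned,
 * "offset >= maxbit" is true for every offset when maxbit == 0, so the
 * single comparison also subsumes the old "teq r1, #0; beq 3b"
 * zero-size check.
 */
static unsigned int findbit_guard(unsigned int maxbit, unsigned int offset)
{
	if (offset >= maxbit)	/* cmp r2, r1; bhs 3b */
		return maxbit;	/* out of range, or empty array */

	/* ... scan the bit array from offset ... */
	return offset;
}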
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/arm/lib/findbit.S | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/arm/lib/findbit.S b/arch/arm/lib/findbit.S
index b5e8b9ae4c7d..7fd3600db8ef 100644
--- a/arch/arm/lib/findbit.S
+++ b/arch/arm/lib/findbit.S
@@ -40,8 +40,8 @@ ENDPROC(_find_first_zero_bit_le)
* Prototype: int find_next_zero_bit(void *addr, unsigned int maxbit, int offset)
*/
ENTRY(_find_next_zero_bit_le)
- teq r1, #0
- beq 3b
+ cmp r2, r1
+ bhs 3b
ands ip, r2, #7
beq 1b @ If new byte, goto old routine
ARM( ldrb r3, [r0, r2, lsr #3] )
@@ -81,8 +81,8 @@ ENDPROC(_find_first_bit_le)
* Prototype: int find_next_zero_bit(void *addr, unsigned int maxbit, int offset)
*/
ENTRY(_find_next_bit_le)
- teq r1, #0
- beq 3b
+ cmp r2, r1
+ bhs 3b
ands ip, r2, #7
beq 1b @ If new byte, goto old routine
ARM( ldrb r3, [r0, r2, lsr #3] )
@@ -115,8 +115,8 @@ ENTRY(_find_first_zero_bit_be)
ENDPROC(_find_first_zero_bit_be)
ENTRY(_find_next_zero_bit_be)
- teq r1, #0
- beq 3b
+ cmp r2, r1
+ bhs 3b
ands ip, r2, #7
beq 1b @ If new byte, goto old routine
eor r3, r2, #0x18 @ big endian byte ordering
@@ -149,8 +149,8 @@ ENTRY(_find_first_bit_be)
ENDPROC(_find_first_bit_be)
ENTRY(_find_next_bit_be)
- teq r1, #0
- beq 3b
+ cmp r2, r1
+ bhs 3b
ands ip, r2, #7
beq 1b @ If new byte, goto old routine
eor r3, r2, #0x18 @ big endian byte ordering
--
2.35.1