* [PATCH] mm: kfence: Fix false positives on big endian
@ 2023-05-05  3:51 ` Michael Ellerman
  0 siblings, 0 replies; 17+ messages in thread
From: Michael Ellerman @ 2023-05-05  3:51 UTC (permalink / raw)
  To: glider, elver, akpm, zhangpeng.00; +Cc: linux-kernel, linux-mm, linuxppc-dev

Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
__kfence_alloc() and __kfence_free()"), kfence reports failures in
random places at boot on big endian machines.

The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
address of each byte in its value, so it needs to be byte swapped on big
endian machines.

The compiler is smart enough to do the le64_to_cpu() at compile time, so
there is no runtime overhead.

Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 mm/kfence/kfence.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 2aafc46a4aaf..392fb273e7bd 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -29,7 +29,7 @@
  * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
  * at a time instead of byte by byte to improve performance.
  */
-#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
+#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
 
 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64
-- 
2.40.1
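The mismatch described in the commit message can be reproduced with a standalone userspace sketch. This is a hedged approximation, not kernel code: `canary_byte()` mirrors the kernel's per-byte `KFENCE_CANARY_PATTERN` macro, and all names here are illustrative.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Per-byte canary, as in mm/kfence/kfence.h: 0xaa XORed with the low
 * three bits of the byte's address. */
static uint8_t canary_byte(uintptr_t addr)
{
	return (uint8_t)0xaa ^ (uint8_t)(addr & 0x7);
}

/* The 64-bit pattern from the patch: it matches eight canary bytes
 * loaded as one native u64 only on little-endian CPUs, which is why
 * the fix wraps the index constant in le64_to_cpu(). */
#define CANARY_PATTERN_U64 \
	((uint64_t)0xaaaaaaaaaaaaaaaaULL ^ (uint64_t)0x0706050403020100ULL)

/* Fill eight canary bytes for addresses 0..7 and load them the way
 * the kernel's fast path does: one native-endian 64-bit read. */
static uint64_t load_canary_u64(void)
{
	uint8_t buf[8];
	uint64_t v;
	uintptr_t i;

	for (i = 0; i < 8; i++)
		buf[i] = canary_byte(i);
	memcpy(&v, buf, sizeof(v));
	return v;
}

static int host_is_little_endian(void)
{
	uint16_t one = 1;
	uint8_t first;

	memcpy(&first, &one, sizeof(first));
	return first == 1;
}
```

On a little-endian host the raw load equals `CANARY_PATTERN_U64`; on a big-endian host it is the byte-swapped value, which is exactly what the unfixed comparison tripped over at boot.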


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH] mm: kfence: Fix false positives on big endian
  2023-05-05  3:51 ` Michael Ellerman
@ 2023-05-05  7:14   ` Alexander Potapenko
  -1 siblings, 0 replies; 17+ messages in thread
From: Alexander Potapenko @ 2023-05-05  7:14 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: elver, akpm, zhangpeng.00, linux-kernel, linux-mm, linuxppc-dev

On Fri, May 5, 2023 at 5:51 AM Michael Ellerman <mpe@ellerman.id.au> wrote:

> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
>
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
>
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
>
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>
Reviewed-by: Alexander Potapenko <glider@google.com>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] mm: kfence: Fix false positives on big endian
  2023-05-05  3:51 ` Michael Ellerman
@ 2023-05-05  7:43   ` Marco Elver
  -1 siblings, 0 replies; 17+ messages in thread
From: Marco Elver @ 2023-05-05  7:43 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: glider, akpm, zhangpeng.00, linux-kernel, linux-mm, linuxppc-dev

On Fri, 5 May 2023 at 05:51, Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
>
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
>
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
>
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Reviewed-by: Marco Elver <elver@google.com>

Andrew, is the Fixes enough to make it to stable as well or do we also
need Cc: stable?

Thanks,
-- Marco

> ---
>  mm/kfence/kfence.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 2aafc46a4aaf..392fb273e7bd 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,7 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>
>  /* Maximum stack depth for reports. */
>  #define KFENCE_STACK_DEPTH 64
> --
> 2.40.1
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] mm: kfence: Fix false positives on big endian
  2023-05-05  7:43   ` Marco Elver
@ 2023-05-05 11:56     ` Michael Ellerman
  -1 siblings, 0 replies; 17+ messages in thread
From: Michael Ellerman @ 2023-05-05 11:56 UTC (permalink / raw)
  To: Marco Elver
  Cc: glider, akpm, zhangpeng.00, linux-kernel, linux-mm, linuxppc-dev

Marco Elver <elver@google.com> writes:
> On Fri, 5 May 2023 at 05:51, Michael Ellerman <mpe@ellerman.id.au> wrote:
>>
>> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>> __kfence_alloc() and __kfence_free()"), kfence reports failures in
>> random places at boot on big endian machines.
>>
>> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>> address of each byte in its value, so it needs to be byte swapped on big
>> endian machines.
>>
>> The compiler is smart enough to do the le64_to_cpu() at compile time, so
>> there is no runtime overhead.
>>
>> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>
> Reviewed-by: Marco Elver <elver@google.com>

Thanks.

> Andrew, is the Fixes enough to make it to stable as well or do we also
> need Cc: stable?

That commit is not in any releases yet (or even an rc), so as long as it
gets picked up before v6.4 then it won't need to go to stable.

cheers

^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [PATCH] mm: kfence: Fix false positives on big endian
  2023-05-05  3:51 ` Michael Ellerman
@ 2023-05-05 16:02   ` David Laight
  -1 siblings, 0 replies; 17+ messages in thread
From: David Laight @ 2023-05-05 16:02 UTC (permalink / raw)
  To: 'Michael Ellerman', glider, elver, akpm, zhangpeng.00
  Cc: linux-kernel, linux-mm, linuxppc-dev

From: Michael Ellerman
> Sent: 05 May 2023 04:51
> 
> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
> 
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
> 
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
> 
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  mm/kfence/kfence.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 2aafc46a4aaf..392fb273e7bd 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,7 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))

What are the (u64) casts for?
The constants should probably have a ul (or ull) suffix.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
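On the first question: on an LP64 target the unsuffixed constant already ends up with a 64-bit unsigned type, because a hexadecimal integer constant takes the first type in the ladder int → unsigned int → long → unsigned long → long long → unsigned long long that can represent its value. The `(u64)` casts are therefore redundant there, and a `ULL` suffix states the intent more directly. A small userspace check (illustrative macro names, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Unsuffixed: the compiler picks the type from the value.
 * Suffixed: explicitly unsigned long long on every target. */
#define CANARY_XOR_UNSUFFIXED 0xaaaaaaaaaaaaaaaa
#define CANARY_XOR_SUFFIXED   0xaaaaaaaaaaaaaaaaULL

/* 0xaaaaaaaaaaaaaaaa does not fit in any signed 64-bit type, so the
 * unsuffixed form is at least unsigned long (LP64) or unsigned long
 * long (ILP32) - 8 bytes either way. */
_Static_assert(sizeof(CANARY_XOR_UNSUFFIXED) == sizeof(uint64_t),
	       "unsuffixed constant is already 64-bit");
```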


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] mm: kfence: Fix false positives on big endian
  2023-05-05 16:02   ` David Laight
@ 2023-05-17 22:20     ` Andrew Morton
  -1 siblings, 0 replies; 17+ messages in thread
From: Andrew Morton @ 2023-05-17 22:20 UTC (permalink / raw)
  To: David Laight
  Cc: 'Michael Ellerman',
	glider, elver, zhangpeng.00, linux-kernel, linux-mm,
	linuxppc-dev

On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:

> From: Michael Ellerman
> > Sent: 05 May 2023 04:51
> > 
> > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> > __kfence_alloc() and __kfence_free()"), kfence reports failures in
> > random places at boot on big endian machines.
> > 
> > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> > address of each byte in its value, so it needs to be byte swapped on big
> > endian machines.
> > 
> > The compiler is smart enough to do the le64_to_cpu() at compile time, so
> > there is no runtime overhead.
> > 
> > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> > ---
> >  mm/kfence/kfence.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> > index 2aafc46a4aaf..392fb273e7bd 100644
> > --- a/mm/kfence/kfence.h
> > +++ b/mm/kfence/kfence.h
> > @@ -29,7 +29,7 @@
> >   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
> >   * at a time instead of byte by byte to improve performance.
> >   */
> > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
> > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
> 
> What are the (u64) casts for?
> The constants should probably have a ul (or ull) suffix.
> 

I tried that, didn't fix the sparse warnings described at
https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.

Michael, have you looked into this?

I'll merge it upstream - I guess we can live with the warnings for a while.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] mm: kfence: Fix false positives on big endian
  2023-05-17 22:20     ` Andrew Morton
@ 2023-05-19  5:14       ` Michael Ellerman
  -1 siblings, 0 replies; 17+ messages in thread
From: Michael Ellerman @ 2023-05-19  5:14 UTC (permalink / raw)
  To: Andrew Morton, David Laight
  Cc: glider, elver, zhangpeng.00, linux-kernel, linux-mm, linuxppc-dev

Andrew Morton <akpm@linux-foundation.org> writes:
> On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:
>
>> From: Michael Ellerman
>> > Sent: 05 May 2023 04:51
>> > 
>> > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>> > __kfence_alloc() and __kfence_free()"), kfence reports failures in
>> > random places at boot on big endian machines.
>> > 
>> > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>> > address of each byte in its value, so it needs to be byte swapped on big
>> > endian machines.
>> > 
>> > The compiler is smart enough to do the le64_to_cpu() at compile time, so
>> > there is no runtime overhead.
>> > 
>> > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>> > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>> > ---
>> >  mm/kfence/kfence.h | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> > 
>> > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
>> > index 2aafc46a4aaf..392fb273e7bd 100644
>> > --- a/mm/kfence/kfence.h
>> > +++ b/mm/kfence/kfence.h
>> > @@ -29,7 +29,7 @@
>> >   * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>> >   * at a time instead of byte by byte to improve performance.
>> >   */
>> > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
>> > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>> 
>> What are the (u64) casts for?
>> The constants should probably have a ul (or ull) suffix.
>> 
>
> I tried that, didn't fix the sparse warnings described at
> https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
>
> Michael, have you looked into this?

I haven't sorry, been chasing other bugs.

> I'll merge it upstream - I guess we can live with the warnings for a while.

Thanks, yeah spurious WARNs are more of a pain than some sparse warnings.

Maybe using le64_to_cpu() is too fancy, could just do it with an ifdef? eg.

diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 392fb273e7bd..510355a5382b 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -29,7 +29,11 @@
  * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
  * at a time instead of byte by byte to improve performance.
  */
-#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
+#ifdef __LITTLE_ENDIAN__
+#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0706050403020100ULL)
+#else
+#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ 0x0001020304050607ULL)
+#endif
 
 /* Maximum stack depth for reports. */
 #define KFENCE_STACK_DEPTH 64


cheers
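The two branches of the proposed #ifdef are byte-swapped images of each other, which is all le64_to_cpu() does on a big-endian kernel. A quick userspace check of that equivalence (using the GCC/Clang `__builtin_bswap64` builtin; not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Index patterns from the two #ifdef branches in the diff above:
 * byte i of the canary is XORed with i, counted from the low (LE)
 * or high (BE) end of the 64-bit word respectively. */
#define LE_INDEX_PATTERN 0x0706050403020100ULL
#define BE_INDEX_PATTERN 0x0001020304050607ULL
```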

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH] mm: kfence: Fix false positives on big endian
  2023-05-17 22:20     ` Andrew Morton
@ 2023-05-19  5:40       ` Christophe Leroy
  -1 siblings, 0 replies; 17+ messages in thread
From: Christophe Leroy @ 2023-05-19  5:40 UTC (permalink / raw)
  To: Andrew Morton, David Laight
  Cc: 'Michael Ellerman',
	glider, elver, zhangpeng.00, linux-kernel, linux-mm,
	linuxppc-dev



On 18/05/2023 at 00:20, Andrew Morton wrote:
> On Fri, 5 May 2023 16:02:17 +0000 David Laight <David.Laight@ACULAB.COM> wrote:
> 
>> From: Michael Ellerman
>>> Sent: 05 May 2023 04:51
>>>
>>> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
>>> __kfence_alloc() and __kfence_free()"), kfence reports failures in
>>> random places at boot on big endian machines.
>>>
>>> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
>>> address of each byte in its value, so it needs to be byte swapped on big
>>> endian machines.
>>>
>>> The compiler is smart enough to do the le64_to_cpu() at compile time, so
>>> there is no runtime overhead.
>>>
>>> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
>>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>>> ---
>>>   mm/kfence/kfence.h | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
>>> index 2aafc46a4aaf..392fb273e7bd 100644
>>> --- a/mm/kfence/kfence.h
>>> +++ b/mm/kfence/kfence.h
>>> @@ -29,7 +29,7 @@
>>>    * canary of every 8 bytes is the same. 64-bit memory can be filled and checked
>>>    * at a time instead of byte by byte to improve performance.
>>>    */
>>> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(0x0706050403020100))
>>> +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))
>>
>> What are the (u64) casts for?
>> The constants should probably have a ul (or ull) suffix.
>>
> 
> I tried that, didn't fix the sparse warnings described at
> https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
> 
> Michael, have you looked into this?
> 
> I'll merge it upstream - I guess we can live with the warnings for a while.
> 

The sparse warning goes away with:

#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^ le64_to_cpu((__force __le64)0x0706050403020100))

Christophe
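For a plain compiler the sparse annotations are no-ops, so this version computes the same pattern as before; only sparse's bitwise type checking sees the difference. A hedged userspace mock of that point (the `mock_le64`, `__force`, and `mock_le64_to_cpu` definitions below imitate, and are not, the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Under sparse, __le64 is a distinct "bitwise" type and __force
 * suppresses the conversion warning; for an ordinary compiler both
 * reduce to nothing, as mocked here. */
typedef uint64_t mock_le64;
#define __force /* no-op outside sparse */

static uint64_t mock_le64_to_cpu(mock_le64 v)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	return __builtin_bswap64(v);	/* big-endian: swap to CPU order */
#else
	return v;			/* little-endian: identity */
#endif
}

#define KFENCE_CANARY_PATTERN_U64 \
	(0xaaaaaaaaaaaaaaaaULL ^ mock_le64_to_cpu((__force mock_le64)0x0706050403020100))
```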

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] mm: kfence: Fix false positives on big endian
  2023-05-19  5:14       ` Michael Ellerman
  (?)
@ 2023-05-19  6:29       ` Benjamin Gray
  -1 siblings, 0 replies; 17+ messages in thread
From: Benjamin Gray @ 2023-05-19  6:29 UTC (permalink / raw)
  To: Michael Ellerman, Andrew Morton, David Laight
  Cc: zhangpeng.00, elver, linux-kernel, linux-mm, glider, linuxppc-dev

On Fri, 2023-05-19 at 15:14 +1000, Michael Ellerman wrote:
> Andrew Morton <akpm@linux-foundation.org> writes:
> > On Fri, 5 May 2023 16:02:17 +0000 David Laight
> > <David.Laight@ACULAB.COM> wrote:
> > 
> > > From: Michael Ellerman
> > > > Sent: 05 May 2023 04:51
> > > > 
> > > > Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance
> > > > of
> > > > __kfence_alloc() and __kfence_free()"), kfence reports failures
> > > > in
> > > > random places at boot on big endian machines.
> > > > 
> > > > The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes
> > > > the
> > > > address of each byte in its value, so it needs to be byte
> > > > swapped on big
> > > > endian machines.
> > > > 
> > > > The compiler is smart enough to do the le64_to_cpu() at compile
> > > > time, so
> > > > there is no runtime overhead.
> > > > 
> > > > Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> > > > __kfence_alloc() and __kfence_free()")
> > > > Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> > > > ---
> > > >  mm/kfence/kfence.h | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > 
> > > > diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> > > > index 2aafc46a4aaf..392fb273e7bd 100644
> > > > --- a/mm/kfence/kfence.h
> > > > +++ b/mm/kfence/kfence.h
> > > > @@ -29,7 +29,7 @@
> > > >   * canary of every 8 bytes is the same. 64-bit memory can be
> > > > filled and checked
> > > >   * at a time instead of byte by byte to improve performance.
> > > >   */
> > > > -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^
> > > > (u64)(0x0706050403020100))
> > > > +#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^
> > > > (u64)(le64_to_cpu(0x0706050403020100)))
> > > 
> > > What are the (u64) casts for?
> > > The constants should probably have a ul (or ull) suffix.
> > > 
> > 
> > I tried that, didn't fix the sparse warnings described at
> > https://lkml.kernel.org/r/202305132244.DwzBUcUd-lkp@intel.com.
> > 
> > Michael, have you looked into this?
> 
> I haven't sorry, been chasing other bugs.
> 
> > I'll merge it upstream - I guess we can live with the warnings for
> > a while.
> 
> Thanks, yeah spurious WARNs are more of a pain than some sparse
> warnings.
> 
> Maybe using le64_to_cpu() is too fancy, could just do it with an
> ifdef? eg.
> 
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 392fb273e7bd..510355a5382b 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -29,7 +29,11 @@
>   * canary of every 8 bytes is the same. 64-bit memory can be filled
> and checked
>   * at a time instead of byte by byte to improve performance.
>   */
> -#define KFENCE_CANARY_PATTERN_U64 ((u64)0xaaaaaaaaaaaaaaaa ^
> (u64)(le64_to_cpu(0x0706050403020100)))
> +#ifdef __LITTLE_ENDIAN__
> +#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^
> 0x0706050403020100ULL)
> +#else
> +#define KFENCE_CANARY_PATTERN_U64 (0xaaaaaaaaaaaaaaaaULL ^
> 0x0001020304050607ULL)
> +#endif
>  
>  /* Maximum stack depth for reports. */
>  #define KFENCE_STACK_DEPTH 64
> 
> 
> cheers

(for the sparse errors)

As I understand, we require memory to look like "00 01 02 03 04 05 06
07" such that iterating byte-by-byte gives 00, 01, etc. (with
everything XORed with aaa...)

I think it would be most semantically correct to use cpu_to_le64 on
KFENCE_CANARY_PATTERN_U64 and annotate the values being compared
against it as __le64. This is because we want the integer literal
0x0706050403020100 to be stored as "00 01 02 03 04 05 06 07", which is
the definition of little endian.

Masking this with an #ifdef leaves the type as cpu endian, which could
result in future issues.

(or I've just misunderstood and can disregard this)

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2023-05-19  6:31 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-05  3:51 [PATCH] mm: kfence: Fix false positives on big endian Michael Ellerman
2023-05-05  3:51 ` Michael Ellerman
2023-05-05  7:14 ` Alexander Potapenko
2023-05-05  7:14   ` Alexander Potapenko
2023-05-05  7:43 ` Marco Elver
2023-05-05  7:43   ` Marco Elver
2023-05-05 11:56   ` Michael Ellerman
2023-05-05 11:56     ` Michael Ellerman
2023-05-05 16:02 ` David Laight
2023-05-05 16:02   ` David Laight
2023-05-17 22:20   ` Andrew Morton
2023-05-17 22:20     ` Andrew Morton
2023-05-19  5:14     ` Michael Ellerman
2023-05-19  5:14       ` Michael Ellerman
2023-05-19  6:29       ` Benjamin Gray
2023-05-19  5:40     ` Christophe Leroy
2023-05-19  5:40       ` Christophe Leroy
