* [XEN v3] xen/arm: Enforce alignment check in debug build for {read, write}_atomic
@ 2022-11-08  9:45 Ayan Kumar Halder
  2022-11-08 11:22 ` Bertrand Marquis
  2022-11-10  0:00 ` Stefano Stabellini
From: Ayan Kumar Halder @ 2022-11-08  9:45 UTC (permalink / raw)
  To: xen-devel
  Cc: sstabellini, stefanos, julien, Volodymyr_Babchuk,
	bertrand.marquis, michal.orzel, Ayan Kumar Halder,
	Ayan Kumar Halder

From: Ayan Kumar Halder <ayankuma@amd.com>

Xen provides helpers to atomically read/write memory (see {read,
write}_atomic()). Those helpers can only work if the address is aligned
to the size of the access (see B2.2.1 ARM DDI 0487I.a).

On Arm32, alignment is already enforced by the processor because the
HSCTLR.A bit is set (it enforces alignment for every access). On Arm64,
this bit is not set because memcpy()/memset() may use unaligned accesses
for performance reasons (the implementation is taken from the Cortex
Strings library).

To avoid any overhead in production builds, the alignment will only be
checked using an ASSERT(). Note that it might be possible to do the
check in production builds using the acquire/exclusive versions of
load/store, but this is left to a follow-up (if wanted).

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Signed-off-by: Julien Grall <julien@xen.org>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---

Changes from:

v1 - 1. Referred to the latest Arm Architecture Reference Manual in the
        commit message.

v2 - 1. Updated commit message to specify the reason for using ASSERT().
     2. Added Julien's SoB.

 xen/arch/arm/include/asm/atomic.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
index 1f60c28b1b..64314d59b3 100644
--- a/xen/arch/arm/include/asm/atomic.h
+++ b/xen/arch/arm/include/asm/atomic.h
@@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
                                            void *res,
                                            unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
@@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
                                             void *val,
                                             unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
-- 
2.17.1
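
For context on what the new check catches: in Xen, IS_ALIGNED() is a
simple mask test, so the ASSERT() added above fires (in debug builds
only) whenever the address is not a multiple of the access size. Below
is a minimal standalone C sketch of that semantics; the IS_ALIGNED()
definition is reproduced here on the assumption that it matches Xen's
mask-based macro, and the comments map the failing case back to
read_atomic_size():

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for Xen's IS_ALIGNED(); assumes the usual mask-based
     * definition: the address must be a multiple of the (power-of-two)
     * access size. */
    #define IS_ALIGNED(addr, size) (((addr) & ((size) - 1)) == 0)

    int main(void)
    {
        uint64_t buf = 0;
        uintptr_t base = (uintptr_t)&buf;   /* naturally aligned */

        /* A 4-byte access at an aligned address passes the check. */
        assert(IS_ALIGNED(base, 4));

        /* A 4-byte access at base + 2 has low bits set; in a Xen debug
         * build, the equivalent ASSERT() in read_atomic_size() would
         * trip here, before the switch ( size ) dispatch. */
        printf("base+2 aligned for 4 bytes? %d\n",
               (int)IS_ALIGNED(base + 2, 4));

        return 0;
    }

As for the follow-up idea in the commit message: AArch64 load-acquire
and load/store-exclusive instructions (e.g. LDAR, LDXR) generate an
Alignment fault on unaligned addresses regardless of SCTLR.A, which is
presumably what a production-build check would build on.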




* Re: [XEN v3] xen/arm: Enforce alignment check in debug build for {read, write}_atomic
  2022-11-08  9:45 [XEN v3] xen/arm: Enforce alignment check in debug build for {read, write}_atomic Ayan Kumar Halder
@ 2022-11-08 11:22 ` Bertrand Marquis
  2022-11-10  0:00 ` Stefano Stabellini
From: Bertrand Marquis @ 2022-11-08 11:22 UTC (permalink / raw)
  To: Ayan Kumar Halder
  Cc: Xen developer discussion, sstabellini, stefanos, julien,
	Volodymyr_Babchuk, michal.orzel, Ayan Kumar Halder

Hi Ayan,

> On 8 Nov 2022, at 09:45, Ayan Kumar Halder <ayan.kumar.halder@amd.com> wrote:
> 
> From: Ayan Kumar Halder <ayankuma@amd.com>
> 
> Xen provides helpers to atomically read/write memory (see {read,
> write}_atomic()). Those helpers can only work if the address is aligned
> to the size of the access (see B2.2.1 ARM DDI 0487I.a).
> 
> On Arm32, alignment is already enforced by the processor because the
> HSCTLR.A bit is set (it enforces alignment for every access). On Arm64,
> this bit is not set because memcpy()/memset() may use unaligned accesses
> for performance reasons (the implementation is taken from the Cortex
> Strings library).
> 
> To avoid any overhead in production builds, the alignment will only be
> checked using an ASSERT(). Note that it might be possible to do the
> check in production builds using the acquire/exclusive versions of
> load/store, but this is left to a follow-up (if wanted).
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Signed-off-by: Julien Grall <julien@xen.org>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I confirm my Reviewed-by.

Side note: You should actually have removed it :-)

Cheers
Bertrand

> ---
> 
> Changes from:
> 
> v1 - 1. Referred to the latest Arm Architecture Reference Manual in the
>         commit message.
> 
> v2 - 1. Updated commit message to specify the reason for using ASSERT().
>      2. Added Julien's SoB.
> 
> xen/arch/arm/include/asm/atomic.h | 2 ++
> 1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
> index 1f60c28b1b..64314d59b3 100644
> --- a/xen/arch/arm/include/asm/atomic.h
> +++ b/xen/arch/arm/include/asm/atomic.h
> @@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
>                                            void *res,
>                                            unsigned int size)
> {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>     switch ( size )
>     {
>     case 1:
> @@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
>                                             void *val,
>                                             unsigned int size)
> {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>     switch ( size )
>     {
>     case 1:
> -- 
> 2.17.1
> 




* Re: [XEN v3] xen/arm: Enforce alignment check in debug build for {read, write}_atomic
  2022-11-08  9:45 [XEN v3] xen/arm: Enforce alignment check in debug build for {read, write}_atomic Ayan Kumar Halder
  2022-11-08 11:22 ` Bertrand Marquis
@ 2022-11-10  0:00 ` Stefano Stabellini
  2022-11-10 19:03   ` Julien Grall
From: Stefano Stabellini @ 2022-11-10  0:00 UTC (permalink / raw)
  To: Ayan Kumar Halder
  Cc: xen-devel, sstabellini, stefanos, julien, Volodymyr_Babchuk,
	bertrand.marquis, michal.orzel, Ayan Kumar Halder

On Tue, 8 Nov 2022, Ayan Kumar Halder wrote:
> From: Ayan Kumar Halder <ayankuma@amd.com>
> 
> Xen provides helpers to atomically read/write memory (see {read,
> write}_atomic()). Those helpers can only work if the address is aligned
> to the size of the access (see B2.2.1 ARM DDI 0487I.a).
> 
> On Arm32, alignment is already enforced by the processor because the
> HSCTLR.A bit is set (it enforces alignment for every access). On Arm64,
> this bit is not set because memcpy()/memset() may use unaligned accesses
> for performance reasons (the implementation is taken from the Cortex
> Strings library).
> 
> To avoid any overhead in production builds, the alignment will only be
> checked using an ASSERT(). Note that it might be possible to do the
> check in production builds using the acquire/exclusive versions of
> load/store, but this is left to a follow-up (if wanted).
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Signed-off-by: Julien Grall <julien@xen.org>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> Changes from:
> 
> v1 - 1. Referred to the latest Arm Architecture Reference Manual in the
>         commit message.
> 
> v2 - 1. Updated commit message to specify the reason for using ASSERT().
>      2. Added Julien's SoB.
> 
>  xen/arch/arm/include/asm/atomic.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
> index 1f60c28b1b..64314d59b3 100644
> --- a/xen/arch/arm/include/asm/atomic.h
> +++ b/xen/arch/arm/include/asm/atomic.h
> @@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
>                                             void *res,
>                                             unsigned int size)
>  {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>      switch ( size )
>      {
>      case 1:
> @@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
>                                              void *val,
>                                              unsigned int size)
>  {
> +    ASSERT(IS_ALIGNED((vaddr_t)p, size));
>      switch ( size )
>      {
>      case 1:
> -- 
> 2.17.1
> 



* Re: [XEN v3] xen/arm: Enforce alignment check in debug build for {read, write}_atomic
  2022-11-10  0:00 ` Stefano Stabellini
@ 2022-11-10 19:03   ` Julien Grall
From: Julien Grall @ 2022-11-10 19:03 UTC (permalink / raw)
  To: Stefano Stabellini, Ayan Kumar Halder
  Cc: xen-devel, stefanos, Volodymyr_Babchuk, bertrand.marquis,
	michal.orzel, Ayan Kumar Halder

Hi,

On 10/11/2022 00:00, Stefano Stabellini wrote:
> On Tue, 8 Nov 2022, Ayan Kumar Halder wrote:
>> From: Ayan Kumar Halder <ayankuma@amd.com>
>>
>> Xen provides helpers to atomically read/write memory (see {read,
>> write}_atomic()). Those helpers can only work if the address is aligned
>> to the size of the access (see B2.2.1 ARM DDI 0487I.a).
>>
>> On Arm32, alignment is already enforced by the processor because the
>> HSCTLR.A bit is set (it enforces alignment for every access). On Arm64,
>> this bit is not set because memcpy()/memset() may use unaligned accesses
>> for performance reasons (the implementation is taken from the Cortex
>> Strings library).
>>
>> To avoid any overhead in production builds, the alignment will only be
>> checked using an ASSERT(). Note that it might be possible to do the
>> check in production builds using the acquire/exclusive versions of
>> load/store, but this is left to a follow-up (if wanted).
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> Signed-off-by: Julien Grall <julien@xen.org>
>> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

I have pushed this patch to the for-next/4.18 branch on my public repo. I
will apply the patch to staging once the tree re-opens.

Cheers,

-- 
Julien Grall



