linux-kernel.vger.kernel.org archive mirror
* [PATCH] ARM/shmem: Drop page coloring align for non-VIPT CPUs
@ 2017-04-14 10:09 Dmitry Safonov
  2017-04-25 17:19 ` Dmitry Safonov
  0 siblings, 1 reply; 6+ messages in thread
From: Dmitry Safonov @ 2017-04-14 10:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: 0x7f454c46, Dmitry Safonov, Russell King, linux-arm-kernel,
	Will Deacon, Christopher Covington, Cyrill Gorcunov,
	Pavel Emelyanov

On ARMv6 CPUs with VIPT caches there are aliasing issues: if two
different cache line indexes correspond to the same physical
address, then changes made to one of the aliases might be lost,
or the aliases can overwrite each other. To overcome these aliasing
issues, alignment for shared mappings was introduced with:

commit 4197692eef113eeb8e3e413cc70993a5e667e5b8
Author: Russell King <rmk@flint.arm.linux.org.uk>
Date:   Wed Apr 28 22:22:33 2004 +0100

    [ARM] Fix shared mmap()ings for ARM VIPT caches.

    This allows us to appropriately align shared mappings on VIPT caches
    with aliasing issues.

That commit introduced a 4-page alignment via SHMLBA, which results in
a unique physical address behind any tag in the cache (because the two
upper bits corresponding to the page address become unused in tags).
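
To illustrate the colouring (a hypothetical sketch, not part of this
patch): with 4 KiB pages the colour of an address is the two bits just
above the page offset, and the 4-page alignment gives every alias of a
page the same colour:

/* Hypothetical illustration of page colouring; not kernel code. */
#include <assert.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define COLOUR(addr)	(((addr) >> PAGE_SHIFT) & 3UL)	/* bits 13:12 */

int main(void)
{
	unsigned long base1 = 0x40000000UL;	/* both bases are	*/
	unsigned long base2 = 0x7f5a0000UL;	/* 4-page aligned	*/
	unsigned long pgoff = 5;		/* same page of segment	*/

	/*
	 * Aliases of the same page end up with the same colour, so they
	 * index the same cache lines and cannot disagree about the data.
	 */
	assert(COLOUR(base1 + pgoff * PAGE_SIZE) ==
	       COLOUR(base2 + pgoff * PAGE_SIZE));
	return 0;
}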

As this workaround is not needed on non-VIPT caches (such as most ARMv7
CPUs, which have PIPT caches), the ARM mmap() code checks whether the
cache is VIPT aliasing before enforcing it for MAP_SHARED.

The problem here is in the shmat() syscall:
1. if shmaddr is NULL, then do_shmat() uses arch_get_unmapped_area()
   to allocate the shared mapping;
2. if shmaddr is specified, then do_shmat() checks that the address has
   SHMLBA alignment regardless of CPU cache aliasing.

As a result, on ARMv7 CPUs shmat() with a NULL shmaddr may return an
address that is page-aligned but not SHMLBA-aligned, while a subsequent
shmat() with that very address will fail.
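
A minimal userspace reproducer might look like this (hypothetical
sketch, not part of the patch); the second shmat() below is the one
that fails with EINVAL on such CPUs:

/* Hypothetical reproducer; not included in this patch. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
	void *addr;

	if (id < 0) {
		perror("shmget");
		return 1;
	}

	addr = shmat(id, NULL, 0);	/* kernel picks: may be only page-aligned */
	if (addr == (void *)-1) {
		perror("shmat(NULL)");
		return 1;
	}
	printf("kernel chose %p\n", addr);
	shmdt(addr);

	/*
	 * Re-attaching at the very address the kernel just handed out can
	 * fail, because do_shmat() demands SHMLBA (4-page) alignment for
	 * explicit addresses.
	 */
	if (shmat(id, addr, 0) == (void *)-1)
		perror("shmat at kernel-chosen address");

	shmctl(id, IPC_RMID, NULL);
	return 0;
}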

That is not a critical issue for CRIU: after shmat() with a NULL address,
we can mremap() the resulting shmem to restore the shared memory mapping
at the same address it had at checkpoint time.
But it's still worth fixing, because we can't reliably tell from
userspace whether the platform has a VIPT cache, and so this mremap()
workaround comes with a HUGE warning that restoring an application that
uses SHMLBA-unaligned shmem on an ARMv6 CPU with a VIPT cache may result
in data corruption.
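
For reference, the workaround amounts to something like the following
sketch (simplified, not CRIU's actual code):

/* Simplified sketch of the restore-time workaround; not CRIU's code. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/shm.h>

static void *restore_shmem_at(int shmid, void *old_addr, size_t len)
{
	void *tmp = shmat(shmid, NULL, 0);	/* let the kernel pick any address */

	if (tmp == (void *)-1)
		return tmp;

	/*
	 * Move the attachment back to where it lived at checkpoint time.
	 * This is only safe if old_addr satisfies whatever alignment the
	 * CPU cache really requires, which userspace cannot detect.
	 */
	return mremap(tmp, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, old_addr);
}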

I also changed the SHMLBA build-time check into an init-time WARN_ON(),
as SHMLBA is no longer a compile-time constant after this change.

This is a resend of the patch from the discussion:
http://www.spinics.net/lists/arm-kernel/msg258870.html

Cc: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
---
 arch/arm/include/asm/shmparam.h | 8 ++++----
 arch/arm/mm/copypage-v6.c       | 5 +----
 2 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/shmparam.h b/arch/arm/include/asm/shmparam.h
index a5223b3a9bf9..6e1a72cbdf78 100644
--- a/arch/arm/include/asm/shmparam.h
+++ b/arch/arm/include/asm/shmparam.h
@@ -1,16 +1,16 @@
 #ifndef _ASMARM_SHMPARAM_H
 #define _ASMARM_SHMPARAM_H
 
+#include <asm/cachetype.h>
+
 /*
  * This should be the size of the virtually indexed cache/ways,
  * or page size, whichever is greater since the cache aliases
  * every size/ways bytes.
  */
-#define	SHMLBA	(4 * PAGE_SIZE)		 /* attach addr a multiple of this */
+#define	SHMLBA (cache_is_vipt_aliasing() ? 4 * PAGE_SIZE : PAGE_SIZE)
 
-/*
- * Enforce SHMLBA in shmat
- */
+/* Enforce SHMLBA in shmat */
 #define __ARCH_FORCE_SHMLBA
 
 #endif /* _ASMARM_SHMPARAM_H */
diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c
index 70423345da26..9b22c530e8b6 100644
--- a/arch/arm/mm/copypage-v6.c
+++ b/arch/arm/mm/copypage-v6.c
@@ -20,10 +20,6 @@
 
 #include "mm.h"
 
-#if SHMLBA > 16384
-#error FIX ME
-#endif
-
 static DEFINE_RAW_SPINLOCK(v6_lock);
 
 /*
@@ -129,6 +125,7 @@ struct cpu_user_fns v6_user_fns __initdata = {
 
 static int __init v6_userpage_init(void)
 {
+	WARN_ON(SHMLBA > 16384);
 	if (cache_is_vipt_aliasing()) {
 		cpu_user.cpu_clear_user_highpage = v6_clear_user_highpage_aliasing;
 		cpu_user.cpu_copy_user_highpage = v6_copy_user_highpage_aliasing;
-- 
2.12.2


* Re: [PATCH] ARM/shmem: Drop page coloring align for non-VIPT CPUs
  2017-04-14 10:09 [PATCH] ARM/shmem: Drop page coloring align for non-VIPT CPUs Dmitry Safonov
@ 2017-04-25 17:19 ` Dmitry Safonov
  2017-04-25 17:35   ` Russell King - ARM Linux
  0 siblings, 1 reply; 6+ messages in thread
From: Dmitry Safonov @ 2017-04-25 17:19 UTC (permalink / raw)
  To: Russell King, Will Deacon; +Cc: linux-kernel, 0x7f454c46, linux-arm-kernel

On 04/14/2017 01:09 PM, Dmitry Safonov wrote:
> On ARMv6 CPUs with VIPT caches there are aliasing issues: if two
> different cache line indexes correspond to the same physical
> address, then changes made to one of the alias might be lost
> or they can overwrite each other. To overcome aliasing issues,
> the align for shared mappings was introduced with:
> 
> commit 4197692eef113eeb8e3e413cc70993a5e667e5b8
> Author: Russell King <rmk@flint.arm.linux.org.uk>
> Date:   Wed Apr 28 22:22:33 2004 +0100
> 
>      [ARM] Fix shared mmap()ings for ARM VIPT caches.
> 
>      This allows us to appropriately align shared mappings on VIPT caches
>      with aliasing issues.
> 
> Which introduced 4 pages align with SHMLBA, which resulted in
> unique physical address after any tag in cache (because two upper bits
> corresponding to page address get unused in tags).
> 
> As this workaround is not needed by non-VIPT caches (like most armv7
> CPUs which have PIPT caches), ARM mmap() code checks if cache is VIPT
> aliasing for MAP_SHARED.
> 
> The problem here is in shmat() syscall:
> 1. if shmaddr is NULL then do_shmat() uses arch_get_unmapped_area()
>     to allocate shared mapping.
> 2. if shmaddr is specified then do_shmat() checks that address has
>     SHMLBA alignment regardless to CPU cache aliasing.
> 
> Which results on ARMv7 CPUs that shmat() with NULL shmaddr may return
> non-SHMLBA aligned address (page-aligned), but shmat() with the same
> address will fail.
> 
> That is not critical issue for CRIU as after shmat() with NULL address,
> we can mremap() resulted shmem to restore shared memory mappings on the
> same address where they were on checkpointing.
> But it's still worth fixing because we can't reliably tell from
> userspace if the platform has VIPT cache, and so this mremap()
> workaround is done with HUGE warning that restoring application, that
> uses SHMBLA-unaligned shmem on ARMv6 CPU with VIPT cache may result
> in data corruptions.
> 
> I also changed SHMLBA build-time check to init-time WARN_ON(), as
> it's not constant afterward.
> 
> This is resend of the patch from discussion:
> http://www.spinics.net/lists/arm-kernel/msg258870.html
> 
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Russell King <rmk+kernel@armlinux.org.uk>
> Cc: Christopher Covington <cov@codeaurora.org>
> Cc: Cyrill Gorcunov <gorcunov@openvz.org>
> Cc: Pavel Emelyanov <xemul@virtuozzo.com>
> Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
> Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
> ---
>   arch/arm/include/asm/shmparam.h | 8 ++++----
>   arch/arm/mm/copypage-v6.c       | 5 +----
>   2 files changed, 5 insertions(+), 8 deletions(-)

Ping?



-- 
              Dmitry


* Re: [PATCH] ARM/shmem: Drop page coloring align for non-VIPT CPUs
  2017-04-25 17:19 ` Dmitry Safonov
@ 2017-04-25 17:35   ` Russell King - ARM Linux
  2017-04-25 17:51     ` Dmitry Safonov
  2017-05-18 11:22     ` Dmitry Safonov
  0 siblings, 2 replies; 6+ messages in thread
From: Russell King - ARM Linux @ 2017-04-25 17:35 UTC (permalink / raw)
  To: Dmitry Safonov; +Cc: Will Deacon, linux-kernel, 0x7f454c46, linux-arm-kernel

On Tue, Apr 25, 2017 at 08:19:21PM +0300, Dmitry Safonov wrote:
> On 04/14/2017 01:09 PM, Dmitry Safonov wrote:
> >On ARMv6 CPUs with VIPT caches there are aliasing issues: if two
> >different cache line indexes correspond to the same physical
> >address, then changes made to one of the alias might be lost
> >or they can overwrite each other. To overcome aliasing issues,
> >the align for shared mappings was introduced with:
> >
> >commit 4197692eef113eeb8e3e413cc70993a5e667e5b8
> >Author: Russell King <rmk@flint.arm.linux.org.uk>
> >Date:   Wed Apr 28 22:22:33 2004 +0100
> >
> >     [ARM] Fix shared mmap()ings for ARM VIPT caches.
> >
> >     This allows us to appropriately align shared mappings on VIPT caches
> >     with aliasing issues.
> >
> >Which introduced 4 pages align with SHMLBA, which resulted in
> >unique physical address after any tag in cache (because two upper bits
> >corresponding to page address get unused in tags).
> >
> >As this workaround is not needed by non-VIPT caches (like most armv7
> >CPUs which have PIPT caches), ARM mmap() code checks if cache is VIPT
> >aliasing for MAP_SHARED.
> >
> >The problem here is in shmat() syscall:
> >1. if shmaddr is NULL then do_shmat() uses arch_get_unmapped_area()
> >    to allocate shared mapping.
> >2. if shmaddr is specified then do_shmat() checks that address has
> >    SHMLBA alignment regardless to CPU cache aliasing.
> >
> >Which results on ARMv7 CPUs that shmat() with NULL shmaddr may return
> >non-SHMLBA aligned address (page-aligned), but shmat() with the same
> >address will fail.
> >
> >That is not critical issue for CRIU as after shmat() with NULL address,

CRIU?  Please try to keep use of acronyms to a minimum.

> >we can mremap() resulted shmem to restore shared memory mappings on the
> >same address where they were on checkpointing.
> >But it's still worth fixing because we can't reliably tell from
> >userspace if the platform has VIPT cache, and so this mremap()
> >workaround is done with HUGE warning that restoring application, that
> >uses SHMBLA-unaligned shmem on ARMv6 CPU with VIPT cache may result
> >in data corruptions.
> >
> >I also changed SHMLBA build-time check to init-time WARN_ON(), as
> >it's not constant afterward.

I'm not happy with this.  SHMLBA is defined by POSIX to be a constant,
which means that if we want to have any kind of binary compatibility
between different architecture versions, SHMLBA must be constant across
all variants of the architecture.

Making it dependent on the cache architecture means that userspace's
assumptions can be broken.  Increasing it is not an issue (since SHMLBA
is defined to be the address multiple - an address that is aligned to
4-page is also by definition aligned to 1-page.)  So what I did back in
2004 wasn't a problem.

However, reducing it (as you're now suggesting) is - newly built programs
are built today with:

#define SHMLBA              (__getpagesize () << 2)

so we must not allow the kernel to return addresses that violate that.
As I say, we can't reduce SHMLBA now.
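
For instance, a program compiled with that definition may legitimately
assume that an address the kernel hands back is already aligned to its
SHMLBA, along these lines (a purely hypothetical example):

/* Hypothetical user code baked against the glibc SHMLBA definition. */
#include <assert.h>
#include <unistd.h>
#include <sys/shm.h>

#define USER_SHMLBA	((unsigned long)getpagesize() << 2)

void *attach(int shmid)
{
	void *addr = shmat(shmid, NULL, 0);

	if (addr == (void *)-1)
		return addr;

	/*
	 * E.g. the program places further segments relative to this one
	 * and relies on the kernel-chosen address being a multiple of
	 * its compiled-in SHMLBA.
	 */
	assert(((unsigned long)addr & (USER_SHMLBA - 1)) == 0);
	return addr;
}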

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.


* Re: [PATCH] ARM/shmem: Drop page coloring align for non-VIPT CPUs
  2017-04-25 17:35   ` Russell King - ARM Linux
@ 2017-04-25 17:51     ` Dmitry Safonov
  2017-05-18 11:22     ` Dmitry Safonov
  1 sibling, 0 replies; 6+ messages in thread
From: Dmitry Safonov @ 2017-04-25 17:51 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Will Deacon, linux-kernel, 0x7f454c46, linux-arm-kernel

On 04/25/2017 08:35 PM, Russell King - ARM Linux wrote:
> On Tue, Apr 25, 2017 at 08:19:21PM +0300, Dmitry Safonov wrote:
>> On 04/14/2017 01:09 PM, Dmitry Safonov wrote:
>>> On ARMv6 CPUs with VIPT caches there are aliasing issues: if two
>>> different cache line indexes correspond to the same physical
>>> address, then changes made to one of the alias might be lost
>>> or they can overwrite each other. To overcome aliasing issues,
>>> the align for shared mappings was introduced with:
>>>
>>> commit 4197692eef113eeb8e3e413cc70993a5e667e5b8
>>> Author: Russell King <rmk@flint.arm.linux.org.uk>
>>> Date:   Wed Apr 28 22:22:33 2004 +0100
>>>
>>>      [ARM] Fix shared mmap()ings for ARM VIPT caches.
>>>
>>>      This allows us to appropriately align shared mappings on VIPT caches
>>>      with aliasing issues.
>>>
>>> Which introduced 4 pages align with SHMLBA, which resulted in
>>> unique physical address after any tag in cache (because two upper bits
>>> corresponding to page address get unused in tags).
>>>
>>> As this workaround is not needed by non-VIPT caches (like most armv7
>>> CPUs which have PIPT caches), ARM mmap() code checks if cache is VIPT
>>> aliasing for MAP_SHARED.
>>>
>>> The problem here is in shmat() syscall:
>>> 1. if shmaddr is NULL then do_shmat() uses arch_get_unmapped_area()
>>>     to allocate shared mapping.
>>> 2. if shmaddr is specified then do_shmat() checks that address has
>>>     SHMLBA alignment regardless to CPU cache aliasing.
>>>
>>> Which results on ARMv7 CPUs that shmat() with NULL shmaddr may return
>>> non-SHMLBA aligned address (page-aligned), but shmat() with the same
>>> address will fail.
>>>
>>> That is not critical issue for CRIU as after shmat() with NULL address,
> 
> CRIU?  Please try to keep use of acronyms to a minimum.

Ok.

> 
>>> we can mremap() resulted shmem to restore shared memory mappings on the
>>> same address where they were on checkpointing.
>>> But it's still worth fixing because we can't reliably tell from
>>> userspace if the platform has VIPT cache, and so this mremap()
>>> workaround is done with HUGE warning that restoring application, that
>>> uses SHMBLA-unaligned shmem on ARMv6 CPU with VIPT cache may result
>>> in data corruptions.
>>>
>>> I also changed SHMLBA build-time check to init-time WARN_ON(), as
>>> it's not constant afterward.
> 
> I'm not happy with this.  SHMLBA is defined by POSIX to be a constant,
> which means that if we want to have any kind of binary compatibility
> between different architecture versions, SHMLBA must be constant across
> all variants of the architecture.
> 
> Making it dependent on the cache architecture means that userspace's
> assumptions can be broken.  Increasing it is not an issue (since SHMLBA
> is defined to be the address multiple - an address that is aligned to
> 4-page is also by definition aligned to 1-page.)  So what I did back in
> 2004 wasn't a problem.
> 
> However, reducing it (as you're now suggesting) is - newly built programs
> are built today with:
> 
> #define SHMLBA              (__getpagesize () << 2)
> 
> so we must not allow the kernel to return addresses that violate that.
> As I say, we can't reduce SHMLBA now.

Thanks for the reply!
Hmm, so what do you think if we then also align shmat(shmid, NULL, shmflg)
allocations? (i.e. with shmaddr == 0)

Something like below:

--- >8 ---
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 2239fde10b80..ac52f066f47f 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -59,21 +59,15 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
  	struct mm_struct *mm = current->mm;
  	struct vm_area_struct *vma;
  	int do_align = 0;
-	int aliasing = cache_is_vipt_aliasing();
  	struct vm_unmapped_area_info info;

-	/*
-	 * We only need to do colour alignment if either the I or D
-	 * caches alias.
-	 */
-	if (aliasing)
-		do_align = filp || (flags & MAP_SHARED);
+	do_align = filp || (flags & MAP_SHARED);

  	/*
  	 * We enforce the MAP_FIXED case.
  	 */
  	if (flags & MAP_FIXED) {
-		if (aliasing && flags & MAP_SHARED &&
+		if (flags & MAP_SHARED &&
  		    (addr - (pgoff << PAGE_SHIFT)) & (SHMLBA - 1))
  			return -EINVAL;
  		return addr;
@@ -112,22 +106,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
  	struct mm_struct *mm = current->mm;
  	unsigned long addr = addr0;
  	int do_align = 0;
-	int aliasing = cache_is_vipt_aliasing();
  	struct vm_unmapped_area_info info;

-	/*
-	 * We only need to do colour alignment if either the I or D
-	 * caches alias.
-	 */
-	if (aliasing)
-		do_align = filp || (flags & MAP_SHARED);
+	do_align = filp || (flags & MAP_SHARED);

  	/* requested length too big for entire address space */
  	if (len > TASK_SIZE)
  		return -ENOMEM;

  	if (flags & MAP_FIXED) {
-		if (aliasing && flags & MAP_SHARED &&
+		if (flags & MAP_SHARED &&
  		    (addr - (pgoff << PAGE_SHIFT)) & (SHMLBA - 1))
  			return -EINVAL;
  		return addr;


* Re: [PATCH] ARM/shmem: Drop page coloring align for non-VIPT CPUs
  2017-04-25 17:35   ` Russell King - ARM Linux
  2017-04-25 17:51     ` Dmitry Safonov
@ 2017-05-18 11:22     ` Dmitry Safonov
  2017-05-23 20:10       ` Russell King - ARM Linux
  1 sibling, 1 reply; 6+ messages in thread
From: Dmitry Safonov @ 2017-05-18 11:22 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Will Deacon, linux-kernel, 0x7f454c46, linux-arm-kernel

On 04/25/2017 08:35 PM, Russell King - ARM Linux wrote:
> On Tue, Apr 25, 2017 at 08:19:21PM +0300, Dmitry Safonov wrote:
>> On 04/14/2017 01:09 PM, Dmitry Safonov wrote:
>>> On ARMv6 CPUs with VIPT caches there are aliasing issues: if two
>>> different cache line indexes correspond to the same physical
>>> address, then changes made to one of the alias might be lost
>>> or they can overwrite each other. To overcome aliasing issues,
>>> the align for shared mappings was introduced with:
>>>
>>> commit 4197692eef113eeb8e3e413cc70993a5e667e5b8
>>> Author: Russell King <rmk@flint.arm.linux.org.uk>
>>> Date:   Wed Apr 28 22:22:33 2004 +0100
>>>
>>>      [ARM] Fix shared mmap()ings for ARM VIPT caches.
>>>
>>>      This allows us to appropriately align shared mappings on VIPT caches
>>>      with aliasing issues.
>>>
>>> Which introduced 4 pages align with SHMLBA, which resulted in
>>> unique physical address after any tag in cache (because two upper bits
>>> corresponding to page address get unused in tags).
>>>
>>> As this workaround is not needed by non-VIPT caches (like most armv7
>>> CPUs which have PIPT caches), ARM mmap() code checks if cache is VIPT
>>> aliasing for MAP_SHARED.
>>>
>>> The problem here is in shmat() syscall:
>>> 1. if shmaddr is NULL then do_shmat() uses arch_get_unmapped_area()
>>>     to allocate shared mapping.
>>> 2. if shmaddr is specified then do_shmat() checks that address has
>>>     SHMLBA alignment regardless to CPU cache aliasing.
>>>
>>> Which results on ARMv7 CPUs that shmat() with NULL shmaddr may return
>>> non-SHMLBA aligned address (page-aligned), but shmat() with the same
>>> address will fail.
>>>
>>> That is not critical issue for CRIU as after shmat() with NULL address,
> 
> CRIU?  Please try to keep use of acronyms to a minimum.
> 
>>> we can mremap() resulted shmem to restore shared memory mappings on the
>>> same address where they were on checkpointing.
>>> But it's still worth fixing because we can't reliably tell from
>>> userspace if the platform has VIPT cache, and so this mremap()
>>> workaround is done with HUGE warning that restoring application, that
>>> uses SHMBLA-unaligned shmem on ARMv6 CPU with VIPT cache may result
>>> in data corruptions.
>>>
>>> I also changed SHMLBA build-time check to init-time WARN_ON(), as
>>> it's not constant afterward.
> 
> I'm not happy with this.  SHMLBA is defined by POSIX to be a constant,
> which means that if we want to have any kind of binary compatibility
> between different architecture versions, SHMLBA must be constant across
> all variants of the architecture.
> 
> Making it dependent on the cache architecture means that userspace's
> assumptions can be broken.  Increasing it is not an issue (since SHMLBA
> is defined to be the address multiple - an address that is aligned to
> 4-page is also by definition aligned to 1-page.)  So what I did back in
> 2004 wasn't a problem.
> 
> However, reducing it (as you're now suggesting) is - newly built programs
> are built today with:
> 
> #define SHMLBA              (__getpagesize () << 2)
> 
> so we must not allow the kernel to return addresses that violate that.
> As I say, we can't reduce SHMLBA now.

So, we violate this on the returned address with shmat(shmid, NULL, shmflg),
i.e. when shmaddr == 0.
But we don't do this for shmat(shmid, shmaddr, shmflg),
where shmaddr has to be SHMLBA-aligned.

That API behaviour is unexpected and creates difficulties, which I've
worked around in CRIU, but it still might be worth fixing.

-- 
              Dmitry


* Re: [PATCH] ARM/shmem: Drop page coloring align for non-VIPT CPUs
  2017-05-18 11:22     ` Dmitry Safonov
@ 2017-05-23 20:10       ` Russell King - ARM Linux
  0 siblings, 0 replies; 6+ messages in thread
From: Russell King - ARM Linux @ 2017-05-23 20:10 UTC (permalink / raw)
  To: Dmitry Safonov; +Cc: Will Deacon, linux-kernel, 0x7f454c46, linux-arm-kernel

On Thu, May 18, 2017 at 02:22:57PM +0300, Dmitry Safonov wrote:
> On 04/25/2017 08:35 PM, Russell King - ARM Linux wrote:
> >On Tue, Apr 25, 2017 at 08:19:21PM +0300, Dmitry Safonov wrote:
> >>On 04/14/2017 01:09 PM, Dmitry Safonov wrote:
> >>>On ARMv6 CPUs with VIPT caches there are aliasing issues: if two
> >>>different cache line indexes correspond to the same physical
> >>>address, then changes made to one of the alias might be lost
> >>>or they can overwrite each other. To overcome aliasing issues,
> >>>the align for shared mappings was introduced with:
> >>>
> >>>commit 4197692eef113eeb8e3e413cc70993a5e667e5b8
> >>>Author: Russell King <rmk@flint.arm.linux.org.uk>
> >>>Date:   Wed Apr 28 22:22:33 2004 +0100
> >>>
> >>>     [ARM] Fix shared mmap()ings for ARM VIPT caches.
> >>>
> >>>     This allows us to appropriately align shared mappings on VIPT caches
> >>>     with aliasing issues.
> >>>
> >>>Which introduced 4 pages align with SHMLBA, which resulted in
> >>>unique physical address after any tag in cache (because two upper bits
> >>>corresponding to page address get unused in tags).
> >>>
> >>>As this workaround is not needed by non-VIPT caches (like most armv7
> >>>CPUs which have PIPT caches), ARM mmap() code checks if cache is VIPT
> >>>aliasing for MAP_SHARED.
> >>>
> >>>The problem here is in shmat() syscall:
> >>>1. if shmaddr is NULL then do_shmat() uses arch_get_unmapped_area()
> >>>    to allocate shared mapping.
> >>>2. if shmaddr is specified then do_shmat() checks that address has
> >>>    SHMLBA alignment regardless to CPU cache aliasing.
> >>>
> >>>Which results on ARMv7 CPUs that shmat() with NULL shmaddr may return
> >>>non-SHMLBA aligned address (page-aligned), but shmat() with the same
> >>>address will fail.
> >>>
> >>>That is not critical issue for CRIU as after shmat() with NULL address,
> >
> >CRIU?  Please try to keep use of acronyms to a minimum.
> >
> >>>we can mremap() resulted shmem to restore shared memory mappings on the
> >>>same address where they were on checkpointing.
> >>>But it's still worth fixing because we can't reliably tell from
> >>>userspace if the platform has VIPT cache, and so this mremap()
> >>>workaround is done with HUGE warning that restoring application, that
> >>>uses SHMBLA-unaligned shmem on ARMv6 CPU with VIPT cache may result
> >>>in data corruptions.
> >>>
> >>>I also changed SHMLBA build-time check to init-time WARN_ON(), as
> >>>it's not constant afterward.
> >
> >I'm not happy with this.  SHMLBA is defined by POSIX to be a constant,
> >which means that if we want to have any kind of binary compatibility
> >between different architecture versions, SHMLBA must be constant across
> >all variants of the architecture.
> >
> >Making it dependent on the cache architecture means that userspace's
> >assumptions can be broken.  Increasing it is not an issue (since SHMLBA
> >is defined to be the address multiple - an address that is aligned to
> >4-page is also by definition aligned to 1-page.)  So what I did back in
> >2004 wasn't a problem.
> >
> >However, reducing it (as you're now suggesting) is - newly built programs
> >are built today with:
> >
> >#define SHMLBA              (__getpagesize () << 2)
> >
> >so we must not allow the kernel to return addresses that violate that.
> >As I say, we can't reduce SHMLBA now.
> 
> So, we violate this on return address with shmat(smid, NULL, shmflg)
> when shmaddr == 0.
> But we don't do this on shmat(smid, shmaddr, shmflg)
> where shmaddr should be SHMLBA-aligned.
> 
> That API looks unexpected and creates difficulties, which I've
> workarounded in CRIU, but still might worth fixing.

It should at least be self-consistent.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

