* [Qemu-devel] [PATCH v2 for-2.10 0/3] Fixup logic for exclusive pair
@ 2017-08-15 14:57 Richard Henderson
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 1/3] target/arm: Correct exclusive store cmpxchg memop mask Richard Henderson
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Richard Henderson @ 2017-08-15 14:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alistair.francis, edgar.iglesias

In reviewing my previous patch, Peter pointed out that it is
CONSTRAINED UNPREDICTABLE what happens when you mix the number
of registers in an LDX[PR] + STX[RP] pair.  So most of the bug
that I thought I was fixing isn't a bug at all.

That said, the patch does still fix a real bug wrt single-copy
semantics, so patch 2 is largely unchanged; the commit message
is re-worded.

I also un-squashed Alistair's original patches and dropped the
tcg/tcg-op.c change, to be revisited for 2.11.


r~


Alistair Francis (2):
  target/arm: Correct exclusive store cmpxchg memop mask
  target/arm: Require alignment for load exclusive

Richard Henderson (1):
  target/arm: Correct load exclusive pair atomicity

 target/arm/translate-a64.c | 63 ++++++++++++++++++++++++++++------------------
 1 file changed, 39 insertions(+), 24 deletions(-)

-- 
2.13.4

* [Qemu-devel] [PATCH v2 for-2.10 1/3] target/arm: Correct exclusive store cmpxchg memop mask
  2017-08-15 14:57 [Qemu-devel] [PATCH v2 for-2.10 0/3] Fixup logic for exclusive pair Richard Henderson
@ 2017-08-15 14:57 ` Richard Henderson
  2017-08-15 15:41   ` Philippe Mathieu-Daudé
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 2/3] target/arm: Correct load exclusive pair atomicity Richard Henderson
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Richard Henderson @ 2017-08-15 14:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alistair.francis, edgar.iglesias

From: Alistair Francis <alistair.francis@xilinx.com>

When we perform the atomic_cmpxchg operation we want to operate
on a pair of 32-bit registers. Previously we were just passing in the
register size, which was set to MO_32. This would result in the
high register being ignored. To fix this issue we hardcode the size
to 64 bits when operating on 32-bit pairs.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Message-Id: <bc18dddca56e8c2ea4a3def48d33ceb5d21d1fff.1502488636.git.alistair.francis@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-a64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 58ed4c6d05..113e2e172b 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1913,7 +1913,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
             tcg_gen_concat32_i64(val, cpu_exclusive_val, cpu_exclusive_high);
             tcg_gen_atomic_cmpxchg_i64(tmp, addr, val, tmp,
                                        get_mem_index(s),
-                                       size | MO_ALIGN | s->be_data);
+                                       MO_64 | MO_ALIGN | s->be_data);
             tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, val);
             tcg_temp_free_i64(val);
         } else if (s->be_data == MO_LE) {
-- 
2.13.4
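
As a standalone illustration (plain C with made-up values, not QEMU
code), masking the exchange's comparison down to 32 bits means a
mismatch in the high register of the pair goes undetected:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t lo = 0x11111111, hi = 0x22222222;
        /* The value STXP must compare against memory: both registers. */
        uint64_t expected = ((uint64_t)hi << 32) | lo;
        /* Memory whose high word has changed since the LDXP. */
        uint64_t mem = ((uint64_t)0xdeadbeef << 32) | lo;

        /* A 32-bit compare sees only the low word and wrongly matches... */
        printf("32-bit match: %d\n", (uint32_t)mem == (uint32_t)expected); /* 1 */
        /* ...while the full 64-bit compare correctly fails. */
        printf("64-bit match: %d\n", mem == expected); /* 0 */
        return 0;
    }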

* [Qemu-devel] [PATCH v2 for-2.10 2/3] target/arm: Correct load exclusive pair atomicity
  2017-08-15 14:57 [Qemu-devel] [PATCH v2 for-2.10 0/3] Fixup logic for exclusive pair Richard Henderson
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 1/3] target/arm: Correct exclusive store cmpxchg memop mask Richard Henderson
@ 2017-08-15 14:57 ` Richard Henderson
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive Richard Henderson
  2017-08-15 17:16 ` [Qemu-devel] [PATCH v2 for-2.10 0/3] Fixup logic for exclusive pair Peter Maydell
  3 siblings, 0 replies; 13+ messages in thread
From: Richard Henderson @ 2017-08-15 14:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alistair.francis, edgar.iglesias

We are not providing the required single-copy atomic semantics for
the 32-bit paired load, which is architecturally a single 64-bit
operation.

While fixing that, leave the entire 64-bit value in cpu_exclusive_val
and stop writing to cpu_exclusive_high.  This means that we do not
have to re-assemble the 64-bit quantity when it comes time to store.

At the same time, drop a redundant temporary and perform all loads
directly into the cpu_exclusive_* globals.

Tested-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-a64.c | 60 ++++++++++++++++++++++++++++------------------
 1 file changed, 37 insertions(+), 23 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 113e2e172b..eac545e4f2 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1853,29 +1853,42 @@ static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
 static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i64 addr, int size, bool is_pair)
 {
-    TCGv_i64 tmp = tcg_temp_new_i64();
-    TCGMemOp memop = s->be_data + size;
+    int idx = get_mem_index(s);
+    TCGMemOp memop = s->be_data;
 
     g_assert(size <= 3);
-    tcg_gen_qemu_ld_i64(tmp, addr, get_mem_index(s), memop);
-
     if (is_pair) {
-        TCGv_i64 addr2 = tcg_temp_new_i64();
-        TCGv_i64 hitmp = tcg_temp_new_i64();
-
         g_assert(size >= 2);
-        tcg_gen_addi_i64(addr2, addr, 1 << size);
-        tcg_gen_qemu_ld_i64(hitmp, addr2, get_mem_index(s), memop);
-        tcg_temp_free_i64(addr2);
-        tcg_gen_mov_i64(cpu_exclusive_high, hitmp);
-        tcg_gen_mov_i64(cpu_reg(s, rt2), hitmp);
-        tcg_temp_free_i64(hitmp);
-    }
+        if (size == 2) {
+            /* The pair must be single-copy atomic for the doubleword.  */
+            memop |= MO_64;
+            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
+            if (s->be_data == MO_LE) {
+                tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
+                tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 32, 32);
+            } else {
+                tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 32, 32);
+                tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
+            }
+        } else {
+            /* The pair must be single-copy atomic for *each* doubleword,
+               but not the entire quadword.  */
+            memop |= MO_64;
+            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
 
-    tcg_gen_mov_i64(cpu_exclusive_val, tmp);
-    tcg_gen_mov_i64(cpu_reg(s, rt), tmp);
+            TCGv_i64 addr2 = tcg_temp_new_i64();
+            tcg_gen_addi_i64(addr2, addr, 8);
+            tcg_gen_qemu_ld_i64(cpu_exclusive_high, addr2, idx, memop);
+            tcg_temp_free_i64(addr2);
 
-    tcg_temp_free_i64(tmp);
+            tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
+            tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
+        }
+    } else {
+        memop |= size;
+        tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
+        tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
+    }
     tcg_gen_mov_i64(cpu_exclusive_addr, addr);
 }
 
@@ -1908,14 +1921,15 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     tmp = tcg_temp_new_i64();
     if (is_pair) {
         if (size == 2) {
-            TCGv_i64 val = tcg_temp_new_i64();
-            tcg_gen_concat32_i64(tmp, cpu_reg(s, rt), cpu_reg(s, rt2));
-            tcg_gen_concat32_i64(val, cpu_exclusive_val, cpu_exclusive_high);
-            tcg_gen_atomic_cmpxchg_i64(tmp, addr, val, tmp,
+            if (s->be_data == MO_LE) {
+                tcg_gen_concat32_i64(tmp, cpu_reg(s, rt), cpu_reg(s, rt2));
+            } else {
+                tcg_gen_concat32_i64(tmp, cpu_reg(s, rt2), cpu_reg(s, rt));
+            }
+            tcg_gen_atomic_cmpxchg_i64(tmp, addr, cpu_exclusive_val, tmp,
                                        get_mem_index(s),
                                        MO_64 | MO_ALIGN | s->be_data);
-            tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, val);
-            tcg_temp_free_i64(val);
+            tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, cpu_exclusive_val);
         } else if (s->be_data == MO_LE) {
             gen_helper_paired_cmpxchg64_le(tmp, cpu_env, addr, cpu_reg(s, rt),
                                            cpu_reg(s, rt2));
-- 
2.13.4
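
A minimal sketch of the guest-visible requirement, using C11 atomics
as a stand-in for the guest memory model (illustrative, not QEMU
code): a little-endian 32-bit LDXP behaves as one single-copy atomic
64-bit load whose halves are then distributed to the two registers.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static void ldxp32_le(_Atomic uint64_t *mem, uint32_t *rt, uint32_t *rt2)
    {
        uint64_t v = atomic_load(mem); /* one single-copy atomic access */
        *rt  = (uint32_t)v;            /* low word of the doubleword  -> Rt  */
        *rt2 = (uint32_t)(v >> 32);    /* high word of the doubleword -> Rt2 */
    }

    int main(void)
    {
        _Atomic uint64_t mem = 0x2222222211111111ull;
        uint32_t rt, rt2;
        ldxp32_le(&mem, &rt, &rt2);
        printf("Rt=%08" PRIx32 " Rt2=%08" PRIx32 "\n", rt, rt2);
        return 0;
    }

Two separate 32-bit loads could observe a concurrent store tearing the
doubleword, which is exactly what the previous code allowed.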

* [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive
  2017-08-15 14:57 [Qemu-devel] [PATCH v2 for-2.10 0/3] Fixup logic for exclusive pair Richard Henderson
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 1/3] target/arm: Correct exclusive store cmpxchg memop mask Richard Henderson
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 2/3] target/arm: Correct load exclusive pair atomicity Richard Henderson
@ 2017-08-15 14:57 ` Richard Henderson
  2017-08-15 15:27   ` Eric Blake
  2017-08-15 15:56   ` Philippe Mathieu-Daudé
  2017-08-15 17:16 ` [Qemu-devel] [PATCH v2 for-2.10 0/3] Fixup logic for exclusive pair Peter Maydell
  3 siblings, 2 replies; 13+ messages in thread
From: Richard Henderson @ 2017-08-15 14:57 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, alistair.francis, edgar.iglesias

From: Alistair Francis <alistair.francis@xilinx.com>

Acording to the ARM ARM exclusive loads require the same allignment as
exclusive stores. Let's update the memops used for the load to match
that of the store. This adds the alignment requirement to the memops.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
[rth: Require 16-byte alignment for 64-bit LDXP.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-a64.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index eac545e4f2..2200e25be0 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1861,7 +1861,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
         g_assert(size >= 2);
         if (size == 2) {
             /* The pair must be single-copy atomic for the doubleword.  */
-            memop |= MO_64;
+            memop |= MO_64 | MO_ALIGN;
             tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
             if (s->be_data == MO_LE) {
                 tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
@@ -1871,10 +1871,11 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                 tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
             }
         } else {
-            /* The pair must be single-copy atomic for *each* doubleword,
-               but not the entire quadword.  */
+            /* The pair must be single-copy atomic for *each* doubleword, not
+               the entire quadword, however it must be quadword aligned.  */
             memop |= MO_64;
-            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
+            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx,
+                                memop | MO_ALIGN_16);
 
             TCGv_i64 addr2 = tcg_temp_new_i64();
             tcg_gen_addi_i64(addr2, addr, 8);
@@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
             tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
         }
     } else {
-        memop |= size;
+        memop |= size | MO_ALIGN;
         tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
         tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
     }
-- 
2.13.4
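
Summing up the alignment each exclusive load now requires (an
illustrative helper mirroring the logic above; not part of the patch):

    #include <stdbool.h>

    /* size is log2 of the register width in bytes, as in the translator. */
    static unsigned required_alignment(int size, bool is_pair)
    {
        if (is_pair) {
            return size == 3 ? 16  /* 64-bit LDXP: quadword aligned */
                             : 8;  /* 32-bit LDXP: one aligned doubleword */
        }
        return 1u << size;         /* LDXR/LDXRH/LDXRB: natural alignment */
    }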

* Re: [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive Richard Henderson
@ 2017-08-15 15:27   ` Eric Blake
  2017-08-15 15:56   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 13+ messages in thread
From: Eric Blake @ 2017-08-15 15:27 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: edgar.iglesias, peter.maydell, alistair.francis

On 08/15/2017 09:57 AM, Richard Henderson wrote:
> From: Alistair Francis <alistair.francis@xilinx.com>
> 
> Acording to the ARM ARM exclusive loads require the same allignment as

s/Acording/According/
s/allignmnet/alignment/

> exclusive stores. Let's update the memops used for the load to match
> that of the store. This adds the alignment requirement to the memops.
> 
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
> [rth: Require 16-byte alignment for 64-bit LDXP.]
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



* Re: [Qemu-devel] [PATCH v2 for-2.10 1/3] target/arm: Correct exclusive store cmpxchg memop mask
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 1/3] target/arm: Correct exclusive store cmpxchg memop mask Richard Henderson
@ 2017-08-15 15:41   ` Philippe Mathieu-Daudé
  2017-08-15 16:21     ` Alistair Francis
  0 siblings, 1 reply; 13+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-08-15 15:41 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: edgar.iglesias, peter.maydell, alistair.francis

On 08/15/2017 11:57 AM, Richard Henderson wrote:
> From: Alistair Francis <alistair.francis@xilinx.com>
> 
> When we perform the atomic_cmpxchg operation we want to operate
> on a pair of 32-bit registers. Previously we were just passing in the
> register size, which was set to MO_32. This would result in the
> high register being ignored. To fix this issue we hardcode the size
> to 64 bits when operating on 32-bit pairs.
> 
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
> Message-Id: <bc18dddca56e8c2ea4a3def48d33ceb5d21d1fff.1502488636.git.alistair.francis@xilinx.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

> ---
>   target/arm/translate-a64.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
> index 58ed4c6d05..113e2e172b 100644
> --- a/target/arm/translate-a64.c
> +++ b/target/arm/translate-a64.c
> @@ -1913,7 +1913,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
>               tcg_gen_concat32_i64(val, cpu_exclusive_val, cpu_exclusive_high);
>               tcg_gen_atomic_cmpxchg_i64(tmp, addr, val, tmp,
>                                          get_mem_index(s),
> -                                       size | MO_ALIGN | s->be_data);
> +                                       MO_64 | MO_ALIGN | s->be_data);
>               tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, val);
>               tcg_temp_free_i64(val);
>           } else if (s->be_data == MO_LE) {
> 

* Re: [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive Richard Henderson
  2017-08-15 15:27   ` Eric Blake
@ 2017-08-15 15:56   ` Philippe Mathieu-Daudé
  2017-08-15 16:14     ` Peter Maydell
  2017-08-15 16:32     ` Richard Henderson
  1 sibling, 2 replies; 13+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-08-15 15:56 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: edgar.iglesias, peter.maydell, alistair.francis

Hi Richard,

On 08/15/2017 11:57 AM, Richard Henderson wrote:
> From: Alistair Francis <alistair.francis@xilinx.com>
> 
> Acording to the ARM ARM exclusive loads require the same allignment as
> exclusive stores. Let's update the memops used for the load to match
> that of the store. This adds the alignment requirement to the memops.
> 
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
> [rth: Require 16-byte alignment for 64-bit LDXP.]
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   target/arm/translate-a64.c | 11 ++++++-----
>   1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
> index eac545e4f2..2200e25be0 100644
> --- a/target/arm/translate-a64.c
> +++ b/target/arm/translate-a64.c
> @@ -1861,7 +1861,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>           g_assert(size >= 2);
>           if (size == 2) {
>               /* The pair must be single-copy atomic for the doubleword.  */
> -            memop |= MO_64;
> +            memop |= MO_64 | MO_ALIGN;

isn't MO_ALIGN_8 enough?

>               tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
>               if (s->be_data == MO_LE) {
>                   tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
> @@ -1871,10 +1871,11 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>                   tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
>               }
>           } else {
> -            /* The pair must be single-copy atomic for *each* doubleword,
> -               but not the entire quadword.  */
> +            /* The pair must be single-copy atomic for *each* doubleword, not
> +               the entire quadword, however it must be quadword aligned.  */
>               memop |= MO_64;
> -            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
> +            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx,
> +                                memop | MO_ALIGN_16);

ok

>   
>               TCGv_i64 addr2 = tcg_temp_new_i64();
>               tcg_gen_addi_i64(addr2, addr, 8);
> @@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>               tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
>           }
>       } else {
> -        memop |= size;
> +        memop |= size | MO_ALIGN;

MO_ALIGN_8 here too?

>           tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
>           tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
>       }
> 

Regards,

Phil.

* Re: [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive
  2017-08-15 15:56   ` Philippe Mathieu-Daudé
@ 2017-08-15 16:14     ` Peter Maydell
  2017-08-15 17:48       ` Philippe Mathieu-Daudé
  2017-08-15 16:32     ` Richard Henderson
  1 sibling, 1 reply; 13+ messages in thread
From: Peter Maydell @ 2017-08-15 16:14 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Richard Henderson, QEMU Developers, Edgar Iglesias, Alistair Francis

On 15 August 2017 at 16:56, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
> Hi Richard,
>
> On 08/15/2017 11:57 AM, Richard Henderson wrote:
>>
>> From: Alistair Francis <alistair.francis@xilinx.com>
>>
>> Acording to the ARM ARM exclusive loads require the same allignment as
>> exclusive stores. Let's update the memops used for the load to match
>> that of the store. This adds the alignment requirement to the memops.
>>
>> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
>> [rth: Require 16-byte alignment for 64-bit LDXP.]
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>   target/arm/translate-a64.c | 11 ++++++-----
>>   1 file changed, 6 insertions(+), 5 deletions(-)
>>
>> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
>> index eac545e4f2..2200e25be0 100644
>> --- a/target/arm/translate-a64.c
>> +++ b/target/arm/translate-a64.c
>> @@ -1861,7 +1861,7 @@ static void gen_load_exclusive(DisasContext *s, int
>> rt, int rt2,
>>           g_assert(size >= 2);
>>           if (size == 2) {
>>               /* The pair must be single-copy atomic for the doubleword.
>> */
>> -            memop |= MO_64;
>> +            memop |= MO_64 | MO_ALIGN;
>
>
> isn't MO_ALIGN_8 enough?

MO_ALIGN is "align to size of access", i.e. to 8 bytes in this case.
MO_ALIGN_8 is "align to a specified size (8 bytes) which isn't the
size of the access".
In this case the access size is 8 bytes, so we don't need MO_ALIGN_8.

Alignments to something other than the access size are the oddball
case, so I think it makes sense to stick with MO_ALIGN for the
common case of "just aligned to the access size"; that way you can
spot the odd cases when you're reading the source.

thanks
-- PMM
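
A standalone sketch of that distinction; the constants are modeled
loosely on TCG's TCGMemOp encoding and are illustrative rather than
copied from QEMU headers:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum {
        MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3, /* log2 of access size */
        MO_SIZE    = 3,
        MO_ASHIFT  = 4,
        MO_AMASK   = 7 << MO_ASHIFT,
        MO_ALIGN   = MO_AMASK,        /* align to the access size itself */
        MO_ALIGN_8 = 3 << MO_ASHIFT,  /* align to 8 bytes, whatever the size */
    };

    static bool is_aligned(uint64_t addr, int memop)
    {
        unsigned a = memop & MO_AMASK;
        unsigned a_bits = (a == MO_ALIGN) ? (unsigned)(memop & MO_SIZE)
                                          : a >> MO_ASHIFT;
        return (addr & ((UINT64_C(1) << a_bits) - 1)) == 0;
    }

    int main(void)
    {
        /* 8-byte access: both flags demand 8-byte alignment here. */
        printf("%d\n", is_aligned(0x1000, MO_64 | MO_ALIGN));  /* 1 */
        printf("%d\n", is_aligned(0x1004, MO_64 | MO_ALIGN));  /* 0 */
        /* 1-byte access: MO_ALIGN is a no-op, MO_ALIGN_8 is not. */
        printf("%d\n", is_aligned(0x1001, MO_8 | MO_ALIGN));   /* 1 */
        printf("%d\n", is_aligned(0x1001, MO_8 | MO_ALIGN_8)); /* 0 */
        return 0;
    }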

* Re: [Qemu-devel] [PATCH v2 for-2.10 1/3] target/arm: Correct exclusive store cmpxchg memop mask
  2017-08-15 15:41   ` Philippe Mathieu-Daudé
@ 2017-08-15 16:21     ` Alistair Francis
  0 siblings, 0 replies; 13+ messages in thread
From: Alistair Francis @ 2017-08-15 16:21 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Richard Henderson, qemu-devel@nongnu.org Developers,
	Edgar Iglesias, Peter Maydell, Alistair Francis, portia.stephens

On Tue, Aug 15, 2017 at 8:41 AM, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
> On 08/15/2017 11:57 AM, Richard Henderson wrote:
>>
>> From: Alistair Francis <alistair.francis@xilinx.com>
>>
>> When we perform the atomic_cmpxchg operation we want to operate
>> on a pair of 32-bit registers. Previously we were just passing in the
>> register size, which was set to MO_32. This would result in the
>> high register being ignored. To fix this issue we hardcode the size
>> to 64 bits when operating on 32-bit pairs.
>>
>> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
>> Message-Id:
>> <bc18dddca56e8c2ea4a3def48d33ceb5d21d1fff.1502488636.git.alistair.francis@xilinx.com>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>
>
> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Can we keep this as well? It was posted for my entire series:

Tested-by: Portia Stephens <portia.stephens@xilinx.com>

Thanks,
Alistair

>
>
>> ---
>>   target/arm/translate-a64.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
>> index 58ed4c6d05..113e2e172b 100644
>> --- a/target/arm/translate-a64.c
>> +++ b/target/arm/translate-a64.c
>> @@ -1913,7 +1913,7 @@ static void gen_store_exclusive(DisasContext *s, int
>> rd, int rt, int rt2,
>>               tcg_gen_concat32_i64(val, cpu_exclusive_val,
>> cpu_exclusive_high);
>>               tcg_gen_atomic_cmpxchg_i64(tmp, addr, val, tmp,
>>                                          get_mem_index(s),
>> -                                       size | MO_ALIGN | s->be_data);
>> +                                       MO_64 | MO_ALIGN | s->be_data);
>>               tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, val);
>>               tcg_temp_free_i64(val);
>>           } else if (s->be_data == MO_LE) {
>>
>

* Re: [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive
  2017-08-15 15:56   ` Philippe Mathieu-Daudé
  2017-08-15 16:14     ` Peter Maydell
@ 2017-08-15 16:32     ` Richard Henderson
  2017-08-15 17:49       ` Philippe Mathieu-Daudé
  1 sibling, 1 reply; 13+ messages in thread
From: Richard Henderson @ 2017-08-15 16:32 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-devel
  Cc: edgar.iglesias, peter.maydell, alistair.francis

On 08/15/2017 08:56 AM, Philippe Mathieu-Daudé wrote:
>> @@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt,
>> int rt2,
>>               tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
>>           }
>>       } else {
>> -        memop |= size;
>> +        memop |= size | MO_ALIGN;
> 
> MO_ALIGN_8 here too?
> 
>>           tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
>>           tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);

Peter already pointed out that MO_ALIGN_N should be reserved for those cases
where an explicit size is really needed.  You should note that using MO_ALIGN_8
would be actively wrong here -- it would incorrectly require 8-byte alignment
for the single-byte access of LDXRB.


r~

* Re: [Qemu-devel] [PATCH v2 for-2.10 0/3] Fixup logic for exclusive pair
  2017-08-15 14:57 [Qemu-devel] [PATCH v2 for-2.10 0/3] Fixup logic for exclusive pair Richard Henderson
                   ` (2 preceding siblings ...)
  2017-08-15 14:57 ` [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive Richard Henderson
@ 2017-08-15 17:16 ` Peter Maydell
  3 siblings, 0 replies; 13+ messages in thread
From: Peter Maydell @ 2017-08-15 17:16 UTC (permalink / raw)
  To: Richard Henderson; +Cc: QEMU Developers, Alistair Francis, Edgar Iglesias

On 15 August 2017 at 15:57, Richard Henderson
<richard.henderson@linaro.org> wrote:
> In reviewing my previous patch, Peter pointed out that it is
> CONSTRAINED UNPREDICTABLE what happens when you mix the number
> of registers in an LDX[PR] + STX[RP] pair.  So most of the bug
> that I thought I was fixing isn't a bug at all.
>
> That said, the patch does still fix a real bug wrt single-copy
> semantics, so patch 2 is largely unchanged; the commit message
> is re-worded.
>
> I also un-squashed Alistair's original patches and dropped the
> tcg/tcg-op.c change, to be revisited for 2.11.

Applied to master (with various reviewed-by etc tags, and
the typos in the commit message for 3/3 fixed).

thanks
-- PMM

* Re: [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive
  2017-08-15 16:14     ` Peter Maydell
@ 2017-08-15 17:48       ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 13+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-08-15 17:48 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Richard Henderson, QEMU Developers, Edgar Iglesias, Alistair Francis

On 08/15/2017 01:14 PM, Peter Maydell wrote:
> On 15 August 2017 at 16:56, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>> Hi Richard,
>>
>> On 08/15/2017 11:57 AM, Richard Henderson wrote:
>>>
>>> From: Alistair Francis <alistair.francis@xilinx.com>
>>>
>>> Acording to the ARM ARM exclusive loads require the same allignment as
>>> exclusive stores. Let's update the memops used for the load to match
>>> that of the store. This adds the alignment requirement to the memops.
>>>
>>> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>>> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
>>> [rth: Require 16-byte alignment for 64-bit LDXP.]
>>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>>> ---
>>>    target/arm/translate-a64.c | 11 ++++++-----
>>>    1 file changed, 6 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
>>> index eac545e4f2..2200e25be0 100644
>>> --- a/target/arm/translate-a64.c
>>> +++ b/target/arm/translate-a64.c
>>> @@ -1861,7 +1861,7 @@ static void gen_load_exclusive(DisasContext *s, int
>>> rt, int rt2,
>>>            g_assert(size >= 2);
>>>            if (size == 2) {
>>>                /* The pair must be single-copy atomic for the doubleword.
>>> */
>>> -            memop |= MO_64;
>>> +            memop |= MO_64 | MO_ALIGN;
>>
>>
>> isn't MO_ALIGN_8 enough?
> 
> MO_ALIGN is "align to size of access", ie to 8 bytes in this case.
> MO_ALIGN_8 is "align to a specified size (8 bytes) which isn't the
>   size of the access".
> In this case the access size is 8 bytes so we don't need MO_ALIGN_8.
> 
> Alignments to something other than the access size are the oddball
> case, so I think it makes sense to stick with MO_ALIGN for the
> common case of "just aligned to the access size" so you can
> spot the odd cases when you're reading the source.

Ok, I misunderstood "MO_ALIGN supposes the alignment size is the size of 
a memory access."

thanks!

> 
> thanks
> -- PMM
> 

* Re: [Qemu-devel] [PATCH v2 for-2.10 3/3] target/arm: Require alignment for load exclusive
  2017-08-15 16:32     ` Richard Henderson
@ 2017-08-15 17:49       ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 13+ messages in thread
From: Philippe Mathieu-Daudé @ 2017-08-15 17:49 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: edgar.iglesias, peter.maydell, alistair.francis

On 08/15/2017 01:32 PM, Richard Henderson wrote:
> On 08/15/2017 08:56 AM, Philippe Mathieu-Daudé wrote:
>>> @@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt,
>>> int rt2,
>>>                tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
>>>            }
>>>        } else {
>>> -        memop |= size;
>>> +        memop |= size | MO_ALIGN;
>>
>> MO_ALIGN_8 here too?
>>
>>>            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
>>>            tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
> 
> Peter already pointed out that MO_ALIGN_N should be reserved for those cases
> where an explicit size really needed.  You should note that using MO_ALIGN_8
> would be actively wrong here -- it would incorrectly require 8 byte alignment
> for the single byte access of LDXRB.

Indeed I didn't think of that, thanks!

> 
> 
> r~
> 
