From: Richard Henderson <richard.henderson@linaro.org>
To: "Philippe Mathieu-Daudé" <philmd@linaro.org>, qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org
Subject: Re: [PATCH v5 17/36] tcg: Split out tcg_gen_nonatomic_cmpxchg_i{32,64}
Date: Thu, 26 Jan 2023 20:44:10 -1000
Message-ID: <79c17cc8-f45d-391f-88db-8d74d32829ef@linaro.org>
In-Reply-To: <abb025a0-8588-81b3-ddd3-a93b4b66f6f5@linaro.org>

On 1/26/23 14:53, Philippe Mathieu-Daudé wrote:
> On 26/1/23 05:38, Richard Henderson wrote:
>> Normally this is automatically handled by the CF_PARALLEL checks
>> within tcg_gen_atomic_cmpxchg_i{32,64}, but x86 has a special
>> case of !PREFIX_LOCK where it always wants the non-atomic version.
>>
>> Split these out so that x86 does not have to roll its own.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>   include/tcg/tcg-op.h |   4 ++
>>   tcg/tcg-op.c         | 154 +++++++++++++++++++++++++++----------------
>>   2 files changed, 101 insertions(+), 57 deletions(-)
> 
> 
>> +void tcg_gen_nonatomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
>> +                                   TCGv_i64 newv, TCGArg idx, MemOp memop)
>> +{
>> +    TCGv_i64 t1, t2;
>> +
> 
> This block from here ...
> 
>> +    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
>> +        tcg_gen_nonatomic_cmpxchg_i32(TCGV_LOW(retv), addr, TCGV_LOW(cmpv),
>> +                                      TCGV_LOW(newv), idx, memop);
>> +        if (memop & MO_SIGN) {
>> +            tcg_gen_sari_i32(TCGV_HIGH(retv), TCGV_LOW(retv), 31);
>> +        } else {
>> +            tcg_gen_movi_i32(TCGV_HIGH(retv), 0);
>> +        }
>> +        return;
>> +    }
> 
> ... to here,
> 
>> +    t1 = tcg_temp_new_i64();
>> +    t2 = tcg_temp_new_i64();
>> +
>> +    tcg_gen_ext_i64(t2, cmpv, memop & MO_SIZE);
>> +
>> +    tcg_gen_qemu_ld_i64(t1, addr, idx, memop & ~MO_SIGN);
>> +    tcg_gen_movcond_i64(TCG_COND_EQ, t2, t1, t2, newv, t1);
>> +    tcg_gen_qemu_st_i64(t2, addr, idx, memop);
>> +    tcg_temp_free_i64(t2);
>> +
>> +    if (memop & MO_SIGN) {
>> +        tcg_gen_ext_i64(retv, t1, memop);
>> +    } else {
>> +        tcg_gen_mov_i64(retv, t1);
>> +    }
>> +    tcg_temp_free_i64(t1);
>>   }
>>   void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
>>                                   TCGv_i64 newv, TCGArg idx, MemOp memop)
>>   {
>> -    memop = tcg_canonicalize_memop(memop, 1, 0);
>> -
>>       if (!(tcg_ctx->gen_tb->cflags & CF_PARALLEL)) {
>> -        TCGv_i64 t1 = tcg_temp_new_i64();
>> -        TCGv_i64 t2 = tcg_temp_new_i64();
>> +        tcg_gen_nonatomic_cmpxchg_i64(retv, addr, cmpv, newv, idx, memop);
>> +        return;
>> +    }
>> -        tcg_gen_ext_i64(t2, cmpv, memop & MO_SIZE);
>> -
>> -        tcg_gen_qemu_ld_i64(t1, addr, idx, memop & ~MO_SIGN);
>> -        tcg_gen_movcond_i64(TCG_COND_EQ, t2, t1, t2, newv, t1);
>> -        tcg_gen_qemu_st_i64(t2, addr, idx, memop);
>> -        tcg_temp_free_i64(t2);
>> -
>> -        if (memop & MO_SIGN) {
>> -            tcg_gen_ext_i64(retv, t1, memop);
>> -        } else {
>> -            tcg_gen_mov_i64(retv, t1);
>> -        }
>> -        tcg_temp_free_i64(t1);
>> -    } else if ((memop & MO_SIZE) == MO_64) {
>> -#ifdef CONFIG_ATOMIC64
>> +    if ((memop & MO_SIZE) == MO_64) {
>>           gen_atomic_cx_i64 gen;
>> -        MemOpIdx oi;
>> +        memop = tcg_canonicalize_memop(memop, 1, 0);
>>           gen = table_cmpxchg[memop & (MO_SIZE | MO_BSWAP)];
>> -        tcg_debug_assert(gen != NULL);
>> +        if (gen) {
>> +            MemOpIdx oi = make_memop_idx(memop, idx);
>> +            gen(retv, cpu_env, addr, cmpv, newv, tcg_constant_i32(oi));
>> +            return;
>> +        }
>> -        oi = make_memop_idx(memop, idx);
>> -        gen(retv, cpu_env, addr, cmpv, newv, tcg_constant_i32(oi));
>> -#else
>>           gen_helper_exit_atomic(cpu_env);
>> -        /* Produce a result, so that we have a well-formed opcode stream
>> -           with respect to uses of the result in the (dead) code following.  */
>> +
>> +        /*
>> +         * Produce a result for a well-formed opcode stream.  This satisfies
>> +         * liveness for set before used, which happens before this dead code
>> +         * is removed.
>> +         */
>>           tcg_gen_movi_i64(retv, 0);
>> -#endif /* CONFIG_ATOMIC64 */
>> +        return;
>> +    }
> 
> and this one here:
>> +    if (TCG_TARGET_REG_BITS == 32) {
>> +        tcg_gen_atomic_cmpxchg_i32(TCGV_LOW(retv), addr, TCGV_LOW(cmpv),
>> +                                   TCGV_LOW(newv), idx, memop);
>> +        if (memop & MO_SIGN) {
>> +            tcg_gen_sari_i32(TCGV_HIGH(retv), TCGV_LOW(retv), 31);
>> +        } else {
>> +            tcg_gen_movi_i32(TCGV_HIGH(retv), 0);
>> +        }
> 
> belong to a subsequent patch IMO.

No.  That code is in there now, and needs to be there for correctness.

It gets duplicated once the non-atomic code path is exposed as its own entry point.
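
Concretely, the piece that has to appear in both entry points on a 32-bit
host is just the high-half fixup after the operation has been done on
TCGV_LOW(retv).  As a rough sketch (illustration only, not part of the
patch; the helper name is made up):

    static void gen_cmpxchg_i64_high_fixup(TCGv_i64 retv, MemOp memop)
    {
        if (memop & MO_SIGN) {
            /* Sign-extend the 32-bit result into the high half.  */
            tcg_gen_sari_i32(TCGV_HIGH(retv), TCGV_LOW(retv), 31);
        } else {
            /* Zero-extend: the high half of a narrow result is zero.  */
            tcg_gen_movi_i32(TCGV_HIGH(retv), 0);
        }
    }

Both tcg_gen_nonatomic_cmpxchg_i64 and tcg_gen_atomic_cmpxchg_i64 need
exactly that sequence after deferring to the i32 variant, which is why
the hunk shows up in this patch rather than a later one.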


r~


> Otherwise LGTM.
> 
>>       } else {
>>           TCGv_i32 c32 = tcg_temp_new_i32();
>>           TCGv_i32 n32 = tcg_temp_new_i32();
> 



Thread overview: 63+ messages
2023-01-26  4:37 [PATCH v5 00/36] tcg: Support for Int128 with helpers Richard Henderson
2023-01-26  4:37 ` [PATCH v5 01/36] tcg: Define TCG_TYPE_I128 and related helper macros Richard Henderson
2023-01-26  4:37 ` [PATCH v5 02/36] tcg: Handle dh_typecode_i128 with TCG_CALL_{RET, ARG}_NORMAL Richard Henderson
2023-01-26  4:37 ` [PATCH v5 03/36] tcg: Allocate objects contiguously in temp_allocate_frame Richard Henderson
2023-01-26 17:12   ` Alex Bennée
2023-01-26 19:48     ` Richard Henderson
2023-01-26  4:37 ` [PATCH v5 04/36] tcg: Introduce tcg_out_addi_ptr Richard Henderson
2023-01-26  4:37 ` [PATCH v5 05/36] tcg: Add TCG_CALL_{RET,ARG}_BY_REF Richard Henderson
2023-01-27 10:40   ` Alex Bennée
2023-01-27 18:48     ` Richard Henderson
2023-01-26  4:37 ` [PATCH v5 06/36] tcg: Introduce tcg_target_call_oarg_reg Richard Henderson
2023-01-26  4:37 ` [PATCH v5 07/36] tcg: Add TCG_CALL_RET_BY_VEC Richard Henderson
2023-01-26  4:37 ` [PATCH v5 08/36] include/qemu/int128: Use Int128 structure for TCI Richard Henderson
2023-01-27 13:51   ` Alex Bennée
2023-01-26  4:37 ` [PATCH v5 09/36] tcg/i386: Add TCG_TARGET_CALL_{RET,ARG}_I128 Richard Henderson
2023-01-27 13:52   ` Alex Bennée
2023-01-26  4:37 ` [PATCH v5 10/36] tcg/tci: Fix big-endian return register ordering Richard Henderson
2023-01-27 13:53   ` Alex Bennée
2023-01-26  4:37 ` [PATCH v5 11/36] tcg/tci: Add TCG_TARGET_CALL_{RET,ARG}_I128 Richard Henderson
2023-01-27 14:00   ` Alex Bennée
2023-01-27 18:55     ` Richard Henderson
2023-01-26  4:38 ` [PATCH v5 12/36] tcg: " Richard Henderson
2023-01-27 17:04   ` Alex Bennée
2023-01-26  4:38 ` [PATCH v5 13/36] tcg: Add temp allocation for TCGv_i128 Richard Henderson
2023-01-27 17:08   ` Alex Bennée
2023-01-27 18:56     ` Richard Henderson
2023-01-26  4:38 ` [PATCH v5 14/36] tcg: Add basic data movement " Richard Henderson
2023-01-27 18:23   ` Alex Bennée
2023-01-26  4:38 ` [PATCH v5 15/36] tcg: Add guest load/store primitives " Richard Henderson
2023-01-26  4:38 ` [PATCH v5 16/36] tcg: Add tcg_gen_{non}atomic_cmpxchg_i128 Richard Henderson
2023-01-27  0:45   ` Philippe Mathieu-Daudé
2023-01-27  6:39     ` Richard Henderson
2023-01-27 23:49       ` Philippe Mathieu-Daudé
2023-01-26  4:38 ` [PATCH v5 17/36] tcg: Split out tcg_gen_nonatomic_cmpxchg_i{32,64} Richard Henderson
2023-01-27  0:53   ` Philippe Mathieu-Daudé
2023-01-27  6:44     ` Richard Henderson [this message]
2023-01-26  4:38 ` [PATCH v5 18/36] target/arm: Use tcg_gen_atomic_cmpxchg_i128 for STXP Richard Henderson
2023-01-26  4:38 ` [PATCH v5 19/36] target/arm: Use tcg_gen_atomic_cmpxchg_i128 for CASP Richard Henderson
2023-01-26  4:38 ` [PATCH v5 20/36] target/ppc: Use tcg_gen_atomic_cmpxchg_i128 for STQCX Richard Henderson
2023-01-26  4:38 ` [PATCH v5 21/36] tests/tcg/s390x: Add div.c Richard Henderson
2023-01-26  4:38 ` [PATCH v5 22/36] tests/tcg/s390x: Add clst.c Richard Henderson
2023-01-26  4:38 ` [PATCH v5 23/36] tests/tcg/s390x: Add long-double.c Richard Henderson
2023-01-26  4:38 ` [PATCH v5 24/36] target/s390x: Use a single return for helper_divs32/u32 Richard Henderson
2023-01-26  9:58   ` David Hildenbrand
2023-01-27  0:57   ` Philippe Mathieu-Daudé
2023-01-26  4:38 ` [PATCH v5 25/36] target/s390x: Use a single return for helper_divs64/u64 Richard Henderson
2023-01-26  4:38 ` [PATCH v5 26/36] target/s390x: Use Int128 for return from CLST Richard Henderson
2023-01-26  4:38 ` [PATCH v5 27/36] target/s390x: Use Int128 for return from CKSM Richard Henderson
2023-01-26  4:38 ` [PATCH v5 28/36] target/s390x: Use Int128 for return from TRE Richard Henderson
2023-01-26  4:38 ` [PATCH v5 29/36] target/s390x: Copy wout_x1 to wout_x1_P Richard Henderson
2023-01-26  4:38 ` [PATCH v5 30/36] target/s390x: Use Int128 for returning float128 Richard Henderson
2023-01-26 10:06   ` David Hildenbrand
2023-01-26  4:38 ` [PATCH v5 31/36] target/s390x: Use Int128 for passing float128 Richard Henderson
2023-01-26 11:19   ` David Hildenbrand
2023-01-26  4:38 ` [PATCH v5 32/36] target/s390x: Use tcg_gen_atomic_cmpxchg_i128 for CDSG Richard Henderson
2023-01-26 11:27   ` David Hildenbrand
2023-01-26 21:01     ` Richard Henderson
2023-01-27 16:09       ` David Hildenbrand
2023-01-26  4:38 ` [PATCH v5 33/36] target/s390x: Implement CC_OP_NZ in gen_op_calc_cc Richard Henderson
2023-01-26 11:25   ` David Hildenbrand
2023-01-26  4:38 ` [PATCH v5 34/36] target/i386: Split out gen_cmpxchg8b, gen_cmpxchg16b Richard Henderson
2023-01-26  4:38 ` [PATCH v5 35/36] target/i386: Inline cmpxchg8b Richard Henderson
2023-01-26  4:38 ` [PATCH v5 36/36] target/i386: Inline cmpxchg16b Richard Henderson
