From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, Peter Maydell <peter.maydell@linaro.org>
Subject: [PATCH v3 17/20] target/arm: Move mte check for store-exclusive
Date: Tue, 30 May 2023 12:14:35 -0700
Message-ID: <20230530191438.411344-18-richard.henderson@linaro.org>
In-Reply-To: <20230530191438.411344-1-richard.henderson@linaro.org>

Push the MTE check behind the exclusive_addr check.
Document the several ways in which this implementation
is still out of spec.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/tcg/translate-a64.c | 42 +++++++++++++++++++++++++++++-----
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 49cb7a7dd5..9654c5746a 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2524,17 +2524,47 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
      */
     TCGLabel *fail_label = gen_new_label();
     TCGLabel *done_label = gen_new_label();
-    TCGv_i64 tmp, dirty_addr, clean_addr;
+    TCGv_i64 tmp, clean_addr;
     MemOp memop;
 
-    memop = (size + is_pair) | MO_ALIGN;
-    memop = finalize_memop(s, memop);
-
-    dirty_addr = cpu_reg_sp(s, rn);
-    clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, memop);
+    /*
+     * FIXME: We are out of spec here.  We have recorded only the address
+     * from load_exclusive, not the entire range, and we assume that the
+     * size of the access on both sides match.  The architecture allows the
+     * store to be smaller than the load, so long as the stored bytes are
+     * within the range recorded by the load.
+     */
 
+    /* See AArch64.ExclusiveMonitorsPass() and AArch64.IsExclusiveVA(). */
+    clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
     tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr, fail_label);
 
+    /*
+     * The write, and any associated faults, only happen if the virtual
+     * and physical addresses pass the exclusive monitor check.  These
+     * faults are exceedingly unlikely, because normally the guest uses
+     * the exact same address register for the load_exclusive, and we
+     * would have recognized these faults there.
+     *
+     * It is possible to trigger an alignment fault pre-LSE2, e.g. with an
+     * unaligned 4-byte write within the range of an aligned 8-byte load.
+     * With LSE2, the store would need to cross a 16-byte boundary when the
+     * load did not, which would mean the store is outside the range
+     * recorded for the monitor, which would have failed a corrected monitor
+     * check above.  For now, we assume no size change and retain the
+     * MO_ALIGN to let tcg know what we checked in the load_exclusive.
+     *
+     * It is possible to trigger an MTE fault, by performing the load with
+     * a virtual address with a valid tag and performing the store with the
+     * same virtual address and a different invalid tag.
+     */
+    memop = size + is_pair;
+    if (memop == MO_128 || !dc_isar_feature(aa64_lse2, s)) {
+        memop |= MO_ALIGN;
+    }
+    memop = finalize_memop(s, memop);
+    gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
+
     tmp = tcg_temp_new_i64();
     if (is_pair) {
         if (size == 2) {
-- 
2.34.1
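
For readers skimming the series, the ordering this patch establishes in
gen_store_exclusive() can be condensed as follows.  This is a sketch
assembled from the diff above, with labels, temporaries, and the
surrounding function elided; it is illustrative, not a verbatim excerpt:

    /* Strip the TBI bits only; no MTE check has been performed yet. */
    clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));

    /*
     * Exclusive monitor check comes first: on mismatch we branch to
     * fail_label and never reach the store, so no store-side fault
     * (alignment or MTE) is raised for a failed store-exclusive.
     */
    tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr,
                       fail_label);

    /*
     * Only on the success path: choose alignment (MO_ALIGN unless a
     * sub-128-bit access under FEAT_LSE2) and perform the MTE check.
     */
    memop = size + is_pair;
    if (memop == MO_128 || !dc_isar_feature(aa64_lse2, s)) {
        memop |= MO_ALIGN;
    }
    memop = finalize_memop(s, memop);
    gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);

The guest-visible consequence is the one noted in the comment: an MTE
tag mismatch on the store address is reported only when the exclusive
monitor check passes, e.g. when the store reuses the load's virtual
address but with a different, invalid allocation tag.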



