* [PATCH 0/4] accel/tcg: Fixes for user-only page tracking
@ 2022-12-24 15:18 Richard Henderson
  2022-12-24 15:18 ` [PATCH 1/4] accel/tcg: Fix tb_invalidate_phys_page_unwind Richard Henderson
                   ` (3 more replies)
  0 siblings, 4 replies; 26+ messages in thread
From: Richard Henderson @ 2022-12-24 15:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: iii

Fix all three bugs pointed out by Ilya and, for the moment at least,
adjust the test case to set read+write when testing writability.


r~


Ilya Leoshkevich (1):
  tests/tcg/multiarch: add vma-pthread.c

Richard Henderson (3):
  accel/tcg: Fix tb_invalidate_phys_page_unwind
  accel/tcg: Use g_free_rcu for user-exec interval trees
  accel/tcg: Handle false negative lookup in page_check_range

 tests/tcg/multiarch/nop_func.h       |  25 ++++
 accel/tcg/tb-maint.c                 |  78 +++++-----
 accel/tcg/user-exec.c                |  57 ++++++--
 tests/tcg/multiarch/munmap-pthread.c |  16 +--
 tests/tcg/multiarch/vma-pthread.c    | 207 +++++++++++++++++++++++++++
 tests/tcg/multiarch/Makefile.target  |   3 +
 6 files changed, 321 insertions(+), 65 deletions(-)
 create mode 100644 tests/tcg/multiarch/nop_func.h
 create mode 100644 tests/tcg/multiarch/vma-pthread.c

-- 
2.34.1



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 1/4] accel/tcg: Fix tb_invalidate_phys_page_unwind
  2022-12-24 15:18 [PATCH 0/4] accel/tcg: Fixes for user-only page tracking Richard Henderson
@ 2022-12-24 15:18 ` Richard Henderson
  2022-12-28 12:49   ` [PATCH 1a/4] " Philippe Mathieu-Daudé
  2022-12-24 15:18 ` [PATCH 2/4] accel/tcg: Use g_free_rcu for user-exec interval trees Richard Henderson
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 26+ messages in thread
From: Richard Henderson @ 2022-12-24 15:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: iii

When called from syscall(), we are not within a TB and pc == 0.
We can skip the check for invalidating the current TB.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/tb-maint.c | 78 ++++++++++++++++++++++++--------------------
 1 file changed, 43 insertions(+), 35 deletions(-)

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 1b8e860647..b3d6529ae2 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1024,43 +1024,51 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
  */
 bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
 {
-    assert(pc != 0);
-#ifdef TARGET_HAS_PRECISE_SMC
-    assert_memory_lock();
-    {
-        TranslationBlock *current_tb = tcg_tb_lookup(pc);
-        bool current_tb_modified = false;
-        TranslationBlock *tb;
-        PageForEachNext n;
+    TranslationBlock *current_tb;
+    bool current_tb_modified;
+    TranslationBlock *tb;
+    PageForEachNext n;
 
-        addr &= TARGET_PAGE_MASK;
-
-        PAGE_FOR_EACH_TB(addr, addr + TARGET_PAGE_SIZE, unused, tb, n) {
-            if (current_tb == tb &&
-                (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
-                /*
-                 * If we are modifying the current TB, we must stop its
-                 * execution. We could be more precise by checking that
-                 * the modification is after the current PC, but it would
-                 * require a specialized function to partially restore
-                 * the CPU state.
-                 */
-                current_tb_modified = true;
-                cpu_restore_state_from_tb(current_cpu, current_tb, pc);
-            }
-            tb_phys_invalidate__locked(tb);
-        }
-
-        if (current_tb_modified) {
-            /* Force execution of one insn next time.  */
-            CPUState *cpu = current_cpu;
-            cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
-            return true;
-        }
+    /*
+     * Without precise smc semantics, or when outside of a TB,
+     * we can skip to invalidate.
+     */
+#ifndef TARGET_HAS_PRECISE_SMC
+    pc = 0;
+#endif
+    if (!pc) {
+        tb_invalidate_phys_page(addr);
+        return false;
+    }
+
+    assert_memory_lock();
+    current_tb = tcg_tb_lookup(pc);
+
+    addr &= TARGET_PAGE_MASK;
+    current_tb_modified = false;
+
+    PAGE_FOR_EACH_TB(addr, addr + TARGET_PAGE_SIZE, unused, tb, n) {
+        if (current_tb == tb &&
+            (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
+            /*
+             * If we are modifying the current TB, we must stop its
+             * execution. We could be more precise by checking that
+             * the modification is after the current PC, but it would
+             * require a specialized function to partially restore
+             * the CPU state.
+             */
+            current_tb_modified = true;
+            cpu_restore_state_from_tb(current_cpu, current_tb, pc);
+        }
+        tb_phys_invalidate__locked(tb);
+    }
+
+    if (current_tb_modified) {
+        /* Force execution of one insn next time.  */
+        CPUState *cpu = current_cpu;
+        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
+        return true;
     }
-#else
-    tb_invalidate_phys_page(addr);
-#endif /* TARGET_HAS_PRECISE_SMC */
     return false;
 }
 #else
-- 
2.34.1




* [PATCH 2/4] accel/tcg: Use g_free_rcu for user-exec interval trees
  2022-12-24 15:18 [PATCH 0/4] accel/tcg: Fixes for user-only page tracking Richard Henderson
  2022-12-24 15:18 ` [PATCH 1/4] accel/tcg: Fix tb_invalidate_phys_page_unwind Richard Henderson
@ 2022-12-24 15:18 ` Richard Henderson
  2022-12-28  7:19   ` Philippe Mathieu-Daudé
  2022-12-24 15:18 ` [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range Richard Henderson
  2022-12-24 15:18 ` [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c Richard Henderson
  3 siblings, 1 reply; 26+ messages in thread
From: Richard Henderson @ 2022-12-24 15:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: iii

Because we allow lockless lookups, we have to be careful
when a node is freed.  Use RCU to delay the free until it is safe.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/user-exec.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index a3cecda405..2c5c10d2e6 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -22,6 +22,7 @@
 #include "exec/exec-all.h"
 #include "tcg/tcg.h"
 #include "qemu/bitops.h"
+#include "qemu/rcu.h"
 #include "exec/cpu_ldst.h"
 #include "exec/translate-all.h"
 #include "exec/helper-proto.h"
@@ -136,6 +137,7 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigset_t *old_set,
 }
 
 typedef struct PageFlagsNode {
+    struct rcu_head rcu;
     IntervalTreeNode itree;
     int flags;
 } PageFlagsNode;
@@ -266,7 +268,7 @@ static bool pageflags_unset(target_ulong start, target_ulong last)
             }
         } else if (p_last <= last) {
             /* Range completely covers node -- remove it. */
-            g_free(p);
+            g_free_rcu(p, rcu);
         } else {
             /* Truncate the node from the start. */
             p->itree.start = last + 1;
@@ -311,7 +313,7 @@ static void pageflags_create_merge(target_ulong start, target_ulong last,
     if (prev) {
         if (next) {
             prev->itree.last = next->itree.last;
-            g_free(next);
+            g_free_rcu(next, rcu);
         } else {
             prev->itree.last = last;
         }
@@ -376,7 +378,7 @@ static bool pageflags_set_clear(target_ulong start, target_ulong last,
             p->flags = merge_flags;
         } else {
             interval_tree_remove(&p->itree, &pageflags_root);
-            g_free(p);
+            g_free_rcu(p, rcu);
         }
         goto done;
     }
@@ -421,7 +423,7 @@ static bool pageflags_set_clear(target_ulong start, target_ulong last,
                     p->flags = merge_flags;
                 } else {
                     interval_tree_remove(&p->itree, &pageflags_root);
-                    g_free(p);
+                    g_free_rcu(p, rcu);
                 }
                 if (p_last < last) {
                     start = p_last + 1;
@@ -462,7 +464,7 @@ static bool pageflags_set_clear(target_ulong start, target_ulong last,
         p->itree.start = last + 1;
         interval_tree_insert(&p->itree, &pageflags_root);
     } else {
-        g_free(p);
+        g_free_rcu(p, rcu);
         goto restart;
     }
     if (set_flags) {
@@ -779,6 +781,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
 #define TBD_MASK   (TARGET_PAGE_MASK * TPD_PAGES)
 
 typedef struct TargetPageDataNode {
+    struct rcu_head rcu;
     IntervalTreeNode itree;
     char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
 } TargetPageDataNode;
@@ -801,11 +804,11 @@ void page_reset_target_data(target_ulong start, target_ulong end)
          n = next,
          next = next ? interval_tree_iter_next(n, start, last) : NULL) {
         target_ulong n_start, n_last, p_ofs, p_len;
-        TargetPageDataNode *t;
+        TargetPageDataNode *t = container_of(n, TargetPageDataNode, itree);
 
         if (n->start >= start && n->last <= last) {
             interval_tree_remove(n, &targetdata_root);
-            g_free(n);
+            g_free_rcu(t, rcu);
             continue;
         }
 
@@ -819,7 +822,6 @@ void page_reset_target_data(target_ulong start, target_ulong end)
         n_last = MIN(last, n->last);
         p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
 
-        t = container_of(n, TargetPageDataNode, itree);
         memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
     }
 }
-- 
2.34.1




* [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range
  2022-12-24 15:18 [PATCH 0/4] accel/tcg: Fixes for user-only page tracking Richard Henderson
  2022-12-24 15:18 ` [PATCH 1/4] accel/tcg: Fix tb_invalidate_phys_page_unwind Richard Henderson
  2022-12-24 15:18 ` [PATCH 2/4] accel/tcg: Use g_free_rcu for user-exec interval trees Richard Henderson
@ 2022-12-24 15:18 ` Richard Henderson
  2022-12-28  7:24   ` Philippe Mathieu-Daudé
  2022-12-24 15:18 ` [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c Richard Henderson
  3 siblings, 1 reply; 26+ messages in thread
From: Richard Henderson @ 2022-12-24 15:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: iii

As in page_get_flags, we need to try again with the mmap
lock held if we fail a page lookup.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/user-exec.c | 39 ++++++++++++++++++++++++++++++++-------
 1 file changed, 32 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 2c5c10d2e6..c72a806203 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -525,6 +525,7 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
 int page_check_range(target_ulong start, target_ulong len, int flags)
 {
     target_ulong last;
+    int locked, ret;
 
     if (len == 0) {
         return 0;  /* trivial length */
@@ -535,42 +536,66 @@ int page_check_range(target_ulong start, target_ulong len, int flags)
         return -1; /* wrap around */
     }
 
+    locked = have_mmap_lock();
     while (true) {
         PageFlagsNode *p = pageflags_find(start, last);
         int missing;
 
         if (!p) {
-            return -1; /* entire region invalid */
+            if (!locked) {
+                /*
+                 * Lockless lookups have false negatives.
+                 * Retry with the lock held.
+                 */
+                mmap_lock();
+                locked = -1;
+                p = pageflags_find(start, last);
+            }
+            if (!p) {
+                ret = -1; /* entire region invalid */
+                break;
+            }
         }
         if (start < p->itree.start) {
-            return -1; /* initial bytes invalid */
+            ret = -1; /* initial bytes invalid */
+            break;
         }
 
         missing = flags & ~p->flags;
         if (missing & PAGE_READ) {
-            return -1; /* page not readable */
+            ret = -1; /* page not readable */
+            break;
         }
         if (missing & PAGE_WRITE) {
             if (!(p->flags & PAGE_WRITE_ORG)) {
-                return -1; /* page not writable */
+                ret = -1; /* page not writable */
+                break;
             }
             /* Asking about writable, but has been protected: undo. */
             if (!page_unprotect(start, 0)) {
-                return -1;
+                ret = -1;
+                break;
             }
             /* TODO: page_unprotect should take a range, not a single page. */
             if (last - start < TARGET_PAGE_SIZE) {
-                return 0; /* ok */
+                ret = 0; /* ok */
+                break;
             }
             start += TARGET_PAGE_SIZE;
             continue;
         }
 
         if (last <= p->itree.last) {
-            return 0; /* ok */
+            ret = 0; /* ok */
+            break;
         }
         start = p->itree.last + 1;
     }
+
+    if (locked < 0) {
+        mmap_unlock();
+    }
+    return ret;
 }
 
 void page_protect(tb_page_addr_t address)
-- 
2.34.1




* [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2022-12-24 15:18 [PATCH 0/4] accel/tcg: Fixes for user-only page tracking Richard Henderson
                   ` (2 preceding siblings ...)
  2022-12-24 15:18 ` [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range Richard Henderson
@ 2022-12-24 15:18 ` Richard Henderson
  2022-12-27 17:23   ` Alex Bennée
  2023-01-13 15:17   ` Peter Maydell
  3 siblings, 2 replies; 26+ messages in thread
From: Richard Henderson @ 2022-12-24 15:18 UTC (permalink / raw)
  To: qemu-devel; +Cc: iii

From: Ilya Leoshkevich <iii@linux.ibm.com>

Add a test that locklessly changes and exercises page protection bits
from various threads. This helps catch race conditions in the VMA
handling.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20221223120252.513319-1-iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tests/tcg/multiarch/nop_func.h       |  25 ++++
 tests/tcg/multiarch/munmap-pthread.c |  16 +--
 tests/tcg/multiarch/vma-pthread.c    | 207 +++++++++++++++++++++++++++
 tests/tcg/multiarch/Makefile.target  |   3 +
 4 files changed, 236 insertions(+), 15 deletions(-)
 create mode 100644 tests/tcg/multiarch/nop_func.h
 create mode 100644 tests/tcg/multiarch/vma-pthread.c

diff --git a/tests/tcg/multiarch/nop_func.h b/tests/tcg/multiarch/nop_func.h
new file mode 100644
index 0000000000..f714d21000
--- /dev/null
+++ b/tests/tcg/multiarch/nop_func.h
@@ -0,0 +1,25 @@
+/*
+ * No-op functions that can be safely copied.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#ifndef NOP_FUNC_H
+#define NOP_FUNC_H
+
+static const char nop_func[] = {
+#if defined(__aarch64__)
+    0xc0, 0x03, 0x5f, 0xd6,     /* ret */
+#elif defined(__alpha__)
+    0x01, 0x80, 0xFA, 0x6B,     /* ret */
+#elif defined(__arm__)
+    0x1e, 0xff, 0x2f, 0xe1,     /* bx lr */
+#elif defined(__riscv)
+    0x67, 0x80, 0x00, 0x00,     /* ret */
+#elif defined(__s390__)
+    0x07, 0xfe,                 /* br %r14 */
+#elif defined(__i386__) || defined(__x86_64__)
+    0xc3,                       /* ret */
+#endif
+};
+
+#endif
diff --git a/tests/tcg/multiarch/munmap-pthread.c b/tests/tcg/multiarch/munmap-pthread.c
index d7143b00d5..1c79005846 100644
--- a/tests/tcg/multiarch/munmap-pthread.c
+++ b/tests/tcg/multiarch/munmap-pthread.c
@@ -7,21 +7,7 @@
 #include <sys/mman.h>
 #include <unistd.h>
 
-static const char nop_func[] = {
-#if defined(__aarch64__)
-    0xc0, 0x03, 0x5f, 0xd6,     /* ret */
-#elif defined(__alpha__)
-    0x01, 0x80, 0xFA, 0x6B,     /* ret */
-#elif defined(__arm__)
-    0x1e, 0xff, 0x2f, 0xe1,     /* bx lr */
-#elif defined(__riscv)
-    0x67, 0x80, 0x00, 0x00,     /* ret */
-#elif defined(__s390__)
-    0x07, 0xfe,                 /* br %r14 */
-#elif defined(__i386__) || defined(__x86_64__)
-    0xc3,                       /* ret */
-#endif
-};
+#include "nop_func.h"
 
 static void *thread_mmap_munmap(void *arg)
 {
diff --git a/tests/tcg/multiarch/vma-pthread.c b/tests/tcg/multiarch/vma-pthread.c
new file mode 100644
index 0000000000..35e63cd6cd
--- /dev/null
+++ b/tests/tcg/multiarch/vma-pthread.c
@@ -0,0 +1,207 @@
+/*
+ * Test that VMA updates do not race.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Map a contiguous chunk of RWX memory. Split it into 8 equally sized
+ * regions, each of which is guaranteed to have a certain combination of
+ * protection bits set.
+ *
+ * Reader, writer and executor threads perform the respective operations on
+ * pages, which are guaranteed to have the respective protection bit set.
+ * Two mutator threads change the non-fixed protection bits randomly.
+ */
+#include <assert.h>
+#include <fcntl.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+#include "nop_func.h"
+
+#define PAGE_IDX_BITS 10
+#define PAGE_COUNT (1 << PAGE_IDX_BITS)
+#define PAGE_IDX_MASK (PAGE_COUNT - 1)
+#define REGION_IDX_BITS 3
+#define PAGE_IDX_R_MASK (1 << 7)
+#define PAGE_IDX_W_MASK (1 << 8)
+#define PAGE_IDX_X_MASK (1 << 9)
+#define REGION_MASK (PAGE_IDX_R_MASK | PAGE_IDX_W_MASK | PAGE_IDX_X_MASK)
+#define PAGES_PER_REGION (1 << (PAGE_IDX_BITS - REGION_IDX_BITS))
+
+struct context {
+    int pagesize;
+    char *ptr;
+    int dev_null_fd;
+    volatile int mutator_count;
+};
+
+static void *thread_read(void *arg)
+{
+    struct context *ctx = arg;
+    ssize_t sret;
+    size_t i, j;
+    int ret;
+
+    for (i = 0; ctx->mutator_count; i++) {
+        char *p;
+
+        j = (i & PAGE_IDX_MASK) | PAGE_IDX_R_MASK;
+        p = &ctx->ptr[j * ctx->pagesize];
+
+        /* Read directly. */
+        ret = memcmp(p, nop_func, sizeof(nop_func));
+        if (ret != 0) {
+            fprintf(stderr, "fail direct read %p\n", p);
+            abort();
+        }
+
+        /* Read indirectly. */
+        sret = write(ctx->dev_null_fd, p, 1);
+        if (sret != 1) {
+            if (sret < 0) {
+                fprintf(stderr, "fail indirect read %p (%m)\n", p);
+            } else {
+                fprintf(stderr, "fail indirect read %p (%zd)\n", p, sret);
+            }
+            abort();
+        }
+    }
+
+    return NULL;
+}
+
+static void *thread_write(void *arg)
+{
+    struct context *ctx = arg;
+    struct timespec *ts;
+    size_t i, j;
+    int ret;
+
+    for (i = 0; ctx->mutator_count; i++) {
+        j = (i & PAGE_IDX_MASK) | PAGE_IDX_W_MASK;
+
+        /* Write directly. */
+        memcpy(&ctx->ptr[j * ctx->pagesize], nop_func, sizeof(nop_func));
+
+        /* Write using a syscall. */
+        ts = (struct timespec *)(&ctx->ptr[(j + 1) * ctx->pagesize] -
+                                 sizeof(struct timespec));
+        ret = clock_gettime(CLOCK_REALTIME, ts);
+        if (ret != 0) {
+            fprintf(stderr, "fail indirect write %p (%m)\n", ts);
+            abort();
+        }
+    }
+
+    return NULL;
+}
+
+static void *thread_execute(void *arg)
+{
+    struct context *ctx = arg;
+    size_t i, j;
+
+    for (i = 0; ctx->mutator_count; i++) {
+        j = (i & PAGE_IDX_MASK) | PAGE_IDX_X_MASK;
+        ((void(*)(void))&ctx->ptr[j * ctx->pagesize])();
+    }
+
+    return NULL;
+}
+
+static void *thread_mutate(void *arg)
+{
+    size_t i, start_idx, end_idx, page_idx, tmp;
+    struct context *ctx = arg;
+    unsigned int seed;
+    int prot, ret;
+
+    seed = (unsigned int)time(NULL);
+    for (i = 0; i < 50000; i++) {
+        start_idx = rand_r(&seed) & PAGE_IDX_MASK;
+        end_idx = rand_r(&seed) & PAGE_IDX_MASK;
+        if (start_idx > end_idx) {
+            tmp = start_idx;
+            start_idx = end_idx;
+            end_idx = tmp;
+        }
+        prot = rand_r(&seed) & (PROT_READ | PROT_WRITE | PROT_EXEC);
+        for (page_idx = start_idx & REGION_MASK; page_idx <= end_idx;
+             page_idx += PAGES_PER_REGION) {
+            if (page_idx & PAGE_IDX_R_MASK) {
+                prot |= PROT_READ;
+            }
+            if (page_idx & PAGE_IDX_W_MASK) {
+                /* FIXME: qemu syscalls check for both read+write. */
+                prot |= PROT_WRITE | PROT_READ;
+            }
+            if (page_idx & PAGE_IDX_X_MASK) {
+                prot |= PROT_EXEC;
+            }
+        }
+        ret = mprotect(&ctx->ptr[start_idx * ctx->pagesize],
+                       (end_idx - start_idx + 1) * ctx->pagesize, prot);
+        assert(ret == 0);
+    }
+
+    __atomic_fetch_sub(&ctx->mutator_count, 1, __ATOMIC_SEQ_CST);
+
+    return NULL;
+}
+
+int main(void)
+{
+    pthread_t threads[5];
+    struct context ctx;
+    size_t i;
+    int ret;
+
+    /* Without a template, nothing to test. */
+    if (sizeof(nop_func) == 0) {
+        return EXIT_SUCCESS;
+    }
+
+    /* Initialize memory chunk. */
+    ctx.pagesize = getpagesize();
+    ctx.ptr = mmap(NULL, PAGE_COUNT * ctx.pagesize,
+                   PROT_READ | PROT_WRITE | PROT_EXEC,
+                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+    assert(ctx.ptr != MAP_FAILED);
+    for (i = 0; i < PAGE_COUNT; i++) {
+        memcpy(&ctx.ptr[i * ctx.pagesize], nop_func, sizeof(nop_func));
+    }
+    ctx.dev_null_fd = open("/dev/null", O_WRONLY);
+    assert(ctx.dev_null_fd >= 0);
+    ctx.mutator_count = 2;
+
+    /* Start threads. */
+    ret = pthread_create(&threads[0], NULL, thread_read, &ctx);
+    assert(ret == 0);
+    ret = pthread_create(&threads[1], NULL, thread_write, &ctx);
+    assert(ret == 0);
+    ret = pthread_create(&threads[2], NULL, thread_execute, &ctx);
+    assert(ret == 0);
+    for (i = 3; i <= 4; i++) {
+        ret = pthread_create(&threads[i], NULL, thread_mutate, &ctx);
+        assert(ret == 0);
+    }
+
+    /* Wait for threads to stop. */
+    for (i = 0; i < sizeof(threads) / sizeof(threads[0]); i++) {
+        ret = pthread_join(threads[i], NULL);
+        assert(ret == 0);
+    }
+
+    /* Destroy memory chunk. */
+    ret = close(ctx.dev_null_fd);
+    assert(ret == 0);
+    ret = munmap(ctx.ptr, PAGE_COUNT * ctx.pagesize);
+    assert(ret == 0);
+
+    return EXIT_SUCCESS;
+}
diff --git a/tests/tcg/multiarch/Makefile.target b/tests/tcg/multiarch/Makefile.target
index 5f0fee1aad..e7213af492 100644
--- a/tests/tcg/multiarch/Makefile.target
+++ b/tests/tcg/multiarch/Makefile.target
@@ -39,6 +39,9 @@ signals: LDFLAGS+=-lrt -lpthread
 munmap-pthread: CFLAGS+=-pthread
 munmap-pthread: LDFLAGS+=-pthread
 
+vma-pthread: CFLAGS+=-pthread
+vma-pthread: LDFLAGS+=-pthread
+
 # We define the runner for test-mmap after the individual
 # architectures have defined their supported pages sizes. If no
 # additional page sizes are defined we only run the default test.
-- 
2.34.1




* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2022-12-24 15:18 ` [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c Richard Henderson
@ 2022-12-27 17:23   ` Alex Bennée
  2023-01-13 15:17   ` Peter Maydell
  1 sibling, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2022-12-27 17:23 UTC (permalink / raw)
  To: Richard Henderson; +Cc: iii, qemu-devel


Richard Henderson <richard.henderson@linaro.org> writes:

> From: Ilya Leoshkevich <iii@linux.ibm.com>
>
> Add a test that locklessly changes and exercises page protection bits
> from various threads. This helps catch race conditions in the VMA
> handling.
>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> Message-Id: <20221223120252.513319-1-iii@linux.ibm.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Acked-by: Alex Bennée <alex.bennee@linaro.org>

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: [PATCH 2/4] accel/tcg: Use g_free_rcu for user-exec interval trees
  2022-12-24 15:18 ` [PATCH 2/4] accel/tcg: Use g_free_rcu for user-exec interval trees Richard Henderson
@ 2022-12-28  7:19   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28  7:19 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: iii

On 24/12/22 16:18, Richard Henderson wrote:
> Because we allow lockless lookups, we have to be careful
> when it is freed.  Use rcu to delay the free until safe.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   accel/tcg/user-exec.c | 18 ++++++++++--------
>   1 file changed, 10 insertions(+), 8 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range
  2022-12-24 15:18 ` [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range Richard Henderson
@ 2022-12-28  7:24   ` Philippe Mathieu-Daudé
  2022-12-28 12:53     ` Philippe Mathieu-Daudé
  2022-12-28 17:36     ` Richard Henderson
  0 siblings, 2 replies; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28  7:24 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: iii

On 24/12/22 16:18, Richard Henderson wrote:
> As in page_get_flags, we need to try again with the mmap
> lock held if we fail a page lookup.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   accel/tcg/user-exec.c | 39 ++++++++++++++++++++++++++++++++-------
>   1 file changed, 32 insertions(+), 7 deletions(-)
> 
> diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
> index 2c5c10d2e6..c72a806203 100644
> --- a/accel/tcg/user-exec.c
> +++ b/accel/tcg/user-exec.c
> @@ -525,6 +525,7 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
>   int page_check_range(target_ulong start, target_ulong len, int flags)
>   {
>       target_ulong last;
> +    int locked, ret;

have_mmap_lock() returns a boolean; can we declare 'locked'
as such, ...

>   
>       if (len == 0) {
>           return 0;  /* trivial length */
> @@ -535,42 +536,66 @@ int page_check_range(target_ulong start, target_ulong len, int flags)
>           return -1; /* wrap around */
>       }
>   
> +    locked = have_mmap_lock();
>       while (true) {
>           PageFlagsNode *p = pageflags_find(start, last);
>           int missing;
>   
>           if (!p) {
> -            return -1; /* entire region invalid */
> +            if (!locked) {
> +                /*
> +                 * Lockless lookups have false negatives.
> +                 * Retry with the lock held.
> +                 */
> +                mmap_lock();
> +                locked = -1;

... skip this confusing assignment, ...

> +                p = pageflags_find(start, last);
> +            }
> +            if (!p) {
> +                ret = -1; /* entire region invalid */
> +                break;
> +            }
>           }
>           if (start < p->itree.start) {
> -            return -1; /* initial bytes invalid */
> +            ret = -1; /* initial bytes invalid */
> +            break;
>           }
>   
>           missing = flags & ~p->flags;
>           if (missing & PAGE_READ) {
> -            return -1; /* page not readable */
> +            ret = -1; /* page not readable */
> +            break;
>           }
>           if (missing & PAGE_WRITE) {
>               if (!(p->flags & PAGE_WRITE_ORG)) {
> -                return -1; /* page not writable */
> +                ret = -1; /* page not writable */
> +                break;
>               }
>               /* Asking about writable, but has been protected: undo. */
>               if (!page_unprotect(start, 0)) {
> -                return -1;
> +                ret = -1;
> +                break;
>               }
>               /* TODO: page_unprotect should take a range, not a single page. */
>               if (last - start < TARGET_PAGE_SIZE) {
> -                return 0; /* ok */
> +                ret = 0; /* ok */
> +                break;
>               }
>               start += TARGET_PAGE_SIZE;
>               continue;
>           }
>   
>           if (last <= p->itree.last) {
> -            return 0; /* ok */
> +            ret = 0; /* ok */
> +            break;
>           }
>           start = p->itree.last + 1;
>       }
> +
> +    if (locked < 0) {

... and check for !locked here?

> +        mmap_unlock();
> +    }
> +    return ret;
>   }





* [PATCH 1a/4] accel/tcg: Fix tb_invalidate_phys_page_unwind
  2022-12-24 15:18 ` [PATCH 1/4] accel/tcg: Fix tb_invalidate_phys_page_unwind Richard Henderson
@ 2022-12-28 12:49   ` Philippe Mathieu-Daudé
  2022-12-28 12:49     ` [PATCH 1b/4] accel/tcg: Unindent tb_invalidate_phys_page_unwind Philippe Mathieu-Daudé
  2022-12-28 12:52     ` [PATCH 1a/4] accel/tcg: Fix tb_invalidate_phys_page_unwind Philippe Mathieu-Daudé
  0 siblings, 2 replies; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28 12:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Ilya Leoshkevich, Richard Henderson, Paolo Bonzini,
	Philippe Mathieu-Daudé

When called from syscall(), we are not within a TB and pc == 0.
We can skip the check for invalidating the current TB.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMD: Split patch in 2]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 accel/tcg/tb-maint.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 1b8e860647..c9b8d3c6c3 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1024,8 +1024,18 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
  */
 bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
 {
-    assert(pc != 0);
-#ifdef TARGET_HAS_PRECISE_SMC
+    /*
+     * Without precise smc semantics, or when outside of a TB,
+     * we can skip to invalidate.
+     */
+#ifndef TARGET_HAS_PRECISE_SMC
+    pc = 0;
+#endif
+    if (!pc) {
+        tb_invalidate_phys_page(addr);
+        return false;
+    }
+
     assert_memory_lock();
     {
         TranslationBlock *current_tb = tcg_tb_lookup(pc);
@@ -1058,9 +1068,6 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
             return true;
         }
     }
-#else
-    tb_invalidate_phys_page(addr);
-#endif /* TARGET_HAS_PRECISE_SMC */
     return false;
 }
 #else
-- 
2.38.1




* [PATCH 1b/4] accel/tcg: Unindent tb_invalidate_phys_page_unwind
  2022-12-28 12:49   ` [PATCH 1a/4] " Philippe Mathieu-Daudé
@ 2022-12-28 12:49     ` Philippe Mathieu-Daudé
  2022-12-28 12:52       ` Philippe Mathieu-Daudé
  2022-12-28 12:52     ` [PATCH 1a/4] accel/tcg: Fix tb_invalidate_phys_page_unwind Philippe Mathieu-Daudé
  1 sibling, 1 reply; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28 12:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Ilya Leoshkevich, Richard Henderson, Paolo Bonzini,
	Philippe Mathieu-Daudé

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMD: Split patch in 2]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 accel/tcg/tb-maint.c | 53 ++++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 26 deletions(-)

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index c9b8d3c6c3..b3d6529ae2 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1024,6 +1024,11 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
  */
 bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
 {
+    TranslationBlock *current_tb;
+    bool current_tb_modified;
+    TranslationBlock *tb;
+    PageForEachNext n;
+
     /*
     * Without precise SMC semantics, or when outside of a TB,
     * we can skip straight to invalidation.
@@ -1037,36 +1042,32 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
     }
 
     assert_memory_lock();
-    {
-        TranslationBlock *current_tb = tcg_tb_lookup(pc);
-        bool current_tb_modified = false;
-        TranslationBlock *tb;
-        PageForEachNext n;
+    current_tb = tcg_tb_lookup(pc);
 
-        addr &= TARGET_PAGE_MASK;
+    addr &= TARGET_PAGE_MASK;
+    current_tb_modified = false;
 
-        PAGE_FOR_EACH_TB(addr, addr + TARGET_PAGE_SIZE, unused, tb, n) {
-            if (current_tb == tb &&
-                (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
-                /*
-                 * If we are modifying the current TB, we must stop its
-                 * execution. We could be more precise by checking that
-                 * the modification is after the current PC, but it would
-                 * require a specialized function to partially restore
-                 * the CPU state.
-                 */
-                current_tb_modified = true;
-                cpu_restore_state_from_tb(current_cpu, current_tb, pc);
-            }
-            tb_phys_invalidate__locked(tb);
+    PAGE_FOR_EACH_TB(addr, addr + TARGET_PAGE_SIZE, unused, tb, n) {
+        if (current_tb == tb &&
+            (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
+            /*
+             * If we are modifying the current TB, we must stop its
+             * execution. We could be more precise by checking that
+             * the modification is after the current PC, but it would
+             * require a specialized function to partially restore
+             * the CPU state.
+             */
+            current_tb_modified = true;
+            cpu_restore_state_from_tb(current_cpu, current_tb, pc);
         }
+        tb_phys_invalidate__locked(tb);
+    }
 
-        if (current_tb_modified) {
-            /* Force execution of one insn next time.  */
-            CPUState *cpu = current_cpu;
-            cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
-            return true;
-        }
+    if (current_tb_modified) {
+        /* Force execution of one insn next time.  */
+        CPUState *cpu = current_cpu;
+        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
+        return true;
     }
     return false;
 }
-- 
2.38.1




* Re: [PATCH 1a/4] accel/tcg: Fix tb_invalidate_phys_page_unwind
  2022-12-28 12:49   ` [PATCH 1a/4] " Philippe Mathieu-Daudé
  2022-12-28 12:49     ` [PATCH 1b/4] accel/tcg: Unindent tb_invalidate_phys_page_unwind Philippe Mathieu-Daudé
@ 2022-12-28 12:52     ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28 12:52 UTC (permalink / raw)
  To: qemu-devel; +Cc: Ilya Leoshkevich, Richard Henderson, Paolo Bonzini

On 28/12/22 13:49, Philippe Mathieu-Daudé wrote:
> When called from syscall(), we are not within a TB and pc == 0.
> We can skip the check for invalidating the current TB.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> [PMD: Split patch in 2]
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>   accel/tcg/tb-maint.c | 17 ++++++++++++-----
>   1 file changed, 12 insertions(+), 5 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH 1b/4] accel/tcg: Unindent tb_invalidate_phys_page_unwind
  2022-12-28 12:49     ` [PATCH 1b/4] accel/tcg: Unindent tb_invalidate_phys_page_unwind Philippe Mathieu-Daudé
@ 2022-12-28 12:52       ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28 12:52 UTC (permalink / raw)
  To: qemu-devel; +Cc: Ilya Leoshkevich, Richard Henderson, Paolo Bonzini

On 28/12/22 13:49, Philippe Mathieu-Daudé wrote:
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> [PMD: Split patch in 2]
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>   accel/tcg/tb-maint.c | 53 ++++++++++++++++++++++----------------------
>   1 file changed, 27 insertions(+), 26 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range
  2022-12-28  7:24   ` Philippe Mathieu-Daudé
@ 2022-12-28 12:53     ` Philippe Mathieu-Daudé
  2022-12-28 17:36     ` Richard Henderson
  1 sibling, 0 replies; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28 12:53 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: iii

On 28/12/22 08:24, Philippe Mathieu-Daudé wrote:
> On 24/12/22 16:18, Richard Henderson wrote:
>> As in page_get_flags, we need to try again with the mmap
>> lock held if we fail a page lookup.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>   accel/tcg/user-exec.c | 39 ++++++++++++++++++++++++++++++++-------
>>   1 file changed, 32 insertions(+), 7 deletions(-)
>>
>> diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
>> index 2c5c10d2e6..c72a806203 100644
>> --- a/accel/tcg/user-exec.c
>> +++ b/accel/tcg/user-exec.c
>> @@ -525,6 +525,7 @@ void page_set_flags(target_ulong start, 
>> target_ulong end, int flags)
>>   int page_check_range(target_ulong start, target_ulong len, int flags)
>>   {
>>       target_ulong last;
>> +    int locked, ret;
> 
> have_mmap_lock() returns a boolean, can we declare 'locked'
> as such, ...
> 
>>       if (len == 0) {
>>           return 0;  /* trivial length */
>> @@ -535,42 +536,66 @@ int page_check_range(target_ulong start, 
>> target_ulong len, int flags)
>>           return -1; /* wrap around */
>>       }
>> +    locked = have_mmap_lock();
>>       while (true) {
>>           PageFlagsNode *p = pageflags_find(start, last);
>>           int missing;
>>           if (!p) {
>> -            return -1; /* entire region invalid */
>> +            if (!locked) {
>> +                /*
>> +                 * Lockless lookups have false negatives.
>> +                 * Retry with the lock held.
>> +                 */
>> +                mmap_lock();
>> +                locked = -1;
> 
> ... skip this confusing assignment, ...
> 
>> +                p = pageflags_find(start, last);
>> +            }
>> +            if (!p) {
>> +                ret = -1; /* entire region invalid */
>> +                break;
>> +            }
>>           }
>>           if (start < p->itree.start) {
>> -            return -1; /* initial bytes invalid */
>> +            ret = -1; /* initial bytes invalid */
>> +            break;
>>           }
>>           missing = flags & ~p->flags;
>>           if (missing & PAGE_READ) {
>> -            return -1; /* page not readable */
>> +            ret = -1; /* page not readable */
>> +            break;
>>           }
>>           if (missing & PAGE_WRITE) {
>>               if (!(p->flags & PAGE_WRITE_ORG)) {
>> -                return -1; /* page not writable */
>> +                ret = -1; /* page not writable */
>> +                break;
>>               }
>>               /* Asking about writable, but has been protected: undo. */
>>               if (!page_unprotect(start, 0)) {
>> -                return -1;
>> +                ret = -1;
>> +                break;
>>               }
>>               /* TODO: page_unprotect should take a range, not a 
>> single page. */
>>               if (last - start < TARGET_PAGE_SIZE) {
>> -                return 0; /* ok */
>> +                ret = 0; /* ok */
>> +                break;
>>               }
>>               start += TARGET_PAGE_SIZE;
>>               continue;
>>           }
>>           if (last <= p->itree.last) {
>> -            return 0; /* ok */
>> +            ret = 0; /* ok */
>> +            break;
>>           }
>>           start = p->itree.last + 1;
>>       }
>> +
>> +    if (locked < 0) {
> 
> ... and check for !locked here?
> 
>> +        mmap_unlock();
>> +    }
>> +    return ret;
>>   }

Modulo using a boolean:

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>





* Re: [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range
  2022-12-28  7:24   ` Philippe Mathieu-Daudé
  2022-12-28 12:53     ` Philippe Mathieu-Daudé
@ 2022-12-28 17:36     ` Richard Henderson
  2022-12-28 18:27       ` Philippe Mathieu-Daudé
  1 sibling, 1 reply; 26+ messages in thread
From: Richard Henderson @ 2022-12-28 17:36 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-devel; +Cc: iii

On 12/27/22 23:24, Philippe Mathieu-Daudé wrote:
> On 24/12/22 16:18, Richard Henderson wrote:
>> As in page_get_flags, we need to try again with the mmap
>> lock held if we fail a page lookup.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>   accel/tcg/user-exec.c | 39 ++++++++++++++++++++++++++++++++-------
>>   1 file changed, 32 insertions(+), 7 deletions(-)
>>
>> diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
>> index 2c5c10d2e6..c72a806203 100644
>> --- a/accel/tcg/user-exec.c
>> +++ b/accel/tcg/user-exec.c
>> @@ -525,6 +525,7 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
>>   int page_check_range(target_ulong start, target_ulong len, int flags)
>>   {
>>       target_ulong last;
>> +    int locked, ret;
> 
> have_mmap_lock() returns a boolean, can we declare 'locked'
> as such, ...
> 
>>       if (len == 0) {
>>           return 0;  /* trivial length */
>> @@ -535,42 +536,66 @@ int page_check_range(target_ulong start, target_ulong len, int flags)
>>           return -1; /* wrap around */
>>       }
>> +    locked = have_mmap_lock();
>>       while (true) {
>>           PageFlagsNode *p = pageflags_find(start, last);
>>           int missing;
>>           if (!p) {
>> -            return -1; /* entire region invalid */
>> +            if (!locked) {
>> +                /*
>> +                 * Lockless lookups have false negatives.
>> +                 * Retry with the lock held.
>> +                 */
>> +                mmap_lock();
>> +                locked = -1;
> 
> ... skip this confusing assignment, ...
> 
>> +                p = pageflags_find(start, last);
>> +            }
>> +            if (!p) {
>> +                ret = -1; /* entire region invalid */
>> +                break;
>> +            }
>>           }
>>           if (start < p->itree.start) {
>> -            return -1; /* initial bytes invalid */
>> +            ret = -1; /* initial bytes invalid */
>> +            break;
>>           }
>>           missing = flags & ~p->flags;
>>           if (missing & PAGE_READ) {
>> -            return -1; /* page not readable */
>> +            ret = -1; /* page not readable */
>> +            break;
>>           }
>>           if (missing & PAGE_WRITE) {
>>               if (!(p->flags & PAGE_WRITE_ORG)) {
>> -                return -1; /* page not writable */
>> +                ret = -1; /* page not writable */
>> +                break;
>>               }
>>               /* Asking about writable, but has been protected: undo. */
>>               if (!page_unprotect(start, 0)) {
>> -                return -1;
>> +                ret = -1;
>> +                break;
>>               }
>>               /* TODO: page_unprotect should take a range, not a single page. */
>>               if (last - start < TARGET_PAGE_SIZE) {
>> -                return 0; /* ok */
>> +                ret = 0; /* ok */
>> +                break;
>>               }
>>               start += TARGET_PAGE_SIZE;
>>               continue;
>>           }
>>           if (last <= p->itree.last) {
>> -            return 0; /* ok */
>> +            ret = 0; /* ok */
>> +            break;
>>           }
>>           start = p->itree.last + 1;
>>       }
>> +
>> +    if (locked < 0) {
> 
> ... and check for !locked here?

No, we may have entered with the mmap lock already held.  We only unlock if the lock was taken locally.
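For readers following along, the tri-state "locked" idiom under discussion can be sketched as a stand-alone program. This is an illustrative toy (the lock and the page lookup below are stand-ins, not the real QEMU mmap_lock()/pageflags_find() APIs): 1 means the caller already held the lock, 0 means it is not held, and -1 means we took it locally and must release it on exit.

```c
/* Toy sketch of the tri-state lock pattern; not the real QEMU code. */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t toy_mmap_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool toy_lock_held_by_caller;

static bool have_lock(void)
{
    return toy_lock_held_by_caller;
}

static int check_range(bool first_lookup_succeeds)
{
    /* 1: caller held the lock; 0: not held; -1: taken locally below. */
    int locked = have_lock() ? 1 : 0;
    bool found = first_lookup_succeeds;
    int ret;

    if (!found && !locked) {
        /* Lockless lookups have false negatives: retry under the lock. */
        pthread_mutex_lock(&toy_mmap_mutex);
        locked = -1;      /* remember that *we* took the lock */
        found = true;     /* pretend the locked retry succeeds */
    }
    ret = found ? 0 : -1;

    if (locked < 0) {
        /* Only drop the lock if it was taken locally. */
        pthread_mutex_unlock(&toy_mmap_mutex);
    }
    return ret;
}
```

The point of the -1 state is exactly what is argued above: a plain boolean cannot distinguish "lock held by our caller" from "lock taken here", and only the latter must be released before returning.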


r~




* Re: [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range
  2022-12-28 17:36     ` Richard Henderson
@ 2022-12-28 18:27       ` Philippe Mathieu-Daudé
  2022-12-28 18:30         ` Richard Henderson
  0 siblings, 1 reply; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28 18:27 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: iii

On 28/12/22 18:36, Richard Henderson wrote:
> On 12/27/22 23:24, Philippe Mathieu-Daudé wrote:
>> On 24/12/22 16:18, Richard Henderson wrote:
>>> As in page_get_flags, we need to try again with the mmap
>>> lock held if we fail a page lookup.
>>>
>>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>>> ---
>>>   accel/tcg/user-exec.c | 39 ++++++++++++++++++++++++++++++++-------
>>>   1 file changed, 32 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
>>> index 2c5c10d2e6..c72a806203 100644
>>> --- a/accel/tcg/user-exec.c
>>> +++ b/accel/tcg/user-exec.c
>>> @@ -525,6 +525,7 @@ void page_set_flags(target_ulong start, 
>>> target_ulong end, int flags)
>>>   int page_check_range(target_ulong start, target_ulong len, int flags)
>>>   {
>>>       target_ulong last;
>>> +    int locked, ret;
>>
>> have_mmap_lock() returns a boolean, can we declare 'locked'
>> as such, ...
>>
>>>       if (len == 0) {
>>>           return 0;  /* trivial length */
>>> @@ -535,42 +536,66 @@ int page_check_range(target_ulong start, 
>>> target_ulong len, int flags)
>>>           return -1; /* wrap around */
>>>       }
>>> +    locked = have_mmap_lock();
>>>       while (true) {
>>>           PageFlagsNode *p = pageflags_find(start, last);
>>>           int missing;
>>>           if (!p) {
>>> -            return -1; /* entire region invalid */
>>> +            if (!locked) {
>>> +                /*
>>> +                 * Lockless lookups have false negatives.
>>> +                 * Retry with the lock held.
>>> +                 */
>>> +                mmap_lock();
>>> +                locked = -1;
>>
>> ... skip this confusing assignment, ...
>>
>>> +                p = pageflags_find(start, last);
>>> +            }
>>> +            if (!p) {
>>> +                ret = -1; /* entire region invalid */
>>> +                break;
>>> +            }
>>>           }
>>>           if (start < p->itree.start) {
>>> -            return -1; /* initial bytes invalid */
>>> +            ret = -1; /* initial bytes invalid */
>>> +            break;
>>>           }
>>>           missing = flags & ~p->flags;
>>>           if (missing & PAGE_READ) {
>>> -            return -1; /* page not readable */
>>> +            ret = -1; /* page not readable */
>>> +            break;
>>>           }
>>>           if (missing & PAGE_WRITE) {
>>>               if (!(p->flags & PAGE_WRITE_ORG)) {
>>> -                return -1; /* page not writable */
>>> +                ret = -1; /* page not writable */
>>> +                break;
>>>               }
>>>               /* Asking about writable, but has been protected: undo. */
>>>               if (!page_unprotect(start, 0)) {
>>> -                return -1;
>>> +                ret = -1;
>>> +                break;
>>>               }
>>>               /* TODO: page_unprotect should take a range, not a 
>>> single page. */
>>>               if (last - start < TARGET_PAGE_SIZE) {
>>> -                return 0; /* ok */
>>> +                ret = 0; /* ok */
>>> +                break;
>>>               }
>>>               start += TARGET_PAGE_SIZE;
>>>               continue;
>>>           }
>>>           if (last <= p->itree.last) {
>>> -            return 0; /* ok */
>>> +            ret = 0; /* ok */
>>> +            break;
>>>           }
>>>           start = p->itree.last + 1;
>>>       }
>>> +
>>> +    if (locked < 0) {
>>
>> ... and check for !locked here?
> 
> No, we may have entered with the mmap lock already held.  We only unlock
> if the lock was taken locally.

Oh, so you are using this variable as tri-state?



* Re: [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range
  2022-12-28 18:27       ` Philippe Mathieu-Daudé
@ 2022-12-28 18:30         ` Richard Henderson
  2022-12-28 18:53           ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 26+ messages in thread
From: Richard Henderson @ 2022-12-28 18:30 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-devel; +Cc: iii

On 12/28/22 10:27, Philippe Mathieu-Daudé wrote:
> Oh, so you are using this variable as tri-state?

Yes.  Perhaps a comment on the -1?

r~




* Re: [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range
  2022-12-28 18:30         ` Richard Henderson
@ 2022-12-28 18:53           ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-12-28 18:53 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: iii

On 28/12/22 19:30, Richard Henderson wrote:
> On 12/28/22 10:27, Philippe Mathieu-Daudé wrote:
>> Oh, so you are using this variable as tri-state?
> 
> Yes.  Perhaps a comment on the -1?

Or name the variable tristate_lock? And a comment :)
(No need to respin, you can keep my R-b).




* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2022-12-24 15:18 ` [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c Richard Henderson
  2022-12-27 17:23   ` Alex Bennée
@ 2023-01-13 15:17   ` Peter Maydell
  2023-01-13 17:10     ` Alex Bennée
  1 sibling, 1 reply; 26+ messages in thread
From: Peter Maydell @ 2023-01-13 15:17 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel, iii

On Sat, 24 Dec 2022 at 15:19, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> From: Ilya Leoshkevich <iii@linux.ibm.com>
>
> Add a test that locklessly changes and exercises page protection bits
> from various threads. This helps catch race conditions in the VMA
> handling.
>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> Message-Id: <20221223120252.513319-1-iii@linux.ibm.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

I've noticed that this newly added vma-pthread test seems to
be flaky. Here's an example from a clang-user job:
https://gitlab.com/qemu-project/qemu/-/jobs/3600385176

TEST vma-pthread-with-libbb.so on aarch64
fail indirect write 0x5500b1eff0 (Bad address)
timeout: the monitored command dumped core
Aborted
make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libbb.so] Error 134

and another from a few days earlier:
https://gitlab.com/qemu-project/qemu/-/jobs/3572970612

TEST vma-pthread-with-libsyscall.so on s390x
fail indirect read 0x4000999000 (Bad address)
timeout: the monitored command dumped core
Aborted
make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libsyscall.so] Error 134

thanks
-- PMM



* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2023-01-13 15:17   ` Peter Maydell
@ 2023-01-13 17:10     ` Alex Bennée
  2023-01-16 12:40       ` Philippe Mathieu-Daudé
  2023-01-20 14:58       ` Peter Maydell
  0 siblings, 2 replies; 26+ messages in thread
From: Alex Bennée @ 2023-01-13 17:10 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Richard Henderson, iii, qemu-devel


Peter Maydell <peter.maydell@linaro.org> writes:

> On Sat, 24 Dec 2022 at 15:19, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> From: Ilya Leoshkevich <iii@linux.ibm.com>
>>
>> Add a test that locklessly changes and exercises page protection bits
>> from various threads. This helps catch race conditions in the VMA
>> handling.
>>
>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
>> Message-Id: <20221223120252.513319-1-iii@linux.ibm.com>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>
> I've noticed that this newly added vma-pthread test seems to
> be flaky. Here's an example from a clang-user job:
> https://gitlab.com/qemu-project/qemu/-/jobs/3600385176
>
> TEST vma-pthread-with-libbb.so on aarch64
> fail indirect write 0x5500b1eff0 (Bad address)
> timeout: the monitored command dumped core
> Aborted
> make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libbb.so] Error 134
>
> and another from a few days earlier:
> https://gitlab.com/qemu-project/qemu/-/jobs/3572970612
>
> TEST vma-pthread-with-libsyscall.so on s390x
> fail indirect read 0x4000999000 (Bad address)
> timeout: the monitored command dumped core
> Aborted
> make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libsyscall.so] Error 134
>
> thanks
> -- PMM

Interesting, those are both with plugins. I wonder if the tsan plugin
fixes in my latest tree help?

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2023-01-13 17:10     ` Alex Bennée
@ 2023-01-16 12:40       ` Philippe Mathieu-Daudé
  2023-01-16 15:07         ` Peter Maydell
  2023-01-20 14:58       ` Peter Maydell
  1 sibling, 1 reply; 26+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-01-16 12:40 UTC (permalink / raw)
  To: Alex Bennée, Peter Maydell; +Cc: Richard Henderson, iii, qemu-devel

On 13/1/23 18:10, Alex Bennée wrote:
> 
> Peter Maydell <peter.maydell@linaro.org> writes:
> 
>> On Sat, 24 Dec 2022 at 15:19, Richard Henderson
>> <richard.henderson@linaro.org> wrote:
>>>
>>> From: Ilya Leoshkevich <iii@linux.ibm.com>
>>>
>>> Add a test that locklessly changes and exercises page protection bits
>>> from various threads. This helps catch race conditions in the VMA
>>> handling.
>>>
>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
>>> Message-Id: <20221223120252.513319-1-iii@linux.ibm.com>
>>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>>
>> I've noticed that this newly added vma-pthread test seems to
>> be flaky. Here's an example from a clang-user job:
>> https://gitlab.com/qemu-project/qemu/-/jobs/3600385176
>>
>> TEST vma-pthread-with-libbb.so on aarch64
>> fail indirect write 0x5500b1eff0 (Bad address)
>> timeout: the monitored command dumped core
>> Aborted
>> make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libbb.so] Error 134
>>
>> and another from a few days earlier:
>> https://gitlab.com/qemu-project/qemu/-/jobs/3572970612
>>
>> TEST vma-pthread-with-libsyscall.so on s390x
>> fail indirect read 0x4000999000 (Bad address)
>> timeout: the monitored command dumped core
>> Aborted
>> make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libsyscall.so] Error 134

Yet again:
https://gitlab.com/qemu-project/qemu/-/jobs/3608436731

> Interesting those are both with plugins. I wonder if the tsan plugin
> fixes in my latest tree help?
> 




* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2023-01-16 12:40       ` Philippe Mathieu-Daudé
@ 2023-01-16 15:07         ` Peter Maydell
  2023-01-16 16:27           ` Alex Bennée
  0 siblings, 1 reply; 26+ messages in thread
From: Peter Maydell @ 2023-01-16 15:07 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Alex Bennée, Richard Henderson, iii, qemu-devel

On Mon, 16 Jan 2023 at 12:40, Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>
> On 13/1/23 18:10, Alex Bennée wrote:
> >
> > Peter Maydell <peter.maydell@linaro.org> writes:
> >
> >> On Sat, 24 Dec 2022 at 15:19, Richard Henderson
> >> <richard.henderson@linaro.org> wrote:
> >>>
> >>> From: Ilya Leoshkevich <iii@linux.ibm.com>
> >>>
> >>> Add a test that locklessly changes and exercises page protection bits
> >>> from various threads. This helps catch race conditions in the VMA
> >>> handling.
> >>>
> >>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> >>> Message-Id: <20221223120252.513319-1-iii@linux.ibm.com>
> >>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> >>
> >> I've noticed that this newly added vma-pthread test seems to
> >> be flaky. Here's an example from a clang-user job:
> >> https://gitlab.com/qemu-project/qemu/-/jobs/3600385176
> >>
> >> TEST vma-pthread-with-libbb.so on aarch64
> >> fail indirect write 0x5500b1eff0 (Bad address)
> >> timeout: the monitored command dumped core
> >> Aborted
> >> make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libbb.so] Error 134
> >>
> >> and another from a few days earlier:
> >> https://gitlab.com/qemu-project/qemu/-/jobs/3572970612
> >>
> >> TEST vma-pthread-with-libsyscall.so on s390x
> >> fail indirect read 0x4000999000 (Bad address)
> >> timeout: the monitored command dumped core
> >> Aborted
> >> make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libsyscall.so] Error 134
>
> Yet again:
> https://gitlab.com/qemu-project/qemu/-/jobs/3608436731

Yep. Could somebody write a patch to disable this test while
we figure out why it's flaky, please?

thanks
-- PMM



* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2023-01-16 15:07         ` Peter Maydell
@ 2023-01-16 16:27           ` Alex Bennée
  2023-01-16 16:48             ` Peter Maydell
  2023-01-16 19:21             ` Richard Henderson
  0 siblings, 2 replies; 26+ messages in thread
From: Alex Bennée @ 2023-01-16 16:27 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Philippe Mathieu-Daudé, Richard Henderson, iii, qemu-devel


Peter Maydell <peter.maydell@linaro.org> writes:

> On Mon, 16 Jan 2023 at 12:40, Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>>
>> On 13/1/23 18:10, Alex Bennée wrote:
>> >
>> > Peter Maydell <peter.maydell@linaro.org> writes:
>> >
>> >> On Sat, 24 Dec 2022 at 15:19, Richard Henderson
>> >> <richard.henderson@linaro.org> wrote:
>> >>>
>> >>> From: Ilya Leoshkevich <iii@linux.ibm.com>
>> >>>
>> >>> Add a test that locklessly changes and exercises page protection bits
>> >>> from various threads. This helps catch race conditions in the VMA
>> >>> handling.
>> >>>
>> >>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
>> >>> Message-Id: <20221223120252.513319-1-iii@linux.ibm.com>
>> >>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> >>
>> >> I've noticed that this newly added vma-pthread test seems to
>> >> be flaky. Here's an example from a clang-user job:
>> >> https://gitlab.com/qemu-project/qemu/-/jobs/3600385176
>> >>
>> >> TEST vma-pthread-with-libbb.so on aarch64
>> >> fail indirect write 0x5500b1eff0 (Bad address)
>> >> timeout: the monitored command dumped core
>> >> Aborted
>> >> make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libbb.so] Error 134
>> >>
>> >> and another from a few days earlier:
>> >> https://gitlab.com/qemu-project/qemu/-/jobs/3572970612
>> >>
>> >> TEST vma-pthread-with-libsyscall.so on s390x
>> >> fail indirect read 0x4000999000 (Bad address)
>> >> timeout: the monitored command dumped core
>> >> Aborted
>> >> make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libsyscall.so] Error 134
>>
>> Yet again:
>> https://gitlab.com/qemu-project/qemu/-/jobs/3608436731
>
> Yep. Could somebody write a patch to disable this test while
> we figure out why it's flaky, please?

I don't think the test is flaky - I think it is triggering a race in
QEMU code. I have not, however, been able to replicate it anywhere
other than in CI.

Although looking at the test I'm beginning to wonder what the sync point
is between the mutator and the read/write threads?

>
> thanks
> -- PMM


-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2023-01-16 16:27           ` Alex Bennée
@ 2023-01-16 16:48             ` Peter Maydell
  2023-01-16 17:09               ` Alex Bennée
  2023-01-16 19:21             ` Richard Henderson
  1 sibling, 1 reply; 26+ messages in thread
From: Peter Maydell @ 2023-01-16 16:48 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Philippe Mathieu-Daudé, Richard Henderson, iii, qemu-devel

On Mon, 16 Jan 2023 at 16:33, Alex Bennée <alex.bennee@linaro.org> wrote:
>
>
> Peter Maydell <peter.maydell@linaro.org> writes:
>
> > On Mon, 16 Jan 2023 at 12:40, Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
> >>
> >> On 13/1/23 18:10, Alex Bennée wrote:
> > Yep. Could somebody write a patch to disable this test while
> > we figure out why it's flaky, please?
>
> I don't think the test is flaky - I think it is triggering a race in
> QEMU code. I have not however been able to replicate it in anything other
> than CI.

My definition of "flaky" here is "the CI fails randomly".
Unless you have a patch to fix the underlying bug ready to
go right now, please can we disable this so we don't keep
getting CI failures trying to test merges?

thanks
-- PMM



* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2023-01-16 16:48             ` Peter Maydell
@ 2023-01-16 17:09               ` Alex Bennée
  0 siblings, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2023-01-16 17:09 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Philippe Mathieu-Daudé, Richard Henderson, iii, qemu-devel


Peter Maydell <peter.maydell@linaro.org> writes:

> On Mon, 16 Jan 2023 at 16:33, Alex Bennée <alex.bennee@linaro.org> wrote:
>>
>>
>> Peter Maydell <peter.maydell@linaro.org> writes:
>>
>> > On Mon, 16 Jan 2023 at 12:40, Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>> >>
>> >> On 13/1/23 18:10, Alex Bennée wrote:
>> > Yep. Could somebody write a patch to disable this test while
>> > we figure out why it's flaky, please?
>>
>> I don't think the test is flaky - I think it is triggering a race in
>> QEMU code. I have not however been able to replicate it in anything other
>> than CI.
>
> My definition of "flaky" here is "the CI fails randomly".
> Unless you have a patch to fix the underlying bug ready to
> go right now, please can we disable this so we don't keep
> getting CI failures trying to test merges?

Yes - just testing the patch now.

>
> thanks
> -- PMM


-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2023-01-16 16:27           ` Alex Bennée
  2023-01-16 16:48             ` Peter Maydell
@ 2023-01-16 19:21             ` Richard Henderson
  1 sibling, 0 replies; 26+ messages in thread
From: Richard Henderson @ 2023-01-16 19:21 UTC (permalink / raw)
  To: Alex Bennée, Peter Maydell
  Cc: Philippe Mathieu-Daudé, iii, qemu-devel

On 1/16/23 06:27, Alex Bennée wrote:
> Although looking at the test I'm beginning to wonder what the sync point
> is between the mutator and the read/write threads?

What do you mean?

There is no explicit sync, because the mutator always leaves each page with the (one) 
permission that the read/write/execute thread requires.

Within qemu... the sync point is primarily the syscall for read/write, and, for 
execute, the tb invalidate (during the mprotect) or the mmap lock taken for new translation.


r~




* Re: [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c
  2023-01-13 17:10     ` Alex Bennée
  2023-01-16 12:40       ` Philippe Mathieu-Daudé
@ 2023-01-20 14:58       ` Peter Maydell
  1 sibling, 0 replies; 26+ messages in thread
From: Peter Maydell @ 2023-01-20 14:58 UTC (permalink / raw)
  To: Alex Bennée; +Cc: Richard Henderson, iii, qemu-devel

On Fri, 13 Jan 2023 at 17:10, Alex Bennée <alex.bennee@linaro.org> wrote:
>
>
> Peter Maydell <peter.maydell@linaro.org> writes:
>
> > On Sat, 24 Dec 2022 at 15:19, Richard Henderson
> > <richard.henderson@linaro.org> wrote:
> >>
> >> From: Ilya Leoshkevich <iii@linux.ibm.com>
> >>
> >> Add a test that locklessly changes and exercises page protection bits
> >> from various threads. This helps catch race conditions in the VMA
> >> handling.
> >>
> >> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> >> Message-Id: <20221223120252.513319-1-iii@linux.ibm.com>
> >> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> >
> > I've noticed that this newly added vma-pthread test seems to
> > be flaky. Here's an example from a clang-user job:
> > https://gitlab.com/qemu-project/qemu/-/jobs/3600385176
> >
> > TEST vma-pthread-with-libbb.so on aarch64
> > fail indirect write 0x5500b1eff0 (Bad address)
> > timeout: the monitored command dumped core
> > Aborted
> > make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libbb.so] Error 134
> >
> > and another from a few days earlier:
> > https://gitlab.com/qemu-project/qemu/-/jobs/3572970612
> >
> > TEST vma-pthread-with-libsyscall.so on s390x
> > fail indirect read 0x4000999000 (Bad address)
> > timeout: the monitored command dumped core
> > Aborted
> > make[1]: *** [Makefile:173: run-plugin-vma-pthread-with-libsyscall.so] Error 134
> >
> > thanks
> > -- PMM
>
> Interesting those are both with plugins. I wonder if the tsan plugin
> fixes in my latest tree help?

I think this is a failure in the non-plugin case:
https://gitlab.com/qemu-project/qemu/-/jobs/3636082364

-- PMM


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2023-01-20 14:59 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-24 15:18 [PATCH 0/4] accel/tcg: Fixes for user-only page tracking Richard Henderson
2022-12-24 15:18 ` [PATCH 1/4] accel/tcg: Fix tb_invalidate_phys_page_unwind Richard Henderson
2022-12-28 12:49   ` [PATCH 1a/4] " Philippe Mathieu-Daudé
2022-12-28 12:49     ` [PATCH 1b/4] accel/tcg: Unindent tb_invalidate_phys_page_unwind Philippe Mathieu-Daudé
2022-12-28 12:52       ` Philippe Mathieu-Daudé
2022-12-28 12:52     ` [PATCH 1a/4] accel/tcg: Fix tb_invalidate_phys_page_unwind Philippe Mathieu-Daudé
2022-12-24 15:18 ` [PATCH 2/4] accel/tcg: Use g_free_rcu for user-exec interval trees Richard Henderson
2022-12-28  7:19   ` Philippe Mathieu-Daudé
2022-12-24 15:18 ` [PATCH 3/4] accel/tcg: Handle false negative lookup in page_check_range Richard Henderson
2022-12-28  7:24   ` Philippe Mathieu-Daudé
2022-12-28 12:53     ` Philippe Mathieu-Daudé
2022-12-28 17:36     ` Richard Henderson
2022-12-28 18:27       ` Philippe Mathieu-Daudé
2022-12-28 18:30         ` Richard Henderson
2022-12-28 18:53           ` Philippe Mathieu-Daudé
2022-12-24 15:18 ` [PATCH 4/4] tests/tcg/multiarch: add vma-pthread.c Richard Henderson
2022-12-27 17:23   ` Alex Bennée
2023-01-13 15:17   ` Peter Maydell
2023-01-13 17:10     ` Alex Bennée
2023-01-16 12:40       ` Philippe Mathieu-Daudé
2023-01-16 15:07         ` Peter Maydell
2023-01-16 16:27           ` Alex Bennée
2023-01-16 16:48             ` Peter Maydell
2023-01-16 17:09               ` Alex Bennée
2023-01-16 19:21             ` Richard Henderson
2023-01-20 14:58       ` Peter Maydell
