* [PATCH 1/2 v4] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
From: Peter Oskolkov @ 2020-08-26 18:47 UTC
To: Mathieu Desnoyers, Paul E . McKenney, Peter Zijlstra, Boqun Feng,
linux-kernel
Cc: Paul Turner, Chris Kennelly, Peter Oskolkov, Peter Oskolkov
This patchset is based on Google-internal RSEQ
work done by Paul Turner and Andrew Hunter.
When working with per-CPU RSEQ-based memory allocations,
it is sometimes important to make sure that a global
memory location is no longer accessed from RSEQ critical
sections. For example, there can be two per-CPU lists:
one "active" list that is accessed per CPU from RSEQ
critical sections, and another, inactive, list that is
worked on asynchronously "off CPU" (e.g. by a garbage
collector). At some point the two lists are swapped, and
a fast RCU-like mechanism is required to make sure that
the previously active list is no longer accessed.
This patch introduces such a mechanism: in short, the
membarrier() syscall issues an IPI to a CPU, restarting
any RSEQ critical section that is currently active on
that CPU.
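To illustrate the intended usage, here is a minimal userspace
sketch (not part of this patch): struct percpu_list and reclaim()
are hypothetical, the process is assumed to have already registered
via MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ, and the fallback
define mirrors the one used by the selftest in patch 2/2.

#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* The local <linux/membarrier.h> may not contain the new command yet. */
#ifndef MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ (1 << 7)
#endif

struct percpu_list;                      /* hypothetical per-CPU list type */
void reclaim(struct percpu_list *list);  /* hypothetical "off CPU" GC step */

static int sys_membarrier(int cmd, int flags, int cpu_id)
{
	return syscall(__NR_membarrier, cmd, flags, cpu_id);
}

/* Pointer dereferenced from RSEQ critical sections to find the active list. */
static _Atomic(struct percpu_list *) active_list;

static void swap_and_reclaim(struct percpu_list *new_list)
{
	struct percpu_list *old = atomic_exchange(&active_list, new_list);

	/*
	 * Restart any RSEQ critical section currently running in this
	 * process; afterwards no CPU can still be using a pointer it
	 * loaded from the previously active list.
	 */
	sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, 0, 0);
	reclaim(old);
}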
v1->v2:
- removed the ability to IPI all CPUs in a single syscall;
- use task->mm rather than task->group_leader to identify
tasks belonging to the same process.
v2->v3:
- re-added the ability to IPI all CPUs in a single syscall;
- integrated with membarrier_private_expedited() to
make sure only CPUs running tasks with the same mm as
the current task are interrupted;
- also added MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ;
- flags in membarrier_private_expedited() are never
actually combined bit flags but always distinct values
(no two flags are ever set together), so I changed the
bit tests to full equality comparisons for simplicity
(otherwise the code would have to handle several bits
being set at once).
v3->v4:
- added a third parameter to the membarrier syscall,
@cpu_id: if @flags == MEMBARRIER_CMD_FLAG_CPU, @cpu_id
indicates the CPU on which the RSEQ CS should be
restarted (see the example call below).
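For example, to restart only the RSEQ critical section that may be
active on CPU 3 (a hedged illustration reusing the sys_membarrier()
wrapper sketched above; the raw syscall is needed because glibc does
not expose the third argument, and MEMBARRIER_CMD_FLAG_CPU may need a
local fallback define just like the command itself):

	sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
		       MEMBARRIER_CMD_FLAG_CPU, /* cpu_id = */ 3);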
The second patch in the patchset adds a selftest
of this feature.
Signed-off-by: Peter Oskolkov <posk@google.com>
---
include/linux/sched/mm.h | 3 ++
include/uapi/linux/membarrier.h | 29 ++++++++++++++
kernel/sched/membarrier.c | 67 ++++++++++++++++++++++++++++-----
3 files changed, 89 insertions(+), 10 deletions(-)
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index f889e332912f..15bfb06f2884 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -348,10 +348,13 @@ enum {
MEMBARRIER_STATE_GLOBAL_EXPEDITED = (1U << 3),
MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY = (1U << 4),
MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE = (1U << 5),
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY = (1U << 6),
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ = (1U << 7),
};
enum {
MEMBARRIER_FLAG_SYNC_CORE = (1U << 0),
+ MEMBARRIER_FLAG_RSEQ = (1U << 1),
};
#ifdef CONFIG_ARCH_HAS_MEMBARRIER_CALLBACKS
diff --git a/include/uapi/linux/membarrier.h b/include/uapi/linux/membarrier.h
index 5891d7614c8c..98c2b0e7c0d8 100644
--- a/include/uapi/linux/membarrier.h
+++ b/include/uapi/linux/membarrier.h
@@ -114,6 +114,29 @@
* If this command is not implemented by an
* architecture, -EINVAL is returned.
* Returns 0 on success.
+ * @MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
+ * In addition to providing the memory ordering
+ * guarantees described in
+ * MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE,
+ * ensure the caller thread, upon return from
+ * system call, that all its running thread
+ * siblings have any currently running rseq
+ * critical sections restarted if @flags
+ * parameter is 0; if @flags parameter is
+ * MEMBARRIER_CMD_FLAG_CPU,
+ * then this operation is performed only
+ * on the CPU indicated by @cpu_id. If this command is
+ * not implemented by an architecture, -EINVAL
+ * is returned. A process needs to register its
+ * intent to use the private expedited rseq
+ * command prior to using it, otherwise
+ * this command returns -EPERM.
+ * @MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ:
+ * Register the process intent to use
+ * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ.
+ * If this command is not implemented by an
+ * architecture, -EINVAL is returned.
+ * Returns 0 on success.
* @MEMBARRIER_CMD_SHARED:
* Alias to MEMBARRIER_CMD_GLOBAL. Provided for
* header backward compatibility.
@@ -131,9 +154,15 @@ enum membarrier_cmd {
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED = (1 << 4),
MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE = (1 << 5),
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE = (1 << 6),
+ MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ = (1 << 7),
+ MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ = (1 << 8),
/* Alias for header backward compatibility. */
MEMBARRIER_CMD_SHARED = MEMBARRIER_CMD_GLOBAL,
};
+enum membarrier_cmd_flag {
+ MEMBARRIER_CMD_FLAG_CPU = (1 << 0),
+};
+
#endif /* _UAPI_LINUX_MEMBARRIER_H */
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 168479a7d61b..ac9f4895491d 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -18,6 +18,14 @@
#define MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK 0
#endif
+#ifdef CONFIG_RSEQ
+#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK \
+ (MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ \
+ | MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ)
+#else
+#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK 0
+#endif
+
#define MEMBARRIER_CMD_BITMASK \
(MEMBARRIER_CMD_GLOBAL | MEMBARRIER_CMD_GLOBAL_EXPEDITED \
| MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED \
@@ -27,6 +35,12 @@
static void ipi_mb(void *info)
{
+#ifdef CONFIG_RSEQ
+ int *flags = info;
+
+ if (flags && (*flags == MEMBARRIER_FLAG_RSEQ))
+ rseq_preempt(current);
+#endif
smp_mb(); /* IPIs should be serializing but paranoid. */
}
@@ -129,19 +143,26 @@ static int membarrier_global_expedited(void)
return 0;
}
-static int membarrier_private_expedited(int flags)
+static int membarrier_private_expedited(int flags, int cpu_id)
{
int cpu;
cpumask_var_t tmpmask;
struct mm_struct *mm = current->mm;
- if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
+ if (flags == MEMBARRIER_FLAG_SYNC_CORE) {
if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
return -EINVAL;
if (!(atomic_read(&mm->membarrier_state) &
MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY))
return -EPERM;
+ } else if (flags == MEMBARRIER_FLAG_RSEQ) {
+ if (!IS_ENABLED(CONFIG_RSEQ))
+ return -EINVAL;
+ if (!(atomic_read(&mm->membarrier_state) &
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY))
+ return -EPERM;
} else {
+ BUG_ON(flags != 0);
if (!(atomic_read(&mm->membarrier_state) &
MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY))
return -EPERM;
@@ -174,6 +195,8 @@ static int membarrier_private_expedited(int flags)
*/
if (cpu == raw_smp_processor_id())
continue;
+ if (cpu_id >= 0 && cpu != cpu_id)
+ continue;
p = rcu_dereference(cpu_rq(cpu)->curr);
if (p && p->mm == mm)
__cpumask_set_cpu(cpu, tmpmask);
@@ -181,7 +204,7 @@ static int membarrier_private_expedited(int flags)
rcu_read_unlock();
preempt_disable();
- smp_call_function_many(tmpmask, ipi_mb, NULL, 1);
+ smp_call_function_many(tmpmask, ipi_mb, &flags, 1);
preempt_enable();
free_cpumask_var(tmpmask);
@@ -283,11 +306,18 @@ static int membarrier_register_private_expedited(int flags)
set_state = MEMBARRIER_STATE_PRIVATE_EXPEDITED,
ret;
- if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
+ if (flags == MEMBARRIER_FLAG_SYNC_CORE) {
if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
return -EINVAL;
ready_state =
MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY;
+ } else if (flags == MEMBARRIER_FLAG_RSEQ) {
+ if (!IS_ENABLED(CONFIG_RSEQ))
+ return -EINVAL;
+ ready_state =
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY;
+ } else {
+ BUG_ON(flags != 0);
}
/*
@@ -299,6 +329,8 @@ static int membarrier_register_private_expedited(int flags)
return 0;
if (flags & MEMBARRIER_FLAG_SYNC_CORE)
set_state |= MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE;
+ if (flags & MEMBARRIER_FLAG_RSEQ)
+ set_state |= MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ;
atomic_or(set_state, &mm->membarrier_state);
ret = sync_runqueues_membarrier_state(mm);
if (ret)
@@ -310,8 +342,15 @@ static int membarrier_register_private_expedited(int flags)
/**
* sys_membarrier - issue memory barriers on a set of threads
- * @cmd: Takes command values defined in enum membarrier_cmd.
- * @flags: Currently needs to be 0. For future extensions.
+ * @cmd: Takes command values defined in enum membarrier_cmd.
+ * @flags: Currently needs to be 0 for all commands other than
+ * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ: in the latter
+ * case it can be MEMBARRIER_CMD_FLAG_CPU, indicating that @cpu_id
+ * contains the CPU on which to interrupt (i.e. restart)
+ * the RSEQ critical section.
+ * @cpu_id: if @flags == MEMBARRIER_CMD_FLAG_CPU, indicates the cpu on which
+ * RSEQ CS should be interrupted (@cmd must be
+ * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ).
*
* If this system call is not implemented, -ENOSYS is returned. If the
* command specified does not exist, not available on the running
@@ -337,9 +376,9 @@ static int membarrier_register_private_expedited(int flags)
* smp_mb() X O O
* sys_membarrier() O O O
*/
-SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
+SYSCALL_DEFINE3(membarrier, int, cmd, int, flags, int, cpu_id)
{
- if (unlikely(flags))
+ if (unlikely(flags) && cmd != MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ)
return -EINVAL;
switch (cmd) {
case MEMBARRIER_CMD_QUERY:
@@ -362,13 +401,21 @@ SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
case MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED:
return membarrier_register_global_expedited();
case MEMBARRIER_CMD_PRIVATE_EXPEDITED:
- return membarrier_private_expedited(0);
+ return membarrier_private_expedited(0, -1);
case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED:
return membarrier_register_private_expedited(0);
case MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE:
- return membarrier_private_expedited(MEMBARRIER_FLAG_SYNC_CORE);
+ return membarrier_private_expedited(MEMBARRIER_FLAG_SYNC_CORE, -1);
case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE:
return membarrier_register_private_expedited(MEMBARRIER_FLAG_SYNC_CORE);
+ case MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
+ if (flags == 0)
+ return membarrier_private_expedited(MEMBARRIER_FLAG_RSEQ, -1);
+ if (flags == MEMBARRIER_CMD_FLAG_CPU)
+ return membarrier_private_expedited(MEMBARRIER_FLAG_RSEQ, cpu_id);
+ return -EINVAL;
+ case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ:
+ return membarrier_register_private_expedited(MEMBARRIER_FLAG_RSEQ);
default:
return -EINVAL;
}
--
2.28.0.297.g1956fa8f8d-goog
* [PATCH 2/2 v4] rseq/selftests: test MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
From: Peter Oskolkov @ 2020-08-26 18:47 UTC
To: Mathieu Desnoyers, Paul E . McKenney, Peter Zijlstra, Boqun Feng,
linux-kernel
Cc: Paul Turner, Chris Kennelly, Peter Oskolkov, Peter Oskolkov
Based on Google-internal RSEQ work done by
Paul Turner and Andrew Hunter.
This patch adds a selftest for MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ.
The test quite often fails without the previous patch in this patchset,
but consistently passes with it.
v3: added rseq_offset_deref_addv() to x86_64 to make the test
more explicit (see the plain-C sketch below); on other
architectures I kept using the existing
rseq_cmpeqv_cmpeqv_storev(), as I have no easy way to test there.
Added a comment explaining why the test works this way.
v4: skipped the test if rseq_offset_deref_addv() is not present
(that is, on all architectures other than x86_64).
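For reference, ignoring the cpu_id check and the restart-on-abort
machinery, the fast path of the new helper does roughly the following
(a plain-C sketch, not part of the patch; the "e.g." comments describe
how the test in this patch calls it):

#include <stdint.h>

/* Plain-C equivalent of the rseq_offset_deref_addv() fast path. */
static void offset_deref_addv_equiv(intptr_t *ptr, long off, intptr_t inc)
{
	intptr_t *slot = (intptr_t *)(*ptr + off); /* e.g. &list->c[cpu].head */
	intptr_t *target = (intptr_t *)*slot;      /* e.g. &head->data */

	*target += inc;
}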
Signed-off-by: Peter Oskolkov <posk@google.com>
---
.../selftests/rseq/basic_percpu_ops_test.c | 187 ++++++++++++++++++
tools/testing/selftests/rseq/rseq-x86.h | 57 ++++++
2 files changed, 244 insertions(+)
diff --git a/tools/testing/selftests/rseq/basic_percpu_ops_test.c b/tools/testing/selftests/rseq/basic_percpu_ops_test.c
index eb3f6db36d36..e6e10ba4b9ed 100644
--- a/tools/testing/selftests/rseq/basic_percpu_ops_test.c
+++ b/tools/testing/selftests/rseq/basic_percpu_ops_test.c
@@ -3,16 +3,24 @@
#include <assert.h>
#include <pthread.h>
#include <sched.h>
+#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stddef.h>
+#include <syscall.h>
+#include <unistd.h>
#include "rseq.h"
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+/* The local <linux/membarrier.h> may not contain the commands below. */
+#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ (1<<7)
+#define MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ (1<<8)
+#define MEMBARRIER_CMD_FLAG_CPU (1<<0)
+
struct percpu_lock_entry {
intptr_t v;
} __attribute__((aligned(128)));
@@ -289,6 +297,183 @@ void test_percpu_list(void)
assert(sum == expected_sum);
}
+struct test_membarrier_thread_args {
+ int stop;
+ intptr_t percpu_list_ptr;
+};
+
+/* Worker threads modify data in their "active" percpu lists. */
+void *test_membarrier_worker_thread(void *arg)
+{
+ struct test_membarrier_thread_args *args =
+ (struct test_membarrier_thread_args *)arg;
+ const int iters = 10 * 1000 * 1000;
+ int i;
+
+ if (rseq_register_current_thread()) {
+ fprintf(stderr, "Error: rseq_register_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+
+ /* Wait for initialization. */
+ while (!atomic_load(&args->percpu_list_ptr)) {}
+
+ for (i = 0; i < iters; ++i) {
+ int ret;
+
+ do {
+ int cpu = rseq_cpu_start();
+
+ ret = rseq_offset_deref_addv(&args->percpu_list_ptr,
+ 128 * cpu, 1, cpu);
+ } while (rseq_unlikely(ret));
+ }
+
+ if (rseq_unregister_current_thread()) {
+ fprintf(stderr, "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+ return NULL;
+}
+
+void test_membarrier_init_percpu_list(struct percpu_list *list)
+{
+ int i;
+
+ memset(list, 0, sizeof(*list));
+ for (i = 0; i < CPU_SETSIZE; i++) {
+ struct percpu_list_node *node;
+
+ node = malloc(sizeof(*node));
+ assert(node);
+ node->data = 0;
+ node->next = NULL;
+ list->c[i].head = node;
+ }
+}
+
+void test_membarrier_free_percpu_list(struct percpu_list *list)
+{
+ int i;
+
+ for (i = 0; i < CPU_SETSIZE; i++)
+ free(list->c[i].head);
+}
+
+static int sys_membarrier(int cmd, int flags, int cpu_id)
+{
+ return syscall(__NR_membarrier, cmd, flags, cpu_id);
+}
+
+/*
+ * The manager thread swaps per-cpu lists that worker threads see,
+ * and validates that there are no unexpected modifications.
+ */
+void *test_membarrier_manager_thread(void *arg)
+{
+ struct test_membarrier_thread_args *args =
+ (struct test_membarrier_thread_args *)arg;
+ struct percpu_list list_a, list_b;
+ intptr_t expect_a = 0, expect_b = 0;
+ int cpu_a = 0, cpu_b = 0;
+
+ if (rseq_register_current_thread()) {
+ fprintf(stderr, "Error: rseq_register_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+
+ /* Init lists. */
+ test_membarrier_init_percpu_list(&list_a);
+ test_membarrier_init_percpu_list(&list_b);
+
+ atomic_store(&args->percpu_list_ptr, (intptr_t)&list_a);
+
+ while (!atomic_load(&args->stop)) {
+ /* list_a is "active". */
+ cpu_a = rand() % CPU_SETSIZE;
+ /*
+ * As list_b is "inactive", we should never see changes
+ * to list_b.
+ */
+ if (expect_b != atomic_load(&list_b.c[cpu_b].head->data)) {
+ fprintf(stderr, "Membarrier test failed\n");
+ abort();
+ }
+
+ /* Make list_b "active". */
+ atomic_store(&args->percpu_list_ptr, (intptr_t)&list_b);
+ sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
+ MEMBARRIER_CMD_FLAG_CPU, cpu_a);
+ /*
+ * CPU A should now only modify list_b, so the values
+ * in list_a should be stable.
+ */
+ expect_a = atomic_load(&list_a.c[cpu_a].head->data);
+
+ cpu_b = rand() % CPU_SETSIZE;
+ /*
+ * As list_a is "inactive", we should never see changes
+ * to list_a.
+ */
+ if (expect_a != atomic_load(&list_a.c[cpu_a].head->data)) {
+ fprintf(stderr, "Membarrier test failed\n");
+ abort();
+ }
+
+ /* Make list_a "active". */
+ atomic_store(&args->percpu_list_ptr, (intptr_t)&list_a);
+ sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
+ MEMBARRIER_CMD_FLAG_CPU, cpu_b);
+ /* Remember a value from list_b. */
+ expect_b = atomic_load(&list_b.c[cpu_b].head->data);
+ }
+
+ test_membarrier_free_percpu_list(&list_a);
+ test_membarrier_free_percpu_list(&list_b);
+
+ if (rseq_unregister_current_thread()) {
+ fprintf(stderr, "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+ return NULL;
+}
+
+/* Test the MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ membarrier command. */
+void test_membarrier(void)
+{
+#ifndef RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+ fprintf(stderr, "rseq_offset_deref_addv is not implemented on this architecture. "
+ "Skipping membarrier test.\n");
+ return;
+#else
+ struct test_membarrier_thread_args thread_args;
+ pthread_t worker_threads[CPU_SETSIZE];
+ pthread_t manager_thread;
+ int i;
+
+ sys_membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ, 0, 0);
+
+ thread_args.stop = 0;
+ thread_args.percpu_list_ptr = 0;
+ pthread_create(&manager_thread, NULL,
+ test_membarrier_manager_thread, &thread_args);
+
+ for (i = 0; i < CPU_SETSIZE; i++)
+ pthread_create(&worker_threads[i], NULL,
+ test_membarrier_worker_thread, &thread_args);
+
+ for (i = 0; i < CPU_SETSIZE; i++)
+ pthread_join(worker_threads[i], NULL);
+
+ atomic_store(&thread_args.stop, 1);
+ pthread_join(manager_thread, NULL);
+#endif
+}
+
int main(int argc, char **argv)
{
if (rseq_register_current_thread()) {
@@ -300,6 +485,8 @@ int main(int argc, char **argv)
test_percpu_spinlock();
printf("percpu_list\n");
test_percpu_list();
+ printf("membarrier\n");
+ test_membarrier();
if (rseq_unregister_current_thread()) {
fprintf(stderr, "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
errno, strerror(errno));
diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h
index b2da6004fe30..640411518e46 100644
--- a/tools/testing/selftests/rseq/rseq-x86.h
+++ b/tools/testing/selftests/rseq/rseq-x86.h
@@ -279,6 +279,63 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
#endif
}
+#define RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+
+/*
+ * pval = *(ptr+off)
+ * *pval += inc;
+ */
+static inline __attribute__((always_inline))
+int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
+#endif
+ /* rbx = *ptr + off */
+ "movq %[ptr], %%rbx\n\t"
+ "addq %[off], %%rbx\n\t"
+ /* rcx = *(*ptr + off) */
+ "movq (%%rbx), %%rcx\n\t"
+ /* *(rcx) += inc */
+ "addq %[inc], (%%rcx)\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_abi] "r" (&__rseq_abi),
+ /* final store input */
+ [ptr] "m" (*ptr),
+ [off] "er" (off),
+ [inc] "er" (inc)
+ : "memory", "cc", "rax", "rbx", "rcx"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
static inline __attribute__((always_inline))
int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
intptr_t *v2, intptr_t newv2,
--
2.28.0.297.g1956fa8f8d-goog
* Re: [PATCH 1/2 v4] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
From: kernel test robot @ 2020-08-26 22:31 UTC
To: Peter Oskolkov, Mathieu Desnoyers, Paul E . McKenney,
Peter Zijlstra, Boqun Feng, linux-kernel
Cc: kbuild-all, clang-built-linux, Paul Turner, Chris Kennelly,
Peter Oskolkov
Hi Peter,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on kselftest/next]
[also build test ERROR on tip/sched/core linux/master linus/master v5.9-rc2 next-20200826]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Peter-Oskolkov/rseq-membarrier-add-MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ/20200827-024952
base: https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git next
config: powerpc64-randconfig-r001-20200826 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 7cfcecece0e0430937cf529ce74d3a071a4dedc6)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install powerpc64 cross compiling tool for clang build
# apt-get install binutils-powerpc64-linux-gnu
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc64
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
>> kernel/sched/membarrier.c:379:1: error: conflicting types for 'sys_membarrier'
SYSCALL_DEFINE3(membarrier, int, cmd, int, flags, int, cpu_id)
^
include/linux/syscalls.h:215:36: note: expanded from macro 'SYSCALL_DEFINE3'
#define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
^
include/linux/syscalls.h:224:2: note: expanded from macro 'SYSCALL_DEFINEx'
__SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
^
include/linux/syscalls.h:238:18: note: expanded from macro '__SYSCALL_DEFINEx'
asmlinkage long sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \
^
<scratch space>:7:1: note: expanded from here
sys_membarrier
^
include/linux/syscalls.h:977:17: note: previous declaration is here
asmlinkage long sys_membarrier(int cmd, int flags);
^
1 error generated.
# https://github.com/0day-ci/linux/commit/41ee0f192c40af25a53abea1610a9ea4852cf0f4
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Peter-Oskolkov/rseq-membarrier-add-MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ/20200827-024952
git checkout 41ee0f192c40af25a53abea1610a9ea4852cf0f4
vim +/sys_membarrier +379 kernel/sched/membarrier.c
342
343 /**
344 * sys_membarrier - issue memory barriers on a set of threads
345 * @cmd: Takes command values defined in enum membarrier_cmd.
346 * @flags: Currently needs to be 0 for all commands other than
347 * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ: in the latter
348 * case it can be MEMBARRIER_CMD_FLAG_CPU, indicating that @cpu_id
349 * contains the CPU on which to interrupt (= restart)
350 * the RSEQ critical section.
351 * @cpu_id: if @flags == MEMBARRIER_CMD_FLAG_CPU, indicates the cpu on which
352 * RSEQ CS should be interrupted (@cmd must be
353 * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ).
354 *
355 * If this system call is not implemented, -ENOSYS is returned. If the
356 * command specified does not exist, not available on the running
357 * kernel, or if the command argument is invalid, this system call
358 * returns -EINVAL. For a given command, with flags argument set to 0,
359 * if this system call returns -ENOSYS or -EINVAL, it is guaranteed to
360 * always return the same value until reboot. In addition, it can return
361 * -ENOMEM if there is not enough memory available to perform the system
362 * call.
363 *
364 * All memory accesses performed in program order from each targeted thread
365 * is guaranteed to be ordered with respect to sys_membarrier(). If we use
366 * the semantic "barrier()" to represent a compiler barrier forcing memory
367 * accesses to be performed in program order across the barrier, and
368 * smp_mb() to represent explicit memory barriers forcing full memory
369 * ordering across the barrier, we have the following ordering table for
370 * each pair of barrier(), sys_membarrier() and smp_mb():
371 *
372 * The pair ordering is detailed as (O: ordered, X: not ordered):
373 *
374 * barrier() smp_mb() sys_membarrier()
375 * barrier() X X O
376 * smp_mb() X O O
377 * sys_membarrier() O O O
378 */
> 379 SYSCALL_DEFINE3(membarrier, int, cmd, int, flags, int, cpu_id)
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org