* [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
@ 2020-08-31 22:58 Peter Oskolkov
2020-08-31 22:58 ` [PATCH 2/2 v6] rseq/selftests: test MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ Peter Oskolkov
2020-09-15 6:12 ` [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ Peter Oskolkov
0 siblings, 2 replies; 5+ messages in thread
From: Peter Oskolkov @ 2020-08-31 22:58 UTC (permalink / raw)
To: Mathieu Desnoyers, Paul E . McKenney, Peter Zijlstra, Boqun Feng,
linux-kernel
Cc: Paul Turner, Chris Kennelly, Peter Oskolkov, Peter Oskolkov
This patchset is based on Google-internal RSEQ
work done by Paul Turner and Andrew Hunter.
When working with per-CPU RSEQ-based memory allocations,
it is sometimes important to make sure that a global
memory location is no longer accessed from RSEQ critical
sections. For example, there can be two per-CPU lists,
one "active" and accessed per-CPU, while the other is
inactive and worked on asynchronously "off CPU" (e.g.
for garbage collection). At some point the two lists are
swapped, and a fast RCU-like mechanism is required to
make sure that the previously active list is no longer
accessed.

This patch introduces such a mechanism: in short, the
membarrier() syscall issues an IPI to a CPU, restarting
any potentially active RSEQ critical section on that CPU.
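
For illustration, below is a minimal userspace sketch of
the intended usage pattern (the list type, the global
pointer and garbage_collect() are hypothetical; only the
membarrier() command and flag names come from this patch):

	/* RSEQ critical sections load 'active_list' and update it per-CPU. */
	static struct percpu_list *active_list;

	static void swap_and_collect(struct percpu_list *fresh)
	{
		struct percpu_list *old = active_list;

		__atomic_store_n(&active_list, fresh, __ATOMIC_RELEASE);
		/*
		 * Restart any RSEQ critical section currently running in
		 * this process; once this returns, no critical section
		 * can still be reading 'old'.
		 */
		syscall(__NR_membarrier,
			MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, 0, 0);
		garbage_collect(old);	/* hypothetical off-CPU work */
	}

The process must first register, once, with
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ before using
the command above.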
v1->v2:
- removed the ability to IPI all CPUs in a single syscall;
- use task->mm rather than task->group_leader to identify
tasks belonging to the same process.
v2->v3:
- re-added the ability to IPI all CPUs in a single syscall;
- integrated with membarrier_private_expedited() to
make sure only CPUs running tasks with the same mm as
the current task are interrupted;
- also added MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ;
- flags in membarrier_private_expedited are never actually
bit flags but always distinct values (i.e. never two flags
are combined), so I changed the bit tests to full equality
comparisons for simplicity (otherwise the code would need
to handle several bits being set at once).
v3->v4:
- added the third parameter to membarrier syscall: @cpu_id:
if @flags == MEMBARRIER_CMD_FLAG_CPU, then @cpu_id indicates
the cpu on which RSEQ CS should be restarted.
v4->v5:
- added @cpu_id parameter to sys_membarrier in syscalls.h.
v5->v6:
- made membarrier_private_expedited more efficient in a
single-cpu case;
- a couple of other minor refactorings.
The second patch in the patchset adds a selftest
of this feature.
Signed-off-by: Peter Oskolkov <posk@google.com>
---
include/linux/sched/mm.h | 3 +
include/linux/syscalls.h | 2 +-
include/uapi/linux/membarrier.h | 29 +++++++
kernel/sched/membarrier.c | 140 +++++++++++++++++++++++++-------
4 files changed, 143 insertions(+), 31 deletions(-)
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index f889e332912f..15bfb06f2884 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -348,10 +348,13 @@ enum {
MEMBARRIER_STATE_GLOBAL_EXPEDITED = (1U << 3),
MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY = (1U << 4),
MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE = (1U << 5),
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY = (1U << 6),
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ = (1U << 7),
};
enum {
MEMBARRIER_FLAG_SYNC_CORE = (1U << 0),
+ MEMBARRIER_FLAG_RSEQ = (1U << 1),
};
#ifdef CONFIG_ARCH_HAS_MEMBARRIER_CALLBACKS
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 75ac7f8ae93c..466c993e52bf 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -974,7 +974,7 @@ asmlinkage long sys_execveat(int dfd, const char __user *filename,
const char __user *const __user *argv,
const char __user *const __user *envp, int flags);
asmlinkage long sys_userfaultfd(int flags);
-asmlinkage long sys_membarrier(int cmd, int flags);
+asmlinkage long sys_membarrier(int cmd, int flags, int cpu_id);
asmlinkage long sys_mlock2(unsigned long start, size_t len, int flags);
asmlinkage long sys_copy_file_range(int fd_in, loff_t __user *off_in,
int fd_out, loff_t __user *off_out,
diff --git a/include/uapi/linux/membarrier.h b/include/uapi/linux/membarrier.h
index 5891d7614c8c..98c2b0e7c0d8 100644
--- a/include/uapi/linux/membarrier.h
+++ b/include/uapi/linux/membarrier.h
@@ -114,6 +114,29 @@
* If this command is not implemented by an
* architecture, -EINVAL is returned.
* Returns 0 on success.
+ * @MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
+ * In addition to providing the memory ordering
+ * guarantees described in
+ * MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE,
+ * upon return from system call the caller
+ * thread is ensured that all its running
+ * thread siblings have any currently running
+ * rseq critical sections restarted if @flags
+ * parameter is 0; if @flags parameter is
+ * MEMBARRIER_CMD_FLAG_CPU, then this operation
+ * is performed only on the CPU indicated by
+ * @cpu_id. If this command is not implemented
+ * by an architecture, -EINVAL is returned.
+ * A process needs to register its intent to
+ * use the private expedited rseq command prior
+ * to using it, otherwise this command returns
+ * -EPERM.
+ * @MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ:
+ * Register the process intent to use
+ * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ.
+ * If this command is not implemented by an
+ * architecture, -EINVAL is returned.
+ * Returns 0 on success.
* @MEMBARRIER_CMD_SHARED:
* Alias to MEMBARRIER_CMD_GLOBAL. Provided for
* header backward compatibility.
@@ -131,9 +154,15 @@ enum membarrier_cmd {
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED = (1 << 4),
MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE = (1 << 5),
MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE = (1 << 6),
+ MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ = (1 << 7),
+ MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ = (1 << 8),
/* Alias for header backward compatibility. */
MEMBARRIER_CMD_SHARED = MEMBARRIER_CMD_GLOBAL,
};
+enum membarrier_cmd_flag {
+ MEMBARRIER_CMD_FLAG_CPU = (1 << 0),
+};
+
#endif /* _UAPI_LINUX_MEMBARRIER_H */
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 168479a7d61b..e32e9476ccf3 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -18,6 +18,14 @@
#define MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK 0
#endif
+#ifdef CONFIG_RSEQ
+#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK \
+ (MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ \
+ | MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ)
+#else
+#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK 0
+#endif
+
#define MEMBARRIER_CMD_BITMASK \
(MEMBARRIER_CMD_GLOBAL | MEMBARRIER_CMD_GLOBAL_EXPEDITED \
| MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED \
@@ -30,6 +38,13 @@ static void ipi_mb(void *info)
smp_mb(); /* IPIs should be serializing but paranoid. */
}
+#ifdef CONFIG_RSEQ
+static void ipi_rseq(void *info)
+{
+ rseq_preempt(current);
+}
+#endif
+
static void ipi_sync_rq_state(void *info)
{
struct mm_struct *mm = (struct mm_struct *) info;
@@ -129,19 +144,29 @@ static int membarrier_global_expedited(void)
return 0;
}
-static int membarrier_private_expedited(int flags)
+static int membarrier_private_expedited(int flags, int cpu_id)
{
- int cpu;
cpumask_var_t tmpmask;
struct mm_struct *mm = current->mm;
+ smp_call_func_t ipi_func = ipi_mb;
- if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
+ if (flags == MEMBARRIER_FLAG_SYNC_CORE) {
if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
return -EINVAL;
if (!(atomic_read(&mm->membarrier_state) &
MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY))
return -EPERM;
+ } else if (flags == MEMBARRIER_FLAG_RSEQ) {
+#ifdef CONFIG_RSEQ
+ if (!(atomic_read(&mm->membarrier_state) &
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY))
+ return -EPERM;
+ ipi_func = ipi_rseq;
+#else
+ return -EINVAL;
+#endif
} else {
+ BUG_ON(flags != 0);
if (!(atomic_read(&mm->membarrier_state) &
MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY))
return -EPERM;
@@ -156,35 +181,59 @@ static int membarrier_private_expedited(int flags)
*/
smp_mb(); /* system call entry is not a mb. */
- if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
+ if (cpu_id < 0 && !zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
return -ENOMEM;
cpus_read_lock();
- rcu_read_lock();
- for_each_online_cpu(cpu) {
+
+ if (cpu_id >= 0) {
struct task_struct *p;
- /*
- * Skipping the current CPU is OK even through we can be
- * migrated at any point. The current CPU, at the point
- * where we read raw_smp_processor_id(), is ensured to
- * be in program order with respect to the caller
- * thread. Therefore, we can skip this CPU from the
- * iteration.
- */
- if (cpu == raw_smp_processor_id())
- continue;
- p = rcu_dereference(cpu_rq(cpu)->curr);
- if (p && p->mm == mm)
- __cpumask_set_cpu(cpu, tmpmask);
+ if (cpu_id >= nr_cpu_ids || !cpu_online(cpu_id))
+ goto out;
+ if (cpu_id == raw_smp_processor_id())
+ goto out;
+ rcu_read_lock();
+ p = rcu_dereference(cpu_rq(cpu_id)->curr);
+ if (!p || p->mm != mm) {
+ rcu_read_unlock();
+ goto out;
+ }
+ rcu_read_unlock();
+ } else {
+ int cpu;
+
+ rcu_read_lock();
+ for_each_online_cpu(cpu) {
+ struct task_struct *p;
+
+ /*
+ * Skipping the current CPU is OK even though we can be
+ * migrated at any point. The current CPU, at the point
+ * where we read raw_smp_processor_id(), is ensured to
+ * be in program order with respect to the caller
+ * thread. Therefore, we can skip this CPU from the
+ * iteration.
+ */
+ if (cpu == raw_smp_processor_id())
+ continue;
+ p = rcu_dereference(cpu_rq(cpu)->curr);
+ if (p && p->mm == mm)
+ __cpumask_set_cpu(cpu, tmpmask);
+ }
+ rcu_read_unlock();
}
- rcu_read_unlock();
preempt_disable();
- smp_call_function_many(tmpmask, ipi_mb, NULL, 1);
+ if (cpu_id >= 0)
+ smp_call_function_single(cpu_id, ipi_func, NULL, 1);
+ else
+ smp_call_function_many(tmpmask, ipi_func, NULL, 1);
preempt_enable();
- free_cpumask_var(tmpmask);
+out:
+ if (cpu_id < 0)
+ free_cpumask_var(tmpmask);
cpus_read_unlock();
/*
@@ -283,11 +332,18 @@ static int membarrier_register_private_expedited(int flags)
set_state = MEMBARRIER_STATE_PRIVATE_EXPEDITED,
ret;
- if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
+ if (flags == MEMBARRIER_FLAG_SYNC_CORE) {
if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
return -EINVAL;
ready_state =
MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY;
+ } else if (flags == MEMBARRIER_FLAG_RSEQ) {
+ if (!IS_ENABLED(CONFIG_RSEQ))
+ return -EINVAL;
+ ready_state =
+ MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY;
+ } else {
+ BUG_ON(flags != 0);
}
/*
@@ -299,6 +355,8 @@ static int membarrier_register_private_expedited(int flags)
return 0;
if (flags & MEMBARRIER_FLAG_SYNC_CORE)
set_state |= MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE;
+ if (flags & MEMBARRIER_FLAG_RSEQ)
+ set_state |= MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ;
atomic_or(set_state, &mm->membarrier_state);
ret = sync_runqueues_membarrier_state(mm);
if (ret)
@@ -310,8 +368,15 @@ static int membarrier_register_private_expedited(int flags)
/**
* sys_membarrier - issue memory barriers on a set of threads
- * @cmd: Takes command values defined in enum membarrier_cmd.
- * @flags: Currently needs to be 0. For future extensions.
+ * @cmd: Takes command values defined in enum membarrier_cmd.
+ * @flags: Currently needs to be 0 for all commands other than
+ * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ: in the latter
+ * case it can be MEMBARRIER_CMD_FLAG_CPU, indicating that @cpu_id
+ * contains the CPU on which to interrupt (= restart)
+ * the RSEQ critical section.
+ * @cpu_id: if @flags == MEMBARRIER_CMD_FLAG_CPU, indicates the cpu on which
+ * RSEQ CS should be interrupted (@cmd must be
+ * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ).
*
* If this system call is not implemented, -ENOSYS is returned. If the
* command specified does not exist, not available on the running
@@ -337,10 +402,21 @@ static int membarrier_register_private_expedited(int flags)
* smp_mb() X O O
* sys_membarrier() O O O
*/
-SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
+SYSCALL_DEFINE3(membarrier, int, cmd, int, flags, int, cpu_id)
{
- if (unlikely(flags))
- return -EINVAL;
+ switch (cmd) {
+ case MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
+ if (unlikely(flags && flags != MEMBARRIER_CMD_FLAG_CPU))
+ return -EINVAL;
+ break;
+ default:
+ if (unlikely(flags))
+ return -EINVAL;
+ }
+
+ if (!(flags & MEMBARRIER_CMD_FLAG_CPU))
+ cpu_id = -1;
+
switch (cmd) {
case MEMBARRIER_CMD_QUERY:
{
@@ -362,13 +438,17 @@ SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
case MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED:
return membarrier_register_global_expedited();
case MEMBARRIER_CMD_PRIVATE_EXPEDITED:
- return membarrier_private_expedited(0);
+ return membarrier_private_expedited(0, cpu_id);
case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED:
return membarrier_register_private_expedited(0);
case MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE:
- return membarrier_private_expedited(MEMBARRIER_FLAG_SYNC_CORE);
+ return membarrier_private_expedited(MEMBARRIER_FLAG_SYNC_CORE, cpu_id);
case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE:
return membarrier_register_private_expedited(MEMBARRIER_FLAG_SYNC_CORE);
+ case MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
+ return membarrier_private_expedited(MEMBARRIER_FLAG_RSEQ, cpu_id);
+ case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ:
+ return membarrier_register_private_expedited(MEMBARRIER_FLAG_RSEQ);
default:
return -EINVAL;
}
--
2.28.0.402.g5ffc5be6b7-goog
* [PATCH 2/2 v6] rseq/selftests: test MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
2020-08-31 22:58 [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ Peter Oskolkov
@ 2020-08-31 22:58 ` Peter Oskolkov
2020-09-15 6:12 ` [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ Peter Oskolkov
1 sibling, 0 replies; 5+ messages in thread
From: Peter Oskolkov @ 2020-08-31 22:58 UTC (permalink / raw)
To: Mathieu Desnoyers, Paul E . McKenney, Peter Zijlstra, Boqun Feng,
linux-kernel
Cc: Paul Turner, Chris Kennelly, Peter Oskolkov, Peter Oskolkov
Based on Google-internal RSEQ work done by
Paul Turner and Andrew Hunter.
This patch adds a selftest for MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ.
The test quite often fails without the previous patch in this patchset,
but consistently passes with it.
v3: added rseq_offset_deref_addv() to x86_64 to make the test
more explicit; on other architectures I kept using the existing
rseq_cmpeqv_cmpeqv_storev() as I have no easy way to test
there. Added a comment explaining why the test works this way.
v4: skipped the test if rseq_offset_deref_addv() is not present
(that is, on all architectures other than x86_64).
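
For reference, ignoring restarts, the x86_64
rseq_offset_deref_addv() added below is roughly equivalent
to the following C sketch (in reality the three steps form a
single rseq critical section, so they either all complete on
@cpu or are restarted):

	intptr_t *slot = (intptr_t *)(*ptr + off); /* per-cpu slot */
	intptr_t *pval = (intptr_t *)*slot;        /* e.g. &node->data */
	*pval += inc;

The workers invoke it with off == 128 * cpu, so every
increment lands in the list that was "active" when the
critical section started; the membarrier command is what
lets the manager thread rely on this after swapping lists.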
Signed-off-by: Peter Oskolkov <posk@google.com>
---
.../selftests/rseq/basic_percpu_ops_test.c | 187 ++++++++++++++++++
tools/testing/selftests/rseq/rseq-x86.h | 57 ++++++
2 files changed, 244 insertions(+)
diff --git a/tools/testing/selftests/rseq/basic_percpu_ops_test.c b/tools/testing/selftests/rseq/basic_percpu_ops_test.c
index eb3f6db36d36..e6e10ba4b9ed 100644
--- a/tools/testing/selftests/rseq/basic_percpu_ops_test.c
+++ b/tools/testing/selftests/rseq/basic_percpu_ops_test.c
@@ -3,16 +3,24 @@
#include <assert.h>
#include <pthread.h>
#include <sched.h>
+#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stddef.h>
+#include <syscall.h>
+#include <unistd.h>
#include "rseq.h"
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+/* The local <linux/membarrier.h> may not contain the commands below. */
+#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ (1<<7)
+#define MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ (1<<8)
+#define MEMBARRIER_CMD_FLAG_CPU (1<<0)
+
struct percpu_lock_entry {
intptr_t v;
} __attribute__((aligned(128)));
@@ -289,6 +297,183 @@ void test_percpu_list(void)
assert(sum == expected_sum);
}
+struct test_membarrier_thread_args {
+ int stop;
+ intptr_t percpu_list_ptr;
+};
+
+/* Worker threads modify data in their "active" percpu lists. */
+void *test_membarrier_worker_thread(void *arg)
+{
+ struct test_membarrier_thread_args *args =
+ (struct test_membarrier_thread_args *)arg;
+ const int iters = 10 * 1000 * 1000;
+ int i;
+
+ if (rseq_register_current_thread()) {
+ fprintf(stderr, "Error: rseq_register_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+
+ /* Wait for initialization. */
+ while (!atomic_load(&args->percpu_list_ptr)) {}
+
+ for (i = 0; i < iters; ++i) {
+ int ret;
+
+ do {
+ int cpu = rseq_cpu_start();
+
+ ret = rseq_offset_deref_addv(&args->percpu_list_ptr,
+ 128 * cpu, 1, cpu);
+ } while (rseq_unlikely(ret));
+ }
+
+ if (rseq_unregister_current_thread()) {
+ fprintf(stderr, "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+ return NULL;
+}
+
+void test_membarrier_init_percpu_list(struct percpu_list *list)
+{
+ int i;
+
+ memset(list, 0, sizeof(*list));
+ for (i = 0; i < CPU_SETSIZE; i++) {
+ struct percpu_list_node *node;
+
+ node = malloc(sizeof(*node));
+ assert(node);
+ node->data = 0;
+ node->next = NULL;
+ list->c[i].head = node;
+ }
+}
+
+void test_membarrier_free_percpu_list(struct percpu_list *list)
+{
+ int i;
+
+ for (i = 0; i < CPU_SETSIZE; i++)
+ free(list->c[i].head);
+}
+
+static int sys_membarrier(int cmd, int flags, int cpu_id)
+{
+ return syscall(__NR_membarrier, cmd, flags, cpu_id);
+}
+
+/*
+ * The manager thread swaps per-cpu lists that worker threads see,
+ * and validates that there are no unexpected modifications.
+ */
+void *test_membarrier_manager_thread(void *arg)
+{
+ struct test_membarrier_thread_args *args =
+ (struct test_membarrier_thread_args *)arg;
+ struct percpu_list list_a, list_b;
+ intptr_t expect_a = 0, expect_b = 0;
+ int cpu_a = 0, cpu_b = 0;
+
+ if (rseq_register_current_thread()) {
+ fprintf(stderr, "Error: rseq_register_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+
+ /* Init lists. */
+ test_membarrier_init_percpu_list(&list_a);
+ test_membarrier_init_percpu_list(&list_b);
+
+ atomic_store(&args->percpu_list_ptr, (intptr_t)&list_a);
+
+ while (!atomic_load(&args->stop)) {
+ /* list_a is "active". */
+ cpu_a = rand() % CPU_SETSIZE;
+ /*
+ * As list_b is "inactive", we should never see changes
+ * to list_b.
+ */
+ if (expect_b != atomic_load(&list_b.c[cpu_b].head->data)) {
+ fprintf(stderr, "Membarrier test failed\n");
+ abort();
+ }
+
+ /* Make list_b "active". */
+ atomic_store(&args->percpu_list_ptr, (intptr_t)&list_b);
+ sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
+ MEMBARRIER_CMD_FLAG_CPU, cpu_a);
+ /*
+ * CPU A should now only modify list_b, so the values
+ * in list_a should be stable.
+ */
+ expect_a = atomic_load(&list_a.c[cpu_a].head->data);
+
+ cpu_b = rand() % CPU_SETSIZE;
+ /*
+ * As list_a is "inactive", we should never see changes
+ * to list_a.
+ */
+ if (expect_a != atomic_load(&list_a.c[cpu_a].head->data)) {
+ fprintf(stderr, "Membarrier test failed\n");
+ abort();
+ }
+
+ /* Make list_a "active". */
+ atomic_store(&args->percpu_list_ptr, (intptr_t)&list_a);
+ sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
+ MEMBARRIER_CMD_FLAG_CPU, cpu_b);
+ /* Remember a value from list_b. */
+ expect_b = atomic_load(&list_b.c[cpu_b].head->data);
+ }
+
+ test_membarrier_free_percpu_list(&list_a);
+ test_membarrier_free_percpu_list(&list_b);
+
+ if (rseq_unregister_current_thread()) {
+ fprintf(stderr, "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+ return NULL;
+}
+
+/* Test the MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ membarrier command. */
+void test_membarrier(void)
+{
+#ifndef RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+ fprintf(stderr, "rseq_offset_deref_addv is not implemented on this architecture. "
+ "Skipping membarrier test.\n");
+ return;
+#else
+ struct test_membarrier_thread_args thread_args;
+ pthread_t worker_threads[CPU_SETSIZE];
+ pthread_t manager_thread;
+ int i;
+
+ sys_membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ, 0, 0);
+
+ thread_args.stop = 0;
+ thread_args.percpu_list_ptr = 0;
+ pthread_create(&manager_thread, NULL,
+ test_membarrier_manager_thread, &thread_args);
+
+ for (i = 0; i < CPU_SETSIZE; i++)
+ pthread_create(&worker_threads[i], NULL,
+ test_membarrier_worker_thread, &thread_args);
+
+ for (i = 0; i < CPU_SETSIZE; i++)
+ pthread_join(worker_threads[i], NULL);
+
+ atomic_store(&thread_args.stop, 1);
+ pthread_join(manager_thread, NULL);
+#endif
+}
+
int main(int argc, char **argv)
{
if (rseq_register_current_thread()) {
@@ -300,6 +485,8 @@ int main(int argc, char **argv)
test_percpu_spinlock();
printf("percpu_list\n");
test_percpu_list();
+ printf("membarrier\n");
+ test_membarrier();
if (rseq_unregister_current_thread()) {
fprintf(stderr, "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
errno, strerror(errno));
diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h
index b2da6004fe30..640411518e46 100644
--- a/tools/testing/selftests/rseq/rseq-x86.h
+++ b/tools/testing/selftests/rseq/rseq-x86.h
@@ -279,6 +279,63 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu)
#endif
}
+#define RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+
+/*
+ * pval = *(*ptr + off);
+ * *pval += inc;
+ */
+static inline __attribute__((always_inline))
+int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1])
+#endif
+ /* get p = *ptr + off */
+ "movq %[ptr], %%rbx\n\t"
+ "addq %[off], %%rbx\n\t"
+ /* get pval = *p */
+ "movq (%%rbx), %%rcx\n\t"
+ /* *pval += inc */
+ "addq %[inc], (%%rcx)\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_abi] "r" (&__rseq_abi),
+ /* final store input */
+ [ptr] "m" (*ptr),
+ [off] "er" (off),
+ [inc] "er" (inc)
+ : "memory", "cc", "rax", "rbx", "rcx"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
static inline __attribute__((always_inline))
int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
intptr_t *v2, intptr_t newv2,
--
2.28.0.402.g5ffc5be6b7-goog
* Re: [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
2020-08-31 22:58 [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ Peter Oskolkov
2020-08-31 22:58 ` [PATCH 2/2 v6] rseq/selftests: test MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ Peter Oskolkov
@ 2020-09-15 6:12 ` Peter Oskolkov
2020-09-15 15:39 ` Mathieu Desnoyers
1 sibling, 1 reply; 5+ messages in thread
From: Peter Oskolkov @ 2020-09-15 6:12 UTC (permalink / raw)
To: Peter Oskolkov
Cc: Mathieu Desnoyers, Paul E . McKenney, Peter Zijlstra, Boqun Feng,
Linux Kernel Mailing List, Paul Turner, Chris Kennelly
Any comments here? Should I change anything?
Thanks,
Peter
On Mon, Aug 31, 2020 at 3:58 PM Peter Oskolkov <posk@google.com> wrote:
> [ full v6 patch quoted verbatim; snipped here as an exact duplicate of the patch message above ]
* Re: [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
2020-09-15 6:12 ` [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ Peter Oskolkov
@ 2020-09-15 15:39 ` Mathieu Desnoyers
2020-09-15 18:57 ` Peter Oskolkov
0 siblings, 1 reply; 5+ messages in thread
From: Mathieu Desnoyers @ 2020-09-15 15:39 UTC (permalink / raw)
To: Peter Oskolkov
Cc: Peter Oskolkov, paulmck, Peter Zijlstra, Boqun Feng,
linux-kernel, Paul Turner, Chris Kennelly
----- On Sep 15, 2020, at 2:12 AM, Peter Oskolkov posk@posk.io wrote:
> Any comments here? Should I change anything?
See below,
>> diff --git a/include/uapi/linux/membarrier.h b/include/uapi/linux/membarrier.h
>> index 5891d7614c8c..98c2b0e7c0d8 100644
>> --- a/include/uapi/linux/membarrier.h
>> +++ b/include/uapi/linux/membarrier.h
>> @@ -114,6 +114,29 @@
>> * If this command is not implemented by an
>> * architecture, -EINVAL is returned.
>> * Returns 0 on success.
>> + * @MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
>> + * In addition to providing the memory ordering
>> + * guarantees described in
>> + * MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE,
^ is the above still true? I see from the code that the rseq membarrier
only issues rseq_preempt, not any (documented) explicit core serialization
or memory barrier.
>> + * upon return from system call the caller
>> + * thread is ensured that all its running
>> + * thread siblings have any currently running
>> + * rseq critical sections restarted if @flags
>> + * parameter is 0; if @flags parameter is
>> + * MEMBARRIER_CMD_FLAG_CPU, then this operation
>> + * is performed only on the CPU indicated by
>> + * @cpu_id. If this command is not implemented
>> + * by an architecture, -EINVAL is returned.
>> + * A process needs to register its intent to
>> + * use the private expedited rseq command prior
>> + * to using it, otherwise this command returns
>> + * -EPERM.
>> + * @MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ:
>> + * Register the process intent to use
>> + * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ.
>> + * If this command is not implemented by an
>> + * architecture, -EINVAL is returned.
>> + * Returns 0 on success.
>> * @MEMBARRIER_CMD_SHARED:
>> * Alias to MEMBARRIER_CMD_GLOBAL. Provided for
>> * header backward compatibility.
>> @@ -131,9 +154,15 @@ enum membarrier_cmd {
>> MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED = (1 << 4),
>> MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE = (1 << 5),
>> MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE = (1 << 6),
>> + MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ = (1 << 7),
>> + MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ = (1 << 8),
>>
>> /* Alias for header backward compatibility. */
>> MEMBARRIER_CMD_SHARED = MEMBARRIER_CMD_GLOBAL,
>> };
>>
>> +enum membarrier_cmd_flag {
>> + MEMBARRIER_CMD_FLAG_CPU = (1 << 0),
>> +};
>> +
>> #endif /* _UAPI_LINUX_MEMBARRIER_H */
>> diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
>> index 168479a7d61b..e32e9476ccf3 100644
>> --- a/kernel/sched/membarrier.c
>> +++ b/kernel/sched/membarrier.c
>> @@ -18,6 +18,14 @@
>> #define MEMBARRIER_PRIVATE_EXPEDITED_SYNC_CORE_BITMASK 0
>> #endif
>>
>> +#ifdef CONFIG_RSEQ
>> +#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK \
>> + (MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ \
>> + | MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ)
>> +#else
>> +#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ_BITMASK 0
>> +#endif
>> +
>> #define MEMBARRIER_CMD_BITMASK \
>> (MEMBARRIER_CMD_GLOBAL | MEMBARRIER_CMD_GLOBAL_EXPEDITED \
>> | MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED \
>> @@ -30,6 +38,13 @@ static void ipi_mb(void *info)
>> smp_mb(); /* IPIs should be serializing but paranoid. */
>> }
>>
>> +#ifdef CONFIG_RSEQ
>> +static void ipi_rseq(void *info)
>> +{
>> + rseq_preempt(current);
>> +}
>> +#endif
>> +
>> static void ipi_sync_rq_state(void *info)
>> {
>> struct mm_struct *mm = (struct mm_struct *) info;
>> @@ -129,19 +144,29 @@ static int membarrier_global_expedited(void)
>> return 0;
>> }
>>
>> -static int membarrier_private_expedited(int flags)
>> +static int membarrier_private_expedited(int flags, int cpu_id)
>> {
>> - int cpu;
>> cpumask_var_t tmpmask;
>> struct mm_struct *mm = current->mm;
>> + smp_call_func_t ipi_func = ipi_mb;
>>
>> - if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
>> + if (flags == MEMBARRIER_FLAG_SYNC_CORE) {
>> if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
>> return -EINVAL;
>> if (!(atomic_read(&mm->membarrier_state) &
>> MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY))
>> return -EPERM;
>> + } else if (flags == MEMBARRIER_FLAG_RSEQ) {
>> +#ifdef CONFIG_RSEQ
>> + if (!(atomic_read(&mm->membarrier_state) &
>> + MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY))
>> + return -EPERM;
>> + ipi_func = ipi_rseq;
>> +#else
>> + return -EINVAL;
>> +#endif
I'm allergic to mixing code logic and preprocessor logic. Anything against
the following changes?
(above)
static void ipi_rseq(void *info)
{
rseq_preempt(current);
}
^ without the #ifdef CONFIG_RSEQ (rseq_preempt is already defined as a no-op when
CONFIG_RSEQ=n)
and within membarrier_private_expedited:
if (!IS_ENABLED(CONFIG_RSEQ))
return -EINVAL;
if (!(atomic_read(&mm->membarrier_state) &
MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY))
return -EPERM;
ipi_func = ipi_rseq;
[...]
>> @@ -310,8 +368,15 @@ static int membarrier_register_private_expedited(int flags)
>>
>> /**
>> * sys_membarrier - issue memory barriers on a set of threads
>> - * @cmd: Takes command values defined in enum membarrier_cmd.
>> - * @flags: Currently needs to be 0. For future extensions.
>> + * @cmd: Takes command values defined in enum membarrier_cmd.
>> + * @flags: Currently needs to be 0 for all commands other than
>> + * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ: in the latter
>> + * case it can be MEMBARRIER_CMD_FLAG_CPU, indicating that @cpu_id
>> + * contains the CPU on which to interrupt (= restart)
>> + * the RSEQ critical section.
>> + * @cpu_id: if @flags == MEMBARRIER_CMD_FLAG_CPU, indicates the cpu on which
>> + * RSEQ CS should be interrupted (@cmd must be
>> + * MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ).
>> *
>> * If this system call is not implemented, -ENOSYS is returned. If the
>> * command specified does not exist, not available on the running
>> @@ -337,10 +402,21 @@ static int membarrier_register_private_expedited(int
>> flags)
>> * smp_mb() X O O
>> * sys_membarrier() O O O
>> */
>> -SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
>> +SYSCALL_DEFINE3(membarrier, int, cmd, int, flags, int, cpu_id)
Now that we have the first use of "flags", it would be a good time to change
"int flags" to "unsigned int flags", which is the preferred way to express
system call flags parameters.
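I.e., something like this (sketch only):

asmlinkage long sys_membarrier(int cmd, unsigned int flags, int cpu_id);

SYSCALL_DEFINE3(membarrier, int, cmd, unsigned int, flags, int, cpu_id)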
Thanks,
Mathieu
>> {
>> - if (unlikely(flags))
>> - return -EINVAL;
>> + switch (cmd) {
>> + case MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
>> + if (unlikely(flags && flags != MEMBARRIER_CMD_FLAG_CPU))
>> + return -EINVAL;
>> + break;
>> + default:
>> + if (unlikely(flags))
>> + return -EINVAL;
>> + }
>> +
>> + if (!(flags & MEMBARRIER_CMD_FLAG_CPU))
>> + cpu_id = -1;
>> +
>> switch (cmd) {
>> case MEMBARRIER_CMD_QUERY:
>> {
>> @@ -362,13 +438,17 @@ SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
>> case MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED:
>> return membarrier_register_global_expedited();
>> case MEMBARRIER_CMD_PRIVATE_EXPEDITED:
>> - return membarrier_private_expedited(0);
>> + return membarrier_private_expedited(0, cpu_id);
>> case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED:
>> return membarrier_register_private_expedited(0);
>> case MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE:
>> - return membarrier_private_expedited(MEMBARRIER_FLAG_SYNC_CORE);
>> + return membarrier_private_expedited(MEMBARRIER_FLAG_SYNC_CORE,
>> cpu_id);
>> case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE:
>> return membarrier_register_private_expedited(MEMBARRIER_FLAG_SYNC_CORE);
>> + case MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
>> + return membarrier_private_expedited(MEMBARRIER_FLAG_RSEQ,
>> cpu_id);
>> + case MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ:
>> + return
>> membarrier_register_private_expedited(MEMBARRIER_FLAG_RSEQ);
>> default:
>> return -EINVAL;
>> }
>> --
>> 2.28.0.402.g5ffc5be6b7-goog
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
* Re: [PATCH 1/2 v6] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
2020-09-15 15:39 ` Mathieu Desnoyers
@ 2020-09-15 18:57 ` Peter Oskolkov
0 siblings, 0 replies; 5+ messages in thread
From: Peter Oskolkov @ 2020-09-15 18:57 UTC (permalink / raw)
To: Mathieu Desnoyers
Cc: Peter Oskolkov, paulmck, Peter Zijlstra, Boqun Feng,
linux-kernel, Paul Turner, Chris Kennelly
On Tue, Sep 15, 2020 at 8:39 AM Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
>
> ----- On Sep 15, 2020, at 2:12 AM, Peter Oskolkov posk@posk.io wrote:
>
> > Any comments here? Should I change anything?
>
> See below,
Thanks for the review - all done in V7. Please have a look.
Thanks,
Peter
[...]