linux-kernel.vger.kernel.org archive mirror
* [PATCH -rcu 1/2] kcsan: Avoid nested contexts reading inconsistent reorder_access
@ 2021-12-06  6:41 Marco Elver
  2021-12-06  6:41 ` [PATCH -rcu 2/2] kcsan: Only test clear_bit_unlock_is_negative_byte if arch defines it Marco Elver
  2021-12-06 19:54 ` [PATCH -rcu 1/2] kcsan: Avoid nested contexts reading inconsistent reorder_access Paul E. McKenney
  0 siblings, 2 replies; 3+ messages in thread
From: Marco Elver @ 2021-12-06  6:41 UTC (permalink / raw)
  To: elver, Paul E. McKenney; +Cc: kasan-dev, linux-kernel

Nested contexts, such as nested interrupts or scheduler code, share the
same kcsan_ctx. When such a nested context reads an inconsistent
reorder_access due to an interrupt during set_reorder_access(), we can
observe the following warning:

 | ------------[ cut here ]------------
 | Cannot find frame for torture_random kernel/torture.c:456 in stack trace
 | WARNING: CPU: 13 PID: 147 at kernel/kcsan/report.c:343 replace_stack_entry kernel/kcsan/report.c:343
 | ...
 | Call Trace:
 |  <TASK>
 |  sanitize_stack_entries kernel/kcsan/report.c:351 [inline]
 |  print_report kernel/kcsan/report.c:409
 |  kcsan_report_known_origin kernel/kcsan/report.c:693
 |  kcsan_setup_watchpoint kernel/kcsan/core.c:658
 |  rcutorture_one_extend kernel/rcu/rcutorture.c:1475
 |  rcutorture_loop_extend kernel/rcu/rcutorture.c:1558 [inline]
 |  ...
 |  </TASK>
 | ---[ end trace ee5299cb933115f5 ]---
 | ==================================================================
 | BUG: KCSAN: data-race in _raw_spin_lock_irqsave / rcutorture_one_extend
 |
 | write (reordered) to 0xffffffff8c93b300 of 8 bytes by task 154 on cpu 12:
 |  queued_spin_lock                include/asm-generic/qspinlock.h:80 [inline]
 |  do_raw_spin_lock                include/linux/spinlock.h:185 [inline]
 |  __raw_spin_lock_irqsave         include/linux/spinlock_api_smp.h:111 [inline]
 |  _raw_spin_lock_irqsave          kernel/locking/spinlock.c:162
 |  try_to_wake_up                  kernel/sched/core.c:4003
 |  sysvec_apic_timer_interrupt     arch/x86/kernel/apic/apic.c:1097
 |  asm_sysvec_apic_timer_interrupt arch/x86/include/asm/idtentry.h:638
 |  set_reorder_access              kernel/kcsan/core.c:416 [inline]    <-- inconsistent reorder_access
 |  kcsan_setup_watchpoint          kernel/kcsan/core.c:693
 |  rcutorture_one_extend           kernel/rcu/rcutorture.c:1475
 |  rcutorture_loop_extend          kernel/rcu/rcutorture.c:1558 [inline]
 |  rcu_torture_one_read            kernel/rcu/rcutorture.c:1600
 |  rcu_torture_reader              kernel/rcu/rcutorture.c:1692
 |  kthread                         kernel/kthread.c:327
 |  ret_from_fork                   arch/x86/entry/entry_64.S:295
 |
 | read to 0xffffffff8c93b300 of 8 bytes by task 147 on cpu 13:
 |  rcutorture_one_extend           kernel/rcu/rcutorture.c:1475
 |  rcutorture_loop_extend          kernel/rcu/rcutorture.c:1558 [inline]
 |  ...

The warning tells us that KCSAN detected a data race it wants to report, but
the function in which the original (now reordered) access occurred cannot be
found in the stack trace, which prevents KCSAN from generating the correct
report. The stack trace of "write (reordered)" only shows where the access
was reordered to, but should instead show the stack trace of the original
write, with a final line saying "reordered to".

At the point where set_reorder_access() is interrupted, it has just set
reorder_access->ptr and reorder_access->size, so size is now non-zero. If
ctx->disable_scoped is zero, this is sufficient for subsequent accesses from
nested contexts to perform checking against this reorder_access.

That then happened in _raw_spin_lock_irqsave(), which is called by
scheduler code. However, since reorder_access->ip is still stale (ptr
and size belong to a different ip not yet set) this finally leads to
replace_stack_entry() not finding the frame in reorder_access->ip and
generating the above warning.

Fix it by ensuring that a nested context cannot access reorder_access
while we update it in set_reorder_access(): set ctx->disable_scoped for
the duration of the update, which effectively locks reorder_access and
prevents concurrent use by nested contexts. Note, set_reorder_access()
can do the update only if disable_scoped is zero on entry, and must
therefore set disable_scoped back to zero after the update.

Signed-off-by: Marco Elver <elver@google.com>
---
 kernel/kcsan/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 916060913966..fe12dfe254ec 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -412,11 +412,20 @@ set_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr, size_t size,
 	if (!reorder_access || !kcsan_weak_memory)
 		return;
 
+	/*
+	 * To avoid nested interrupts or scheduler (which share kcsan_ctx)
+	 * reading an inconsistent reorder_access, ensure that the below has
+	 * exclusive access to reorder_access by disallowing concurrent use.
+	 */
+	ctx->disable_scoped++;
+	barrier();
 	reorder_access->ptr		= ptr;
 	reorder_access->size		= size;
 	reorder_access->type		= type | KCSAN_ACCESS_SCOPED;
 	reorder_access->ip		= ip;
 	reorder_access->stack_depth	= get_kcsan_stack_depth();
+	barrier();
+	ctx->disable_scoped--;
 }
 
 /*
-- 
2.34.1.400.ga245620fadb-goog



* [PATCH -rcu 2/2] kcsan: Only test clear_bit_unlock_is_negative_byte if arch defines it
  2021-12-06  6:41 [PATCH -rcu 1/2] kcsan: Avoid nested contexts reading inconsistent reorder_access Marco Elver
@ 2021-12-06  6:41 ` Marco Elver
  2021-12-06 19:54 ` [PATCH -rcu 1/2] kcsan: Avoid nested contexts reading inconsistent reorder_access Paul E. McKenney
  1 sibling, 0 replies; 3+ messages in thread
From: Marco Elver @ 2021-12-06  6:41 UTC (permalink / raw)
  To: elver, Paul E. McKenney; +Cc: kasan-dev, linux-kernel, kernel test robot

Some architectures do not define clear_bit_unlock_is_negative_byte().
Only test it when it is actually defined (similar to other usage, such
as in lib/test_kasan.c).

Link: https://lkml.kernel.org/r/202112050757.x67rHnFU-lkp@intel.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Marco Elver <elver@google.com>
---
 kernel/kcsan/kcsan_test.c | 8 +++++---
 kernel/kcsan/selftest.c   | 8 +++++---
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c
index 2bad0820f73a..a36fca063a73 100644
--- a/kernel/kcsan/kcsan_test.c
+++ b/kernel/kcsan/kcsan_test.c
@@ -598,7 +598,6 @@ static void test_barrier_nothreads(struct kunit *test)
 	KCSAN_EXPECT_READ_BARRIER(test_and_change_bit(0, &test_var), true);
 	KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock(0, &test_var), true);
 	KCSAN_EXPECT_READ_BARRIER(__clear_bit_unlock(0, &test_var), true);
-	KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
 	KCSAN_EXPECT_READ_BARRIER(arch_spin_lock(&arch_spinlock), false);
 	KCSAN_EXPECT_READ_BARRIER(arch_spin_unlock(&arch_spinlock), true);
 	KCSAN_EXPECT_READ_BARRIER(spin_lock(&test_spinlock), false);
@@ -644,7 +643,6 @@ static void test_barrier_nothreads(struct kunit *test)
 	KCSAN_EXPECT_WRITE_BARRIER(test_and_change_bit(0, &test_var), true);
 	KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock(0, &test_var), true);
 	KCSAN_EXPECT_WRITE_BARRIER(__clear_bit_unlock(0, &test_var), true);
-	KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
 	KCSAN_EXPECT_WRITE_BARRIER(arch_spin_lock(&arch_spinlock), false);
 	KCSAN_EXPECT_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock), true);
 	KCSAN_EXPECT_WRITE_BARRIER(spin_lock(&test_spinlock), false);
@@ -690,7 +688,6 @@ static void test_barrier_nothreads(struct kunit *test)
 	KCSAN_EXPECT_RW_BARRIER(test_and_change_bit(0, &test_var), true);
 	KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock(0, &test_var), true);
 	KCSAN_EXPECT_RW_BARRIER(__clear_bit_unlock(0, &test_var), true);
-	KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
 	KCSAN_EXPECT_RW_BARRIER(arch_spin_lock(&arch_spinlock), false);
 	KCSAN_EXPECT_RW_BARRIER(arch_spin_unlock(&arch_spinlock), true);
 	KCSAN_EXPECT_RW_BARRIER(spin_lock(&test_spinlock), false);
@@ -698,6 +695,11 @@ static void test_barrier_nothreads(struct kunit *test)
 	KCSAN_EXPECT_RW_BARRIER(mutex_lock(&test_mutex), false);
 	KCSAN_EXPECT_RW_BARRIER(mutex_unlock(&test_mutex), true);
 
+#ifdef clear_bit_unlock_is_negative_byte
+	KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
+	KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
+	KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
+#endif
 	kcsan_nestable_atomic_end();
 }
 
diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c
index b6d4da07d80a..75712959c84e 100644
--- a/kernel/kcsan/selftest.c
+++ b/kernel/kcsan/selftest.c
@@ -169,7 +169,6 @@ static bool __init test_barrier(void)
 	KCSAN_CHECK_READ_BARRIER(test_and_change_bit(0, &test_var));
 	KCSAN_CHECK_READ_BARRIER(clear_bit_unlock(0, &test_var));
 	KCSAN_CHECK_READ_BARRIER(__clear_bit_unlock(0, &test_var));
-	KCSAN_CHECK_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
 	arch_spin_lock(&arch_spinlock);
 	KCSAN_CHECK_READ_BARRIER(arch_spin_unlock(&arch_spinlock));
 	spin_lock(&test_spinlock);
@@ -199,7 +198,6 @@ static bool __init test_barrier(void)
 	KCSAN_CHECK_WRITE_BARRIER(test_and_change_bit(0, &test_var));
 	KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock(0, &test_var));
 	KCSAN_CHECK_WRITE_BARRIER(__clear_bit_unlock(0, &test_var));
-	KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
 	arch_spin_lock(&arch_spinlock);
 	KCSAN_CHECK_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock));
 	spin_lock(&test_spinlock);
@@ -232,12 +230,16 @@ static bool __init test_barrier(void)
 	KCSAN_CHECK_RW_BARRIER(test_and_change_bit(0, &test_var));
 	KCSAN_CHECK_RW_BARRIER(clear_bit_unlock(0, &test_var));
 	KCSAN_CHECK_RW_BARRIER(__clear_bit_unlock(0, &test_var));
-	KCSAN_CHECK_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
 	arch_spin_lock(&arch_spinlock);
 	KCSAN_CHECK_RW_BARRIER(arch_spin_unlock(&arch_spinlock));
 	spin_lock(&test_spinlock);
 	KCSAN_CHECK_RW_BARRIER(spin_unlock(&test_spinlock));
 
+#ifdef clear_bit_unlock_is_negative_byte
+	KCSAN_CHECK_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
+	KCSAN_CHECK_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
+	KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
+#endif
 	kcsan_nestable_atomic_end();
 
 	return ret;
-- 
2.34.1.400.ga245620fadb-goog



* Re: [PATCH -rcu 1/2] kcsan: Avoid nested contexts reading inconsistent reorder_access
  2021-12-06  6:41 [PATCH -rcu 1/2] kcsan: Avoid nested contexts reading inconsistent reorder_access Marco Elver
  2021-12-06  6:41 ` [PATCH -rcu 2/2] kcsan: Only test clear_bit_unlock_is_negative_byte if arch defines it Marco Elver
@ 2021-12-06 19:54 ` Paul E. McKenney
  1 sibling, 0 replies; 3+ messages in thread
From: Paul E. McKenney @ 2021-12-06 19:54 UTC (permalink / raw)
  To: Marco Elver; +Cc: kasan-dev, linux-kernel

On Mon, Dec 06, 2021 at 07:41:50AM +0100, Marco Elver wrote:
> Nested contexts, such as nested interrupts or scheduler code, share the
> same kcsan_ctx. When such a nested context reads an inconsistent
> reorder_access due to an interrupt during set_reorder_access(), we can
> observe the following warning:
> [...]
> Signed-off-by: Marco Elver <elver@google.com>

I pulled both of these in, thank you!

							Thanx, Paul


