linux-kernel.vger.kernel.org archive mirror
* [PATCH RT] rt: convert mm/kasan/quarantine_lock to raw_spinlock
@ 2018-09-18 15:29 Clark Williams
  2018-09-18 20:34 ` kbuild test robot
  2018-10-05 16:30 ` [PATCH] kasan: convert kasan/quarantine_lock " Sebastian Andrzej Siewior
  0 siblings, 2 replies; 20+ messages in thread
From: Clark Williams @ 2018-09-18 15:29 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: linux-kernel, linux-rt-users

The static lock quarantine_lock is used in mm/kasan/quarantine.c to protect
the quarantine queue data structures. It is taken inside the quarantine queue
manipulation routines (quarantine_put(), quarantine_reduce() and
quarantine_remove_cache()), with IRQs disabled. This is no problem on a stock
kernel but is problematic on an RT kernel, where spin locks become sleeping
locks based on rt_mutex and therefore cannot be acquired with interrupts
disabled.

Convert the quarantine_lock to a raw spinlock. The usage of quarantine_lock
is confined to quarantine.c and the work performed while the lock is held is limited.
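
In sketch form, the problematic pattern on RT (simplified from
quarantine_put(); not the exact upstream code):

        local_irq_save(flags);             /* hard interrupts off */
        q = this_cpu_ptr(&cpu_quarantine);
        ...
        spin_lock(&quarantine_lock);       /* on RT a sleeping lock, taken
                                            * in an IRQ-off region: triggers
                                            * a might_sleep() splat */
        ...
        spin_unlock(&quarantine_lock);
        local_irq_restore(flags);

raw_spin_lock() keeps the non-sleeping semantics on both stock and RT
kernels, at the cost of a non-preemptible critical section on RT.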

Signed-off-by: Clark Williams <williams@redhat.com>
---
 mm/kasan/quarantine.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 3a8ddf8baf7d..b209dbaefde8 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -103,7 +103,7 @@ static int quarantine_head;
 static int quarantine_tail;
 /* Total size of all objects in global_quarantine across all batches. */
 static unsigned long quarantine_size;
-static DEFINE_SPINLOCK(quarantine_lock);
+static DEFINE_RAW_SPINLOCK(quarantine_lock);
 DEFINE_STATIC_SRCU(remove_cache_srcu);
 
 /* Maximum size of the global queue. */
@@ -190,7 +190,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
 	if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
 		qlist_move_all(q, &temp);
 
-		spin_lock(&quarantine_lock);
+		raw_spin_lock(&quarantine_lock);
 		WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
 		qlist_move_all(&temp, &global_quarantine[quarantine_tail]);
 		if (global_quarantine[quarantine_tail].bytes >=
@@ -203,7 +203,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
 			if (new_tail != quarantine_head)
 				quarantine_tail = new_tail;
 		}
-		spin_unlock(&quarantine_lock);
+		raw_spin_unlock(&quarantine_lock);
 	}
 
 	local_irq_restore(flags);
@@ -230,7 +230,7 @@ void quarantine_reduce(void)
 	 * expected case).
 	 */
 	srcu_idx = srcu_read_lock(&remove_cache_srcu);
-	spin_lock_irqsave(&quarantine_lock, flags);
+	raw_spin_lock_irqsave(&quarantine_lock, flags);
 
 	/*
 	 * Update quarantine size in case of hotplug. Allocate a fraction of
@@ -254,7 +254,7 @@ void quarantine_reduce(void)
 			quarantine_head = 0;
 	}
 
-	spin_unlock_irqrestore(&quarantine_lock, flags);
+	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 
 	qlist_free_all(&to_free, NULL);
 	srcu_read_unlock(&remove_cache_srcu, srcu_idx);
@@ -310,17 +310,17 @@ void quarantine_remove_cache(struct kmem_cache *cache)
 	 */
 	on_each_cpu(per_cpu_remove_cache, cache, 1);
 
-	spin_lock_irqsave(&quarantine_lock, flags);
+	raw_spin_lock_irqsave(&quarantine_lock, flags);
 	for (i = 0; i < QUARANTINE_BATCHES; i++) {
 		if (qlist_empty(&global_quarantine[i]))
 			continue;
 		qlist_move_cache(&global_quarantine[i], &to_free, cache);
 		/* Scanning whole quarantine can take a while. */
-		spin_unlock_irqrestore(&quarantine_lock, flags);
+		raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 		cond_resched();
-		spin_lock_irqsave(&quarantine_lock, flags);
+		raw_spin_lock_irqsave(&quarantine_lock, flags);
 	}
-	spin_unlock_irqrestore(&quarantine_lock, flags);
+	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 
 	qlist_free_all(&to_free, cache);
 
-- 
2.17.1



* Re: [PATCH RT] rt: convert mm/kasan/quarantine_lock to raw_spinlock
  2018-09-18 15:29 [PATCH RT] rt: convert mm/kasan/quarantine_lock to raw_spinlock Clark Williams
@ 2018-09-18 20:34 ` kbuild test robot
  2018-10-05 16:37   ` Sebastian Andrzej Siewior
  2018-10-05 16:30 ` [PATCH] kasan: convert kasan/quarantine_lock " Sebastian Andrzej Siewior
  1 sibling, 1 reply; 20+ messages in thread
From: kbuild test robot @ 2018-09-18 20:34 UTC (permalink / raw)
  To: Clark Williams
  Cc: kbuild-all, Sebastian Andrzej Siewior, linux-kernel, linux-rt-users


Hi Clark,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linux-rt-devel/for-kbuild-bot/current-stable]

url:    https://github.com/0day-ci/linux/commits/Clark-Williams/rt-convert-mm-kasan-quarantine_lock-to-raw_spinlock/20180919-021343
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git for-kbuild-bot/current-stable
config: x86_64-allmodconfig (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All warnings (new ones prefixed by >>):

   include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)
   kernel/rcu/tree.c:216:15: warning: symbol 'rcu_rnp_online_cpus' was not declared. Should it be static?
   kernel/rcu/tree.c:366:6: warning: symbol 'rcu_dynticks_curr_cpu_in_eqs' was not declared. Should it be static?
>> kernel/rcu/tree.c:2930:36: warning: incorrect type in initializer (different address spaces)
   kernel/rcu/tree.c:2930:36:    expected struct task_struct [noderef] <asn:3>**store
   kernel/rcu/tree.c:2930:36:    got struct task_struct *[noderef] <asn:3>*<noident>
   kernel/rcu/tree.c:3978:21: warning: incorrect type in argument 1 (different modifiers)
   kernel/rcu/tree.c:3978:21:    expected int ( *threadfn )( ... )
   kernel/rcu/tree.c:3978:21:    got int ( [noreturn] *<noident> )( ... )
   kernel/rcu/tree.c:1680:13: warning: context imbalance in 'rcu_start_this_gp' - different lock contexts for basic block
   kernel/rcu/tree.c:2703:9: warning: context imbalance in 'force_qs_rnp' - different lock contexts for basic block
   kernel/rcu/tree.c:2766:25: warning: context imbalance in 'force_quiescent_state' - unexpected unlock
   kernel/rcu/tree_exp.h:203:9: warning: context imbalance in '__rcu_report_exp_rnp' - different lock contexts for basic block

vim +2930 kernel/rcu/tree.c

385c3906 Paul E. McKenney 2013-11-04  2928  
385c3906 Paul E. McKenney 2013-11-04  2929  static struct smp_hotplug_thread rcu_cpu_thread_spec = {
385c3906 Paul E. McKenney 2013-11-04 @2930  	.store			= &rcu_cpu_kthread_task,
385c3906 Paul E. McKenney 2013-11-04  2931  	.thread_should_run	= rcu_cpu_kthread_should_run,
385c3906 Paul E. McKenney 2013-11-04  2932  	.thread_fn		= rcu_cpu_kthread,
385c3906 Paul E. McKenney 2013-11-04  2933  	.thread_comm		= "rcuc/%u",
385c3906 Paul E. McKenney 2013-11-04  2934  	.setup			= rcu_cpu_kthread_setup,
385c3906 Paul E. McKenney 2013-11-04  2935  	.park			= rcu_cpu_kthread_park,
385c3906 Paul E. McKenney 2013-11-04  2936  };
385c3906 Paul E. McKenney 2013-11-04  2937  

:::::: The code at line 2930 was first introduced by commit
:::::: 385c3906e2a7db036cd3185d1a2f38c842664ce0 rcu: Eliminate softirq processing from rcutree

:::::: TO: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
:::::: CC: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 64656 bytes --]


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-09-18 15:29 [PATCH RT] rt: convert mm/kasan/quarantine_lock to raw_spinlock Clark Williams
  2018-09-18 20:34 ` kbuild test robot
@ 2018-10-05 16:30 ` Sebastian Andrzej Siewior
  2018-10-05 16:33   ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 20+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-05 16:30 UTC (permalink / raw)
  To: Clark Williams, Alexander Potapenko, Dmitry Vyukov, kasan-dev, linux-mm
  Cc: linux-kernel, linux-rt-users, Peter Zijlstra, Thomas Gleixner

On 2018-09-18 10:29:31 [-0500], Clark Williams wrote:

So I received this from Clark:

> The static lock quarantine_lock is used in mm/kasan/quarantine.c to protect
> the quarantine queue data structures. It is taken inside the quarantine queue
> manipulation routines (quarantine_put(), quarantine_reduce() and
> quarantine_remove_cache()), with IRQs disabled. This is no problem on a stock
> kernel but is problematic on an RT kernel, where spin locks become sleeping
> locks based on rt_mutex and therefore cannot be acquired with interrupts
> disabled.
> 
> Convert the quarantine_lock to a raw spinlock. The usage of quarantine_lock
> is confined to quarantine.c and the work performed while the lock is held is limited.
> 
> Signed-off-by: Clark Williams <williams@redhat.com>

This is the minimum needed to get this working splat-free on RT. There is
one memory deallocation with IRQs off, which should work on RT in its
current form.
With this and the on_each_cpu() invocation in mind, I was wondering if…

> ---
>  mm/kasan/quarantine.c | 18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index 3a8ddf8baf7d..b209dbaefde8 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -103,7 +103,7 @@ static int quarantine_head;
>  static int quarantine_tail;
>  /* Total size of all objects in global_quarantine across all batches. */
>  static unsigned long quarantine_size;
> -static DEFINE_SPINLOCK(quarantine_lock);
> +static DEFINE_RAW_SPINLOCK(quarantine_lock);
>  DEFINE_STATIC_SRCU(remove_cache_srcu);
>  
>  /* Maximum size of the global queue. */
> @@ -190,7 +190,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
>  	if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
>  		qlist_move_all(q, &temp);
>  
> -		spin_lock(&quarantine_lock);
> +		raw_spin_lock(&quarantine_lock);
>  		WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
>  		qlist_move_all(&temp, &global_quarantine[quarantine_tail]);
>  		if (global_quarantine[quarantine_tail].bytes >=
> @@ -203,7 +203,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
>  			if (new_tail != quarantine_head)
>  				quarantine_tail = new_tail;
>  		}
> -		spin_unlock(&quarantine_lock);
> +		raw_spin_unlock(&quarantine_lock);
>  	}
>  
>  	local_irq_restore(flags);
> @@ -230,7 +230,7 @@ void quarantine_reduce(void)
>  	 * expected case).
>  	 */
>  	srcu_idx = srcu_read_lock(&remove_cache_srcu);
> -	spin_lock_irqsave(&quarantine_lock, flags);
> +	raw_spin_lock_irqsave(&quarantine_lock, flags);
>  
>  	/*
>  	 * Update quarantine size in case of hotplug. Allocate a fraction of
> @@ -254,7 +254,7 @@ void quarantine_reduce(void)
>  			quarantine_head = 0;
>  	}
>  
> -	spin_unlock_irqrestore(&quarantine_lock, flags);
> +	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>  
>  	qlist_free_all(&to_free, NULL);
>  	srcu_read_unlock(&remove_cache_srcu, srcu_idx);
> @@ -310,17 +310,17 @@ void quarantine_remove_cache(struct kmem_cache *cache)
>  	 */
>  	on_each_cpu(per_cpu_remove_cache, cache, 1);
>  
> -	spin_lock_irqsave(&quarantine_lock, flags);
> +	raw_spin_lock_irqsave(&quarantine_lock, flags);
>  	for (i = 0; i < QUARANTINE_BATCHES; i++) {
>  		if (qlist_empty(&global_quarantine[i]))
>  			continue;
>  		qlist_move_cache(&global_quarantine[i], &to_free, cache);
>  		/* Scanning whole quarantine can take a while. */
> -		spin_unlock_irqrestore(&quarantine_lock, flags);
> +		raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>  		cond_resched();
> -		spin_lock_irqsave(&quarantine_lock, flags);
> +		raw_spin_lock_irqsave(&quarantine_lock, flags);
>  	}
> -	spin_unlock_irqrestore(&quarantine_lock, flags);
> +	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>  
>  	qlist_free_all(&to_free, cache);
>  
> -- 
> 2.17.1
> 

Sebastian


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-05 16:30 ` [PATCH] kasan: convert kasan/quarantine_lock " Sebastian Andrzej Siewior
@ 2018-10-05 16:33   ` Sebastian Andrzej Siewior
  2018-10-08  0:47     ` Clark Williams
  2018-10-08  9:15     ` Dmitry Vyukov
  0 siblings, 2 replies; 20+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-05 16:33 UTC (permalink / raw)
  To: Clark Williams, Alexander Potapenko, Dmitry Vyukov, kasan-dev, linux-mm
  Cc: linux-kernel, linux-rt-users, Peter Zijlstra, Thomas Gleixner

On 2018-10-05 18:30:18 [+0200], To Clark Williams wrote:
> This is the minimum needed to get this working splat-free on RT. There is
> one memory deallocation with IRQs off, which should work on RT in its
> current form.
> With this and the on_each_cpu() invocation in mind, I was wondering if…

the patch at the bottom wouldn't work just fine for everyone.
It would have the beauty of annotating the locking scope a little and of
avoiding the on_each_cpu() invocation: no local_irq_save(), but the proper
locking primitives instead.
I haven't fully decoded the srcu part of the code.

Wouldn't that work for you?

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 mm/kasan/quarantine.c | 45 +++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 19 deletions(-)

diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 3a8ddf8baf7dc..8ed595960e3c1 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -39,12 +39,13 @@
  * objects inside of it.
  */
 struct qlist_head {
+	spinlock_t	lock;
 	struct qlist_node *head;
 	struct qlist_node *tail;
 	size_t bytes;
 };
 
-#define QLIST_INIT { NULL, NULL, 0 }
+#define QLIST_INIT {.head = NULL, .tail = NULL, .bytes = 0 }
 
 static bool qlist_empty(struct qlist_head *q)
 {
@@ -95,7 +96,9 @@ static void qlist_move_all(struct qlist_head *from, struct qlist_head *to)
  * The object quarantine consists of per-cpu queues and a global queue,
  * guarded by quarantine_lock.
  */
-static DEFINE_PER_CPU(struct qlist_head, cpu_quarantine);
+static DEFINE_PER_CPU(struct qlist_head, cpu_quarantine) = {
+	.lock = __SPIN_LOCK_UNLOCKED(cpu_quarantine.lock),
+};
 
 /* Round-robin FIFO array of batches. */
 static struct qlist_head global_quarantine[QUARANTINE_BATCHES];
@@ -183,12 +186,13 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
 	 * beginning which ensures that it either sees the objects in per-cpu
 	 * lists or in the global quarantine.
 	 */
-	local_irq_save(flags);
+	q = raw_cpu_ptr(&cpu_quarantine);
+	spin_lock_irqsave(&q->lock, flags);
 
-	q = this_cpu_ptr(&cpu_quarantine);
 	qlist_put(q, &info->quarantine_link, cache->size);
 	if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
 		qlist_move_all(q, &temp);
+		spin_unlock(&q->lock);
 
 		spin_lock(&quarantine_lock);
 		WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
@@ -203,10 +207,10 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
 			if (new_tail != quarantine_head)
 				quarantine_tail = new_tail;
 		}
-		spin_unlock(&quarantine_lock);
+		spin_unlock_irqrestore(&quarantine_lock, flags);
+	} else {
+		spin_unlock_irqrestore(&q->lock, flags);
 	}
-
-	local_irq_restore(flags);
 }
 
 void quarantine_reduce(void)
@@ -284,21 +288,11 @@ static void qlist_move_cache(struct qlist_head *from,
 	}
 }
 
-static void per_cpu_remove_cache(void *arg)
-{
-	struct kmem_cache *cache = arg;
-	struct qlist_head to_free = QLIST_INIT;
-	struct qlist_head *q;
-
-	q = this_cpu_ptr(&cpu_quarantine);
-	qlist_move_cache(q, &to_free, cache);
-	qlist_free_all(&to_free, cache);
-}
-
 /* Free all quarantined objects belonging to cache. */
 void quarantine_remove_cache(struct kmem_cache *cache)
 {
 	unsigned long flags, i;
+	unsigned int cpu;
 	struct qlist_head to_free = QLIST_INIT;
 
 	/*
@@ -308,7 +302,20 @@ void quarantine_remove_cache(struct kmem_cache *cache)
 	 * achieves the first goal, while synchronize_srcu() achieves the
 	 * second.
 	 */
-	on_each_cpu(per_cpu_remove_cache, cache, 1);
+	/* get_online_cpus() invoked by caller */
+	for_each_online_cpu(cpu) {
+		struct qlist_head *q;
+		unsigned long flags;
+		struct qlist_head to_free = QLIST_INIT;
+
+		q = per_cpu_ptr(&cpu_quarantine, cpu);
+		spin_lock_irqsave(&q->lock, flags);
+		qlist_move_cache(q, &to_free, cache);
+		spin_unlock_irqrestore(&q->lock, flags);
+
+		qlist_free_all(&to_free, cache);
+
+	}
 
 	spin_lock_irqsave(&quarantine_lock, flags);
 	for (i = 0; i < QUARANTINE_BATCHES; i++) {
-- 
2.19.0



* Re: [PATCH RT] rt: convert mm/kasan/quarantine_lock to raw_spinlock
  2018-09-18 20:34 ` kbuild test robot
@ 2018-10-05 16:37   ` Sebastian Andrzej Siewior
  2018-10-18  9:04     ` [kbuild-all] " Rong Chen
  0 siblings, 1 reply; 20+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-05 16:37 UTC (permalink / raw)
  To: kbuild test robot
  Cc: Clark Williams, kbuild-all, linux-kernel, linux-rt-users

On 2018-09-19 04:34:50 [+0800], kbuild test robot wrote:
> Hi Clark,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on linux-rt-devel/for-kbuild-bot/current-stable]
> 
> url:    https://github.com/0day-ci/linux/commits/Clark-Williams/rt-convert-mm-kasan-quarantine_lock-to-raw_spinlock/20180919-021343
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git for-kbuild-bot/current-stable
> config: x86_64-allmodconfig (attached as .config)
> compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=x86_64 
> 
> All warnings (new ones prefixed by >>):

How is this related? Could someone please explain this?

Sebastian


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-05 16:33   ` Sebastian Andrzej Siewior
@ 2018-10-08  0:47     ` Clark Williams
  2018-10-08  9:15     ` Dmitry Vyukov
  1 sibling, 0 replies; 20+ messages in thread
From: Clark Williams @ 2018-10-08  0:47 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Alexander Potapenko, Dmitry Vyukov, kasan-dev, linux-mm,
	linux-kernel, linux-rt-users, Peter Zijlstra, Thomas Gleixner

I applied this patch to a fairly stock 4.18-rt5 kernel and booted with no complaints, then
ran rteval for 12h with no stack splats reported. I'll keep banging on it but preliminary
reports look good. 

Clark

On Fri, 5 Oct 2018 18:33:20 +0200
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> On 2018-10-05 18:30:18 [+0200], To Clark Williams wrote:
> > This is the minimum needed to get this working splat-free on RT. There is
> > one memory deallocation with IRQs off, which should work on RT in its
> > current form.
> > With this and the on_each_cpu() invocation in mind, I was wondering if…
> 
> the patch at the bottom wouldn't work just fine for everyone.
> It would have the beauty of annotating the locking scope a little and of
> avoiding the on_each_cpu() invocation: no local_irq_save(), but the proper
> locking primitives instead.
> I haven't fully decoded the srcu part of the code.
> 
> Wouldn't that work for you?
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  mm/kasan/quarantine.c | 45 +++++++++++++++++++++++++------------------
>  1 file changed, 26 insertions(+), 19 deletions(-)
> 
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index 3a8ddf8baf7dc..8ed595960e3c1 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -39,12 +39,13 @@
>   * objects inside of it.
>   */
>  struct qlist_head {
> +	spinlock_t	lock;
>  	struct qlist_node *head;
>  	struct qlist_node *tail;
>  	size_t bytes;
>  };
>  
> -#define QLIST_INIT { NULL, NULL, 0 }
> +#define QLIST_INIT {.head = NULL, .tail = NULL, .bytes = 0 }
>  
>  static bool qlist_empty(struct qlist_head *q)
>  {
> @@ -95,7 +96,9 @@ static void qlist_move_all(struct qlist_head *from, struct qlist_head *to)
>   * The object quarantine consists of per-cpu queues and a global queue,
>   * guarded by quarantine_lock.
>   */
> -static DEFINE_PER_CPU(struct qlist_head, cpu_quarantine);
> +static DEFINE_PER_CPU(struct qlist_head, cpu_quarantine) = {
> +	.lock = __SPIN_LOCK_UNLOCKED(cpu_quarantine.lock),
> +};
>  
>  /* Round-robin FIFO array of batches. */
>  static struct qlist_head global_quarantine[QUARANTINE_BATCHES];
> @@ -183,12 +186,13 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
>  	 * beginning which ensures that it either sees the objects in per-cpu
>  	 * lists or in the global quarantine.
>  	 */
> -	local_irq_save(flags);
> +	q = raw_cpu_ptr(&cpu_quarantine);
> +	spin_lock_irqsave(&q->lock, flags);
>  
> -	q = this_cpu_ptr(&cpu_quarantine);
>  	qlist_put(q, &info->quarantine_link, cache->size);
>  	if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
>  		qlist_move_all(q, &temp);
> +		spin_unlock(&q->lock);
>  
>  		spin_lock(&quarantine_lock);
>  		WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
> @@ -203,10 +207,10 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
>  			if (new_tail != quarantine_head)
>  				quarantine_tail = new_tail;
>  		}
> -		spin_unlock(&quarantine_lock);
> +		spin_unlock_irqrestore(&quarantine_lock, flags);
> +	} else {
> +		spin_unlock_irqrestore(&q->lock, flags);
>  	}
> -
> -	local_irq_restore(flags);
>  }
>  
>  void quarantine_reduce(void)
> @@ -284,21 +288,11 @@ static void qlist_move_cache(struct qlist_head *from,
>  	}
>  }
>  
> -static void per_cpu_remove_cache(void *arg)
> -{
> -	struct kmem_cache *cache = arg;
> -	struct qlist_head to_free = QLIST_INIT;
> -	struct qlist_head *q;
> -
> -	q = this_cpu_ptr(&cpu_quarantine);
> -	qlist_move_cache(q, &to_free, cache);
> -	qlist_free_all(&to_free, cache);
> -}
> -
>  /* Free all quarantined objects belonging to cache. */
>  void quarantine_remove_cache(struct kmem_cache *cache)
>  {
>  	unsigned long flags, i;
> +	unsigned int cpu;
>  	struct qlist_head to_free = QLIST_INIT;
>  
>  	/*
> @@ -308,7 +302,20 @@ void quarantine_remove_cache(struct kmem_cache *cache)
>  	 * achieves the first goal, while synchronize_srcu() achieves the
>  	 * second.
>  	 */
> -	on_each_cpu(per_cpu_remove_cache, cache, 1);
> +	/* get_online_cpus() invoked by caller */
> +	for_each_online_cpu(cpu) {
> +		struct qlist_head *q;
> +		unsigned long flags;
> +		struct qlist_head to_free = QLIST_INIT;
> +
> +		q = per_cpu_ptr(&cpu_quarantine, cpu);
> +		spin_lock_irqsave(&q->lock, flags);
> +		qlist_move_cache(q, &to_free, cache);
> +		spin_unlock_irqrestore(&q->lock, flags);
> +
> +		qlist_free_all(&to_free, cache);
> +
> +	}
>  
>  	spin_lock_irqsave(&quarantine_lock, flags);
>  	for (i = 0; i < QUARANTINE_BATCHES; i++) {
> -- 
> 2.19.0
> 


-- 
The United States Coast Guard
Ruining Natural Selection since 1790


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-05 16:33   ` Sebastian Andrzej Siewior
  2018-10-08  0:47     ` Clark Williams
@ 2018-10-08  9:15     ` Dmitry Vyukov
  2018-10-09 14:27       ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 20+ messages in thread
From: Dmitry Vyukov @ 2018-10-08  9:15 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner

On Fri, Oct 5, 2018 at 6:33 PM, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
> On 2018-10-05 18:30:18 [+0200], To Clark Williams wrote:
>> This is the minimum needed to get this working splat-free on RT. There is
>> one memory deallocation with IRQs off, which should work on RT in its
>> current form.
>> With this and the on_each_cpu() invocation in mind, I was wondering if…
>
> the patch at the bottom wouldn't work just fine for everyone.
> It would have the beauty of annotating the locking scope a little and of
> avoiding the on_each_cpu() invocation: no local_irq_save(), but the proper
> locking primitives instead.
> I haven't fully decoded the srcu part of the code.
>
> Wouldn't that work for you?

Hi Sebastian,

This seems to break quarantine_remove_cache() in the sense that some
object from the cache may still be in the quarantine when
quarantine_remove_cache() returns. When quarantine_remove_cache()
returns, all objects from the cache must have been purged from the
quarantine. That srcu and irq trickery is there for a reason.
This code is also on the hot path of kmalloc/kfree, and an additional
lock/unlock per operation is expensive. Adding 2 locked RMW operations
per kmalloc is not something that should be done merely for refactoring
reasons.
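
For illustration, roughly where the extra locking would land on the free
path (a sketch of the call chain, not exact code):

        kfree(object)
          -> kasan_slab_free()
            -> quarantine_put()
                 /* before: local_irq_save()/local_irq_restore() only */
                 /* after:  spin_lock_irqsave(&q->lock, flags) plus
                  *         spin_unlock_irqrestore(&q->lock, flags),
                  *         i.e. 2 locked RMW operations on every free */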

The original message from Clark mentions that the problem can be fixed
by just changing the type of the spinlock. This looks like a better and
simpler way to resolve the problem to me.


> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  mm/kasan/quarantine.c | 45 +++++++++++++++++++++++++------------------
>  1 file changed, 26 insertions(+), 19 deletions(-)
>
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index 3a8ddf8baf7dc..8ed595960e3c1 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -39,12 +39,13 @@
>   * objects inside of it.
>   */
>  struct qlist_head {
> +       spinlock_t      lock;
>         struct qlist_node *head;
>         struct qlist_node *tail;
>         size_t bytes;
>  };
>
> -#define QLIST_INIT { NULL, NULL, 0 }
> +#define QLIST_INIT {.head = NULL, .tail = NULL, .bytes = 0 }
>
>  static bool qlist_empty(struct qlist_head *q)
>  {
> @@ -95,7 +96,9 @@ static void qlist_move_all(struct qlist_head *from, struct qlist_head *to)
>   * The object quarantine consists of per-cpu queues and a global queue,
>   * guarded by quarantine_lock.
>   */
> -static DEFINE_PER_CPU(struct qlist_head, cpu_quarantine);
> +static DEFINE_PER_CPU(struct qlist_head, cpu_quarantine) = {
> +       .lock = __SPIN_LOCK_UNLOCKED(cpu_quarantine.lock),
> +};
>
>  /* Round-robin FIFO array of batches. */
>  static struct qlist_head global_quarantine[QUARANTINE_BATCHES];
> @@ -183,12 +186,13 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
>          * beginning which ensures that it either sees the objects in per-cpu
>          * lists or in the global quarantine.
>          */
> -       local_irq_save(flags);
> +       q = raw_cpu_ptr(&cpu_quarantine);
> +       spin_lock_irqsave(&q->lock, flags);
>
> -       q = this_cpu_ptr(&cpu_quarantine);
>         qlist_put(q, &info->quarantine_link, cache->size);
>         if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
>                 qlist_move_all(q, &temp);
> +               spin_unlock(&q->lock);
>
>                 spin_lock(&quarantine_lock);
>                 WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
> @@ -203,10 +207,10 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
>                         if (new_tail != quarantine_head)
>                                 quarantine_tail = new_tail;
>                 }
> -               spin_unlock(&quarantine_lock);
> +               spin_unlock_irqrestore(&quarantine_lock, flags);
> +       } else {
> +               spin_unlock_irqrestore(&q->lock, flags);
>         }
> -
> -       local_irq_restore(flags);
>  }
>
>  void quarantine_reduce(void)
> @@ -284,21 +288,11 @@ static void qlist_move_cache(struct qlist_head *from,
>         }
>  }
>
> -static void per_cpu_remove_cache(void *arg)
> -{
> -       struct kmem_cache *cache = arg;
> -       struct qlist_head to_free = QLIST_INIT;
> -       struct qlist_head *q;
> -
> -       q = this_cpu_ptr(&cpu_quarantine);
> -       qlist_move_cache(q, &to_free, cache);
> -       qlist_free_all(&to_free, cache);
> -}
> -
>  /* Free all quarantined objects belonging to cache. */
>  void quarantine_remove_cache(struct kmem_cache *cache)
>  {
>         unsigned long flags, i;
> +       unsigned int cpu;
>         struct qlist_head to_free = QLIST_INIT;
>
>         /*
> @@ -308,7 +302,20 @@ void quarantine_remove_cache(struct kmem_cache *cache)
>          * achieves the first goal, while synchronize_srcu() achieves the
>          * second.
>          */
> -       on_each_cpu(per_cpu_remove_cache, cache, 1);
> +       /* get_online_cpus() invoked by caller */
> +       for_each_online_cpu(cpu) {
> +               struct qlist_head *q;
> +               unsigned long flags;
> +               struct qlist_head to_free = QLIST_INIT;
> +
> +               q = per_cpu_ptr(&cpu_quarantine, cpu);
> +               spin_lock_irqsave(&q->lock, flags);
> +               qlist_move_cache(q, &to_free, cache);
> +               spin_unlock_irqrestore(&q->lock, flags);
> +
> +               qlist_free_all(&to_free, cache);
> +
> +       }
>
>         spin_lock_irqsave(&quarantine_lock, flags);
>         for (i = 0; i < QUARANTINE_BATCHES; i++) {
> --
> 2.19.0
>


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-08  9:15     ` Dmitry Vyukov
@ 2018-10-09 14:27       ` Sebastian Andrzej Siewior
  2018-10-10  8:25         ` Dmitry Vyukov
  0 siblings, 1 reply; 20+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-09 14:27 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner

On 2018-10-08 11:15:57 [+0200], Dmitry Vyukov wrote:
> Hi Sebastian,
Hi Dmitry,

> This seems to break quarantine_remove_cache() in the sense that some
> object from the cache may still be in the quarantine when
> quarantine_remove_cache() returns. When quarantine_remove_cache()
> returns, all objects from the cache must have been purged from the
> quarantine. That srcu and irq trickery is there for a reason.

That loop should behave like your on_each_cpu() except it does not
involve the remote CPU.

> This code is also on the hot path of kmalloc/kfree, and an additional
> lock/unlock per operation is expensive. Adding 2 locked RMW operations
> per kmalloc is not something that should be done merely for refactoring
> reasons.
But this is debug code anyway, right? And it is highly complex imho.
Well, maybe only for me after I looked at it for the first time…

> The original message from Clark mentions that the problem can be fixed
> by just changing type of spinlock. This looks like a better and
> simpler way to resolve the problem to me.

I usually prefer to avoid adding raw_locks everywhere if it can be
avoided. However given that this is debug code and a few additional us
shouldn't matter here, I have no problem with Clark's initial patch
(also the mem-free in irq-off region works in this scenario).
Can you take it as-is or should I repost it with an acked-by?

Sebastian


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-09 14:27       ` Sebastian Andrzej Siewior
@ 2018-10-10  8:25         ` Dmitry Vyukov
  2018-10-10  9:29           ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 20+ messages in thread
From: Dmitry Vyukov @ 2018-10-10  8:25 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner

On Tue, Oct 9, 2018 at 4:27 PM, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
> On 2018-10-08 11:15:57 [+0200], Dmitry Vyukov wrote:
>> Hi Sebastian,
> Hi Dmitry,
>
>> This seems to break quarantine_remove_cache() in the sense that some
>> object from the cache may still be in the quarantine when
>> quarantine_remove_cache() returns. When quarantine_remove_cache()
>> returns, all objects from the cache must have been purged from the
>> quarantine. That srcu and irq trickery is there for a reason.
>
> That loop should behave like your on_each_cpu() except it does not
> involve the remote CPU.


The problem is that it can squeeze in between:

+               spin_unlock(&q->lock);

                spin_lock(&quarantine_lock);

as far as I see. And then some objects can be left in the quarantine.


>> This code is also on the hot path of kmalloc/kfree, and an additional
>> lock/unlock per operation is expensive. Adding 2 locked RMW operations
>> per kmalloc is not something that should be done merely for refactoring
>> reasons.
> But this is debug code anyway, right? And it is highly complex imho.
> Well, maybe only for me after I looked at it for the first time…

It is debug code - yes.
Nothing about its performance matters - no.

That's the way to produce unusable debug tools.
With too much overhead timeouts start to fire and code behaves not the
way it behaves in production.
The tool is used in continuous integration and developers wait for
test results before merging code.
The tool is used on canary devices and directly contributes to usage experience.

We of course don't want to trade a page of assembly code for cutting
few cycles here (something that could make sense for some networking
code maybe). But otherwise let's not introduce spinlocks on fast paths
just for refactoring reasons.


>> The original message from Clark mentions that the problem can be fixed
>> by just changing type of spinlock. This looks like a better and
>> simpler way to resolve the problem to me.
>
> I usually prefer to avoid adding raw_locks everywhere if it can be
> avoided. However given that this is debug code and a few additional us
> shouldn't matter here, I have no problem with Clark's initial patch
> (also the mem-free in irq-off region works in this scenario).
> Can you take it as-is or should I repost it with an acked-by?

Perhaps it's the problem with the way RT kernel changes things then?
This is not specific to quarantine, right? Should that mutex detect
that IRQs are disabled and not try to schedule? If this would happen
in some networking code, what would we do?


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-10  8:25         ` Dmitry Vyukov
@ 2018-10-10  9:29           ` Sebastian Andrzej Siewior
  2018-10-10  9:45             ` Dmitry Vyukov
  0 siblings, 1 reply; 20+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-10  9:29 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner

On 2018-10-10 10:25:42 [+0200], Dmitry Vyukov wrote:
> > That loop should behave like your on_each_cpu() except it does not
> > involve the remote CPU.
> 
> 
> The problem is that it can squeeze in between:
> 
> +               spin_unlock(&q->lock);
> 
>                 spin_lock(&quarantine_lock);
> 
> as far as I see. And then some objects can be left in the quarantine.

Okay. But then once you are at CPU10 (in the on_each_cpu() loop) there
can be objects which are added to CPU0, right? So based on that, I
assumed that this would be okay to drop the lock here. 

> > But this is debug code anyway, right? And it is highly complex imho.
> > Well, maybe only for me after I looked at it for the first time…
> 
> It is debug code - yes.
> Nothing about its performance matters - no.
> 
> That's the way to produce unusable debug tools.
With too much overhead, timeouts start to fire and the code does not
behave the way it does in production.
The tool is used in continuous integration, and developers wait for
test results before merging code.
The tool is used on canary devices and directly contributes to the user
experience.

Completely understood. What I meant is that debug code in general (from
an RT perspective) increases latency to a level where the device cannot
operate. Take lockdep for instance: a debug facility which is required
for RT as it spots locking problems early. It increases the latency
(depending on the workload) by 50ms+ and can't be used in production.
The same goes for SLUB debug and most others.

> We of course don't want to trade a page of assembly code for cutting
> few cycles here (something that could make sense for some networking
> code maybe). But otherwise let's not introduce spinlocks on fast paths
> just for refactoring reasons.

Sure. As I said, I'm fine with the patch Clark initially proposed. I
assumed the refactoring would make things simpler and that avoiding the
cross-CPU call was a good thing.

> > Can you take it as-is or should I repost it with an acked-by?
> 
> Perhaps it's the problem with the way RT kernel changes things then?
> This is not specific to quarantine, right? 

We got rid of _a lot_ of local_irq_disable/save() + spin_lock() combos
which were there for reasons that are no longer true, or due to a lack of
API. This kasan thing is just something Clark stumbled upon recently,
and I am trying to negotiate something everyone can agree on.

> Should that mutex detect
> that IRQs are disabled and not try to schedule? If this would happen
> in some networking code, what would we do?

It is not only that it is supposed not to schedule. Assuming the "mutex"
is not owned, you could acquire it right away with no scheduling.
However, you would record current() as the owner of the lock, which is
wrong, and you would get into other trouble later on. The list goes on :)
As for networking: if there is something that breaks, then it will be
addressed. It will be forwarded upstream if it is something where it is
reasonable to assume that RT won't change. So networking isn't special.

Should I repost Clark's patch?

Sebastian


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-10  9:29           ` Sebastian Andrzej Siewior
@ 2018-10-10  9:45             ` Dmitry Vyukov
  2018-10-10  9:53               ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 20+ messages in thread
From: Dmitry Vyukov @ 2018-10-10  9:45 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner

On Wed, Oct 10, 2018 at 11:29 AM, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
> On 2018-10-10 10:25:42 [+0200], Dmitry Vyukov wrote:
>> > That loop should behave like your on_each_cpu() except it does not
>> > involve the remote CPU.
>>
>>
>> The problem is that it can squeeze in between:
>>
>> +               spin_unlock(&q->lock);
>>
>>                 spin_lock(&quarantine_lock);
>>
>> as far as I see. And then some objects can be left in the quarantine.
>
> Okay. But then once you are at CPU10 (in the on_each_cpu() loop) there
> can be objects which are added to CPU0, right? So based on that, I
> assumed that this would be okay to drop the lock here.

What happens here is a bit tricky.

When a kmem cache is freed, all objects from the cache must already be
free. That is, kmem_cache_free() has returned for all objects (otherwise
it's a user bug); these calls could have happened on other CPUs.
So if we are freeing a cache on CPU10, it is not possible that CPU0
frees an object from this cache right now/concurrently; the objects
were already freed, but they can still be sitting in per-cpu quarantine
caches.
Now a free of an unrelated object on CPU0 can decide to drain the CPU
cache concurrently: it grabs the whole CPU cache, unlocks the cache
spinlock (with your patch), and then proceeds to push the batch to the
global quarantine list. At this point CPU10 drains the quarantine to
evict all objects related to the cache, but it misses the batch that
CPU0 is transferring from the CPU cache to the global quarantine,
because at this point the batch is referenced only from the local stack
variable temp in quarantine_put().
Wrapping the whole transfer sequence with irq disable/enable ensures
that the transfer happens atomically wrt quarantine_remove_cache(). That
is, quarantine_remove_cache() will see the objects either in the cpu
cache or in the global quarantine, but not somewhere in flight.
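
As a sketch of the problematic interleaving under the proposed per-queue
lock (a hypothetical timeline, not taken from a real trace):

        CPU0 (free of unrelated object)    CPU10 (quarantine_remove_cache)
        spin_lock(&q->lock);
        qlist_move_all(q, &temp);
        spin_unlock(&q->lock);
            /* batch lives only in 'temp' */
                                           /* drains CPU0's now-empty queue */
                                           spin_lock(&quarantine_lock);
                                           /* batch not yet in the global
                                            * list: objects of the dying
                                            * cache are missed */
                                           spin_unlock(&quarantine_lock);
        spin_lock(&quarantine_lock);
        qlist_move_all(&temp, &global_quarantine[quarantine_tail]);
            /* stale objects reach the global quarantine after the purge */
        spin_unlock(&quarantine_lock);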


>> > But this is debug code anyway, right? And it is highly complex imho.
>> > Well, maybe only for me after I looked at it for the first time…
>>
>> It is debug code - yes.
>> Nothing about its performance matters - no.
>>
>> That's the way to produce unusable debug tools.
>> With too much overhead, timeouts start to fire and the code does not
>> behave the way it does in production.
>> The tool is used in continuous integration, and developers wait for
>> test results before merging code.
>> The tool is used on canary devices and directly contributes to the user
>> experience.
>
> Completely understood. What I meant is that debug code in general (from
> an RT perspective) increases latency to a level where the device cannot
> operate. Take lockdep for instance: a debug facility which is required
> for RT as it spots locking problems early. It increases the latency
> (depending on the workload) by 50ms+ and can't be used in production.
> The same goes for SLUB debug and most others.
>
>> We of course don't want to trade a page of assembly code for cutting
>> few cycles here (something that could make sense for some networking
>> code maybe). But otherwise let's not introduce spinlocks on fast paths
>> just for refactoring reasons.
>
> Sure. As I said, I'm fine with the patch Clark initially proposed. I
> assumed the refactoring would make things simpler and that avoiding the
> cross-CPU call was a good thing.
>
>> > Can you take it as-is or should I repost it with an acked-by?
>>
>> Perhaps it's the problem with the way RT kernel changes things then?
>> This is not specific to quarantine, right?
>
> We got rid of _a lot_ of local_irq_disable/save() + spin_lock() combos
> which were there for reasons that are no longer true, or due to a lack of
> API. This kasan thing is just something Clark stumbled upon recently,
> and I am trying to negotiate something everyone can agree on.
>
>> Should that mutex detect
>> that IRQs are disabled and not try to schedule? If this would happen
>> in some networking code, what would we do?
>
> It is not only that it is supposed not to schedule. Assuming the "mutex"
> is not owned, you could acquire it right away with no scheduling.
> However, you would record current() as the owner of the lock, which is
> wrong, and you would get into other trouble later on. The list goes on :)
> As for networking: if there is something that breaks, then it will be
> addressed. It will be forwarded upstream if it is something where it is
> reasonable to assume that RT won't change. So networking isn't special.
>
> Should I repost Clark's patch?


I am much more comfortable with just changing the type of the lock.
What are the bad implications of using the raw spinlock? Will it help
to do something along the following lines:

// Because of ...
#if CONFIG_RT
#define quarantine_spinlock_t raw_spinlock_t
#else
#define quarantine_spinlock_t spinlock_t
#endif

?


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-10  9:45             ` Dmitry Vyukov
@ 2018-10-10  9:53               ` Sebastian Andrzej Siewior
  2018-10-10  9:57                 ` Dmitry Vyukov
  0 siblings, 1 reply; 20+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-10  9:53 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner

On 2018-10-10 11:45:32 [+0200], Dmitry Vyukov wrote:
> > Should I repost Clark's patch?
> 
> 
> I am much more comfortable with just changing the type of the lock.

Yes, that is what Clark's patch does. Should I resend it?

> What are the bad implications of using the raw spinlock? Will it help
> to do something along the following lines:
> 
> // Because of ...
> #if CONFIG_RT
> #define quarantine_spinlock_t raw_spinlock_t
> #else
> #define quarantine_spinlock_t spinlock_t
> #endif

no. For !RT spinlock_t and raw_spinlock_t are the same. For RT
spinlock_t does not disable interrupts or preemption while
raw_spinlock_t does.
Therefore holding a raw_spinlock_t might increase your latency.
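
Roughly, the mapping looks like this (a simplification; the real
implementations live under kernel/locking/):

        lock type       !RT kernel            RT kernel
        spinlock_t      spinning lock         rt_mutex-based sleeping lock,
                                              preemptible, hard IRQs stay on
        raw_spinlock_t  same as spinlock_t    true spinning lock, disables
                                              preemption (and IRQs in the
                                              _irqsave variants)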

Sebastian


* Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock
  2018-10-10  9:53               ` Sebastian Andrzej Siewior
@ 2018-10-10  9:57                 ` Dmitry Vyukov
  2018-10-10 21:49                   ` [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t Sebastian Andrzej Siewior
  0 siblings, 1 reply; 20+ messages in thread
From: Dmitry Vyukov @ 2018-10-10  9:57 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner

On Wed, Oct 10, 2018 at 11:53 AM, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
> On 2018-10-10 11:45:32 [+0200], Dmitry Vyukov wrote:
>> > Should I repost Clark's patch?
>>
>>
>> I am much more comfortable with just changing the type of the lock.
>
> Yes, that is what Clark's patch does. Should I resend it?


Yes. Clark's patch looks good to me. Probably would be useful to add a
comment as to why raw spinlock is used (otherwise somebody may
refactor it back later).


>> What are the bad implications of using the raw spinlock? Will it help
>> to do something along the following lines:
>>
>> // Because of ...
>> #if CONFIG_RT
>> #define quarantine_spinlock_t raw_spinlock_t
>> #else
>> #define quarantine_spinlock_t spinlock_t
>> #endif
>
> no. For !RT spinlock_t and raw_spinlock_t are the same. For RT
> spinlock_t does not disable interrupts or preemption while
> raw_spinlock_t does.
> Therefore holding a raw_spinlock_t might increase your latency.

Ack.


* [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t
  2018-10-10  9:57                 ` Dmitry Vyukov
@ 2018-10-10 21:49                   ` Sebastian Andrzej Siewior
  2018-10-11  8:03                     ` Dmitry Vyukov
  2018-10-12 23:56                     ` Andrew Morton
  0 siblings, 2 replies; 20+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-10-10 21:49 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner

From: Clark Williams <williams@redhat.com>
Date: Tue, 18 Sep 2018 10:29:31 -0500

The static lock quarantine_lock is used in quarantine.c to protect the
quarantine queue datastructures. It is taken inside quarantine queue
manipulation routines (quarantine_put(), quarantine_reduce() and
quarantine_remove_cache()), with IRQs disabled.
This is not a problem on a stock kernel but is problematic on an RT
kernel, where spin locks are sleeping spinlocks, which can sleep and
cannot be acquired with interrupts disabled.

Convert the quarantine_lock to a raw spinlock_t. The usage of
quarantine_lock is confined to quarantine.c, and the work performed while
the lock is held is for debugging purposes.

Signed-off-by: Clark Williams <williams@redhat.com>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[bigeasy: slightly altered the commit message]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
On 2018-10-10 11:57:41 [+0200], Dmitry Vyukov wrote:
> Yes. Clark's patch looks good to me. Probably would be useful to add a
> comment as to why raw spinlock is used (otherwise somebody may
> refactor it back later).

If you really insist, I could add something, but this hasn't happened so
far. git's changelog should provide enough information as to why it was
changed.

 mm/kasan/quarantine.c |   18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -103,7 +103,7 @@ static int quarantine_head;
 static int quarantine_tail;
 /* Total size of all objects in global_quarantine across all batches. */
 static unsigned long quarantine_size;
-static DEFINE_SPINLOCK(quarantine_lock);
+static DEFINE_RAW_SPINLOCK(quarantine_lock);
 DEFINE_STATIC_SRCU(remove_cache_srcu);
 
 /* Maximum size of the global queue. */
@@ -190,7 +190,7 @@ void quarantine_put(struct kasan_free_me
 	if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
 		qlist_move_all(q, &temp);
 
-		spin_lock(&quarantine_lock);
+		raw_spin_lock(&quarantine_lock);
 		WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
 		qlist_move_all(&temp, &global_quarantine[quarantine_tail]);
 		if (global_quarantine[quarantine_tail].bytes >=
@@ -203,7 +203,7 @@ void quarantine_put(struct kasan_free_me
 			if (new_tail != quarantine_head)
 				quarantine_tail = new_tail;
 		}
-		spin_unlock(&quarantine_lock);
+		raw_spin_unlock(&quarantine_lock);
 	}
 
 	local_irq_restore(flags);
@@ -230,7 +230,7 @@ void quarantine_reduce(void)
 	 * expected case).
 	 */
 	srcu_idx = srcu_read_lock(&remove_cache_srcu);
-	spin_lock_irqsave(&quarantine_lock, flags);
+	raw_spin_lock_irqsave(&quarantine_lock, flags);
 
 	/*
 	 * Update quarantine size in case of hotplug. Allocate a fraction of
@@ -254,7 +254,7 @@ void quarantine_reduce(void)
 			quarantine_head = 0;
 	}
 
-	spin_unlock_irqrestore(&quarantine_lock, flags);
+	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 
 	qlist_free_all(&to_free, NULL);
 	srcu_read_unlock(&remove_cache_srcu, srcu_idx);
@@ -310,17 +310,17 @@ void quarantine_remove_cache(struct kmem
 	 */
 	on_each_cpu(per_cpu_remove_cache, cache, 1);
 
-	spin_lock_irqsave(&quarantine_lock, flags);
+	raw_spin_lock_irqsave(&quarantine_lock, flags);
 	for (i = 0; i < QUARANTINE_BATCHES; i++) {
 		if (qlist_empty(&global_quarantine[i]))
 			continue;
 		qlist_move_cache(&global_quarantine[i], &to_free, cache);
 		/* Scanning whole quarantine can take a while. */
-		spin_unlock_irqrestore(&quarantine_lock, flags);
+		raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 		cond_resched();
-		spin_lock_irqsave(&quarantine_lock, flags);
+		raw_spin_lock_irqsave(&quarantine_lock, flags);
 	}
-	spin_unlock_irqrestore(&quarantine_lock, flags);
+	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 
 	qlist_free_all(&to_free, cache);
 


* Re: [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t
  2018-10-10 21:49                   ` [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t Sebastian Andrzej Siewior
@ 2018-10-11  8:03                     ` Dmitry Vyukov
  2018-10-12 23:56                     ` Andrew Morton
  1 sibling, 0 replies; 20+ messages in thread
From: Dmitry Vyukov @ 2018-10-11  8:03 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Clark Williams, Alexander Potapenko, kasan-dev, Linux-MM, LKML,
	linux-rt-users, Peter Zijlstra, Thomas Gleixner, Andrew Morton

On Wed, Oct 10, 2018 at 11:49 PM, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
> From: Clark Williams <williams@redhat.com>
> Date: Tue, 18 Sep 2018 10:29:31 -0500
>
> The static lock quarantine_lock is used in quarantine.c to protect the
> quarantine queue datastructures. It is taken inside quarantine queue
> manipulation routines (quarantine_put(), quarantine_reduce() and
> quarantine_remove_cache()), with IRQs disabled.
> This is not a problem on a stock kernel but is problematic on an RT
> kernel, where spin locks are sleeping spinlocks, which can sleep and
> cannot be acquired with interrupts disabled.
>
> Convert the quarantine_lock to a raw spinlock_t. The usage of
> quarantine_lock is confined to quarantine.c, and the work performed while
> the lock is held is for debugging purposes.
>
> Signed-off-by: Clark Williams <williams@redhat.com>
> Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> [bigeasy: slightly altered the commit message]
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Acked-by: Dmitry Vyukov <dvyukov@google.com>

> ---
> On 2018-10-10 11:57:41 [+0200], Dmitry Vyukov wrote:
>> Yes. Clark's patch looks good to me. Probably would be useful to add a
>> comment as to why raw spinlock is used (otherwise somebody may
>> refactor it back later).
>
> If you really insist, I could add something, but this hasn't happened so
> far. git's changelog should provide enough information as to why it was
> changed.
>
>  mm/kasan/quarantine.c |   18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
>
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -103,7 +103,7 @@ static int quarantine_head;
>  static int quarantine_tail;
>  /* Total size of all objects in global_quarantine across all batches. */
>  static unsigned long quarantine_size;
> -static DEFINE_SPINLOCK(quarantine_lock);
> +static DEFINE_RAW_SPINLOCK(quarantine_lock);
>  DEFINE_STATIC_SRCU(remove_cache_srcu);
>
>  /* Maximum size of the global queue. */
> @@ -190,7 +190,7 @@ void quarantine_put(struct kasan_free_me
>         if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
>                 qlist_move_all(q, &temp);
>
> -               spin_lock(&quarantine_lock);
> +               raw_spin_lock(&quarantine_lock);
>                 WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
>                 qlist_move_all(&temp, &global_quarantine[quarantine_tail]);
>                 if (global_quarantine[quarantine_tail].bytes >=
> @@ -203,7 +203,7 @@ void quarantine_put(struct kasan_free_me
>                         if (new_tail != quarantine_head)
>                                 quarantine_tail = new_tail;
>                 }
> -               spin_unlock(&quarantine_lock);
> +               raw_spin_unlock(&quarantine_lock);
>         }
>
>         local_irq_restore(flags);
> @@ -230,7 +230,7 @@ void quarantine_reduce(void)
>          * expected case).
>          */
>         srcu_idx = srcu_read_lock(&remove_cache_srcu);
> -       spin_lock_irqsave(&quarantine_lock, flags);
> +       raw_spin_lock_irqsave(&quarantine_lock, flags);
>
>         /*
>          * Update quarantine size in case of hotplug. Allocate a fraction of
> @@ -254,7 +254,7 @@ void quarantine_reduce(void)
>                         quarantine_head = 0;
>         }
>
> -       spin_unlock_irqrestore(&quarantine_lock, flags);
> +       raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>
>         qlist_free_all(&to_free, NULL);
>         srcu_read_unlock(&remove_cache_srcu, srcu_idx);
> @@ -310,17 +310,17 @@ void quarantine_remove_cache(struct kmem
>          */
>         on_each_cpu(per_cpu_remove_cache, cache, 1);
>
> -       spin_lock_irqsave(&quarantine_lock, flags);
> +       raw_spin_lock_irqsave(&quarantine_lock, flags);
>         for (i = 0; i < QUARANTINE_BATCHES; i++) {
>                 if (qlist_empty(&global_quarantine[i]))
>                         continue;
>                 qlist_move_cache(&global_quarantine[i], &to_free, cache);
>                 /* Scanning whole quarantine can take a while. */
> -               spin_unlock_irqrestore(&quarantine_lock, flags);
> +               raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>                 cond_resched();
> -               spin_lock_irqsave(&quarantine_lock, flags);
> +               raw_spin_lock_irqsave(&quarantine_lock, flags);
>         }
> -       spin_unlock_irqrestore(&quarantine_lock, flags);
> +       raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>
>         qlist_free_all(&to_free, cache);
>


* Re: [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t
  2018-10-10 21:49                   ` [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t Sebastian Andrzej Siewior
  2018-10-11  8:03                     ` Dmitry Vyukov
@ 2018-10-12 23:56                     ` Andrew Morton
  2018-10-13 13:50                       ` Peter Zijlstra
  1 sibling, 1 reply; 20+ messages in thread
From: Andrew Morton @ 2018-10-12 23:56 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Dmitry Vyukov, Clark Williams, Alexander Potapenko, kasan-dev,
	Linux-MM, LKML, linux-rt-users, Peter Zijlstra, Thomas Gleixner

On Wed, 10 Oct 2018 23:49:45 +0200 Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> On 2018-10-10 11:57:41 [+0200], Dmitry Vyukov wrote:
> > Yes. Clark's patch looks good to me. Probably would be useful to add a
> > comment as to why raw spinlock is used (otherwise somebody may
> > refactor it back later).
> 
> If you really insist, I could add something, but this hasn't happened so
> far. git's changelog should provide enough information as to why it was
> changed.

Requiring code readers to look up changelogs in git is rather user-hostile.
There are several reasons for using raw_*, so an explanatory comment at
each site is called for.

However it would be smarter to stop "using raw_* for several reasons". 
Instead, create a differently named variant for each such reason.  ie, do

/*
 * Nice comment goes here.  It explains all the possible reasons why -rt
 * might use a raw_spin_lock when a spin_lock could otherwise be used.
 */
#define raw_spin_lock_for_rt	raw_spinlock

Then use raw_spin_lock_for_rt() at all such sites.


* Re: [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t
  2018-10-12 23:56                     ` Andrew Morton
@ 2018-10-13 13:50                       ` Peter Zijlstra
  2018-10-15 23:35                         ` Andrew Morton
  0 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2018-10-13 13:50 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Sebastian Andrzej Siewior, Dmitry Vyukov, Clark Williams,
	Alexander Potapenko, kasan-dev, Linux-MM, LKML, linux-rt-users,
	Thomas Gleixner

On Fri, Oct 12, 2018 at 04:56:55PM -0700, Andrew Morton wrote:
> There are several reasons for using raw_*, so an explanatory comment at
> each site is called for.
> 
> However it would be smarter to stop "using raw_* for several reasons". 
> Instead, create a differently named variant for each such reason.  ie, do
> 
> /*
>  * Nice comment goes here.  It explains all the possible reasons why -rt
>  * might use a raw_spin_lock when a spin_lock could otherwise be used.
>  */
> #define raw_spin_lock_for_rt	raw_spinlock
> 
> Then use raw_spin_lock_for_rt() at all such sites.

The whole raw_spinlock_t is for RT, no other reason. It is the one true
spinlock.

From this, it naturally follows that:

 - nesting order: raw_spinlock_t < spinlock_t < mutex_t
 - raw_spinlock_t sections must be bounded
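
As a sketch of the nesting rule (hypothetical locks; the inner
acquisition is the invalid one on RT):

        raw_spin_lock(&outer);  /* non-preemptible section on RT */
        spin_lock(&inner);      /* RT: sleeping lock, must not be taken
                                 * inside a raw_spinlock_t section */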

The patch under discussion is the result of the nesting order rule; and
is allowed to violate the second rule, by virtue of it being debug code.

There are no other reasons; and I'm somewhat confused by what you
propose.


* Re: [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t
  2018-10-13 13:50                       ` Peter Zijlstra
@ 2018-10-15 23:35                         ` Andrew Morton
  2018-10-19 10:58                           ` Peter Zijlstra
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Morton @ 2018-10-15 23:35 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Sebastian Andrzej Siewior, Dmitry Vyukov, Clark Williams,
	Alexander Potapenko, kasan-dev, Linux-MM, LKML, linux-rt-users,
	Thomas Gleixner

On Sat, 13 Oct 2018 15:50:58 +0200 Peter Zijlstra <peterz@infradead.org> wrote:

> The whole raw_spinlock_t is for RT, no other reason.

Oh.  I never realised that.

Is this documented anywhere?  Do there exist guidelines which tell
non-rt developers and reviewers when it should be used?


* Re: [kbuild-all] [PATCH RT] rt: convert mm/kasan/quarantine_lock to raw_spinlock
  2018-10-05 16:37   ` Sebastian Andrzej Siewior
@ 2018-10-18  9:04     ` Rong Chen
  0 siblings, 0 replies; 20+ messages in thread
From: Rong Chen @ 2018-10-18  9:04 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, kbuild test robot
  Cc: Clark Williams, linux-rt-users, kbuild-all, linux-kernel



On 10/06/2018 12:37 AM, Sebastian Andrzej Siewior wrote:
> On 2018-09-19 04:34:50 [+0800], kbuild test robot wrote:
>> Hi Clark,
>>
>> Thank you for the patch! Perhaps something to improve:
>>
>> [auto build test WARNING on linux-rt-devel/for-kbuild-bot/current-stable]
>>
>> url:    https://github.com/0day-ci/linux/commits/Clark-Williams/rt-convert-mm-kasan-quarantine_lock-to-raw_spinlock/20180919-021343
>> base:   https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git for-kbuild-bot/current-stable
>> config: x86_64-allmodconfig (attached as .config)
>> compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
>> reproduce:
>>          # save the attached .config to linux build tree
>>          make ARCH=x86_64
>>
>> All warnings (new ones prefixed by >>):
> How is this related? Could someone please explain this?
>
>
It looks like a kbuild issue; we will look into it.

Best Regards,
Rong Chen


* Re: [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t
  2018-10-15 23:35                         ` Andrew Morton
@ 2018-10-19 10:58                           ` Peter Zijlstra
  0 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2018-10-19 10:58 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Sebastian Andrzej Siewior, Dmitry Vyukov, Clark Williams,
	Alexander Potapenko, kasan-dev, Linux-MM, LKML, linux-rt-users,
	Thomas Gleixner

On Mon, Oct 15, 2018 at 04:35:29PM -0700, Andrew Morton wrote:
> On Sat, 13 Oct 2018 15:50:58 +0200 Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > The whole raw_spinlock_t is for RT, no other reason.
> 
> Oh.  I never realised that.
> 
> Is this documented anywhere?  Do there exist guidelines which tell
> non-rt developers and reviewers when it should be used?

I'm afraid not; I'll put it on the todo list ... I've also been working
on some lockdep validation for the lock order stuff.


Thread overview: 20+ messages
2018-09-18 15:29 [PATCH RT] rt: convert mm/kasan/quarantine_lock to raw_spinlock Clark Williams
2018-09-18 20:34 ` kbuild test robot
2018-10-05 16:37   ` Sebastian Andrzej Siewior
2018-10-18  9:04     ` [kbuild-all] " Rong Chen
2018-10-05 16:30 ` [PATCH] kasan: convert kasan/quarantine_lock " Sebastian Andrzej Siewior
2018-10-05 16:33   ` Sebastian Andrzej Siewior
2018-10-08  0:47     ` Clark Williams
2018-10-08  9:15     ` Dmitry Vyukov
2018-10-09 14:27       ` Sebastian Andrzej Siewior
2018-10-10  8:25         ` Dmitry Vyukov
2018-10-10  9:29           ` Sebastian Andrzej Siewior
2018-10-10  9:45             ` Dmitry Vyukov
2018-10-10  9:53               ` Sebastian Andrzej Siewior
2018-10-10  9:57                 ` Dmitry Vyukov
2018-10-10 21:49                   ` [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t Sebastian Andrzej Siewior
2018-10-11  8:03                     ` Dmitry Vyukov
2018-10-12 23:56                     ` Andrew Morton
2018-10-13 13:50                       ` Peter Zijlstra
2018-10-15 23:35                         ` Andrew Morton
2018-10-19 10:58                           ` Peter Zijlstra
