linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
@ 2019-12-02  7:32 Masami Hiramatsu
  2019-12-02 15:17 ` Anders Roxell
  2019-12-02 21:08 ` Joel Fernandes
  0 siblings, 2 replies; 19+ messages in thread
From: Masami Hiramatsu @ 2019-12-02  7:32 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Anders Roxell, paulmck, joel, Naveen N . Rao,
	Anil S Keshavamurthy, David Miller, Masami Hiramatsu,
	Linux Kernel Mailing List

Anders reported that lockdep warns of suspicious RCU list usage
in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
is because get_kprobe() accesses kprobe_table[] via
hlist_for_each_entry_rcu() without holding rcu_read_lock().

If get_kprobe() is called from the breakpoint handler context,
it runs with preemption disabled, so this is not a problem. In
the other cases, however, we hold kprobe_mutex instead of
rcu_read_lock(), so that kprobe_table[] is not updated. Thus
the current code is safe, but it is still not good from the
viewpoint of RCU.

Let's take rcu_read_lock() around get_kprobe() and ensure that
kprobe_mutex is held at those call sites.

Note that we can safely drop rcu_read_lock() right after
accessing the list, because we know the found kprobe cannot go
away before kprobe_mutex is released. A caller that does not
hold kprobe_mutex must hold rcu_read_lock() until it has
finished operating on that kprobe.

Reported-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 kernel/kprobes.c |   18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 53534aa258a6..fd814ea7dbd8 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -319,6 +319,7 @@ static inline void reset_kprobe_instance(void)
  * 	- under the kprobe_mutex - during kprobe_[un]register()
  * 				OR
  * 	- with preemption disabled - from arch/xxx/kernel/kprobes.c
+ * In both cases, the caller must disable preemption (or acquire rcu_read_lock())
  */
 struct kprobe *get_kprobe(void *addr)
 {
@@ -435,6 +436,7 @@ static int kprobe_queued(struct kprobe *p)
 /*
  * Return an optimized kprobe whose optimizing code replaces
  * instructions including addr (exclude breakpoint).
+ * This must be called while holding kprobe_mutex.
  */
 static struct kprobe *get_optimized_kprobe(unsigned long addr)
 {
@@ -442,9 +444,12 @@ static struct kprobe *get_optimized_kprobe(unsigned long addr)
 	struct kprobe *p = NULL;
 	struct optimized_kprobe *op;
 
+	lockdep_assert_held(&kprobe_mutex);
+	rcu_read_lock();
 	/* Don't check i == 0, since that is a breakpoint case. */
 	for (i = 1; !p && i < MAX_OPTIMIZED_LENGTH; i++)
 		p = get_kprobe((void *)(addr - i));
+	rcu_read_unlock();	/* We are safe because kprobe_mutex is held */
 
 	if (p && kprobe_optready(p)) {
 		op = container_of(p, struct optimized_kprobe, kp);
@@ -1478,18 +1483,21 @@ static struct kprobe *__get_valid_kprobe(struct kprobe *p)
 {
 	struct kprobe *ap, *list_p;
 
+	lockdep_assert_held(&kprobe_mutex);
+	rcu_read_lock();
 	ap = get_kprobe(p->addr);
 	if (unlikely(!ap))
-		return NULL;
+		goto out;
 
 	if (p != ap) {
 		list_for_each_entry_rcu(list_p, &ap->list, list)
 			if (list_p == p)
 			/* kprobe p is a valid probe */
-				goto valid;
-		return NULL;
+				goto out;
+		ap = NULL;
 	}
-valid:
+out:
+	rcu_read_unlock();	/* We are safe because kprobe_mutex is held */
 	return ap;
 }
 
@@ -1602,7 +1610,9 @@ int register_kprobe(struct kprobe *p)
 
 	mutex_lock(&kprobe_mutex);
 
+	rcu_read_lock();
 	old_p = get_kprobe(p->addr);
+	rcu_read_unlock();	/* We are safe because kprobe_mutex is held */
 	if (old_p) {
 		/* Since this may unoptimize old_p, locking text_mutex. */
 		ret = register_aggr_kprobe(old_p, p);


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-02  7:32 [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe Masami Hiramatsu
@ 2019-12-02 15:17 ` Anders Roxell
  2019-12-02 21:08 ` Joel Fernandes
  1 sibling, 0 replies; 19+ messages in thread
From: Anders Roxell @ 2019-12-02 15:17 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, paulmck, joel, Naveen N . Rao, Anil S Keshavamurthy,
	David Miller, Linux Kernel Mailing List

On Mon, 2 Dec 2019 at 08:32, Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> Anders reported that lockdep warns of suspicious RCU list usage
> in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
> is because get_kprobe() accesses kprobe_table[] via
> hlist_for_each_entry_rcu() without holding rcu_read_lock().
>
> If get_kprobe() is called from the breakpoint handler context,
> it runs with preemption disabled, so this is not a problem. In
> the other cases, however, we hold kprobe_mutex instead of
> rcu_read_lock(), so that kprobe_table[] is not updated. Thus
> the current code is safe, but it is still not good from the
> viewpoint of RCU.
>
> Let's take rcu_read_lock() around get_kprobe() and ensure that
> kprobe_mutex is held at those call sites.
>
> Note that we can safely drop rcu_read_lock() right after
> accessing the list, because we know the found kprobe cannot go
> away before kprobe_mutex is released. A caller that does not
> hold kprobe_mutex must hold rcu_read_lock() until it has
> finished operating on that kprobe.
>
> Reported-by: Anders Roxell <anders.roxell@linaro.org>
> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you Masami for fixing this.

Tested-by: Anders Roxell <anders.roxell@linaro.org>

Cheers,
Anders

> ---
>  kernel/kprobes.c |   18 ++++++++++++++----
>  1 file changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 53534aa258a6..fd814ea7dbd8 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -319,6 +319,7 @@ static inline void reset_kprobe_instance(void)
>   *     - under the kprobe_mutex - during kprobe_[un]register()
>   *                             OR
>   *     - with preemption disabled - from arch/xxx/kernel/kprobes.c
> + * In both cases, the caller must disable preemption (or acquire rcu_read_lock())
>   */
>  struct kprobe *get_kprobe(void *addr)
>  {
> @@ -435,6 +436,7 @@ static int kprobe_queued(struct kprobe *p)
>  /*
>   * Return an optimized kprobe whose optimizing code replaces
>   * instructions including addr (exclude breakpoint).
> + * This must be called while holding kprobe_mutex.
>   */
>  static struct kprobe *get_optimized_kprobe(unsigned long addr)
>  {
> @@ -442,9 +444,12 @@ static struct kprobe *get_optimized_kprobe(unsigned long addr)
>         struct kprobe *p = NULL;
>         struct optimized_kprobe *op;
>
> +       lockdep_assert_held(&kprobe_mutex);
> +       rcu_read_lock();
>         /* Don't check i == 0, since that is a breakpoint case. */
>         for (i = 1; !p && i < MAX_OPTIMIZED_LENGTH; i++)
>                 p = get_kprobe((void *)(addr - i));
> +       rcu_read_unlock();      /* We are safe because kprobe_mutex is held */
>
>         if (p && kprobe_optready(p)) {
>                 op = container_of(p, struct optimized_kprobe, kp);
> @@ -1478,18 +1483,21 @@ static struct kprobe *__get_valid_kprobe(struct kprobe *p)
>  {
>         struct kprobe *ap, *list_p;
>
> +       lockdep_assert_held(&kprobe_mutex);
> +       rcu_read_lock();
>         ap = get_kprobe(p->addr);
>         if (unlikely(!ap))
> -               return NULL;
> +               goto out;
>
>         if (p != ap) {
>                 list_for_each_entry_rcu(list_p, &ap->list, list)
>                         if (list_p == p)
>                         /* kprobe p is a valid probe */
> -                               goto valid;
> -               return NULL;
> +                               goto out;
> +               ap = NULL;
>         }
> -valid:
> +out:
> +       rcu_read_unlock();      /* We are safe because kprobe_mutex is held */
>         return ap;
>  }
>
> @@ -1602,7 +1610,9 @@ int register_kprobe(struct kprobe *p)
>
>         mutex_lock(&kprobe_mutex);
>
> +       rcu_read_lock();
>         old_p = get_kprobe(p->addr);
> +       rcu_read_unlock();      /* We are safe because kprobe_mutex is held */
>         if (old_p) {
>                 /* Since this may unoptimize old_p, locking text_mutex. */
>                 ret = register_aggr_kprobe(old_p, p);
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-02  7:32 [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe Masami Hiramatsu
  2019-12-02 15:17 ` Anders Roxell
@ 2019-12-02 21:08 ` Joel Fernandes
  2019-12-02 22:34   ` Masami Hiramatsu
  2019-12-03  7:13   ` Ingo Molnar
  1 sibling, 2 replies; 19+ messages in thread
From: Joel Fernandes @ 2019-12-02 21:08 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Anders Roxell, paulmck, Naveen N . Rao,
	Anil S Keshavamurthy, David Miller, Linux Kernel Mailing List

On Mon, Dec 02, 2019 at 04:32:13PM +0900, Masami Hiramatsu wrote:
> Anders reported that lockdep warns of suspicious RCU list usage
> in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
> is because get_kprobe() accesses kprobe_table[] via
> hlist_for_each_entry_rcu() without holding rcu_read_lock().
>
> If get_kprobe() is called from the breakpoint handler context,
> it runs with preemption disabled, so this is not a problem. In
> the other cases, however, we hold kprobe_mutex instead of
> rcu_read_lock(), so that kprobe_table[] is not updated. Thus
> the current code is safe, but it is still not good from the
> viewpoint of RCU.
>
> Let's take rcu_read_lock() around get_kprobe() and ensure that
> kprobe_mutex is held at those call sites.
>
> Note that we can safely drop rcu_read_lock() right after
> accessing the list, because we know the found kprobe cannot go
> away before kprobe_mutex is released. A caller that does not
> hold kprobe_mutex must hold rcu_read_lock() until it has
> finished operating on that kprobe.
> 
> Reported-by: Anders Roxell <anders.roxell@linaro.org>
> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>

Instead of this, can you not just pass the lockdep_is_held() expression as
the last argument to list_for_each_entry_rcu() to silence the warning? Then
it will be a simpler patch.
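[Editor's note: for illustration, the suggested alternative would look roughly like the sketch below, applied to get_kprobe(). This is not a tested patch; the hash function, table, and member names are taken from kernel/kprobes.c as of this thread.]

```c
struct kprobe *get_kprobe(void *addr)
{
	struct hlist_head *head;
	struct kprobe *p;

	head = &kprobe_table[hash_ptr(addr, KPROBE_HASH_BITS)];
	/* The extra argument tells RCU-lockdep that holding
	 * kprobe_mutex is also a legitimate protection here. */
	hlist_for_each_entry_rcu(p, head, hlist,
				 lockdep_is_held(&kprobe_mutex)) {
		if (p->addr == addr)
			return p;
	}

	return NULL;
}
```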

thanks,

 - Joel

> ---
>  kernel/kprobes.c |   18 ++++++++++++++----
>  1 file changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 53534aa258a6..fd814ea7dbd8 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -319,6 +319,7 @@ static inline void reset_kprobe_instance(void)
>   * 	- under the kprobe_mutex - during kprobe_[un]register()
>   * 				OR
>   * 	- with preemption disabled - from arch/xxx/kernel/kprobes.c
> + * In both cases, the caller must disable preemption (or acquire rcu_read_lock())
>   */
>  struct kprobe *get_kprobe(void *addr)
>  {
> @@ -435,6 +436,7 @@ static int kprobe_queued(struct kprobe *p)
>  /*
>   * Return an optimized kprobe whose optimizing code replaces
>   * instructions including addr (exclude breakpoint).
> + * This must be called while holding kprobe_mutex.
>   */
>  static struct kprobe *get_optimized_kprobe(unsigned long addr)
>  {
> @@ -442,9 +444,12 @@ static struct kprobe *get_optimized_kprobe(unsigned long addr)
>  	struct kprobe *p = NULL;
>  	struct optimized_kprobe *op;
>  
> +	lockdep_assert_held(&kprobe_mutex);
> +	rcu_read_lock();
>  	/* Don't check i == 0, since that is a breakpoint case. */
>  	for (i = 1; !p && i < MAX_OPTIMIZED_LENGTH; i++)
>  		p = get_kprobe((void *)(addr - i));
> +	rcu_read_unlock();	/* We are safe because kprobe_mutex is held */
>  
>  	if (p && kprobe_optready(p)) {
>  		op = container_of(p, struct optimized_kprobe, kp);
> @@ -1478,18 +1483,21 @@ static struct kprobe *__get_valid_kprobe(struct kprobe *p)
>  {
>  	struct kprobe *ap, *list_p;
>  
> +	lockdep_assert_held(&kprobe_mutex);
> +	rcu_read_lock();
>  	ap = get_kprobe(p->addr);
>  	if (unlikely(!ap))
> -		return NULL;
> +		goto out;
>  
>  	if (p != ap) {
>  		list_for_each_entry_rcu(list_p, &ap->list, list)
>  			if (list_p == p)
>  			/* kprobe p is a valid probe */
> -				goto valid;
> -		return NULL;
> +				goto out;
> +		ap = NULL;
>  	}
> -valid:
> +out:
> +	rcu_read_unlock();	/* We are safe because kprobe_mutex is held */
>  	return ap;
>  }
>  
> @@ -1602,7 +1610,9 @@ int register_kprobe(struct kprobe *p)
>  
>  	mutex_lock(&kprobe_mutex);
>  
> +	rcu_read_lock();
>  	old_p = get_kprobe(p->addr);
> +	rcu_read_unlock();	/* We are safe because kprobe_mutex is held */
>  	if (old_p) {
>  		/* Since this may unoptimize old_p, locking text_mutex. */
>  		ret = register_aggr_kprobe(old_p, p);
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-02 21:08 ` Joel Fernandes
@ 2019-12-02 22:34   ` Masami Hiramatsu
  2019-12-02 23:35     ` Joel Fernandes
  2019-12-03  7:13   ` Ingo Molnar
  1 sibling, 1 reply; 19+ messages in thread
From: Masami Hiramatsu @ 2019-12-02 22:34 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Ingo Molnar, Anders Roxell, paulmck, Naveen N . Rao,
	Anil S Keshavamurthy, David Miller, Linux Kernel Mailing List

Hi Joel,

On Mon, 2 Dec 2019 16:08:54 -0500
Joel Fernandes <joel@joelfernandes.org> wrote:

> On Mon, Dec 02, 2019 at 04:32:13PM +0900, Masami Hiramatsu wrote:
> > Anders reported that lockdep warns of suspicious RCU list usage
> > in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
> > is because get_kprobe() accesses kprobe_table[] via
> > hlist_for_each_entry_rcu() without holding rcu_read_lock().
> >
> > If get_kprobe() is called from the breakpoint handler context,
> > it runs with preemption disabled, so this is not a problem. In
> > the other cases, however, we hold kprobe_mutex instead of
> > rcu_read_lock(), so that kprobe_table[] is not updated. Thus
> > the current code is safe, but it is still not good from the
> > viewpoint of RCU.
> >
> > Let's take rcu_read_lock() around get_kprobe() and ensure that
> > kprobe_mutex is held at those call sites.
> >
> > Note that we can safely drop rcu_read_lock() right after
> > accessing the list, because we know the found kprobe cannot go
> > away before kprobe_mutex is released. A caller that does not
> > hold kprobe_mutex must hold rcu_read_lock() until it has
> > finished operating on that kprobe.
> > 
> > Reported-by: Anders Roxell <anders.roxell@linaro.org>
> > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> 
> Instead of this, can you not just pass the lockdep_is_held() expression as
> the last argument to list_for_each_entry_rcu() to silence the warning? Then
> it will be a simpler patch.

Ah, I see. That is a more natural way to silence the warning.

Thank you!

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-02 22:34   ` Masami Hiramatsu
@ 2019-12-02 23:35     ` Joel Fernandes
  2019-12-03  6:02       ` Masami Hiramatsu
  0 siblings, 1 reply; 19+ messages in thread
From: Joel Fernandes @ 2019-12-02 23:35 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Anders Roxell, paulmck, Naveen N . Rao,
	Anil S Keshavamurthy, David Miller, Linux Kernel Mailing List

On Tue, Dec 03, 2019 at 07:34:53AM +0900, Masami Hiramatsu wrote:
> Hi Joel,
> 
> On Mon, 2 Dec 2019 16:08:54 -0500
> Joel Fernandes <joel@joelfernandes.org> wrote:
> 
> > On Mon, Dec 02, 2019 at 04:32:13PM +0900, Masami Hiramatsu wrote:
> > > Anders reported that lockdep warns of suspicious RCU list usage
> > > in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
> > > is because get_kprobe() accesses kprobe_table[] via
> > > hlist_for_each_entry_rcu() without holding rcu_read_lock().
> > >
> > > If get_kprobe() is called from the breakpoint handler context,
> > > it runs with preemption disabled, so this is not a problem. In
> > > the other cases, however, we hold kprobe_mutex instead of
> > > rcu_read_lock(), so that kprobe_table[] is not updated. Thus
> > > the current code is safe, but it is still not good from the
> > > viewpoint of RCU.
> > >
> > > Let's take rcu_read_lock() around get_kprobe() and ensure that
> > > kprobe_mutex is held at those call sites.
> > >
> > > Note that we can safely drop rcu_read_lock() right after
> > > accessing the list, because we know the found kprobe cannot go
> > > away before kprobe_mutex is released. A caller that does not
> > > hold kprobe_mutex must hold rcu_read_lock() until it has
> > > finished operating on that kprobe.
> > > 
> > > Reported-by: Anders Roxell <anders.roxell@linaro.org>
> > > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > 
> > Instead of this, can you not just pass the lockdep_is_held() expression as
> > the last argument to list_for_each_entry_rcu() to silence the warning? Then
> > it will be a simpler patch.
> 
> Ah, I see. That is a more natural way to silence the warning.

Np, and with such a fix, my:

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

thanks,

 - Joel

> 
> Thank you!
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-02 23:35     ` Joel Fernandes
@ 2019-12-03  6:02       ` Masami Hiramatsu
  0 siblings, 0 replies; 19+ messages in thread
From: Masami Hiramatsu @ 2019-12-03  6:02 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Ingo Molnar, Anders Roxell, paulmck, Naveen N . Rao,
	Anil S Keshavamurthy, David Miller, Linux Kernel Mailing List

On Mon, 2 Dec 2019 18:35:31 -0500
Joel Fernandes <joel@joelfernandes.org> wrote:

> On Tue, Dec 03, 2019 at 07:34:53AM +0900, Masami Hiramatsu wrote:
> > Hi Joel,
> > 
> > On Mon, 2 Dec 2019 16:08:54 -0500
> > Joel Fernandes <joel@joelfernandes.org> wrote:
> > 
> > > On Mon, Dec 02, 2019 at 04:32:13PM +0900, Masami Hiramatsu wrote:
> > > > Anders reported that lockdep warns of suspicious RCU list usage
> > > > in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
> > > > is because get_kprobe() accesses kprobe_table[] via
> > > > hlist_for_each_entry_rcu() without holding rcu_read_lock().
> > > >
> > > > If get_kprobe() is called from the breakpoint handler context,
> > > > it runs with preemption disabled, so this is not a problem. In
> > > > the other cases, however, we hold kprobe_mutex instead of
> > > > rcu_read_lock(), so that kprobe_table[] is not updated. Thus
> > > > the current code is safe, but it is still not good from the
> > > > viewpoint of RCU.
> > > >
> > > > Let's take rcu_read_lock() around get_kprobe() and ensure that
> > > > kprobe_mutex is held at those call sites.
> > > >
> > > > Note that we can safely drop rcu_read_lock() right after
> > > > accessing the list, because we know the found kprobe cannot go
> > > > away before kprobe_mutex is released. A caller that does not
> > > > hold kprobe_mutex must hold rcu_read_lock() until it has
> > > > finished operating on that kprobe.
> > > > 
> > > > Reported-by: Anders Roxell <anders.roxell@linaro.org>
> > > > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > > 
> > > Instead of this, can you not just pass the lockdep_is_held() expression as
> > > the last argument to list_for_each_entry_rcu() to silence the warning? Then
> > > it will be a simpler patch.
> > 
> > Ah, I see. That is a more natural way to silence the warning.
> 
> Np, and with such a fix, my:
> 
> Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> 

Thanks! I found another similar issue while testing; I'll fix it too.

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-02 21:08 ` Joel Fernandes
  2019-12-02 22:34   ` Masami Hiramatsu
@ 2019-12-03  7:13   ` Ingo Molnar
  2019-12-03 17:57     ` Paul E. McKenney
  2019-12-04  4:09     ` Joel Fernandes
  1 sibling, 2 replies; 19+ messages in thread
From: Ingo Molnar @ 2019-12-03  7:13 UTC (permalink / raw)
  To: Joel Fernandes, Paul E. McKenney, Peter Zijlstra
  Cc: Masami Hiramatsu, Anders Roxell, paulmck, Naveen N . Rao,
	Anil S Keshavamurthy, David Miller, Linux Kernel Mailing List


* Joel Fernandes <joel@joelfernandes.org> wrote:

> On Mon, Dec 02, 2019 at 04:32:13PM +0900, Masami Hiramatsu wrote:
> > Anders reported that lockdep warns of suspicious RCU list usage
> > in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
> > is because get_kprobe() accesses kprobe_table[] via
> > hlist_for_each_entry_rcu() without holding rcu_read_lock().
> >
> > If get_kprobe() is called from the breakpoint handler context,
> > it runs with preemption disabled, so this is not a problem. In
> > the other cases, however, we hold kprobe_mutex instead of
> > rcu_read_lock(), so that kprobe_table[] is not updated. Thus
> > the current code is safe, but it is still not good from the
> > viewpoint of RCU.
> >
> > Let's take rcu_read_lock() around get_kprobe() and ensure that
> > kprobe_mutex is held at those call sites.
> >
> > Note that we can safely drop rcu_read_lock() right after
> > accessing the list, because we know the found kprobe cannot go
> > away before kprobe_mutex is released. A caller that does not
> > hold kprobe_mutex must hold rcu_read_lock() until it has
> > finished operating on that kprobe.
> > 
> > Reported-by: Anders Roxell <anders.roxell@linaro.org>
> > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> 
> Instead of this, can you not just pass the lockdep_is_held() expression as
> the last argument to list_for_each_entry_rcu() to silence the warning? Then
> it will be a simpler patch.

Come on, we do not silence warnings!

If it's safely inside the lock then why not change it from 
hlist_for_each_entry_rcu() to hlist_for_each_entry()?
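[Editor's note: concretely, such a change in __get_valid_kprobe() (which always runs under kprobe_mutex) would look like this hypothetical sketch:]

```c
	/* kprobe_mutex is held, so a plain (non-RCU) traversal
	 * would also be correct here. */
	list_for_each_entry(list_p, &ap->list, list)
		if (list_p == p)
			/* kprobe p is a valid probe */
			goto valid;
```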

I do think that the 'lockdep flag' inside hlist_for_each_entry_rcu():

/**
 * hlist_for_each_entry_rcu - iterate over rcu list of given type
 * @pos:        the type * to use as a loop cursor.
 * @head:       the head for your list.
 * @member:     the name of the hlist_node within the struct.
 * @cond:       optional lockdep expression if called from non-RCU protection.
 *
 * This list-traversal primitive may safely run concurrently with
 * the _rcu list-mutation primitives such as hlist_add_head_rcu()
 * as long as the traversal is guarded by rcu_read_lock().
 */
#define hlist_for_each_entry_rcu(pos, head, member, cond...)            \

is actively harmful. Why is it there?

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-03  7:13   ` Ingo Molnar
@ 2019-12-03 17:57     ` Paul E. McKenney
  2019-12-04 10:05       ` Ingo Molnar
  2019-12-04  4:09     ` Joel Fernandes
  1 sibling, 1 reply; 19+ messages in thread
From: Paul E. McKenney @ 2019-12-03 17:57 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Joel Fernandes, Peter Zijlstra, Masami Hiramatsu, Anders Roxell,
	Naveen N . Rao, Anil S Keshavamurthy, David Miller,
	Linux Kernel Mailing List

On Tue, Dec 03, 2019 at 08:13:29AM +0100, Ingo Molnar wrote:
> 
> * Joel Fernandes <joel@joelfernandes.org> wrote:
> 
> > On Mon, Dec 02, 2019 at 04:32:13PM +0900, Masami Hiramatsu wrote:
> > > Anders reported that the lockdep warns that suspicious
> > > RCU list usage in register_kprobe() (detected by
> > > CONFIG_PROVE_RCU_LIST.) This is because get_kprobe()
> > > access kprobe_table[] by hlist_for_each_entry_rcu()
> > > without rcu_read_lock.
> > > 
> > > If we call get_kprobe() from the breakpoint handler context,
> > > it is run with preempt disabled, so this is not a problem.
> > > But in other cases, instead of rcu_read_lock(), we locks
> > > kprobe_mutex so that the kprobe_table[] is not updated.
> > > So, current code is safe, but still not good from the view
> > > point of RCU.
> > > 
> > > Let's lock the rcu_read_lock() around get_kprobe() and
> > > ensure kprobe_mutex is locked at those points.
> > > 
> > > Note that we can safely unlock rcu_read_lock() soon after
> > > accessing the list, because we are sure the found kprobe has
> > > never gone before unlocking kprobe_mutex. Unless locking
> > > kprobe_mutex, caller must hold rcu_read_lock() until it
> > > finished operations on that kprobe.
> > > 
> > > Reported-by: Anders Roxell <anders.roxell@linaro.org>
> > > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > 
> > Instead of this, can you not just pass the lockdep_is_held() expression as
> > the last argument to list_for_each_entry_rcu() to silence the warning? Then
> > it will be a simpler patch.
> 
> Come on, we do not silence warnings!
> 
> If it's safely inside the lock then why not change it from 
> hlist_for_each_entry_rcu() to hlist_for_each_entry()?
> 
> I do think that the 'lockdep flag' inside hlist_for_each_entry_rcu():
> 
> /**
>  * hlist_for_each_entry_rcu - iterate over rcu list of given type
>  * @pos:        the type * to use as a loop cursor.
>  * @head:       the head for your list.
>  * @member:     the name of the hlist_node within the struct.
>  * @cond:       optional lockdep expression if called from non-RCU protection.
>  *
>  * This list-traversal primitive may safely run concurrently with
>  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
>  * as long as the traversal is guarded by rcu_read_lock().
>  */
> #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> 
> is actively harmful. Why is it there?

For cases where common code might be invoked both from the reader
(with RCU protection) and from the updater (protected by some
lock).  This common code can then use the optional argument to
hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
called with either form of protection in place.
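[Editor's note: a sketch of the pattern Paul describes, with made-up names (`item`, `item_head`, `my_lock`) purely for illustration — one lookup helper shared by an RCU reader and a lock-holding updater:]

```c
/* Shared lookup: legal either inside rcu_read_lock() or with
 * my_lock held; the optional argument tells RCU-lockdep that
 * holding my_lock is also a legitimate protection. */
static struct item *__find_item(int key)
{
	struct item *it;

	hlist_for_each_entry_rcu(it, &item_head, node,
				 lockdep_is_held(&my_lock)) {
		if (it->key == key)
			return it;
	}
	return NULL;
}

/* Reader side: RCU protection only; use the item inside the
 * read-side critical section. */
void report_item(int key)
{
	rcu_read_lock();
	if (__find_item(key))
		pr_info("key %d present\n", key);
	rcu_read_unlock();
}

/* Updater side: my_lock serializes all list mutation. */
void remove_item(int key)
{
	struct item *it;

	mutex_lock(&my_lock);
	it = __find_item(key);
	if (it)
		hlist_del_rcu(&it->node);
	mutex_unlock(&my_lock);
	/* the item itself would be freed only after a grace
	 * period, e.g. via kfree_rcu() */
}
```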

This also combines with the __rcu tag used to mark RCU-protected
pointers, in which case sparse complains when a non-RCU API is applied
to these pointers, to get back to your earlier question about use of
hlist_for_each_entry_rcu() within the update-side lock.

But what are you seeing as actively harmful about all of this?
What should we be doing instead?

If you are suggesting that KCSAN might replace sparse's use of __rcu,
I have been attempting to think along those lines, but thus far without
any success.  In contrast, I am starting to think that lockdep has made
the sparse __acquires() and __releases() tags obsolete.

Or am I missing your point?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-03  7:13   ` Ingo Molnar
  2019-12-03 17:57     ` Paul E. McKenney
@ 2019-12-04  4:09     ` Joel Fernandes
  2019-12-04  4:20       ` Joel Fernandes
  1 sibling, 1 reply; 19+ messages in thread
From: Joel Fernandes @ 2019-12-04  4:09 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Paul E. McKenney, Peter Zijlstra, Masami Hiramatsu,
	Anders Roxell, Naveen N . Rao, Anil S Keshavamurthy,
	David Miller, Linux Kernel Mailing List

On Tue, Dec 03, 2019 at 08:13:29AM +0100, Ingo Molnar wrote:
> 
> * Joel Fernandes <joel@joelfernandes.org> wrote:
> 
> > On Mon, Dec 02, 2019 at 04:32:13PM +0900, Masami Hiramatsu wrote:
> > > Anders reported that lockdep warns of suspicious RCU list usage
> > > in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
> > > is because get_kprobe() accesses kprobe_table[] via
> > > hlist_for_each_entry_rcu() without holding rcu_read_lock().
> > >
> > > If get_kprobe() is called from the breakpoint handler context,
> > > it runs with preemption disabled, so this is not a problem. In
> > > the other cases, however, we hold kprobe_mutex instead of
> > > rcu_read_lock(), so that kprobe_table[] is not updated. Thus
> > > the current code is safe, but it is still not good from the
> > > viewpoint of RCU.
> > >
> > > Let's take rcu_read_lock() around get_kprobe() and ensure that
> > > kprobe_mutex is held at those call sites.
> > >
> > > Note that we can safely drop rcu_read_lock() right after
> > > accessing the list, because we know the found kprobe cannot go
> > > away before kprobe_mutex is released. A caller that does not
> > > hold kprobe_mutex must hold rcu_read_lock() until it has
> > > finished operating on that kprobe.
> > > 
> > > Reported-by: Anders Roxell <anders.roxell@linaro.org>
> > > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > 
> > Instead of this, can you not just pass the lockdep_is_held() expression as
> > the last argument to list_for_each_entry_rcu() to silence the warning? Then
> > it will be a simpler patch.
> 
> Come on, we do not silence warnings!

By silence, I mean removing a false-positive warning. In this case, since
the lock is held, it is not a valid warning.

> If it's safely inside the lock then why not change it from 
> hlist_for_each_entry_rcu() to hlist_for_each_entry()?
> 
> I do think that the 'lockdep flag' inside hlist_for_each_entry_rcu():
> 
> /**
>  * hlist_for_each_entry_rcu - iterate over rcu list of given type
>  * @pos:        the type * to use as a loop cursor.
>  * @head:       the head for your list.
>  * @member:     the name of the hlist_node within the struct.
>  * @cond:       optional lockdep expression if called from non-RCU protection.
>  *
>  * This list-traversal primitive may safely run concurrently with
>  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
>  * as long as the traversal is guarded by rcu_read_lock().
>  */
> #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> 
> is actively harmful. Why is it there?

Because, as Paul also said, the code can be common between regular lock
holders and RCU lock holders. I am not sure if this is the case with the
kprobe code, though.

thanks,

 - Joel


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-04  4:09     ` Joel Fernandes
@ 2019-12-04  4:20       ` Joel Fernandes
  0 siblings, 0 replies; 19+ messages in thread
From: Joel Fernandes @ 2019-12-04  4:20 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Paul E. McKenney, Peter Zijlstra, Masami Hiramatsu,
	Anders Roxell, Naveen N . Rao, Anil S Keshavamurthy,
	David Miller, Linux Kernel Mailing List

On Tue, Dec 03, 2019 at 11:09:59PM -0500, Joel Fernandes wrote:
> On Tue, Dec 03, 2019 at 08:13:29AM +0100, Ingo Molnar wrote:
> > 
> > * Joel Fernandes <joel@joelfernandes.org> wrote:
> > 
> > > On Mon, Dec 02, 2019 at 04:32:13PM +0900, Masami Hiramatsu wrote:
> > > > Anders reported that lockdep warns of suspicious RCU list usage
> > > > in register_kprobe() (detected by CONFIG_PROVE_RCU_LIST). This
> > > > is because get_kprobe() accesses kprobe_table[] via
> > > > hlist_for_each_entry_rcu() without holding rcu_read_lock().
> > > >
> > > > If get_kprobe() is called from the breakpoint handler context,
> > > > it runs with preemption disabled, so this is not a problem. In
> > > > the other cases, however, we hold kprobe_mutex instead of
> > > > rcu_read_lock(), so that kprobe_table[] is not updated. Thus
> > > > the current code is safe, but it is still not good from the
> > > > viewpoint of RCU.
> > > >
> > > > Let's take rcu_read_lock() around get_kprobe() and ensure that
> > > > kprobe_mutex is held at those call sites.
> > > >
> > > > Note that we can safely drop rcu_read_lock() right after
> > > > accessing the list, because we know the found kprobe cannot go
> > > > away before kprobe_mutex is released. A caller that does not
> > > > hold kprobe_mutex must hold rcu_read_lock() until it has
> > > > finished operating on that kprobe.
> > > > 
> > > > Reported-by: Anders Roxell <anders.roxell@linaro.org>
> > > > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > > 
> > > Instead of this, can you not just pass the lockdep_is_held() expression as
> > > the last argument to list_for_each_entry_rcu() to silence the warning? Then
> > > it will be a simpler patch.
> > 
> > Come on, we do not silence warnings!
> 
> By silence, I mean remove a false-positive warning. In this case since lock
> is held, it is not a valid warning.
> 
> > If it's safely inside the lock then why not change it from 
> > hlist_for_each_entry_rcu() to hlist_for_each_entry()?
> > 
> > I do think that 'lockdep flag' inside hlist_for_each_entry_rcu():
> > 
> > /**
> >  * hlist_for_each_entry_rcu - iterate over rcu list of given type
> >  * @pos:        the type * to use as a loop cursor.
> >  * @head:       the head for your list.
> >  * @member:     the name of the hlist_node within the struct.
> >  * @cond:       optional lockdep expression if called from non-RCU protection.
> >  *
> >  * This list-traversal primitive may safely run concurrently with
> >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> >  * as long as the traversal is guarded by rcu_read_lock().
> >  */
> > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > 
> > is actively harmful. Why is it there?
> 
> Because as Paul also said, the code can be common between regular lock
> holders and RCU lock holders. I am not sure if this is the case with the
> kprobe code though.

Here are some more details on the kprobe side of things:

get_kprobe() can be called either from a preempt-disabled section or under
the kprobe_mutex lock, as is also evident from the code comments on this
function [1].

If called from a preempt-disabled section, then it is in an RCU read-side
section and no warning will be emitted by the use of
hlist_for_each_entry_rcu(). This is because hlist_for_each_entry_rcu()
internally checks whether preemption is disabled. However, if it is called
under the kprobe_mutex lock, then we have no way of knowing inside
hlist_for_each_entry_rcu() that the lock is held. So we must pass the
lockdep expression (which tests whether the lock is held) to the macro so
that the false-positive warning is silenced.

thanks,

 - Joel

[1]
/*
 * This routine is called either:
 *      - under the kprobe_mutex - during kprobe_[un]register()
 *                              OR
 *      - with preemption disabled - from arch/xxx/kernel/kprobes.c
 */
struct kprobe *get_kprobe(void *addr)


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-03 17:57     ` Paul E. McKenney
@ 2019-12-04 10:05       ` Ingo Molnar
  2019-12-04 16:12         ` Paul E. McKenney
  0 siblings, 1 reply; 19+ messages in thread
From: Ingo Molnar @ 2019-12-04 10:05 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Joel Fernandes, Peter Zijlstra, Masami Hiramatsu, Anders Roxell,
	Naveen N . Rao, Anil S Keshavamurthy, David Miller,
	Linux Kernel Mailing List


* Paul E. McKenney <paulmck@kernel.org> wrote:

> >  * This list-traversal primitive may safely run concurrently with
> >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> >  * as long as the traversal is guarded by rcu_read_lock().
> >  */
> > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > 
> > is actively harmful. Why is it there?
> 
> For cases where common code might be invoked both from the reader
> (with RCU protection) and from the updater (protected by some
> lock).  This common code can then use the optional argument to
> hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> called with either form of protection in place.
> 
> This also combines with the __rcu tag used to mark RCU-protected
> pointers, in which case sparse complains when a non-RCU API is applied
> to these pointers, to get back to your earlier question about use of
> hlist_for_each_entry_rcu() within the update-side lock.
> 
> But what are you seeing as actively harmful about all of this?
> What should we be doing instead?

Yeah, so basically in the write-locked path hlist_for_each_entry() 
generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
correct?

Also, the principle of passing warning flags around is problematic - but 
I can see the point in this specific case.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-04 10:05       ` Ingo Molnar
@ 2019-12-04 16:12         ` Paul E. McKenney
  2019-12-05  4:19           ` Masami Hiramatsu
  2019-12-06  1:11           ` Joel Fernandes
  0 siblings, 2 replies; 19+ messages in thread
From: Paul E. McKenney @ 2019-12-04 16:12 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Joel Fernandes, Peter Zijlstra, Masami Hiramatsu, Anders Roxell,
	Naveen N . Rao, Anil S Keshavamurthy, David Miller,
	Linux Kernel Mailing List

On Wed, Dec 04, 2019 at 11:05:50AM +0100, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@kernel.org> wrote:
> 
> > >  * This list-traversal primitive may safely run concurrently with
> > >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > >  * as long as the traversal is guarded by rcu_read_lock().
> > >  */
> > > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > > 
> > > is actively harmful. Why is it there?
> > 
> > For cases where common code might be invoked both from the reader
> > (with RCU protection) and from the updater (protected by some
> > lock).  This common code can then use the optional argument to
> > hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> > called with either form of protection in place.
> > 
> > This also combines with the __rcu tag used to mark RCU-protected
> > pointers, in which case sparse complains when a non-RCU API is applied
> > to these pointers, to get back to your earlier question about use of
> > hlist_for_each_entry_rcu() within the update-side lock.
> > 
> > But what are you seeing as actively harmful about all of this?
> > What should we be doing instead?
> 
> Yeah, so basically in the write-locked path hlist_for_each_entry() 
> generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
> correct?

Potentially yes, if the READ_ONCE() constrains the compiler.  Or not,
depending of course on the compiler and the surrounding code.

> Also, the principle of passing warning flags around is problematic - but 
> I can see the point in this specific case.

Would it help to add an hlist_for_each_entry_protected() that expected
RCU-protected pointers and write-side protection, analogous to
rcu_dereference_protected()?  Or would that expansion of the RCU API
outweigh any benefits?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-04 16:12         ` Paul E. McKenney
@ 2019-12-05  4:19           ` Masami Hiramatsu
  2019-12-06  1:11           ` Joel Fernandes
  1 sibling, 0 replies; 19+ messages in thread
From: Masami Hiramatsu @ 2019-12-05  4:19 UTC (permalink / raw)
  To: paulmck
  Cc: Ingo Molnar, Joel Fernandes, Peter Zijlstra, Masami Hiramatsu,
	Anders Roxell, Naveen N . Rao, Anil S Keshavamurthy,
	David Miller, Linux Kernel Mailing List

Hi Ingo,

On Wed, 4 Dec 2019 08:12:39 -0800
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> On Wed, Dec 04, 2019 at 11:05:50AM +0100, Ingo Molnar wrote:
> > 
> > * Paul E. McKenney <paulmck@kernel.org> wrote:
> > 
> > > >  * This list-traversal primitive may safely run concurrently with
> > > >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > > >  * as long as the traversal is guarded by rcu_read_lock().
> > > >  */
> > > > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > > > 
> > > > is actively harmful. Why is it there?
> > > 
> > > For cases where common code might be invoked both from the reader
> > > (with RCU protection) and from the updater (protected by some
> > > lock).  This common code can then use the optional argument to
> > > hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> > > called with either form of protection in place.
> > > 
> > > This also combines with the __rcu tag used to mark RCU-protected
> > > pointers, in which case sparse complains when a non-RCU API is applied
> > > to these pointers, to get back to your earlier question about use of
> > > hlist_for_each_entry_rcu() within the update-side lock.
> > > 
> > > But what are you seeing as actively harmful about all of this?
> > > What should we be doing instead?
> > 
> > Yeah, so basically in the write-locked path hlist_for_each_entry() 
> > generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
> > correct?
> 
> Potentially yes, if the READ_ONCE() constrains the compiler.  Or not,
> depending of course on the compiler and the surrounding code.

For this kprobes case, I can introduce a get_kprobe_locked() which uses
hlist_for_each_entry() instead of hlist_for_each_entry_rcu().

However, this sounds like a somewhat strange choice, because get_kprobe()
(the RCU version) is used on "hot" paths (because it is lock-free), while
get_kprobe_locked() would be used on slow paths. If hlist_for_each_entry()
can be more efficient, we would end up keeping the less efficient API on
the hot paths while using the efficient one on the slow paths.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-04 16:12         ` Paul E. McKenney
  2019-12-05  4:19           ` Masami Hiramatsu
@ 2019-12-06  1:11           ` Joel Fernandes
  2019-12-06  3:11             ` Paul E. McKenney
  1 sibling, 1 reply; 19+ messages in thread
From: Joel Fernandes @ 2019-12-06  1:11 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Ingo Molnar, Peter Zijlstra, Masami Hiramatsu, Anders Roxell,
	Naveen N . Rao, Anil S Keshavamurthy, David Miller,
	Linux Kernel Mailing List

On Wed, Dec 04, 2019 at 08:12:39AM -0800, Paul E. McKenney wrote:
> On Wed, Dec 04, 2019 at 11:05:50AM +0100, Ingo Molnar wrote:
> > 
> > * Paul E. McKenney <paulmck@kernel.org> wrote:
> > 
> > > >  * This list-traversal primitive may safely run concurrently with
> > > >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > > >  * as long as the traversal is guarded by rcu_read_lock().
> > > >  */
> > > > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > > > 
> > > > is actively harmful. Why is it there?
> > > 
> > > For cases where common code might be invoked both from the reader
> > > (with RCU protection) and from the updater (protected by some
> > > lock).  This common code can then use the optional argument to
> > > hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> > > called with either form of protection in place.
> > > 
> > > This also combines with the __rcu tag used to mark RCU-protected
> > > pointers, in which case sparse complains when a non-RCU API is applied
> > > to these pointers, to get back to your earlier question about use of
> > > hlist_for_each_entry_rcu() within the update-side lock.
> > > 
> > > But what are you seeing as actively harmful about all of this?
> > > What should we be doing instead?
> > 
> > Yeah, so basically in the write-locked path hlist_for_each_entry() 
> > generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
> > correct?
> 
> Potentially yes, if the READ_ONCE() constrains the compiler.  Or not,
> depending of course on the compiler and the surrounding code.
> 
> > Also, the principle of passing warning flags around is problematic - but 
> > I can see the point in this specific case.
> 
> Would it help to add an hlist_for_each_entry_protected() that expected
> RCU-protected pointers and write-side protection, analogous to
> rcu_dereference_protected()?  Or would that expansion of the RCU API
> outweigh any benefits?

Personally, I like keeping the same API and using the optional argument as
we did, thus avoiding a proliferation of new APIs.

thanks,

 - Joel


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-06  1:11           ` Joel Fernandes
@ 2019-12-06  3:11             ` Paul E. McKenney
  2019-12-08  0:08               ` Joel Fernandes
  0 siblings, 1 reply; 19+ messages in thread
From: Paul E. McKenney @ 2019-12-06  3:11 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Ingo Molnar, Peter Zijlstra, Masami Hiramatsu, Anders Roxell,
	Naveen N . Rao, Anil S Keshavamurthy, David Miller,
	Linux Kernel Mailing List

On Thu, Dec 05, 2019 at 08:11:37PM -0500, Joel Fernandes wrote:
> On Wed, Dec 04, 2019 at 08:12:39AM -0800, Paul E. McKenney wrote:
> > On Wed, Dec 04, 2019 at 11:05:50AM +0100, Ingo Molnar wrote:
> > > 
> > > * Paul E. McKenney <paulmck@kernel.org> wrote:
> > > 
> > > > >  * This list-traversal primitive may safely run concurrently with
> > > > >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > > > >  * as long as the traversal is guarded by rcu_read_lock().
> > > > >  */
> > > > > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > > > > 
> > > > > is actively harmful. Why is it there?
> > > > 
> > > > For cases where common code might be invoked both from the reader
> > > > (with RCU protection) and from the updater (protected by some
> > > > lock).  This common code can then use the optional argument to
> > > > hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> > > > called with either form of protection in place.
> > > > 
> > > > This also combines with the __rcu tag used to mark RCU-protected
> > > > pointers, in which case sparse complains when a non-RCU API is applied
> > > > to these pointers, to get back to your earlier question about use of
> > > > hlist_for_each_entry_rcu() within the update-side lock.
> > > > 
> > > > But what are you seeing as actively harmful about all of this?
> > > > What should we be doing instead?
> > > 
> > > Yeah, so basically in the write-locked path hlist_for_each_entry() 
> > > generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
> > > correct?
> > 
> > Potentially yes, if the READ_ONCE() constrains the compiler.  Or not,
> > depending of course on the compiler and the surrounding code.
> > 
> > > Also, the principle of passing warning flags around is problematic - but 
> > > I can see the point in this specific case.
> > 
> > Would it help to add an hlist_for_each_entry_protected() that expected
> > RCU-protected pointers and write-side protection, analogous to
> > rcu_dereference_protected()?  Or would that expansion of the RCU API
> > outweigh any benefits?
> 
> Personally, I like keeping the same API and using the optional argument like
> we did thus preventing too many APIs / new APIs.

Would you be willing to put together a prototype patch so that people
can see exactly how it would look?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-06  3:11             ` Paul E. McKenney
@ 2019-12-08  0:08               ` Joel Fernandes
  2019-12-09  3:39                 ` Paul E. McKenney
  0 siblings, 1 reply; 19+ messages in thread
From: Joel Fernandes @ 2019-12-08  0:08 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Ingo Molnar, Peter Zijlstra, Masami Hiramatsu, Anders Roxell,
	Naveen N . Rao, Anil S Keshavamurthy, David Miller,
	Linux Kernel Mailing List

On Thu, Dec 05, 2019 at 07:11:51PM -0800, Paul E. McKenney wrote:
> On Thu, Dec 05, 2019 at 08:11:37PM -0500, Joel Fernandes wrote:
> > On Wed, Dec 04, 2019 at 08:12:39AM -0800, Paul E. McKenney wrote:
> > > On Wed, Dec 04, 2019 at 11:05:50AM +0100, Ingo Molnar wrote:
> > > > 
> > > > * Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > 
> > > > > >  * This list-traversal primitive may safely run concurrently with
> > > > > >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > > > > >  * as long as the traversal is guarded by rcu_read_lock().
> > > > > >  */
> > > > > > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > > > > > 
> > > > > > is actively harmful. Why is it there?
> > > > > 
> > > > > For cases where common code might be invoked both from the reader
> > > > > (with RCU protection) and from the updater (protected by some
> > > > > lock).  This common code can then use the optional argument to
> > > > > hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> > > > > called with either form of protection in place.
> > > > > 
> > > > > This also combines with the __rcu tag used to mark RCU-protected
> > > > > pointers, in which case sparse complains when a non-RCU API is applied
> > > > > to these pointers, to get back to your earlier question about use of
> > > > > hlist_for_each_entry_rcu() within the update-side lock.
> > > > > 
> > > > > But what are you seeing as actively harmful about all of this?
> > > > > What should we be doing instead?
> > > > 
> > > > Yeah, so basically in the write-locked path hlist_for_each_entry() 
> > > > generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
> > > > correct?
> > > 
> > > Potentially yes, if the READ_ONCE() constrains the compiler.  Or not,
> > > depending of course on the compiler and the surrounding code.
> > > 
> > > > Also, the principle of passing warning flags around is problematic - but 
> > > > I can see the point in this specific case.
> > > 
> > > Would it help to add an hlist_for_each_entry_protected() that expected
> > > RCU-protected pointers and write-side protection, analogous to
> > > rcu_dereference_protected()?  Or would that expansion of the RCU API
> > > outweigh any benefits?
> > 
> > Personally, I like keeping the same API and using the optional argument like
> > we did thus preventing too many APIs / new APIs.
> 
> Would you be willing to put together a prototype patch so that people
> can see exactly how it would look?

Hi Paul,

I was referring to the same API we have at the moment (that is
hlist_for_each_entry_rcu() with the additional cond parameter). I was saying
let us keep that and not add a hlist_for_each_entry_protected() instead, so
as to not proliferate the number of APIs.

Or did I miss the point?

thanks,

 - Joel


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-08  0:08               ` Joel Fernandes
@ 2019-12-09  3:39                 ` Paul E. McKenney
  2019-12-17 14:59                   ` Masami Hiramatsu
  0 siblings, 1 reply; 19+ messages in thread
From: Paul E. McKenney @ 2019-12-09  3:39 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Ingo Molnar, Peter Zijlstra, Masami Hiramatsu, Anders Roxell,
	Naveen N . Rao, Anil S Keshavamurthy, David Miller,
	Linux Kernel Mailing List

On Sat, Dec 07, 2019 at 07:08:42PM -0500, Joel Fernandes wrote:
> On Thu, Dec 05, 2019 at 07:11:51PM -0800, Paul E. McKenney wrote:
> > On Thu, Dec 05, 2019 at 08:11:37PM -0500, Joel Fernandes wrote:
> > > On Wed, Dec 04, 2019 at 08:12:39AM -0800, Paul E. McKenney wrote:
> > > > On Wed, Dec 04, 2019 at 11:05:50AM +0100, Ingo Molnar wrote:
> > > > > 
> > > > > * Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > > 
> > > > > > >  * This list-traversal primitive may safely run concurrently with
> > > > > > >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > > > > > >  * as long as the traversal is guarded by rcu_read_lock().
> > > > > > >  */
> > > > > > > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > > > > > > 
> > > > > > > is actively harmful. Why is it there?
> > > > > > 
> > > > > > For cases where common code might be invoked both from the reader
> > > > > > (with RCU protection) and from the updater (protected by some
> > > > > > lock).  This common code can then use the optional argument to
> > > > > > hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> > > > > > called with either form of protection in place.
> > > > > > 
> > > > > > This also combines with the __rcu tag used to mark RCU-protected
> > > > > > pointers, in which case sparse complains when a non-RCU API is applied
> > > > > > to these pointers, to get back to your earlier question about use of
> > > > > > hlist_for_each_entry_rcu() within the update-side lock.
> > > > > > 
> > > > > > But what are you seeing as actively harmful about all of this?
> > > > > > What should we be doing instead?
> > > > > 
> > > > > Yeah, so basically in the write-locked path hlist_for_each_entry() 
> > > > > generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
> > > > > correct?
> > > > 
> > > > Potentially yes, if the READ_ONCE() constrains the compiler.  Or not,
> > > > depending of course on the compiler and the surrounding code.
> > > > 
> > > > > Also, the principle of passing warning flags around is problematic - but 
> > > > > I can see the point in this specific case.
> > > > 
> > > > Would it help to add an hlist_for_each_entry_protected() that expected
> > > > RCU-protected pointers and write-side protection, analogous to
> > > > rcu_dereference_protected()?  Or would that expansion of the RCU API
> > > > outweigh any benefits?
> > > 
> > > Personally, I like keeping the same API and using the optional argument like
> > > we did thus preventing too many APIs / new APIs.
> > 
> > Would you be willing to put together a prototype patch so that people
> > can see exactly how it would look?
> 
> Hi Paul,
> 
> I was referring to the same API we have at the moment (that is
> hlist_for_each_entry_rcu() with the additional cond parameter). I was saying
> let us keep that and not add a hlist_for_each_entry_protected() instead, so
> as to not proliferate the number of APIs.
> 
> Or did I miss the point?

This would work for me.  The only concern would be inefficiency, but we
have heard from people saying that the unnecessary inefficiency is only
on code paths that they do not care about, so we should be good.

							Thanx, Paul

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-09  3:39                 ` Paul E. McKenney
@ 2019-12-17 14:59                   ` Masami Hiramatsu
  2019-12-17 18:07                     ` Joel Fernandes
  0 siblings, 1 reply; 19+ messages in thread
From: Masami Hiramatsu @ 2019-12-17 14:59 UTC (permalink / raw)
  To: paulmck
  Cc: Joel Fernandes, Ingo Molnar, Peter Zijlstra, Masami Hiramatsu,
	Anders Roxell, Naveen N . Rao, Anil S Keshavamurthy,
	David Miller, Linux Kernel Mailing List

Hi,

On Sun, 8 Dec 2019 19:39:11 -0800
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> On Sat, Dec 07, 2019 at 07:08:42PM -0500, Joel Fernandes wrote:
> > On Thu, Dec 05, 2019 at 07:11:51PM -0800, Paul E. McKenney wrote:
> > > On Thu, Dec 05, 2019 at 08:11:37PM -0500, Joel Fernandes wrote:
> > > > On Wed, Dec 04, 2019 at 08:12:39AM -0800, Paul E. McKenney wrote:
> > > > > On Wed, Dec 04, 2019 at 11:05:50AM +0100, Ingo Molnar wrote:
> > > > > > 
> > > > > > * Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > > > 
> > > > > > > >  * This list-traversal primitive may safely run concurrently with
> > > > > > > >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > > > > > > >  * as long as the traversal is guarded by rcu_read_lock().
> > > > > > > >  */
> > > > > > > > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > > > > > > > 
> > > > > > > > is actively harmful. Why is it there?
> > > > > > > 
> > > > > > > For cases where common code might be invoked both from the reader
> > > > > > > (with RCU protection) and from the updater (protected by some
> > > > > > > lock).  This common code can then use the optional argument to
> > > > > > > hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> > > > > > > called with either form of protection in place.
> > > > > > > 
> > > > > > > This also combines with the __rcu tag used to mark RCU-protected
> > > > > > > pointers, in which case sparse complains when a non-RCU API is applied
> > > > > > > to these pointers, to get back to your earlier question about use of
> > > > > > > hlist_for_each_entry_rcu() within the update-side lock.
> > > > > > > 
> > > > > > > But what are you seeing as actively harmful about all of this?
> > > > > > > What should we be doing instead?
> > > > > > 
> > > > > > Yeah, so basically in the write-locked path hlist_for_each_entry() 
> > > > > > generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
> > > > > > correct?
> > > > > 
> > > > > Potentially yes, if the READ_ONCE() constrains the compiler.  Or not,
> > > > > depending of course on the compiler and the surrounding code.
> > > > > 
> > > > > > Also, the principle of passing warning flags around is problematic - but 
> > > > > > I can see the point in this specific case.
> > > > > 
> > > > > Would it help to add an hlist_for_each_entry_protected() that expected
> > > > > RCU-protected pointers and write-side protection, analogous to
> > > > > rcu_dereference_protected()?  Or would that expansion of the RCU API
> > > > > outweigh any benefits?
> > > > 
> > > > Personally, I like keeping the same API and using the optional argument like
> > > > we did thus preventing too many APIs / new APIs.
> > > 
> > > Would you be willing to put together a prototype patch so that people
> > > can see exactly how it would look?
> > 
> > Hi Paul,
> > 
> > I was referring to the same API we have at the moment (that is
> > hlist_for_each_entry_rcu() with the additional cond parameter). I was saying
> > let us keep that and not add a hlist_for_each_entry_protected() instead, so
> > as to not proliferate the number of APIs.
> > 
> > Or did I miss the point?
> 
> This would work for me.  The only concern would be inefficiency, but we
> have heard from people saying that the unnecessary inefficiency is only
> on code paths that they do not care about, so we should be good.

So, what is the conclusion here, Ingo?

I have run into other warnings in the tracing subsystem, so I need to add
more lockdep_is_held()s there to suppress them.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe
  2019-12-17 14:59                   ` Masami Hiramatsu
@ 2019-12-17 18:07                     ` Joel Fernandes
  0 siblings, 0 replies; 19+ messages in thread
From: Joel Fernandes @ 2019-12-17 18:07 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: paulmck, Ingo Molnar, Peter Zijlstra, Anders Roxell,
	Naveen N . Rao, Anil S Keshavamurthy, David Miller,
	Linux Kernel Mailing List

On Tue, Dec 17, 2019 at 11:59:21PM +0900, Masami Hiramatsu wrote:
> Hi,
> 
> On Sun, 8 Dec 2019 19:39:11 -0800
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > On Sat, Dec 07, 2019 at 07:08:42PM -0500, Joel Fernandes wrote:
> > > On Thu, Dec 05, 2019 at 07:11:51PM -0800, Paul E. McKenney wrote:
> > > > On Thu, Dec 05, 2019 at 08:11:37PM -0500, Joel Fernandes wrote:
> > > > > On Wed, Dec 04, 2019 at 08:12:39AM -0800, Paul E. McKenney wrote:
> > > > > > On Wed, Dec 04, 2019 at 11:05:50AM +0100, Ingo Molnar wrote:
> > > > > > > 
> > > > > > > * Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > > > > 
> > > > > > > > >  * This list-traversal primitive may safely run concurrently with
> > > > > > > > >  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > > > > > > > >  * as long as the traversal is guarded by rcu_read_lock().
> > > > > > > > >  */
> > > > > > > > > #define hlist_for_each_entry_rcu(pos, head, member, cond...)            \
> > > > > > > > > 
> > > > > > > > > is actively harmful. Why is it there?
> > > > > > > > 
> > > > > > > > For cases where common code might be invoked both from the reader
> > > > > > > > (with RCU protection) and from the updater (protected by some
> > > > > > > > lock).  This common code can then use the optional argument to
> > > > > > > > hlist_for_each_entry_rcu() to truthfully tell lockdep that it might be
> > > > > > > > called with either form of protection in place.
> > > > > > > > 
> > > > > > > > This also combines with the __rcu tag used to mark RCU-protected
> > > > > > > > pointers, in which case sparse complains when a non-RCU API is applied
> > > > > > > > to these pointers, to get back to your earlier question about use of
> > > > > > > > hlist_for_each_entry_rcu() within the update-side lock.
> > > > > > > > 
> > > > > > > > But what are you seeing as actively harmful about all of this?
> > > > > > > > What should we be doing instead?
> > > > > > > 
> > > > > > > Yeah, so basically in the write-locked path hlist_for_each_entry() 
> > > > > > > generates (slightly) more efficient code than hlist_for_each_entry_rcu(), 
> > > > > > > correct?
> > > > > > 
> > > > > > Potentially yes, if the READ_ONCE() constrains the compiler.  Or not,
> > > > > > depending of course on the compiler and the surrounding code.
> > > > > > 
> > > > > > > Also, the principle of passing warning flags around is problematic - but 
> > > > > > > I can see the point in this specific case.
> > > > > > 
> > > > > > Would it help to add an hlist_for_each_entry_protected() that expected
> > > > > > RCU-protected pointers and write-side protection, analogous to
> > > > > > rcu_dereference_protected()?  Or would that expansion of the RCU API
> > > > > > outweigh any benefits?
> > > > > 
> > > > > Personally, I like keeping the same API and using the optional argument like
> > > > > we did thus preventing too many APIs / new APIs.
> > > > 
> > > > Would you be willing to put together a prototype patch so that people
> > > > can see exactly how it would look?
> > > 
> > > Hi Paul,
> > > 
> > > I was referring to the same API we have at the moment (that is
> > > hlist_for_each_entry_rcu() with the additional cond parameter). I was saying
> > > let us keep that and not add a hlist_for_each_entry_protected() instead, so
> > > as to not proliferate the number of APIs.
> > > 
> > > Or did I miss the point?
> > 
> > This would work for me.  The only concern would be inefficiency, but we
> > have heard from people saying that the unnecessary inefficiency is only
> > on code paths that they do not care about, so we should be good.
> 
> So, what will be the conclusion here, Ingo?
> 
> I faced other warnings in tracing subsystem, so I need to add more
> lockdep_is_held()s there to suppress warnings.

Please don't add a new:
hlist_for_each_entry_rcu_protected(..., lockdep_is_held(...))

Instead use the existing one in mainline:
 hlist_for_each_entry_rcu(..., lockdep_is_held(...)).

How many warnings are you facing? I think it is a good idea to add
lockdep_is_held() wherever needed to prevent false-positive warnings; it
hardens your code and prevents fireworks.

thanks,

 - Joel

> 
> Thank you,
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2019-12-17 18:07 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-02  7:32 [PATCH -tip] kprobes: Lock rcu_read_lock() while searching kprobe Masami Hiramatsu
2019-12-02 15:17 ` Anders Roxell
2019-12-02 21:08 ` Joel Fernandes
2019-12-02 22:34   ` Masami Hiramatsu
2019-12-02 23:35     ` Joel Fernandes
2019-12-03  6:02       ` Masami Hiramatsu
2019-12-03  7:13   ` Ingo Molnar
2019-12-03 17:57     ` Paul E. McKenney
2019-12-04 10:05       ` Ingo Molnar
2019-12-04 16:12         ` Paul E. McKenney
2019-12-05  4:19           ` Masami Hiramatsu
2019-12-06  1:11           ` Joel Fernandes
2019-12-06  3:11             ` Paul E. McKenney
2019-12-08  0:08               ` Joel Fernandes
2019-12-09  3:39                 ` Paul E. McKenney
2019-12-17 14:59                   ` Masami Hiramatsu
2019-12-17 18:07                     ` Joel Fernandes
2019-12-04  4:09     ` Joel Fernandes
2019-12-04  4:20       ` Joel Fernandes
