linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] smp/ipi: Minor cleanups in smp_call_function variants
@ 2013-07-05 16:26 Preeti U Murthy
  2013-07-05 16:27 ` [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask Preeti U Murthy
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-05 16:26 UTC (permalink / raw)
  To: xiaoguangrong, mingo, paulmck, linux-kernel, a.p.zijlstra
  Cc: npiggin, deepthi, peterz, rusty, heiko.carstens, udknight,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, tglx, lig.fnst, anton

This patchset removes possibly stale code overlooked by previous
cleanups. It also clarifies ambiguous comments about the deadlock
scenarios that can arise when calling smp_call_function variants, since
these were not obvious at first glance.

---

Preeti U Murthy (3):
      smp/ipi: Remove redundant cfd->cpumask_ipi mask
      smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants.
      smp/ipi:Remove check around csd lock in handler for smp_call_function variants


 kernel/smp.c |   74 ++++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 46 insertions(+), 28 deletions(-)

-- 
Signature



* [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask
  2013-07-05 16:26 [PATCH 0/3] smp/ipi: Minor cleanups in smp_call_function variants Preeti U Murthy
@ 2013-07-05 16:27 ` Preeti U Murthy
  2013-07-06  3:13   ` Wang YanQing
  2013-07-05 16:27 ` [PATCH 2/3] smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants Preeti U Murthy
  2013-07-05 16:27 ` [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for " Preeti U Murthy
  2 siblings, 1 reply; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-05 16:27 UTC (permalink / raw)
  To: xiaoguangrong, mingo, paulmck, linux-kernel, a.p.zijlstra
  Cc: npiggin, deepthi, peterz, rusty, heiko.carstens, udknight,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, tglx, lig.fnst, anton

cfd->cpumask_ipi is used only in smp_call_function_many(). The existing
comment around it says that this additional mask is used because
cfd->cpumask can get overwritten.

There is no reason why the cfd->cpumask can be overwritten, since this
is a per_cpu mask; nobody can change it but us and we are
called with preemption disabled.

Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: srivatsa.bhat@linux.vnet.ibm.com
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
---

 kernel/smp.c |   14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 4dba0f7..89be6e6 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -23,7 +23,6 @@ enum {
 struct call_function_data {
 	struct call_single_data	__percpu *csd;
 	cpumask_var_t		cpumask;
-	cpumask_var_t		cpumask_ipi;
 };
 
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data);
@@ -47,9 +46,6 @@ hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		if (!zalloc_cpumask_var_node(&cfd->cpumask, GFP_KERNEL,
 				cpu_to_node(cpu)))
 			return notifier_from_errno(-ENOMEM);
-		if (!zalloc_cpumask_var_node(&cfd->cpumask_ipi, GFP_KERNEL,
-				cpu_to_node(cpu)))
-			return notifier_from_errno(-ENOMEM);
 		cfd->csd = alloc_percpu(struct call_single_data);
 		if (!cfd->csd) {
 			free_cpumask_var(cfd->cpumask);
@@ -64,7 +60,6 @@ hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu)
 	case CPU_DEAD:
 	case CPU_DEAD_FROZEN:
 		free_cpumask_var(cfd->cpumask);
-		free_cpumask_var(cfd->cpumask_ipi);
 		free_percpu(cfd->csd);
 		break;
 #endif
@@ -410,13 +405,6 @@ void smp_call_function_many(const struct cpumask *mask,
 	if (unlikely(!cpumask_weight(cfd->cpumask)))
 		return;
 
-	/*
-	 * After we put an entry into the list, cfd->cpumask may be cleared
-	 * again when another CPU sends another IPI for a SMP function call, so
-	 * cfd->cpumask will be zero.
-	 */
-	cpumask_copy(cfd->cpumask_ipi, cfd->cpumask);
-
 	for_each_cpu(cpu, cfd->cpumask) {
 		struct call_single_data *csd = per_cpu_ptr(cfd->csd, cpu);
 		struct call_single_queue *dst =
@@ -433,7 +421,7 @@ void smp_call_function_many(const struct cpumask *mask,
 	}
 
 	/* Send a message to all CPUs in the map */
-	arch_send_call_function_ipi_mask(cfd->cpumask_ipi);
+	arch_send_call_function_ipi_mask(cfd->cpumask);
 
 	if (wait) {
 		for_each_cpu(cpu, cfd->cpumask) {



* [PATCH 2/3] smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants.
  2013-07-05 16:26 [PATCH 0/3] smp/ipi: Minor cleanups in smp_call_function variants Preeti U Murthy
  2013-07-05 16:27 ` [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask Preeti U Murthy
@ 2013-07-05 16:27 ` Preeti U Murthy
  2013-07-06  6:12   ` Wang YanQing
  2013-07-05 16:27 ` [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for " Preeti U Murthy
  2 siblings, 1 reply; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-05 16:27 UTC (permalink / raw)
  To: xiaoguangrong, mingo, paulmck, linux-kernel, a.p.zijlstra
  Cc: npiggin, deepthi, peterz, rusty, heiko.carstens, udknight,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, tglx, lig.fnst, anton

Elaborate on when deadlocks can occur when a call is made to
smp_call_function_single() and its friends. This avoids ambiguity about
when to use these calls.

Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: srivatsa.bhat@linux.vnet.ibm.com
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
---

 kernel/smp.c |   46 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 89be6e6..b6981ae 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -230,7 +230,23 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 	this_cpu = get_cpu();
 
 	/*
-	 * Can deadlock when called with interrupts disabled.
+	 * Can deadlock when called with interrupts disabled under two
+	 * different circumstances depending on the wait parameter.
+	 *
+	 * 1. wait = 1: Two CPUs execute smp_call_function_single(), send an
+	 * IPI to each other, and wait for func to finish on each other.
+	 * Since they are interrupt disabled, neither receives this IPI,
+	 * nor do they proceed forward, as they wait for each other to complete
+	 * execution of func.
+	 *
+	 * 2. wait = 0: This function could be called from an interrupt
+	 * context, and can get blocked on the csd_lock(csd) below in
+	 * "non wait cases".
+	 * This is because the percpu copy of csd of this_cpu is used
+	 * in non wait cases. Under such circumstances, if the previous caller
+	 * of this function who got preempted by this interrupt has already taken
+	 * the lock under non wait condition, it will result in deadlock.
+	 *
 	 * We allow cpu's that are not yet online though, as no one else can
 	 * send smp call function interrupt to this cpu and as such deadlocks
 	 * can't happen.
@@ -329,6 +345,16 @@ void __smp_call_function_single(int cpu, struct call_single_data *csd,
 	this_cpu = get_cpu();
 	/*
 	 * Can deadlock when called with interrupts disabled.
+	 * 1. wait = 1: Two CPUs execute smp_call_function_single(), send an
+	 * IPI to each other, and wait for func to finish on each other.
+	 * Since they are interrupt disabled, neither receives this IPI,
+	 * nor do they proceed forward, as they wait for each other to complete
+	 * execution of func.
+	 *
+	 * 2. wait = 0: A scenario similar to smp_call_function_single()
+	 * does not happen here, because each caller of
+	 * __smp_call_function_single() passes unique copies of csd.
+	 *
 	 * We allow cpu's that are not yet online though, as no one else can
 	 * send smp call function interrupt to this cpu and as such deadlocks
 	 * can't happen.
@@ -368,7 +394,23 @@ void smp_call_function_many(const struct cpumask *mask,
 	int cpu, next_cpu, this_cpu = smp_processor_id();
 
 	/*
-	 * Can deadlock when called with interrupts disabled.
+	 * Can deadlock when called with interrupts disabled under two
+	 * different circumstances depending on the wait parameter.
+	 *
+	 * 1. wait = 1: Two CPUs execute smp_call_function_single(), send an
+	 * IPI to each other, and wait for func to finish on each other.
+	 * Since they are interrupt disabled, neither receives this IPI,
+	 * nor do they proceed forward, as they wait for each other to complete
+	 * execution of func.
+	 *
+	 * 2. wait = 0: This function could be called from an interrupt
+	 * context, and can get blocked on the csd_lock(csd) below in
+	 * "non wait cases".
+	 * This is because the percpu copy of csd of this_cpu is used
+	 * in non wait cases. Under such circumstances, if the previous caller
+	 * of this function who got preempted by this interrupt has already taken
+	 * the lock under non wait condition, it will result in deadlock.
+	 *
 	 * We allow cpu's that are not yet online though, as no one else can
 	 * send smp call function interrupt to this cpu and as such deadlocks
 	 * can't happen.
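
To make case 1 in the comments above concrete, here is a hypothetical
illustration (not kernel code; do_something() and the two demo functions
are made-up names, only the smp_call_function_single() and local_irq_*()
calls are real APIs):

static void do_something(void *info)
{
	/* never runs in this scenario */
}

/* CPU 0 runs this ... */
static void cross_call_from_cpu0(void)
{
	local_irq_disable();
	/*
	 * Queues a csd for CPU 1, sends the IPI, then spins in
	 * csd_lock_wait() until CPU 1 has executed do_something().
	 */
	smp_call_function_single(1, do_something, NULL, 1);
	local_irq_enable();
}

/* ... while CPU 1 concurrently runs this. */
static void cross_call_from_cpu1(void)
{
	local_irq_disable();
	smp_call_function_single(0, do_something, NULL, 1);
	local_irq_enable();
}

With interrupts disabled on both CPUs, neither services the
call-function IPI, so do_something() never runs, neither csd is ever
unlocked, and both CPUs spin forever in csd_lock_wait(): the deadlock
described in case 1.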



* [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for smp_call_function variants
  2013-07-05 16:26 [PATCH 0/3] smp/ipi: Minor cleanups in smp_call_function variants Preeti U Murthy
  2013-07-05 16:27 ` [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask Preeti U Murthy
  2013-07-05 16:27 ` [PATCH 2/3] smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants Preeti U Murthy
@ 2013-07-05 16:27 ` Preeti U Murthy
  2013-07-06  5:45   ` Wang YanQing
  2 siblings, 1 reply; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-05 16:27 UTC (permalink / raw)
  To: xiaoguangrong, mingo, paulmck, linux-kernel, a.p.zijlstra
  Cc: npiggin, deepthi, peterz, rusty, heiko.carstens, udknight,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, tglx, lig.fnst, anton

call_single_data is always locked by all callers of
arch_send_call_function_single_ipi() or
arch_send_call_function_ipi_mask() which results in execution of
generic_call_function_interrupt() handler.

Hence remove the check for lock on csd in generic_call_function_interrupt()
handler, before unlocking it.

Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: srivatsa.bhat@linux.vnet.ibm.com
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
---

 kernel/smp.c |   14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index b6981ae..d37581a 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -181,25 +181,13 @@ void generic_smp_call_function_single_interrupt(void)
 
 	while (!list_empty(&list)) {
 		struct call_single_data *csd;
-		unsigned int csd_flags;
 
 		csd = list_entry(list.next, struct call_single_data, list);
 		list_del(&csd->list);
 
-		/*
-		 * 'csd' can be invalid after this call if flags == 0
-		 * (when called through generic_exec_single()),
-		 * so save them away before making the call:
-		 */
-		csd_flags = csd->flags;
-
 		csd->func(csd->info);
 
-		/*
-		 * Unlocked CSDs are valid through generic_exec_single():
-		 */
-		if (csd_flags & CSD_FLAG_LOCK)
-			csd_unlock(csd);
+		csd_unlock(csd);
 	}
 }
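
For reference, the csd locking helpers that this handler relies on look
roughly like this in the kernel/smp.c of this period (a sketch, not a
verbatim quote); note the WARN_ON() in csd_unlock(), which the review
discussion below comes back to:

static void csd_lock_wait(struct call_single_data *csd)
{
	/* busy-wait until the current owner clears CSD_FLAG_LOCK */
	while (csd->flags & CSD_FLAG_LOCK)
		cpu_relax();
}

static void csd_lock(struct call_single_data *csd)
{
	csd_lock_wait(csd);
	csd->flags |= CSD_FLAG_LOCK;

	/* order the flag update before later stores to csd->func/info */
	smp_mb();
}

static void csd_unlock(struct call_single_data *csd)
{
	/* unlocking a csd that is not locked is flagged as a bug */
	WARN_ON(!(csd->flags & CSD_FLAG_LOCK));

	/* make sure func/info are done with before releasing the csd */
	smp_mb();

	csd->flags &= ~CSD_FLAG_LOCK;
}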
 



* Re: [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask
  2013-07-05 16:27 ` [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask Preeti U Murthy
@ 2013-07-06  3:13   ` Wang YanQing
  2013-07-06  5:29     ` Preeti U Murthy
  0 siblings, 1 reply; 17+ messages in thread
From: Wang YanQing @ 2013-07-06  3:13 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: xiaoguangrong, mingo, paulmck, linux-kernel, a.p.zijlstra,
	npiggin, deepthi, peterz, rusty, heiko.carstens, rostedt,
	miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy, shli, tglx,
	lig.fnst, anton, torvalds, jbeulich

On Fri, Jul 05, 2013 at 09:57:01PM +0530, Preeti U Murthy wrote:
> cfd->cpumask_ipi is used only in smp_call_function_many().The existing
> comment around it says that this additional mask is used because
> cfd->cpumask can get overwritten.
> 
> There is no reason why the cfd->cpumask can be overwritten, since this
> is a per_cpu mask; nobody can change it but us and we are
> called with preemption disabled.

The changelog for f44310b98ddb7f0d06550d73ed67df5865e3eda5,
which introduced cfd->cpumask_ipi, said why we need
it:

"    As explained by Linus as well:
    
     |
     | Once we've done the "list_add_rcu()" to add it to the
     | queue, we can have (another) IPI to the target CPU that can
     | now see it and clear the mask.
     |
     | So by the time we get to actually send the IPI, the mask might
     | have been cleared by another IPI.
     |
    
    This patch also fixes a system hang problem, if the data->cpumask
    gets cleared after passing this point:
    
            if (WARN_ONCE(!mask, "empty IPI mask"))
                    return;
    
    then the problem in commit 83d349f35e1a ("x86: don't send an IPI to
    the empty set of CPU's") will happen again.
"
So this patch is wrong.

And you should Cc Linus and Jan Beulich, who gave their Acked-by tags to
the commit.

Thanks.


* Re: [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask
  2013-07-06  3:13   ` Wang YanQing
@ 2013-07-06  5:29     ` Preeti U Murthy
  2013-07-06  6:03       ` Wang YanQing
  0 siblings, 1 reply; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-06  5:29 UTC (permalink / raw)
  To: Wang YanQing, xiaoguangrong, mingo, paulmck, linux-kernel,
	a.p.zijlstra, npiggin, deepthi, peterz, rusty, heiko.carstens,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, tglx, lig.fnst, anton, torvalds, jbeulich

Hi Wang,

On 07/06/2013 08:43 AM, Wang YanQing wrote:
> On Fri, Jul 05, 2013 at 09:57:01PM +0530, Preeti U Murthy wrote:
>> cfd->cpumask_ipi is used only in smp_call_function_many().The existing
>> comment around it says that this additional mask is used because
>> cfd->cpumask can get overwritten.
>>
>> There is no reason why the cfd->cpumask can be overwritten, since this
>> is a per_cpu mask; nobody can change it but us and we are
>> called with preemption disabled.
> 
> The ChangeLog for f44310b98ddb7f0d06550d73ed67df5865e3eda5
> which import cfd->cpumask_ipi saied the reason why we need
> it:
> 
> "    As explained by Linus as well:
>     
>      |
>      | Once we've done the "list_add_rcu()" to add it to the
>      | queue, we can have (another) IPI to the target CPU that can
>      | now see it and clear the mask.
>      |
>      | So by the time we get to actually send the IPI, the mask might
>      | have been cleared by another IPI.

I am unable to understand where the cfd->cpumask of the source cpu is
getting cleared. Surely not by itself, since it is preempt disabled.
Also why should it get cleared?

The idea behind clearing a bit in the source CPU's cfd->cpumask, AFAICS,
could be that the source cpu should not send an IPI to a target that
has already received an IPI from another CPU: the target will execute
the already queued csds anyway, and hence does not need another IPI to
look at its queue.

If the above is the intention of clearing the source cpu's cfd->cpumask,
why is the mechanism not consistent with what happens in
generic_exec_single(), where an IPI is sent only if there are no
previously queued csds on the target?

Also, why is it that in the wait path of smp_call_function_many(),
cfd->cpumask continues to be used and not cfd->cpumask_ipi?
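
For reference, the wait path I mean looks roughly like this in the
current code (a sketch, not a verbatim quote):

	/* Send a message to all CPUs in the map */
	arch_send_call_function_ipi_mask(cfd->cpumask_ipi);

	if (wait) {
		for_each_cpu(cpu, cfd->cpumask) {
			struct call_single_data *csd;

			csd = per_cpu_ptr(cfd->csd, cpu);
			csd_lock_wait(csd);
		}
	}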

>      |
>     
>     This patch also fixes a system hang problem, if the data->cpumask
>     gets cleared after passing this point:
>     
>             if (WARN_ONCE(!mask, "empty IPI mask"))
>                     return;
>     
>     then the problem in commit 83d349f35e1a ("x86: don't send an IPI to
>     the empty set of CPU's") will happen again.
> "
> So this patch is wrong.
> 
> And you should cc linus and Jan Beulich who give acked-by tag to
> the commit.
> 
> Thanks.
> 

Thank you

Regards
Preeti U Murthy



* Re: [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for smp_call_function variants
  2013-07-05 16:27 ` [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for " Preeti U Murthy
@ 2013-07-06  5:45   ` Wang YanQing
  2013-07-06  8:06     ` Preeti U Murthy
  0 siblings, 1 reply; 17+ messages in thread
From: Wang YanQing @ 2013-07-06  5:45 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: xiaoguangrong, mingo, paulmck, linux-kernel, a.p.zijlstra,
	npiggin, deepthi, peterz, rusty, heiko.carstens, rostedt,
	miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy, shli, tglx,
	lig.fnst, anton

On Fri, Jul 05, 2013 at 09:57:21PM +0530, Preeti U Murthy wrote:
> call_single_data is always locked by all callers of
> arch_send_call_function_single_ipi() or
> arch_send_call_function_ipi_mask() which results in execution of
> generic_call_function_interrupt() handler.
> 
> Hence remove the check for lock on csd in generic_call_function_interrupt()
> handler, before unlocking it.

I can't find where generic_call_function_interrupt() is :)

> Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> Cc: srivatsa.bhat@linux.vnet.ibm.com
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Rusty Russell <rusty@rustcorp.com.au
> ---
> 
>  kernel/smp.c |   14 +-------------
>  1 file changed, 1 insertion(+), 13 deletions(-)
> 
> diff --git a/kernel/smp.c b/kernel/smp.c
> index b6981ae..d37581a 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -181,25 +181,13 @@ void generic_smp_call_function_single_interrupt(void)
>  
>  	while (!list_empty(&list)) {
>  		struct call_single_data *csd;
> -		unsigned int csd_flags;
>  
>  		csd = list_entry(list.next, struct call_single_data, list);
>  		list_del(&csd->list);
>  
> -		/*
> -		 * 'csd' can be invalid after this call if flags == 0
> -		 * (when called through generic_exec_single()),
> -		 * so save them away before making the call:
> -		 */
> -		csd_flags = csd->flags;
> -

You haven't mentioned this change in the changelog; don't do that.
I can't see any harm in removing csd_flags, but I hope others
will check it again.

>  		csd->func(csd->info);
>  
> -		/*
> -		 * Unlocked CSDs are valid through generic_exec_single():
> -		 */
> -		if (csd_flags & CSD_FLAG_LOCK)
> -			csd_unlock(csd);
> +		csd_unlock(csd);

I don't like this change; I think checking CSD_FLAG_LOCK
to make sure we really need csd_unlock() is good.

You can never know who will use the API, or how,
so some robust checking code is good.


* Re: [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask
  2013-07-06  5:29     ` Preeti U Murthy
@ 2013-07-06  6:03       ` Wang YanQing
  2013-07-07 16:45         ` Preeti U Murthy
  0 siblings, 1 reply; 17+ messages in thread
From: Wang YanQing @ 2013-07-06  6:03 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: xiaoguangrong, mingo, paulmck, linux-kernel, a.p.zijlstra,
	npiggin, deepthi, peterz, rusty, heiko.carstens, rostedt,
	miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy, shli, tglx,
	lig.fnst, anton, torvalds, jbeulich

On Sat, Jul 06, 2013 at 10:59:39AM +0530, Preeti U Murthy wrote:
> Hi Wang,
> 
> On 07/06/2013 08:43 AM, Wang YanQing wrote:
> > On Fri, Jul 05, 2013 at 09:57:01PM +0530, Preeti U Murthy wrote:
> >> cfd->cpumask_ipi is used only in smp_call_function_many().The existing
> >> comment around it says that this additional mask is used because
> >> cfd->cpumask can get overwritten.
> >>
> >> There is no reason why the cfd->cpumask can be overwritten, since this
> >> is a per_cpu mask; nobody can change it but us and we are
> >> called with preemption disabled.
> > 
> > The ChangeLog for f44310b98ddb7f0d06550d73ed67df5865e3eda5
> > which import cfd->cpumask_ipi saied the reason why we need
> > it:
> > 
> > "    As explained by Linus as well:
> >     
> >      |
> >      | Once we've done the "list_add_rcu()" to add it to the
> >      | queue, we can have (another) IPI to the target CPU that can
> >      | now see it and clear the mask.
> >      |
> >      | So by the time we get to actually send the IPI, the mask might
> >      | have been cleared by another IPI.
> 
> I am unable to understand where the cfd->cpumask of the source cpu is
> getting cleared. Surely not by itself, since it is preempt disabled.
> Also why should it get cleared?

Assume we have three CPUs: A, B and C.

A calls smp_call_function_many() to ask C to do something, and it has
currently finished executing the code below:

"for_each_cpu(cpu, cfd->cpumask) {
                struct call_single_data *csd = per_cpu_ptr(cfd->csd, cpu);
                struct call_single_queue *dst =
                                        &per_cpu(call_single_queue, cpu);
                unsigned long flags;

                csd_lock(csd);
                csd->func = func;
                csd->info = info;

                raw_spin_lock_irqsave(&dst->lock, flags);
                list_add_tail(&csd->list, &dst->list);
                raw_spin_unlock_irqrestore(&dst->lock, flags);
        }
"
You see "list_add_tail(&csd->list, &dst->list);": it passes the address
of the csd, and A stops before calling arch_send_call_function_ipi_mask()
because of an interrupt.

At this time B also sends an IPI to C, so C will see the csd passed by A,
and C will clear itself in A's cfd->cpumask.
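
To spell out the interleaving (an illustrative timeline only, following
the handler behaviour that the f44310b98ddb changelog describes, where
the target clears its bit in the sender's mask):

	CPU A                            CPU B           CPU C
	-----                            -----           -----
	csd_lock(csd for C)
	list_add_tail(csd, C's queue)
	<interrupted before the IPI>
	                                 sends IPI to C
	                                                  handler runs, sees
	                                                  A's queued csd too,
	                                                  clears C's bit in
	                                                  A's cfd->cpumask
	<resumes>
	arch_send_call_function_ipi_mask(cfd->cpumask)
	  -> the mask may now be empty: the "empty IPI mask" warning/hang
	     mentioned in the changelog quoted above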

Thanks.



* Re: [PATCH 2/3] smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants.
  2013-07-05 16:27 ` [PATCH 2/3] smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants Preeti U Murthy
@ 2013-07-06  6:12   ` Wang YanQing
  2013-07-06  7:48     ` Preeti U Murthy
  0 siblings, 1 reply; 17+ messages in thread
From: Wang YanQing @ 2013-07-06  6:12 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: xiaoguangrong, mingo, paulmck, linux-kernel, a.p.zijlstra,
	npiggin, deepthi, peterz, rusty, heiko.carstens, rostedt,
	miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy, shli, tglx,
	lig.fnst, anton

On Fri, Jul 05, 2013 at 09:57:11PM +0530, Preeti U Murthy wrote:
> Elaborate on when deadlocks can occur when a call is made to
> smp_call_function_single() and its friends. This avoids ambiguity about
> when to use these calls.
> 
> Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> Cc: srivatsa.bhat@linux.vnet.ibm.com
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Rusty Russell <rusty@rustcorp.com.au
> ---
> 
>  kernel/smp.c |   46 ++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 44 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 89be6e6..b6981ae 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -230,7 +230,23 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
>  	this_cpu = get_cpu();
>  
>  	/*
> -	 * Can deadlock when called with interrupts disabled.
> +	 * Can deadlock when called with interrupts disabled under two
> +	 * different circumstances depending on the wait parameter.
> +	 *
> +	 * 1. wait = 1: Two CPUs execute smp_call_function_single(), send an
> +	 * IPI to each other, and wait for func to finish on each other.
> +	 * Since they are interrupt disabled, neither receives this IPI,
> +	 * nor do they proceed forward,as they wait for each other to complete
> +	 * execution of func.
> +	 *

Yes, we should avoid this situation, but I am not sure whether this is 
the meaning of "deadlock" in the original comment.

> +	 * 2. wait = 0: This function could be called from an interrupt
> +	 * context, and can get blocked on the csd_lock(csd) below in
> +	 * "non wait cases".
> +	 * This is because the percpu copy of csd of this_cpu is used
> +	 * in non wait cases. Under such circumstances, if the previous caller
> +	 * of this function who got preempted by this interrupt has already taken
> +	 * the lock under non wait condition, it will result in deadlock.
> +	 *

No, it will not cause a deadlock; it is not a mutex lock, it is a busy
wait, so when CSD_FLAG_LOCK is cleared, the code will go on running.

After staring into kernel/smp.c, I still can't work out the exact
meaning of the "deadlock" in the original comment either.

I hope someone can clarify it.

Thanks.



* Re: [PATCH 2/3] smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants.
  2013-07-06  6:12   ` Wang YanQing
@ 2013-07-06  7:48     ` Preeti U Murthy
  2013-07-06 19:48       ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-06  7:48 UTC (permalink / raw)
  To: Wang YanQing, xiaoguangrong, mingo, paulmck, linux-kernel,
	a.p.zijlstra, npiggin, deepthi, peterz, rusty, heiko.carstens,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, tglx, lig.fnst, anton

Hi Wang,

On 07/06/2013 11:42 AM, Wang YanQing wrote:
> On Fri, Jul 05, 2013 at 09:57:11PM +0530, Preeti U Murthy wrote:
>> Elaborate on when deadlocks can occur when a call is made to
>> smp_call_function_single() and its friends. This avoids ambiguity about
>> when to use these calls.
>>
>> +	 * 2. wait = 0: This function could be called from an interrupt
>> +	 * context, and can get blocked on the csd_lock(csd) below in
>> +	 * "non wait cases".
>> +	 * This is because the percpu copy of csd of this_cpu is used
>> +	 * in non wait cases. Under such circumstances, if the previous caller
>> +	 * of this function who got preempted by this interrupt has already taken
>> +	 * the lock under non wait condition, it will result in deadlock.
>> +	 *
> 
> No, it will not cause deadlock, it is not mutex lock,  it is busy wait, so
> when the CSD_FLAG_LOCK be cleared, the code will go on running.

A deadlock might not result, but a potential long wait in an interrupt
context could result if the source cpu got preempted by an interrupt
between csd_lock(csd) and generic_exec_single(), where it actually
sends an ipi to the target cpu.

In such a scenario, if no other cpu has sent an smp_call_function IPI
to the target, the target will not check its queue until such an IPI is
sent, and as a result it will not release the csd on which the source
cpu is waiting.

Hence, on the source cpu, the interrupt handler will have to wait till
then. It would therefore be good to issue a warning under this
circumstance. Maybe we can modify the changelog to reflect this scenario.
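
Roughly, the window in smp_call_function_single() that I am referring to
(a sketch of the current code, not verbatim):

		csd = &__get_cpu_var(csd_data);	/* wait = 0: per-cpu csd */

		csd_lock(csd);			/* CSD_FLAG_LOCK is now set */

		/*
		 * An interrupt arriving here, whose handler calls
		 * smp_call_function_single(..., wait = 0) on this same cpu,
		 * will spin in csd_lock_wait() on this very csd ...
		 */

		csd->func = func;
		csd->info = info;
		generic_exec_single(cpu, csd, wait);
		/* ... which is only queued and the IPI sent here */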

> 
> After stare into the kernel/smp.c, I can't catch what the exactly meaning
> of the "DeadLock" in the original comment also.
> 
> I hope someone can clarify it.
> 
> Thanks.
> 

Regards
Preeti U Murthy



* Re: [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for smp_call_function variants
  2013-07-06  5:45   ` Wang YanQing
@ 2013-07-06  8:06     ` Preeti U Murthy
  2013-07-06 14:21       ` Wang YanQing
  0 siblings, 1 reply; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-06  8:06 UTC (permalink / raw)
  To: Wang YanQing, xiaoguangrong, mingo, paulmck, linux-kernel,
	a.p.zijlstra, npiggin, deepthi, peterz, rusty, heiko.carstens,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, tglx, lig.fnst, anton

On 07/06/2013 11:15 AM, Wang YanQing wrote:
> On Fri, Jul 05, 2013 at 09:57:21PM +0530, Preeti U Murthy wrote:
>> call_single_data is always locked by all callers of
>> arch_send_call_function_single_ipi() or
>> arch_send_call_function_ipi_mask() which results in execution of
>> generic_call_function_interrupt() handler.
>>
>> Hence remove the check for lock on csd in generic_call_function_interrupt()
>> handler, before unlocking it.
> 
> I can't find where is the generic_call_function_interrupt :)

Sorry about this error :)
> 
>> Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
>> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
>> Cc: Ingo Molnar <mingo@elte.hu>
>> Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
>> Cc: srivatsa.bhat@linux.vnet.ibm.com
>> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>> Cc: Steven Rostedt <rostedt@goodmis.org>
>> Cc: Rusty Russell <rusty@rustcorp.com.au
>> ---
>>
>>  kernel/smp.c |   14 +-------------
>>  1 file changed, 1 insertion(+), 13 deletions(-)
>>
>> diff --git a/kernel/smp.c b/kernel/smp.c
>> index b6981ae..d37581a 100644
>> --- a/kernel/smp.c
>> +++ b/kernel/smp.c
>> @@ -181,25 +181,13 @@ void generic_smp_call_function_single_interrupt(void)
>>  
>>  	while (!list_empty(&list)) {
>>  		struct call_single_data *csd;
>> -		unsigned int csd_flags;
>>  
>>  		csd = list_entry(list.next, struct call_single_data, list);
>>  		list_del(&csd->list);
>>  
>> -		/*
>> -		 * 'csd' can be invalid after this call if flags == 0
>> -		 * (when called through generic_exec_single()),
>> -		 * so save them away before making the call:
>> -		 */
>> -		csd_flags = csd->flags;
>> -
> 
> You haven't mention this change in the ChangeLog, don't do it.

Right, I will include it in the changelog.

> I can't see any harm to remove csd_flags, but I hope others
> check it again.
> 
>>  		csd->func(csd->info);
>>  
>> -		/*
>> -		 * Unlocked CSDs are valid through generic_exec_single():
>> -		 */
>> -		if (csd_flags & CSD_FLAG_LOCK)
>> -			csd_unlock(csd);
>> +		csd_unlock(csd);
> 
> I don't like this change, I think check CSD_FLAG_LOCK 
> to make sure we really need csd_unlock is good.

Ideally it should be under a WARN_ON(). csd_unlock() has that WARN_ON().
Unlocking a parameter which is not locked should be seen as a bug, which
the above code is not doing. In fact it avoids it being reported as a bug.

> 
> Just like you can't know who and how people will use the
> API, so some robust check code is good.
> 

Regards
Preeti U Murthy



* Re: [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for smp_call_function variants
  2013-07-06  8:06     ` Preeti U Murthy
@ 2013-07-06 14:21       ` Wang YanQing
  2013-07-07 16:23         ` Preeti U Murthy
  0 siblings, 1 reply; 17+ messages in thread
From: Wang YanQing @ 2013-07-06 14:21 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: mingo, paulmck, linux-kernel, a.p.zijlstra, deepthi, peterz,
	rusty, heiko.carstens, rostedt, miltonm, srivatsa.bhat, tj, akpm,
	svaidy, shli, tglx, lig.fnst, anton

On Sat, Jul 06, 2013 at 01:36:27PM +0530, Preeti U Murthy wrote:
> Ideally it should be under a WARN_ON(). csd_unlock() has that WARN_ON().
> Unlocking a parameter which is not locked should be seen as a bug, which
> the above code is not doing. In fact it avoids it being reported as a bug.

Although I know what you mean, just look at the comment in the code:

"
 /*
  * Unlocked CSDs are valid through generic_exec_single():
  */
"

If the csd doesn't come from generic_exec_single(), then an unlocked
csd may not be valid. So we check CSD_FLAG_LOCK to avoid triggering
the WARN_ON in csd_unlock().

generic_exec_single's name implies it is the generic version; you know,
maybe we will have a "special" version some day.

Thanks.


* Re: [PATCH 2/3] smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants.
  2013-07-06  7:48     ` Preeti U Murthy
@ 2013-07-06 19:48       ` Thomas Gleixner
  2013-07-07 16:29         ` Preeti U Murthy
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2013-07-06 19:48 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: Wang YanQing, xiaoguangrong, mingo, paulmck, linux-kernel,
	a.p.zijlstra, npiggin, deepthi, peterz, rusty, heiko.carstens,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, lig.fnst, anton

On Sat, 6 Jul 2013, Preeti U Murthy wrote:

> Hi Wang,
> 
> On 07/06/2013 11:42 AM, Wang YanQing wrote:
> > On Fri, Jul 05, 2013 at 09:57:11PM +0530, Preeti U Murthy wrote:
> >> Elaborate on when deadlocks can occur when a call is made to
> >> smp_call_function_single() and its friends. This avoids ambiguity about
> >> when to use these calls.
> >>
> >> +	 * 2. wait = 0: This function could be called from an interrupt
> >> +	 * context, and can get blocked on the csd_lock(csd) below in
> >> +	 * "non wait cases".
> >> +	 * This is because the percpu copy of csd of this_cpu is used
> >> +	 * in non wait cases. Under such circumstances, if the previous caller
> >> +	 * of this function who got preempted by this interrupt has already taken
> >> +	 * the lock under non wait condition, it will result in deadlock.
> >> +	 *
> > 
> > No, it will not cause deadlock, it is not mutex lock,  it is busy wait, so
> > when the CSD_FLAG_LOCK be cleared, the code will go on running.
> 
> A deadlock might not result, but a potential long wait in an interrupt
> context could result if the source cpu got preempted by an interrupt
> between  csd_lock(csd) and generic_exec_single(), where it actually
> sends an ipi to the target cpu.

See https://lkml.org/lkml/2013/7/5/183 and the related thread for real
deadlock scenarios.
 
Thanks,

	tglx


* Re: [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for smp_call_function variants
  2013-07-06 14:21       ` Wang YanQing
@ 2013-07-07 16:23         ` Preeti U Murthy
  2013-07-07 17:25           ` Wang YanQing
  0 siblings, 1 reply; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-07 16:23 UTC (permalink / raw)
  To: Wang YanQing, mingo, paulmck, linux-kernel, a.p.zijlstra,
	deepthi, peterz, rusty, heiko.carstens, rostedt, miltonm,
	srivatsa.bhat, tj, akpm, svaidy, shli, tglx, lig.fnst, anton

Hi Wang,

On 07/06/2013 07:51 PM, Wang YanQing wrote:
> On Sat, Jul 06, 2013 at 01:36:27PM +0530, Preeti U Murthy wrote:
>> Ideally it should be under a WARN_ON(). csd_unlock() has that WARN_ON().
>> Unlocking a parameter which is not locked should be seen as a bug, which
>> the above code is not doing. In fact it avoids it being reported as a bug.
> 
> Although I know what's your meaning, but just like the comment in code:
> 
> "
>  /*                                                                                               
>   * Unlocked CSDs are valid through generic_exec_single():                                        
>   */

I don't understand this comment. All callers of generic_exec_single()
take the csd lock. So where is the scenario in which csds are unlocked
in generic_exec_single() before the call to
arch_send_call_function_single_ipi() is made?
Rather, what is the above comment trying to say?
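
For reference, generic_exec_single() as I read it (a sketch, not
verbatim):

static void generic_exec_single(int cpu, struct call_single_data *csd,
				int wait)
{
	struct call_single_queue *dst = &per_cpu(call_single_queue, cpu);
	unsigned long flags;
	int ipi;

	raw_spin_lock_irqsave(&dst->lock, flags);
	ipi = list_empty(&dst->list);
	list_add_tail(&csd->list, &dst->list);
	raw_spin_unlock_irqrestore(&dst->lock, flags);

	/* only the first csd added to an empty queue triggers an IPI */
	if (ipi)
		arch_send_call_function_single_ipi(cpu);

	if (wait)
		csd_lock_wait(csd);
}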

> "
> 
> If the csd don't come from generic_exec_single, then
> Unlocked CSDs maybe are not valid. So we check CSD_FLAG_LOCK
> to avoid trigger the WARN_ON in csd_unlock.
> 
> Genric_exec_single's name imply it is a generic version,
> you know, maybe we will have "special" version.
> 
> Thanks.
> 

Regards
Preeti U Murthy



* Re: [PATCH 2/3] smp/ipi:Clarify ambiguous comments around deadlock scenarios in smp_call_function variants.
  2013-07-06 19:48       ` Thomas Gleixner
@ 2013-07-07 16:29         ` Preeti U Murthy
  0 siblings, 0 replies; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-07 16:29 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Wang YanQing, xiaoguangrong, mingo, paulmck, linux-kernel,
	a.p.zijlstra, npiggin, deepthi, peterz, rusty, heiko.carstens,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, lig.fnst, anton

Thanks for the pointer Thomas :)

Regards
Preeti U murthy
On 07/07/2013 01:18 AM, Thomas Gleixner wrote:
> On Sat, 6 Jul 2013, Preeti U Murthy wrote:
> 
>> Hi Wang,
>>
>> On 07/06/2013 11:42 AM, Wang YanQing wrote:
>>> On Fri, Jul 05, 2013 at 09:57:11PM +0530, Preeti U Murthy wrote:
>>>> Elaborate on when deadlocks can occur when a call is made to
>>>> smp_call_function_single() and its friends. This avoids ambiguity about
>>>> when to use these calls.
>>>>
>>>> +	 * 2. wait = 0: This function could be called from an interrupt
>>>> +	 * context, and can get blocked on the csd_lock(csd) below in
>>>> +	 * "non wait cases".
>>>> +	 * This is because the percpu copy of csd of this_cpu is used
>>>> +	 * in non wait cases. Under such circumstances, if the previous caller
>>>> +	 * of this function who got preempted by this interrupt has already taken
>>>> +	 * the lock under non wait condition, it will result in deadlock.
>>>> +	 *
>>>
>>> No, it will not cause deadlock, it is not mutex lock,  it is busy wait, so
>>> when the CSD_FLAG_LOCK be cleared, the code will go on running.
>>
>> A deadlock might not result, but a potential long wait in an interrupt
>> context could result if the source cpu got preempted by an interrupt
>> between  csd_lock(csd) and generic_exec_single(), where it actually
>> sends an ipi to the target cpu.
> 
> See https://lkml.org/lkml/2013/7/5/183 and the related thread for real
> deadlock scenarios.
> 
> Thanks,
> 
> 	tglx
> 



* Re: [PATCH 1/3] smp/ipi: Remove redundant cfd->cpumask_ipi mask
  2013-07-06  6:03       ` Wang YanQing
@ 2013-07-07 16:45         ` Preeti U Murthy
  0 siblings, 0 replies; 17+ messages in thread
From: Preeti U Murthy @ 2013-07-07 16:45 UTC (permalink / raw)
  To: Wang YanQing, xiaoguangrong, mingo, paulmck, linux-kernel,
	a.p.zijlstra, npiggin, deepthi, peterz, rusty, heiko.carstens,
	rostedt, miltonm, srivatsa.bhat, jens.axboe, tj, akpm, svaidy,
	shli, tglx, lig.fnst, anton, torvalds, jbeulich

Hi Wang,

On 07/06/2013 11:33 AM, Wang YanQing wrote:
> On Sat, Jul 06, 2013 at 10:59:39AM +0530, Preeti U Murthy wrote:
>> Hi Wang,
>>
>> On 07/06/2013 08:43 AM, Wang YanQing wrote:
>>> On Fri, Jul 05, 2013 at 09:57:01PM +0530, Preeti U Murthy wrote:
>>>> cfd->cpumask_ipi is used only in smp_call_function_many().The existing
>>>> comment around it says that this additional mask is used because
>>>> cfd->cpumask can get overwritten.
>>>>
>>>> There is no reason why the cfd->cpumask can be overwritten, since this
>>>> is a per_cpu mask; nobody can change it but us and we are
>>>> called with preemption disabled.
>>>
>>> The ChangeLog for f44310b98ddb7f0d06550d73ed67df5865e3eda5
>>> which import cfd->cpumask_ipi saied the reason why we need
>>> it:
>>>
>>> "    As explained by Linus as well:
>>>     
>>>      |
>>>      | Once we've done the "list_add_rcu()" to add it to the
>>>      | queue, we can have (another) IPI to the target CPU that can
>>>      | now see it and clear the mask.
>>>      |
>>>      | So by the time we get to actually send the IPI, the mask might
>>>      | have been cleared by another IPI.
>>
>> I am unable to understand where the cfd->cpumask of the source cpu is
>> getting cleared. Surely not by itself, since it is preempt disabled.
>> Also why should it get cleared?
> 
> Assume we have three CPUs: A,B,C
> 
> A call smp_call_function_many to notify C do something,
> and current it execute on finished below codes:
> 
> "for_each_cpu(cpu, cfd->cpumask) {
>                 struct call_single_data *csd = per_cpu_ptr(cfd->csd, cpu);
>                 struct call_single_queue *dst =
>                                         &per_cpu(call_single_queue, cpu);
>                 unsigned long flags;
> 
>                 csd_lock(csd);
>                 csd->func = func;
>                 csd->info = info;
> 
>                 raw_spin_lock_irqsave(&dst->lock, flags);
>                 list_add_tail(&csd->list, &dst->list);
>                 raw_spin_unlock_irqrestore(&dst->lock, flags);
>         }
> "
> You see "list_add_tail(&csd->list, &dst->list);", it pass the address of csd,
> and A stop before call arch_send_call_function_ipi_mask due interrupt.
> 
> At this time B send ipi to C also, then C will see the csd passed by A,
> then C will clear itself in the cfd->cpumask.

Ah ok! Thank you very much for this clarification :)

Regards
Preeti U Murthy



* Re: [PATCH 3/3] smp/ipi:Remove check around csd lock in handler for smp_call_function variants
  2013-07-07 16:23         ` Preeti U Murthy
@ 2013-07-07 17:25           ` Wang YanQing
  0 siblings, 0 replies; 17+ messages in thread
From: Wang YanQing @ 2013-07-07 17:25 UTC (permalink / raw)
  To: Preeti U Murthy
  Cc: mingo, paulmck, linux-kernel, a.p.zijlstra, deepthi, peterz,
	rusty, heiko.carstens, rostedt, miltonm, srivatsa.bhat, tj, akpm,
	svaidy, shli, tglx, lig.fnst, anton

On Sun, Jul 07, 2013 at 09:53:48PM +0530, Preeti U Murthy wrote:
> > "
> >  /*                                                                                               
> >   * Unlocked CSDs are valid through generic_exec_single():                                        
> >   */
> 
> I don't understand this comment. All callers of generic_exec_single()
> take the csd lock. So where is the scenario of csds being unlocked in
> generic_exec_single() before the call to
> arch_send_call_function_single_ipi() is made?
>   Rather what is the above comment trying to say?

I have given the answer to this question in my last reply.

I don't know whether it is right to make the kind of assumption
you are making here:

find all the current API users, and drop all the robustness code,
regardless of unpredictable future users.

OK, I know the balance between "robustness" and "performance":
robustness checks bring a performance penalty in the fastest
code paths, but that penalty is often negligible on a
modern CPU.

I will leave it to the maintainer's decision whether to accept this
change or not.

Thanks.




