* [PATCH v2] x86, mm: only wait for flushes from online cpus
@ 2012-06-22 22:06 Mandeep Singh Baines
  2012-07-18 18:51 ` Mandeep Singh Baines
  2012-07-18 21:17 ` Srivatsa S. Bhat
  0 siblings, 2 replies; 5+ messages in thread
From: Mandeep Singh Baines @ 2012-06-22 22:06 UTC (permalink / raw)
  To: Ingo Molnar, linux-kernel, Shaohua Li, Yinghai Lu
  Cc: Mandeep Singh Baines, Thomas Gleixner, H. Peter Anvin, x86,
	Tejun Heo, Andrew Morton, Stephen Rothwell, Christoph Lameter,
	Olof Johansson

A cpu in the mm_cpumask could go offline before we send the invalidate
IPI, causing us to wait forever. Avoid this by waiting only for online
cpus.

We are seeing a softlockup report during shutdown. The stack
trace shows that we are inside default_send_IPI_mask_logical:

 BUG: soft lockup - CPU#0 stuck for 11s! [lmt-udev:23605]
 Pid: 23605, comm: lmt-udev Tainted: G        WC   3.2.7 #1
 EIP: 0060:[<8101eec6>] EFLAGS: 00000202 CPU: 0
 EIP is at flush_tlb_others_ipi+0x8a/0xba
 Call Trace:
  [<8101f0bb>] flush_tlb_mm+0x5e/0x62
  [<8101e36c>] pud_populate+0x2c/0x31
  [<8101e409>] pgd_alloc+0x98/0xc7
  [<8102c881>] mm_init.isra.38+0xcc/0xf3
  [<8102cbc2>] dup_mm+0x68/0x34e
  [<8139bbae>] ? _cond_resched+0xd/0x21
  [<810a5b7c>] ? kmem_cache_alloc+0x26/0xe2
  [<8102d421>] ? copy_process+0x556/0xda6
  [<8102d641>] copy_process+0x776/0xda6
  [<8102dd5e>] do_fork+0xcb/0x1d4
  [<810a8c96>] ? do_sync_write+0xd3/0xd3
  [<810a94ab>] ? vfs_read+0x95/0xa2
  [<81008850>] sys_clone+0x20/0x25
  [<8139d8c5>] ptregs_clone+0x15/0x30
  [<8139d7f7>] ? sysenter_do_call+0x12/0x26

Before the softlockup, we see the following kernel warning:

 WARNING: at ../../arch/x86/kernel/apic/ipi.c:113 default_send_IPI_mask_logical+0x58/0x73()
 Pid: 23605, comm: lmt-udev Tainted: G         C   3.2.7 #1
 Call Trace:
  [<8102e666>] warn_slowpath_common+0x68/0x7d
  [<81016c36>] ? default_send_IPI_mask_logical+0x58/0x73
  [<8102e68f>] warn_slowpath_null+0x14/0x18
  [<81016c36>] default_send_IPI_mask_logical+0x58/0x73
  [<8101eec2>] flush_tlb_others_ipi+0x86/0xba
  [<8101f0bb>] flush_tlb_mm+0x5e/0x62
  [<8101e36c>] pud_populate+0x2c/0x31
  [<8101e409>] pgd_alloc+0x98/0xc7
  [<8102c881>] mm_init.isra.38+0xcc/0xf3
  [<8102cbc2>] dup_mm+0x68/0x34e
  [<8139bbae>] ? _cond_resched+0xd/0x21
  [<810a5b7c>] ? kmem_cache_alloc+0x26/0xe2
  [<8102d421>] ? copy_process+0x556/0xda6
  [<8102d641>] copy_process+0x776/0xda6
  [<8102dd5e>] do_fork+0xcb/0x1d4
  [<810a8c96>] ? do_sync_write+0xd3/0xd3
  [<810a94ab>] ? vfs_read+0x95/0xa2
  [<81008850>] sys_clone+0x20/0x25
  [<8139d8c5>] ptregs_clone+0x15/0x30
  [<8139d7f7>] ? sysenter_do_call+0x12/0x26

So we are sending an IPI to a cpu which is now offline. Once a cpu is offline,
it will no longer respond to IPIs. This explains the softlockup.

Addresses http://crosbug.com/31737

Changes in V2:
  * bitmap_and is not atomic, so use a temporary bitmask

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Christoph Lameter <cl@gentwo.org>
Cc: Olof Johansson <olofj@chromium.org>
---
 arch/x86/mm/tlb.c |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d6c0418..231a0b9 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -185,6 +185,8 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
 	f->flush_mm = mm;
 	f->flush_va = va;
 	if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
+		DECLARE_BITMAP(tmp_cpumask, NR_CPUS);
+
 		/*
 		 * We have to send the IPI only to
 		 * CPUs affected.
@@ -192,8 +194,13 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
 		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
 			      INVALIDATE_TLB_VECTOR_START + sender);
 
-		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
+		/* Only wait for online cpus */
+		do {
+			cpumask_and(to_cpumask(tmp_cpumask),
+				    to_cpumask(f->flush_cpumask),
+				    cpu_online_mask);
 			cpu_relax();
+		} while (!cpumask_empty(to_cpumask(tmp_cpumask)));
 	}
 
 	f->flush_mm = NULL;
-- 
1.7.7.3



* Re: [PATCH v2] x86, mm: only wait for flushes from online cpus
  2012-06-22 22:06 [PATCH v2] x86, mm: only wait for flushes from online cpus Mandeep Singh Baines
@ 2012-07-18 18:51 ` Mandeep Singh Baines
  2012-07-18 21:17 ` Srivatsa S. Bhat
  1 sibling, 0 replies; 5+ messages in thread
From: Mandeep Singh Baines @ 2012-07-18 18:51 UTC (permalink / raw)
  To: Ingo Molnar, linux-kernel, Shaohua Li, Yinghai Lu
  Cc: Mandeep Singh Baines, Thomas Gleixner, H. Peter Anvin, x86,
	Tejun Heo, Andrew Morton, Stephen Rothwell, Christoph Lameter,
	Olof Johansson

On Fri, Jun 22, 2012 at 3:06 PM, Mandeep Singh Baines <msb@chromium.org> wrote:
> A cpu in the mm_cpumask could go offline before we send the invalidate
> IPI, causing us to wait forever. Avoid this by waiting only for online
> cpus.
>
> We are seeing a softlockup report during shutdown. The stack
> trace shows that we are inside default_send_IPI_mask_logical:
>

I can confirm that after making this change, we no longer see this crash.

>  BUG: soft lockup - CPU#0 stuck for 11s! [lmt-udev:23605]
>  Pid: 23605, comm: lmt-udev Tainted: G        WC   3.2.7 #1
>  EIP: 0060:[<8101eec6>] EFLAGS: 00000202 CPU: 0
>  EIP is at flush_tlb_others_ipi+0x8a/0xba
>  Call Trace:
>   [<8101f0bb>] flush_tlb_mm+0x5e/0x62
>   [<8101e36c>] pud_populate+0x2c/0x31
>   [<8101e409>] pgd_alloc+0x98/0xc7
>   [<8102c881>] mm_init.isra.38+0xcc/0xf3
>   [<8102cbc2>] dup_mm+0x68/0x34e
>   [<8139bbae>] ? _cond_resched+0xd/0x21
>   [<810a5b7c>] ? kmem_cache_alloc+0x26/0xe2
>   [<8102d421>] ? copy_process+0x556/0xda6
>   [<8102d641>] copy_process+0x776/0xda6
>   [<8102dd5e>] do_fork+0xcb/0x1d4
>   [<810a8c96>] ? do_sync_write+0xd3/0xd3
>   [<810a94ab>] ? vfs_read+0x95/0xa2
>   [<81008850>] sys_clone+0x20/0x25
>   [<8139d8c5>] ptregs_clone+0x15/0x30
>   [<8139d7f7>] ? sysenter_do_call+0x12/0x26
>
> Before the softlockup, we see the following kernel warning:
>
>  WARNING: at ../../arch/x86/kernel/apic/ipi.c:113 default_send_IPI_mask_logical+0x58/0x73()
>  Pid: 23605, comm: lmt-udev Tainted: G         C   3.2.7 #1
>  Call Trace:
>   [<8102e666>] warn_slowpath_common+0x68/0x7d
>   [<81016c36>] ? default_send_IPI_mask_logical+0x58/0x73
>   [<8102e68f>] warn_slowpath_null+0x14/0x18
>   [<81016c36>] default_send_IPI_mask_logical+0x58/0x73
>   [<8101eec2>] flush_tlb_others_ipi+0x86/0xba
>   [<8101f0bb>] flush_tlb_mm+0x5e/0x62
>   [<8101e36c>] pud_populate+0x2c/0x31
>   [<8101e409>] pgd_alloc+0x98/0xc7
>   [<8102c881>] mm_init.isra.38+0xcc/0xf3
>   [<8102cbc2>] dup_mm+0x68/0x34e
>   [<8139bbae>] ? _cond_resched+0xd/0x21
>   [<810a5b7c>] ? kmem_cache_alloc+0x26/0xe2
>   [<8102d421>] ? copy_process+0x556/0xda6
>   [<8102d641>] copy_process+0x776/0xda6
>   [<8102dd5e>] do_fork+0xcb/0x1d4
>   [<810a8c96>] ? do_sync_write+0xd3/0xd3
>   [<810a94ab>] ? vfs_read+0x95/0xa2
>   [<81008850>] sys_clone+0x20/0x25
>   [<8139d8c5>] ptregs_clone+0x15/0x30
>   [<8139d7f7>] ? sysenter_do_call+0x12/0x26
>
> So we are sending an IPI to a cpu which is now offline. Once a cpu is offline,
> it will no longer respond to IPIs. This explains the softlockup.
>
> Addresses http://crosbug.com/31737
>
> Changes in V2:
>   * bitmap_and is not atomic so use a temporary bitmask
>
> Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: x86@kernel.org
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Stephen Rothwell <sfr@canb.auug.org.au>
> Cc: Christoph Lameter <cl@gentwo.org>
> Cc: Olof Johansson <olofj@chromium.org>
> ---
>  arch/x86/mm/tlb.c |    9 ++++++++-
>  1 files changed, 8 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index d6c0418..231a0b9 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -185,6 +185,8 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>         f->flush_mm = mm;
>         f->flush_va = va;
>         if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
> +               DECLARE_BITMAP(tmp_cpumask, NR_CPUS);
> +
>                 /*
>                  * We have to send the IPI only to
>                  * CPUs affected.
> @@ -192,8 +194,13 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>                 apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
>                               INVALIDATE_TLB_VECTOR_START + sender);
>
> -               while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
> +               /* Only wait for online cpus */
> +               do {
> +                       cpumask_and(to_cpumask(tmp_cpumask),
> +                                   to_cpumask(f->flush_cpumask),
> +                                   cpu_online_mask);
>                         cpu_relax();
> +               } while (!cpumask_empty(to_cpumask(tmp_cpumask)));
>         }
>
>         f->flush_mm = NULL;
> --
> 1.7.7.3
>


* Re: [PATCH v2] x86, mm: only wait for flushes from online cpus
  2012-06-22 22:06 [PATCH v2] x86, mm: only wait for flushes from online cpus Mandeep Singh Baines
  2012-07-18 18:51 ` Mandeep Singh Baines
@ 2012-07-18 21:17 ` Srivatsa S. Bhat
  2012-07-18 22:13   ` Mandeep Singh Baines
  1 sibling, 1 reply; 5+ messages in thread
From: Srivatsa S. Bhat @ 2012-07-18 21:17 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: Ingo Molnar, linux-kernel, Shaohua Li, Yinghai Lu,
	Thomas Gleixner, H. Peter Anvin, x86, Tejun Heo, Andrew Morton,
	Stephen Rothwell, Christoph Lameter, Olof Johansson

On 06/23/2012 03:36 AM, Mandeep Singh Baines wrote:
> A cpu in the mm_cpumask could go offline before we send the invalidate
> IPI, causing us to wait forever. Avoid this by waiting only for online
> cpus.
> 
> We are seeing a softlockup report during shutdown. The stack
> trace shows that we are inside default_send_IPI_mask_logical:
> 
[...]
> Changes in V2:
>   * bitmap_and is not atomic so use a temporary bitmask
> 

Looks like I posted my reply to v1. So I'll repeat the same suggestions in
this thread as well.

> ---
>  arch/x86/mm/tlb.c |    9 ++++++++-
>  1 files changed, 8 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index d6c0418..231a0b9 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -185,6 +185,8 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>  	f->flush_mm = mm;
>  	f->flush_va = va;
>  	if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
> +		DECLARE_BITMAP(tmp_cpumask, NR_CPUS);
> +
>  		/*
>  		 * We have to send the IPI only to
>  		 * CPUs affected.
> @@ -192,8 +194,13 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>  		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
>  			      INVALIDATE_TLB_VECTOR_START + sender);
> 

This function is always called with preempt_disabled() right?
In that case, _while_ this function is running, a CPU cannot go offline
because of stop_machine(). (I understand that it might go offline in between
calculating that cpumask and calling preempt_disable() - which is the race
you are trying to handle).

So, why not take the offline cpus out of the way even before sending that IPI?
That way, we need not modify the while loop below.

> -		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
> +		/* Only wait for online cpus */
> +		do {
> +			cpumask_and(to_cpumask(tmp_cpumask),
> +				    to_cpumask(f->flush_cpumask),
> +				    cpu_online_mask);
>  			cpu_relax();
> +		} while (!cpumask_empty(to_cpumask(tmp_cpumask)));
>  	}
> 
>  	f->flush_mm = NULL;
> 

That is, how about something like this:

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5e57e11..9d387a9 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -186,7 +186,11 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
 
        f->flush_mm = mm;
        f->flush_va = va;
-       if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
+
+       cpumask_and(to_cpumask(f->flush_cpumask), cpumask, cpu_online_mask);
+       cpumask_clear_cpu(smp_processor_id(), to_cpumask(f->flush_cpumask));
+
+       if (!cpumask_empty(to_cpumask(f->flush_cpumask))) {
                /*
                 * We have to send the IPI only to
                 * CPUs affected.


Regards,
Srivatsa S. Bhat
IBM Linux Technology Center



* Re: [PATCH v2] x86, mm: only wait for flushes from online cpus
  2012-07-18 21:17 ` Srivatsa S. Bhat
@ 2012-07-18 22:13   ` Mandeep Singh Baines
  2012-07-19  6:29     ` Srivatsa S. Bhat
  0 siblings, 1 reply; 5+ messages in thread
From: Mandeep Singh Baines @ 2012-07-18 22:13 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: Mandeep Singh Baines, Ingo Molnar, linux-kernel, Shaohua Li,
	Yinghai Lu, Thomas Gleixner, H. Peter Anvin, x86, Tejun Heo,
	Andrew Morton, Stephen Rothwell, Christoph Lameter,
	Olof Johansson

Srivatsa S. Bhat (srivatsa.bhat@linux.vnet.ibm.com) wrote:
> On 06/23/2012 03:36 AM, Mandeep Singh Baines wrote:
> > A cpu in the mm_cpumask could go offline before we send the invalidate
> > IPI, causing us to wait forever. Avoid this by waiting only for online
> > cpus.
> > 
> > We are seeing a softlockup report during shutdown. The stack
> > trace shows that we are inside default_send_IPI_mask_logical:
> > 
> [...]
> > Changes in V2:
> >   * bitmap_and is not atomic so use a temporary bitmask
> > 
> 
> Looks like I posted my reply to v1. So I'll repeat the same suggestions in
> this thread as well.
> 
> > ---
> >  arch/x86/mm/tlb.c |    9 ++++++++-
> >  1 files changed, 8 insertions(+), 1 deletions(-)
> > 
> > diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> > index d6c0418..231a0b9 100644
> > --- a/arch/x86/mm/tlb.c
> > +++ b/arch/x86/mm/tlb.c
> > @@ -185,6 +185,8 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
> >  	f->flush_mm = mm;
> >  	f->flush_va = va;
> >  	if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
> > +		DECLARE_BITMAP(tmp_cpumask, NR_CPUS);
> > +
> >  		/*
> >  		 * We have to send the IPI only to
> >  		 * CPUs affected.
> > @@ -192,8 +194,13 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
> >  		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
> >  			      INVALIDATE_TLB_VECTOR_START + sender);
> > 
> 
> This function is always called with preempt_disabled() right?
> In that case, _while_ this function is running, a CPU cannot go offline
> because of stop_machine(). (I understand that it might go offline in between
> calculating that cpumask and calling preempt_disable() - which is the race
> you are trying to handle).
> 

Ah. Good point. A cpu cannot be removed from the cpu_online_mask while
preemption is disabled, because stop_machine() can't run until
preemption is enabled.

./kernel/cpu.c: err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));

> So, why not take the offline cpus out of the way even before sending that IPI?
> That way, we need not modify the while loop below.
> 

Acked-by: Mandeep Singh Baines <msb@chromium.org>

Do you mind re-sending your patch with a proper sign-off?

Thanks and regards,
Mandeep

> > -		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
> > +		/* Only wait for online cpus */
> > +		do {
> > +			cpumask_and(to_cpumask(tmp_cpumask),
> > +				    to_cpumask(f->flush_cpumask),
> > +				    cpu_online_mask);
> >  			cpu_relax();
> > +		} while (!cpumask_empty(to_cpumask(tmp_cpumask)));
> >  	}
> > 
> >  	f->flush_mm = NULL;
> > 
> 
> That is, how about something like this:
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 5e57e11..9d387a9 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -186,7 +186,11 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>  
>         f->flush_mm = mm;
>         f->flush_va = va;
> -       if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
> +
> +       cpumask_and(to_cpumask(f->flush_cpumask), cpumask, cpu_online_mask);
> +       cpumask_clear_cpu(smp_processor_id(), to_cpumask(f->flush_cpumask));
> +
> +       if (!cpumask_empty(to_cpumask(f->flush_cpumask))) {
>                 /*
>                  * We have to send the IPI only to
>                  * CPUs affected.
> 
> 
> Regards,
> Srivatsa S. Bhat
> IBM Linux Technology Center
> 


* Re: [PATCH v2] x86, mm: only wait for flushes from online cpus
  2012-07-18 22:13   ` Mandeep Singh Baines
@ 2012-07-19  6:29     ` Srivatsa S. Bhat
  0 siblings, 0 replies; 5+ messages in thread
From: Srivatsa S. Bhat @ 2012-07-19  6:29 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: Ingo Molnar, linux-kernel, Shaohua Li, Yinghai Lu,
	Thomas Gleixner, H. Peter Anvin, x86, Tejun Heo, Andrew Morton,
	Stephen Rothwell, Christoph Lameter, Olof Johansson

On 07/19/2012 03:43 AM, Mandeep Singh Baines wrote:
> Srivatsa S. Bhat (srivatsa.bhat@linux.vnet.ibm.com) wrote:
>> On 06/23/2012 03:36 AM, Mandeep Singh Baines wrote:
>>> A cpu in the mm_cpumask could go offline before we send the invalidate
>>> IPI, causing us to wait forever. Avoid this by waiting only for online
>>> cpus.
>>>
[...]
>> This function is always called with preempt_disabled() right?
>> In that case, _while_ this function is running, a CPU cannot go offline
>> because of stop_machine(). (I understand that it might go offline in between
>> calculating that cpumask and calling preempt_disable() - which is the race
>> you are trying to handle).
>>
> 
> Ah. Good point. A cpu cannot be removed from the cpu_online_mask while
> preemption is disabled, because stop_machine() can't run until
> preemption is enabled.
> 
> ./kernel/cpu.c: err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
> 
>> So, why not take the offline cpus out of the way even before sending that IPI?
>> That way, we need not modify the while loop below.
>>
> 
> Acked-by: Mandeep Singh Baines <msb@chromium.org>
> 
> Do you mind re-sending your patch with a proper sign-off?
>

Sure, will do. I'll post it in a separate thread.

Thanks!

Regards,
Srivatsa S. Bhat

> 
>>> -		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
>>> +		/* Only wait for online cpus */
>>> +		do {
>>> +			cpumask_and(to_cpumask(tmp_cpumask),
>>> +				    to_cpumask(f->flush_cpumask),
>>> +				    cpu_online_mask);
>>>  			cpu_relax();
>>> +		} while (!cpumask_empty(to_cpumask(tmp_cpumask)));
>>>  	}
>>>
>>>  	f->flush_mm = NULL;
>>>
>>
>> That is, how about something like this:
>>
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index 5e57e11..9d387a9 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -186,7 +186,11 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>>  
>>         f->flush_mm = mm;
>>         f->flush_va = va;
>> -       if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
>> +
>> +       cpumask_and(to_cpumask(f->flush_cpumask), cpumask, cpu_online_mask);
>> +       cpumask_clear_cpu(smp_processor_id(), to_cpumask(f->flush_cpumask));
>> +
>> +       if (!cpumask_empty(to_cpumask(f->flush_cpumask))) {
>>                 /*
>>                  * We have to send the IPI only to
>>                  * CPUs affected.
>>
>>
>> Regards,
>> Srivatsa S. Bhat
>> IBM Linux Technology Center
>>
> 


