* needed lru_add_drain_all() change
@ 2012-06-26 21:37 Andrew Morton
  2012-06-27  0:55 ` Minchan Kim
                   ` (3 more replies)
  0 siblings, 4 replies; 20+ messages in thread
From: Andrew Morton @ 2012-06-26 21:37 UTC (permalink / raw)
  To: linux-mm

https://bugzilla.kernel.org/show_bug.cgi?id=43811

lru_add_drain_all() uses schedule_on_each_cpu().  But
schedule_on_each_cpu() hangs if a realtime thread is spinning, pinned
to a CPU.  There's no intention to change the scheduler behaviour, so I
think we should remove schedule_on_each_cpu() from the kernel.

The biggest user of schedule_on_each_cpu() is lru_add_drain_all().

Does anyone have any thoughts on how we can do this?  The obvious
approach is to declare these:

static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);

to be irq-safe and use on_each_cpu().  lru_rotate_pvecs is already
irq-safe and converting lru_add_pvecs and lru_deactivate_pvecs looks
pretty simple.

Thoughts?
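
For illustration, a minimal sketch of what the on_each_cpu() version could
look like (the IPI callback name is made up; lru_add_drain() is the
existing per-cpu drain helper, which would first have to become safe to
call with IRQs disabled):

static void lru_add_drain_ipi(void *info)
{
	/* runs on every CPU from the IPI handler, with IRQs disabled */
	lru_add_drain();
}

int lru_add_drain_all(void)
{
	/* wait == 1: don't return until every CPU has drained */
	on_each_cpu(lru_add_drain_ipi, NULL, 1);
	return 0;
}

A spinning realtime thread can't starve an IPI the way it starves a
workqueue item, which is the point of the conversion.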


* Re: needed lru_add_drain_all() change
  2012-06-26 21:37 needed lru_add_drain_all() change Andrew Morton
@ 2012-06-27  0:55 ` Minchan Kim
  2012-06-27  1:15   ` Andrew Morton
  2012-06-27 12:04 ` Peter Zijlstra
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 20+ messages in thread
From: Minchan Kim @ 2012-06-27  0:55 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, KOSAKI Motohiro

On 06/27/2012 06:37 AM, Andrew Morton wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=43811
> 
> lru_add_drain_all() uses schedule_on_each_cpu().  But
> schedule_on_each_cpu() hangs if a realtime thread is spinning, pinned
> to a CPU.  There's no intention to change the scheduler behaviour, so I
> think we should remove schedule_on_each_cpu() from the kernel.
> 
> The biggest user of schedule_on_each_cpu() is lru_add_drain_all().
> 
> Does anyone have any thoughts on how we can do this?  The obvious
> approach is to declare these:
> 
> static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
> static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
> static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);


One more:
static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);

> 
> to be irq-safe and use on_each_cpu().  lru_rotate_pvecs is already
> irq-safe and converting lru_add_pvecs and lru_deactivate_pvecs looks
> pretty simple.


Yes, the change looks simple.
I'm okay with lru_[activate_page|deactivate]_pvecs because they're not hot,
but lru_rotate_pvecs is hotter than the others. Considering that mlock and
CPU pinning of a realtime thread are very rare, it might be a rather
expensive solution. Unfortunately, I have no better idea than what you
suggested. :(

And looking at commit 8891d6da17, mlock's lru_add_drain_all() isn't a must.
If it really bothers us, couldn't we remove it?



-- 
Kind regards,
Minchan Kim


* Re: needed lru_add_drain_all() change
  2012-06-27  0:55 ` Minchan Kim
@ 2012-06-27  1:15   ` Andrew Morton
  2012-06-27  1:20     ` Minchan Kim
  2012-06-27  2:09     ` Minchan Kim
  0 siblings, 2 replies; 20+ messages in thread
From: Andrew Morton @ 2012-06-27  1:15 UTC (permalink / raw)
  To: Minchan Kim; +Cc: linux-mm, KOSAKI Motohiro

On Wed, 27 Jun 2012 09:55:10 +0900 Minchan Kim <minchan@kernel.org> wrote:

> On 06/27/2012 06:37 AM, Andrew Morton wrote:
> 
> > https://bugzilla.kernel.org/show_bug.cgi?id=43811
> > 
> > lru_add_drain_all() uses schedule_on_each_cpu().  But
> > schedule_on_each_cpu() hangs if a realtime thread is spinning, pinned
> > to a CPU.  There's no intention to change the scheduler behaviour, so I
> > think we should remove schedule_on_each_cpu() from the kernel.
> > 
> > The biggest user of schedule_on_each_cpu() is lru_add_drain_all().
> > 
> > Does anyone have any thoughts on how we can do this?  The obvious
> > approach is to declare these:
> > 
> > static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
> > static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
> > static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
> 
> 
> One more:
> static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);
> 
> > 
> > to be irq-safe and use on_each_cpu().  lru_rotate_pvecs is already
> > irq-safe and converting lru_add_pvecs and lru_deactivate_pvecs looks
> > pretty simple.
> 
> 
> Yes, the change looks simple.
> I'm okay with lru_[activate_page|deactivate]_pvecs because they're not hot,
> but lru_rotate_pvecs is hotter than the others.

I don't think any change is needed for lru_rotate_pvecs?

> Considering that mlock and CPU pinning of a realtime thread are very rare,
> it might be a rather expensive solution. Unfortunately, I have no better
> idea than what you suggested. :(
> 
> And looking at commit 8891d6da17, mlock's lru_add_drain_all() isn't a must.
> If it really bothers us, couldn't we remove it?

"grep lru_add_drain_all mm/*.c".  They're all problematic.


* Re: needed lru_add_drain_all() change
  2012-06-27  1:15   ` Andrew Morton
@ 2012-06-27  1:20     ` Minchan Kim
  2012-06-27  1:29       ` Andrew Morton
  2012-06-27  2:09     ` Minchan Kim
  1 sibling, 1 reply; 20+ messages in thread
From: Minchan Kim @ 2012-06-27  1:20 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, KOSAKI Motohiro

Hi Andrew,

On 06/27/2012 10:15 AM, Andrew Morton wrote:

> On Wed, 27 Jun 2012 09:55:10 +0900 Minchan Kim <minchan@kernel.org> wrote:
> 
>> [...]
>>
>> Yes, the change looks simple.
>> I'm okay with lru_[activate_page|deactivate]_pvecs because they're not hot,
>> but lru_rotate_pvecs is hotter than the others.
> 
> I don't think any change is needed for lru_rotate_pvecs?


Sorry, that was a typo: I meant lru_add_pvecs.

-- 
Kind regards,
Minchan Kim


* Re: needed lru_add_drain_all() change
  2012-06-27  1:20     ` Minchan Kim
@ 2012-06-27  1:29       ` Andrew Morton
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Morton @ 2012-06-27  1:29 UTC (permalink / raw)
  To: Minchan Kim; +Cc: linux-mm, KOSAKI Motohiro

On Wed, 27 Jun 2012 10:20:24 +0900 Minchan Kim <minchan@kernel.org> wrote:

> >> Yes, the change looks simple.
> >> I'm okay with lru_[activate_page|deactivate]_pvecs because they're not hot,
> >> but lru_rotate_pvecs is hotter than the others.
> > 
> > I don't think any change is needed for lru_rotate_pvecs?
> 
> 
> Sorry, that was a typo: I meant lru_add_pvecs.

OK.

A local_irq_save/restore shouldn't be tooooo expensive.  We can remove
the current get_cpu()/put_cpu() to reclaim some of the overhead.
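
Concretely, the fast path might become something like this (a sketch
against the 3.x-era __lru_cache_add(); local_irq_save() also disables
preemption, which is why the get_cpu_var()/put_cpu_var() pair can go):

void __lru_cache_add(struct page *page, enum lru_list lru)
{
	struct pagevec *pvec;
	unsigned long flags;

	page_cache_get(page);
	local_irq_save(flags);	/* also excludes the IPI drain on this CPU */
	pvec = &__get_cpu_var(lru_add_pvecs)[lru];
	if (!pagevec_add(pvec, page))
		__pagevec_lru_add(pvec, lru);
	local_irq_restore(flags);
}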


* Re: needed lru_add_drain_all() change
  2012-06-27  1:15   ` Andrew Morton
  2012-06-27  1:20     ` Minchan Kim
@ 2012-06-27  2:09     ` Minchan Kim
  2012-06-27  5:12       ` Andrew Morton
  1 sibling, 1 reply; 20+ messages in thread
From: Minchan Kim @ 2012-06-27  2:09 UTC (permalink / raw)
  To: linux-mm; +Cc: KOSAKI Motohiro

On 06/27/2012 10:15 AM, Andrew Morton wrote:

>> Considering that mlock and CPU pinning of a realtime thread are very rare,
>> it might be a rather expensive solution. Unfortunately, I have no better
>> idea than what you suggested. :(
>> 
>> And looking at commit 8891d6da17, mlock's lru_add_drain_all() isn't a must.
>> If it really bothers us, couldn't we remove it?
> "grep lru_add_drain_all mm/*.c".  They're all problematic.


Yep, but I'm not sure such a system model is good.
Potentially, it could cause a problem for anything that queues work on another CPU.

-- 
Kind regards,
Minchan Kim


* Re: needed lru_add_drain_all() change
  2012-06-27  2:09     ` Minchan Kim
@ 2012-06-27  5:12       ` Andrew Morton
  2012-06-27  5:41         ` Minchan Kim
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Morton @ 2012-06-27  5:12 UTC (permalink / raw)
  To: Minchan Kim; +Cc: linux-mm, KOSAKI Motohiro

On Wed, 27 Jun 2012 11:09:31 +0900 Minchan Kim <minchan@kernel.org> wrote:

> On 06/27/2012 10:15 AM, Andrew Morton wrote:
> 
> >> [...]
> >>
> >> And looking at commit 8891d6da17, mlock's lru_add_drain_all() isn't a must.
> >> If it really bothers us, couldn't we remove it?
> > "grep lru_add_drain_all mm/*.c".  They're all problematic.
> 
> 
> Yep, but I'm not sure such a system model is good.
> Potentially, it could cause a problem for anything that queues work on another CPU.

whut?

My suggestion is that we switch lru_add_drain_all() to on_each_cpu()
and delete schedule_on_each_cpu().  No workqueues.


* Re: needed lru_add_drain_all() change
  2012-06-27  5:12       ` Andrew Morton
@ 2012-06-27  5:41         ` Minchan Kim
  2012-06-27  5:55           ` Andrew Morton
  0 siblings, 1 reply; 20+ messages in thread
From: Minchan Kim @ 2012-06-27  5:41 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, KOSAKI Motohiro

On 06/27/2012 02:12 PM, Andrew Morton wrote:

> On Wed, 27 Jun 2012 11:09:31 +0900 Minchan Kim <minchan@kernel.org> wrote:
> 
>> [...]
>>
>> Yep, but I'm not sure such a system model is good.
>> Potentially, it could cause a problem for anything that queues work on another CPU.
> 
> whut?
> 
> My suggestion is that we switch lru_add_drain_all() to on_each_cpu()
> and delete schedule_on_each_cpu().  No workqueues.


The current problem is that the RT thread doesn't yield its CPU, so other
tasks can't be scheduled in. schedule_on_each_cpu() uses the system
workqueue, so anyone else who tries to use that CPU's workqueue (e.g., via
schedule_work_on()) can run into the same trouble. So my question is
whether supporting such a greedy RT thread model is really a good idea.

Am I missing something?

-- 
Kind regards,
Minchan Kim


* Re: needed lru_add_drain_all() change
  2012-06-27  5:41         ` Minchan Kim
@ 2012-06-27  5:55           ` Andrew Morton
  2012-06-27  6:33             ` Minchan Kim
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Morton @ 2012-06-27  5:55 UTC (permalink / raw)
  To: Minchan Kim; +Cc: linux-mm, KOSAKI Motohiro

On Wed, 27 Jun 2012 14:41:39 +0900 Minchan Kim <minchan@kernel.org> wrote:

> On 06/27/2012 02:12 PM, Andrew Morton wrote:
> 
> > [...]
> >
> > My suggestion is that we switch lru_add_drain_all() to on_each_cpu()
> > and delete schedule_on_each_cpu().  No workqueues.
> 
> 
> The current problem is that the RT thread doesn't yield its CPU, so other
> tasks can't be scheduled in. schedule_on_each_cpu() uses the system
> workqueue, so anyone else who tries to use that CPU's workqueue (e.g., via
> schedule_work_on()) can run into the same trouble. So my question is
> whether supporting such a greedy RT thread model is really a good idea.
> 

There's no way of fixing this without significantly degrading the
service which rt priority offers.  As we don't wish to degrade that
service, schedule_work_on() and schedule_on_each_cpu() cannot be
implemented reliably.  So we delete them.


* Re: needed lru_add_drain_all() change
  2012-06-27  5:55           ` Andrew Morton
@ 2012-06-27  6:33             ` Minchan Kim
  2012-06-27  6:41               ` Andrew Morton
  2012-06-27  6:46               ` Andrew Morton
  0 siblings, 2 replies; 20+ messages in thread
From: Minchan Kim @ 2012-06-27  6:33 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, KOSAKI Motohiro, Peter Zijlstra

On 06/27/2012 02:55 PM, Andrew Morton wrote:

> On Wed, 27 Jun 2012 14:41:39 +0900 Minchan Kim <minchan@kernel.org> wrote:
> 
>> [...]
> 
> There's no way of fixing this without significantly degrading the
> service which rt priority offers.  As we don't wish to degrade that
> service, schedule_work_on() and schedule_on_each_cpu() cannot be
> implemented reliably.  So we delete them.


Okay. As a first step toward removing them, I'm not strongly against this,
as long as local_irq_save/restore isn't too expensive, because I have no
better idea. I'd like to add a comment to schedule_work_on() and friends:
"You shouldn't use this any more, and we will try to remove it."

Anyway, let's wait for further answers, especially from the RT folks.

-- 
Kind regards,
Minchan Kim


* Re: needed lru_add_drain_all() change
  2012-06-27  6:33             ` Minchan Kim
@ 2012-06-27  6:41               ` Andrew Morton
  2012-06-27 10:27                 ` Peter Zijlstra
  2012-06-27  6:46               ` Andrew Morton
  1 sibling, 1 reply; 20+ messages in thread
From: Andrew Morton @ 2012-06-27  6:41 UTC (permalink / raw)
  To: Minchan Kim; +Cc: linux-mm, KOSAKI Motohiro, Peter Zijlstra

On Wed, 27 Jun 2012 15:33:09 +0900 Minchan Kim <minchan@kernel.org> wrote:

> Anyway, let's wait for further answers, especially from the RT folks.

rt folks said "it isn't changing", and I agree with them.  It isn't
worth breaking the rt-prio quality of service because a few odd parts
of the kernel did something inappropriate.  Especially when those
few sites have alternatives.


* Re: needed lru_add_drain_all() change
  2012-06-27  6:33             ` Minchan Kim
  2012-06-27  6:41               ` Andrew Morton
@ 2012-06-27  6:46               ` Andrew Morton
  2012-06-27 10:31                 ` Peter Zijlstra
  1 sibling, 1 reply; 20+ messages in thread
From: Andrew Morton @ 2012-06-27  6:46 UTC (permalink / raw)
  To: Minchan Kim; +Cc: linux-mm, KOSAKI Motohiro, Peter Zijlstra


btw, the first step should be to audit all lru_add_drain_all() sites
and work out exactly why they are calling lru_add_drain_all() - what
are they trying to achieve?

Because we may be able to use a more lightweight approach there, or
handle the asynchronous behaviour in a more graceful fashion, rather
than forcing this massive synchronization barrier.


* Re: needed lru_add_drain_all() change
  2012-06-27  6:41               ` Andrew Morton
@ 2012-06-27 10:27                 ` Peter Zijlstra
  0 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2012-06-27 10:27 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Minchan Kim, linux-mm, KOSAKI Motohiro

On Tue, 2012-06-26 at 23:41 -0700, Andrew Morton wrote:
> On Wed, 27 Jun 2012 15:33:09 +0900 Minchan Kim <minchan@kernel.org> wrote:
> 
> > Anyway, let's wait for further answers, especially from the RT folks.
> 
> rt folks said "it isn't changing", and I agree with them.  It isn't
> worth breaking the rt-prio quality of service because a few odd parts
> of the kernel did something inappropriate.  Especially when those
> few sites have alternatives.

I'm not exactly sure it's a 'few' sites... but yeah, there are a few
obvious sites we should look at.

Afaict all lru_add_drain_all() callers do this optimistically, especially
since there's no hard synchronization against adding new entries to the
per-cpu pagevecs.

So there's no hard requirement to wait for completion. Not waiting has
obvious problems as well, but we could cheat and time out after a few
jiffies or so.

This would avoid the DoS scenario. It will not improve the overall
quality of the kernel, though, since an unflushed pagevec can still
result in compaction etc. failing.

The problem with stuffing all this into hardirq context (using
on_each_cpu() and friends) is that the people who spin in FIFO threads
generally don't like interrupt latencies forced on them either. And I
presume it's currently scheduled because it's potentially quite
expensive to flush all these pages.

The only alternative I can come up with is to schedule the work like we
do now, wait for it for a few jiffies, track which CPUs completed, cancel
the others, and remotely flush their pagevecs from the calling CPU.
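
A rough sketch, just to make that concrete (all names here are
hypothetical, in particular drain_cpu_pagevecs_remote(), which is the
hard part since the pagevecs aren't currently safe to touch cross-CPU):

static DEFINE_PER_CPU(struct work_struct, lru_drain_work);

static void lru_drain_fn(struct work_struct *dummy)
{
	lru_add_drain();
}

int lru_add_drain_all_timeout(void)
{
	unsigned long deadline = jiffies + 2;	/* "a few jiffies" */
	int cpu;

	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_drain_work, cpu);

		INIT_WORK(work, lru_drain_fn);
		schedule_work_on(cpu, work);
	}

	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_drain_work, cpu);

		while (work_pending(work) && time_before(jiffies, deadline))
			cond_resched();

		/* cancel_work_sync() returns true iff the work never ran */
		if (cancel_work_sync(work))
			drain_cpu_pagevecs_remote(cpu);	/* hypothetical */
	}
	return 0;
}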

But I can't say I like that option either...


As it stands I've always said that doing while(1) from FIFO/RR tasks is
broken and you get to keep the pieces. If we can find good solutions for
this I'm all ears, but I don't think it's something we should bend over
backwards for.


* Re: needed lru_add_drain_all() change
  2012-06-27  6:46               ` Andrew Morton
@ 2012-06-27 10:31                 ` Peter Zijlstra
  0 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2012-06-27 10:31 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Minchan Kim, linux-mm, KOSAKI Motohiro

On Tue, 2012-06-26 at 23:46 -0700, Andrew Morton wrote:
> btw, the first step should be to audit all lru_add_drain_all() sites
> and work out exactly why they are calling lru_add_drain_all() - what
> are they trying to achieve?

# git grep lru_add_drain_all
fs/block_dev.c: lru_add_drain_all();    /* make sure all lru add caches are flushed */
include/linux/swap.h:extern int lru_add_drain_all(void);
mm/compaction.c:        lru_add_drain_all();
mm/compaction.c:                lru_add_drain_all();
mm/ksm.c:               lru_add_drain_all();
mm/memcontrol.c:                lru_add_drain_all();
mm/memcontrol.c:        lru_add_drain_all();
mm/memcontrol.c:        lru_add_drain_all();
mm/memory-failure.c:            lru_add_drain_all();
mm/memory_hotplug.c:            lru_add_drain_all();
mm/memory_hotplug.c:    lru_add_drain_all();
mm/migrate.c:   lru_add_drain_all();
mm/migrate.c:    * here to avoid lru_add_drain_all().
mm/mlock.c:     lru_add_drain_all();    /* flush pagevec */
mm/mlock.c:             lru_add_drain_all();    /* flush pagevec */
mm/page_alloc.c:         * For avoiding noise data, lru_add_drain_all() should be called
mm/page_alloc.c:        lru_add_drain_all();
mm/swap.c:int lru_add_drain_all(void)


I haven't audited all the sites, but most of them try to flush the per-cpu
lru pagevecs to make sure the pages are on the LRU so they can take them
off again ;-)

Take compaction, for instance: if a page in the middle of a range is on a
per-cpu pagevec, compaction can't move it and might fail.


Hmm, another alternative is teaching isolate_lru_page() and friends to
take pages from the pagevecs directly; I'm not sure what that would take.


* Re: needed lru_add_drain_all() change
  2012-06-26 21:37 needed lru_add_drain_all() change Andrew Morton
  2012-06-27  0:55 ` Minchan Kim
@ 2012-06-27 12:04 ` Peter Zijlstra
  2012-06-28  6:23 ` KOSAKI Motohiro
  2012-06-28  7:43 ` Kamezawa Hiroyuki
  3 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2012-06-27 12:04 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm

On Tue, 2012-06-26 at 14:37 -0700, Andrew Morton wrote:
> lru_add_drain_all() uses schedule_on_each_cpu().  But
> schedule_on_each_cpu() hangs if a realtime thread is spinning, pinned
> to a CPU.  There's no intention to change the scheduler behaviour, so I
> think we should remove schedule_on_each_cpu() from the kernel.
> 

Anything that uses a per-cpu workqueue and waits on work from another
CPU is vulnerable too. This would include things like padata, crypto, and
possibly others.

ksoftirqd is vulnerable too: if it were preempted while handling a
softirq, all softirq handling would be out the window for that CPU.

infiniband/hw/ehca would likely malfunction as well, since it has
per-cpu threads.


FIFO is dangerous, don't do stupid things :-)


* Re: needed lru_add_drain_all() change
  2012-06-26 21:37 needed lru_add_drain_all() change Andrew Morton
  2012-06-27  0:55 ` Minchan Kim
  2012-06-27 12:04 ` Peter Zijlstra
@ 2012-06-28  6:23 ` KOSAKI Motohiro
  2012-06-29  3:47   ` Kamezawa Hiroyuki
  2012-06-28  7:43 ` Kamezawa Hiroyuki
  3 siblings, 1 reply; 20+ messages in thread
From: KOSAKI Motohiro @ 2012-06-28  6:23 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm

On Tue, Jun 26, 2012 at 5:37 PM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=43811
>
> [...]
>
> Thoughts?

I agree.

But I hope for more. These days we have plenty of lru_add_drain_all()
call sites. So I think we should remove struct pagevec and aim for a new,
migration-aware batching mechanism, maybe. That would also improve the
compaction success rate.


* Re: needed lru_add_drain_all() change
  2012-06-26 21:37 needed lru_add_drain_all() change Andrew Morton
                   ` (2 preceding siblings ...)
  2012-06-28  6:23 ` KOSAKI Motohiro
@ 2012-06-28  7:43 ` Kamezawa Hiroyuki
  2012-06-28 23:42   ` Minchan Kim
  3 siblings, 1 reply; 20+ messages in thread
From: Kamezawa Hiroyuki @ 2012-06-28  7:43 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm

(2012/06/27 6:37), Andrew Morton wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=43811
>
> lru_add_drain_all() uses schedule_on_each_cpu().  But
> schedule_on_each_cpu() hangs if a realtime thread is spinning, pinned
> to a CPU.  There's no intention to change the scheduler behaviour, so I
> think we should remove schedule_on_each_cpu() from the kernel.
>
> The biggest user of schedule_on_each_cpu() is lru_add_drain_all().
>
> Does anyone have any thoughts on how we can do this?  The obvious
> approach is to declare these:
>
> static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
> static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
> static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
>
> to be irq-safe and use on_each_cpu().  lru_rotate_pvecs is already
> irq-safe and converting lru_add_pvecs and lru_deactivate_pvecs looks
> pretty simple.
>
> Thoughts?
>

How about this kind of RCU synchronization?
==
/*
 * Double-buffered pagevecs for quick drain.
 * The usual per-cpu pvec users need to take rcu_read_lock() before accessing.
 * An external drainer of the pvecs replaces the active pvec set, calls
 * synchronize_rcu(), and then drains all pages on the retired pvecs in turn.
 */
static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS * 2], lru_pvecs);

atomic_t pvec_idx; /* must be placed onto some aligned address... */

struct pagevec *my_pagevec(enum lru_list lru)
{
	return &__get_cpu_var(lru_pvecs)[lru + NR_LRU_LISTS * atomic_read(&pvec_idx)];
}

/*
 * percpu pagevec access should be surrounded by these calls.
 */
static inline void pagevec_start_access(void)
{
	rcu_read_lock();
}

static inline void pagevec_end_access(void)
{
	rcu_read_unlock();
}

/*
 * flip the active pagevec array: 0 <-> 1
 */
static void lru_pvec_update(void)
{
	if (atomic_read(&pvec_idx))
		atomic_set(&pvec_idx, 0);
	else
		atomic_set(&pvec_idx, 1);
}

/*
 * drain all LRUs on the per-cpu pagevecs.
 */
static DEFINE_MUTEX(lru_add_drain_all_mutex);
static void lru_add_drain_all(void)
{
	int cpu;

	mutex_lock(&lru_add_drain_all_mutex);
	lru_pvec_update();
	synchronize_rcu();  /* waits until all accessors of the old pvecs quit */
	for_each_online_cpu(cpu)
		drain_pvec_of_the_cpu(cpu);
	mutex_unlock(&lru_add_drain_all_mutex);
}
==
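
For what it's worth, a pvec user under this scheme might look like the
following sketch (note rcu_read_lock() does not disable preemption on
CONFIG_PREEMPT kernels, so the access would additionally need the CPU
pinned, e.g. with get_cpu()/put_cpu()):

void __lru_cache_add(struct page *page, enum lru_list lru)
{
	struct pagevec *pvec;

	page_cache_get(page);
	pagevec_start_access();		/* rcu_read_lock() */
	pvec = my_pagevec(lru);		/* pvec of the currently active set */
	if (!pagevec_add(pvec, page))
		__pagevec_lru_add(pvec, lru);
	pagevec_end_access();		/* rcu_read_unlock() */
}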

* Re: needed lru_add_drain_all() change
  2012-06-28  7:43 ` Kamezawa Hiroyuki
@ 2012-06-28 23:42   ` Minchan Kim
  2012-06-29  3:24     ` Kamezawa Hiroyuki
  0 siblings, 1 reply; 20+ messages in thread
From: Minchan Kim @ 2012-06-28 23:42 UTC (permalink / raw)
  To: Kamezawa Hiroyuki; +Cc: Andrew Morton, linux-mm

On 06/28/2012 04:43 PM, Kamezawa Hiroyuki wrote:

> (2012/06/27 6:37), Andrew Morton wrote:
>> https://bugzilla.kernel.org/show_bug.cgi?id=43811
>>
>> [...]
>>
>> Thoughts?
>>
> 
> How about this kind of RCU synchronization?
> ==
> [... double-buffered pagevec sketch snipped ...]
> 
> static DEFINE_MUTEX(lru_add_drain_all_mutex);
> static void lru_add_drain_all(void)
> {
> 	int cpu;
> 
> 	mutex_lock(&lru_add_drain_all_mutex);
> 	lru_pvec_update();
> 	synchronize_rcu();  /* waits until all accessors of the old pvecs quit */


I don't know the RCU internals, but conceptually I understood that
synchronize_rcu() needs a context switch on every CPU. If that's even
partly true, it could be a problem, too.

> 	for_each_online_cpu(cpu)
> 		drain_pvec_of_the_cpu(cpu);
> 	mutex_unlock(&lru_add_drain_all_mutex);
> }
> ==

-- 
Kind regards,
Minchan Kim


* Re: needed lru_add_drain_all() change
  2012-06-28 23:42   ` Minchan Kim
@ 2012-06-29  3:24     ` Kamezawa Hiroyuki
  0 siblings, 0 replies; 20+ messages in thread
From: Kamezawa Hiroyuki @ 2012-06-29  3:24 UTC (permalink / raw)
  To: Minchan Kim; +Cc: Andrew Morton, linux-mm

(2012/06/29 8:42), Minchan Kim wrote:
> On 06/28/2012 04:43 PM, Kamezawa Hiroyuki wrote:
>
>> (2012/06/27 6:37), Andrew Morton wrote:
>>> [...]
>>
>> How about this kind of RCU synchronization?
>> ==
>> [... double-buffered pagevec sketch snipped ...]
>>
>> static DEFINE_MUTEX(lru_add_drain_all_mutex);
>> static void lru_add_drain_all(void)
>> {
>> 	int cpu;
>>
>> 	mutex_lock(&lru_add_drain_all_mutex);
>> 	lru_pvec_update();
>> 	synchronize_rcu();  /* waits until all accessors of the old pvecs quit */
>
>
> I don't know the RCU internals, but conceptually I understood that
> synchronize_rcu() needs a context switch on every CPU. If that's even
> partly true, it could be a problem, too.
>

Hmm, from Documentation/RCU/stallwarn.txt:
==

o       For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
         without invoking schedule().

o       A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
         happen to preempt a low-priority task in the middle of an RCU
         read-side critical section.   This is especially damaging if
         that low-priority task is not permitted to run on any other CPU,
         in which case the next RCU grace period can never complete, which
         will eventually cause the system to run out of memory and hang.
         While the system is in the process of running itself out of
         memory, you might see stall-warning messages.

o       A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
         is running at a higher priority than the RCU softirq threads.
         This will prevent RCU callbacks from ever being invoked,
         and in a CONFIG_TREE_PREEMPT_RCU kernel will further prevent
         RCU grace periods from ever completing.  Either way, the
         system will eventually run out of memory and hang.  In the
         CONFIG_TREE_PREEMPT_RCU case, you might see stall-warning
         messages.
==
you're right. (The RCU stall warning seems to be shown every 60 seconds by default.)

I'm wondering about doing the synchronization without RCU...
==
static inline void pvec_start_access(struct pagevec *pvec)
{
	atomic_inc(&pvec->using);
}

static inline void pvec_end_access(struct pagevec *pvec)
{
	atomic_dec(&pvec->using);
}

static void synchronize_pvec(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		/* wait for the cpu's pvec->using counts to drop to 0 */ ;
}

static void lru_add_drain_all(void)
{
	int cpu;

	mutex_lock(&lru_add_drain_all_mutex);
	lru_pvec_update();	/* switch the pvec set */
	synchronize_pvec();	/* wait until all current users exit */
	for_each_online_cpu(cpu)
		/* drain pages on the cpu's retired pvecs */ ;
	mutex_unlock(&lru_add_drain_all_mutex);
}
==
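
One possible way to flesh out synchronize_pvec(), as a sketch (it assumes
struct pagevec grows an atomic_t 'using' counter as above, and that
pvec_idx has already been flipped so the retired set is the other index):

static void synchronize_pvec(void)
{
	int old_idx = !atomic_read(&pvec_idx);	/* the just-retired set */
	int cpu;
	enum lru_list lru;

	for_each_online_cpu(cpu) {
		for (lru = 0; lru < NR_LRU_LISTS; lru++) {
			struct pagevec *pvec =
				&per_cpu(lru_pvecs, cpu)[lru + NR_LRU_LISTS * old_idx];

			/* spin until the last accessor of the retired pvec exits */
			while (atomic_read(&pvec->using))
				cpu_relax();
		}
	}
}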

"disable_irq() + interrupt()" would be easier.

What is the cost of disabling IRQs vs. atomic_inc() on a local variable...

Regards,
-Kame

* Re: needed lru_add_drain_all() change
  2012-06-28  6:23 ` KOSAKI Motohiro
@ 2012-06-29  3:47   ` Kamezawa Hiroyuki
  0 siblings, 0 replies; 20+ messages in thread
From: Kamezawa Hiroyuki @ 2012-06-29  3:47 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: Andrew Morton, linux-mm

(2012/06/28 15:23), KOSAKI Motohiro wrote:
> On Tue, Jun 26, 2012 at 5:37 PM, Andrew Morton
> <akpm@linux-foundation.org> wrote:
>> [...]
>
> I agree.
>
> But I hope for more. These days we have plenty of lru_add_drain_all()
> call sites. So I think we should remove struct pagevec and aim for a new,
> migration-aware batching mechanism, maybe. That would also improve the
> compaction success rate.
>

Does migration-aware mean a framework that isolate_xxxx_page() can work
with? To do that, we need to know which object points to the page. Hmm.
Do you have any idea?

-Kame

Thread overview: 20+ messages
2012-06-26 21:37 needed lru_add_drain_all() change Andrew Morton
2012-06-27  0:55 ` Minchan Kim
2012-06-27  1:15   ` Andrew Morton
2012-06-27  1:20     ` Minchan Kim
2012-06-27  1:29       ` Andrew Morton
2012-06-27  2:09     ` Minchan Kim
2012-06-27  5:12       ` Andrew Morton
2012-06-27  5:41         ` Minchan Kim
2012-06-27  5:55           ` Andrew Morton
2012-06-27  6:33             ` Minchan Kim
2012-06-27  6:41               ` Andrew Morton
2012-06-27 10:27                 ` Peter Zijlstra
2012-06-27  6:46               ` Andrew Morton
2012-06-27 10:31                 ` Peter Zijlstra
2012-06-27 12:04 ` Peter Zijlstra
2012-06-28  6:23 ` KOSAKI Motohiro
2012-06-29  3:47   ` Kamezawa Hiroyuki
2012-06-28  7:43 ` Kamezawa Hiroyuki
2012-06-28 23:42   ` Minchan Kim
2012-06-29  3:24     ` Kamezawa Hiroyuki
