linux-kernel.vger.kernel.org archive mirror
* Re: [patch 113/131] mm: balance LRU lists based on relative thrashing
       [not found] <20200603230303.kSkT62Lb5%akpm@linux-foundation.org>
@ 2020-06-09  9:15 ` Alex Shi
  2020-06-09 14:45   ` Johannes Weiner
  0 siblings, 1 reply; 4+ messages in thread
From: Alex Shi @ 2020-06-09  9:15 UTC (permalink / raw)
  To: linux-kernel, akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko,
	minchan, mm-commits, riel, torvalds



On 2020/6/4 7:03 AM, Andrew Morton wrote:
>  
> +	/* XXX: Move to lru_cache_add() when it supports new vs putback */

Hi Hannes,

Sorry, I'm a bit lost; could you explain your idea here a bit more?

> +	spin_lock_irq(&page_pgdat(page)->lru_lock);
> +	lru_note_cost(page);
> +	spin_unlock_irq(&page_pgdat(page)->lru_lock);
> +


What could we see here w/o the lru_lock?

Thanks
Alex

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [patch 113/131] mm: balance LRU lists based on relative thrashing
  2020-06-09  9:15 ` [patch 113/131] mm: balance LRU lists based on relative thrashing Alex Shi
@ 2020-06-09 14:45   ` Johannes Weiner
  2020-06-10  5:23     ` Joonsoo Kim
  0 siblings, 1 reply; 4+ messages in thread
From: Johannes Weiner @ 2020-06-09 14:45 UTC (permalink / raw)
  To: Alex Shi
  Cc: linux-kernel, akpm, iamjoonsoo.kim, linux-mm, mhocko, minchan,
	mm-commits, riel, torvalds

On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote:
> 
> 
> On 2020/6/4 7:03 AM, Andrew Morton wrote:
> >  
> > +	/* XXX: Move to lru_cache_add() when it supports new vs putback */
> 
> Hi Hannes,
> 
> Sorry, I'm a bit lost; could you explain your idea here a bit more?
> 
> > +	spin_lock_irq(&page_pgdat(page)->lru_lock);
> > +	lru_note_cost(page);
> > +	spin_unlock_irq(&page_pgdat(page)->lru_lock);
> > +
> 
> 
> What could we see here w/o the lru_lock?

It'll just be part of the existing LRU locking in
pagevec_lru_move_fn(), when the new pages are added to the LRU in
batch. See this older patch for example:

https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/
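
Roughly, something like this (a sketch only, not the actual patch; the
new-vs-putback test is made up for illustration):

/*
 * Once lru_cache_add() can tell new pages from putback pages, the
 * cost accounting can ride on the batched locking that
 * pagevec_lru_move_fn() already does, instead of taking
 * pgdat->lru_lock once per page as in the hunk quoted above.
 */
static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
				 void *arg)
{
	/* ... existing add-to-LRU work, already under lru_lock ... */

	if (page_is_new_fault(page))	/* hypothetical predicate */
		lru_note_cost(page);	/* lru_lock already held */
}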

I didn't include it in this series to reduce conflict with Joonsoo's
WIP series that also operates in this area and does something similar:

https://lkml.org/lkml/2020/4/3/63

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [patch 113/131] mm: balance LRU lists based on relative thrashing
  2020-06-09 14:45   ` Johannes Weiner
@ 2020-06-10  5:23     ` Joonsoo Kim
  2020-06-11  3:28       ` Alex Shi
  0 siblings, 1 reply; 4+ messages in thread
From: Joonsoo Kim @ 2020-06-10  5:23 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Alex Shi, LKML, Andrew Morton, Joonsoo Kim,
	Linux Memory Management List, Michal Hocko,
	김민찬,
	mm-commits, Rik van Riel, Linus Torvalds

On Tue, Jun 9, 2020 at 11:46 PM, Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote:
> >
> >
> > On 2020/6/4 7:03 AM, Andrew Morton wrote:
> > >
> > > +   /* XXX: Move to lru_cache_add() when it supports new vs putback */
> >
> > Hi Hannes,
> >
> > Sorry, I'm a bit lost; could you explain your idea here a bit more?
> >
> > > +   spin_lock_irq(&page_pgdat(page)->lru_lock);
> > > +   lru_note_cost(page);
> > > +   spin_unlock_irq(&page_pgdat(page)->lru_lock);
> > > +
> >
> >
> > What could we see here w/o the lru_lock?
>
> It'll just be part of the existing LRU locking in
> pagevec_lru_move_fn(), when the new pages are added to the LRU in
> batch. See this older patch for example:
>
> https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/
>
> I didn't include it in this series to reduce conflict with Joonsoo's
> WIP series that also operates in this area and does something similar:

Thanks!

> https://lkml.org/lkml/2020/4/3/63

I haven't completed the rebase of my series yet, but I guess the referenced
patch "https://lkml.org/lkml/2020/4/3/63" will be removed in the next version.

Before the I/O cost model, a new anonymous page contributed to the LRU reclaim
balance. But now a new anonymous page doesn't contribute to the I/O cost,
so this adjusting patch would not be needed anymore.
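
To illustrate the difference (sketch only, not kernel code; the helper
name is made up):

/* a newly faulted anonymous page enters the LRU */
static void add_new_anon_page(struct page *page)
{
	lru_cache_add(page);
	/*
	 * Old model: the new page itself fed the anon/file reclaim
	 * balance, so this path needed an adjustment.
	 * I/O cost model: lru_note_cost() runs only on rotation or
	 * refault, when reclaim pays a real I/O cost, so a brand-new
	 * page contributes nothing here and the adjustment goes away.
	 */
}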

If anyone wants to change this part,
"/* XXX: Move to lru_cache_add() when it supports new vs putback */", feel free
to do it.

Thanks.

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [patch 113/131] mm: balance LRU lists based on relative thrashing
  2020-06-10  5:23     ` Joonsoo Kim
@ 2020-06-11  3:28       ` Alex Shi
  0 siblings, 0 replies; 4+ messages in thread
From: Alex Shi @ 2020-06-11  3:28 UTC (permalink / raw)
  To: Joonsoo Kim, Johannes Weiner
  Cc: LKML, Andrew Morton, Joonsoo Kim, Linux Memory Management List,
	Michal Hocko, 김민찬,
	mm-commits, Rik van Riel, Linus Torvalds



On 2020/6/10 1:23 PM, Joonsoo Kim wrote:
> On Tue, Jun 9, 2020 at 11:46 PM, Johannes Weiner <hannes@cmpxchg.org> wrote:
>>
>> On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote:
>>>
>>>
>>> On 2020/6/4 7:03 AM, Andrew Morton wrote:
>>>>
>>>> +   /* XXX: Move to lru_cache_add() when it supports new vs putback */
>>>
>>> Hi Hannes,
>>>
>>> Sorry, I'm a bit lost; could you explain your idea here a bit more?
>>>
>>>> +   spin_lock_irq(&page_pgdat(page)->lru_lock);
>>>> +   lru_note_cost(page);
>>>> +   spin_unlock_irq(&page_pgdat(page)->lru_lock);
>>>> +
>>>
>>>
>>> What could we see here w/o the lru_lock?

Why I want to know what the lru_lock protects here is that we currently have
5 lru lists but they are guarded by only one lock, which causes a lot of
contention when different apps are active on a server.

I guess we originally had a single lru_lock because 5 locks would cause
cacheline bouncing if we packed them together, and would waste a bit of
cacheline space if we separated them. But now that we have qspinlock, each
CPU just spins on its own cacheline and doesn't interfere with the others,
which would relieve much of the performance drop from cacheline bouncing.

And we could use page.mapping bits to store the index of the lru list the
page is on. As a quick thought, I guess that besides the 5 locks for the
5 lists, we would still need 1 more lock for the common lruvec data, or for
other things that rely on lru_lock now, like mlock and hpage_nr_pages().
That's the reason I want to know everything that runs under lru_lock. :)
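
As a very rough sketch (not a real patch; the struct and field names are
made up):

/*
 * One lock per LRU list, plus one more for the common lruvec data
 * and for users that rely on lru_lock today (mlock, ...).
 */
struct lruvec_locks {
	spinlock_t	lru_list_lock[NR_LRU_LISTS];
	spinlock_t	lruvec_lock;
};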

Any comments for this idea? :)

Thanks
Alex


>>
>> It'll just be part of the existing LRU locking in
>> pagevec_lru_move_fn(), when the new pages are added to the LRU in
>> batch. See this older patch for example:
>>
>> https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/
>>
>> I didn't include it in this series to reduce conflict with Joonsoo's
>> WIP series that also operates in this area and does something similar:
> 
> Thanks!
> 
>> https://lkml.org/lkml/2020/4/3/63
> 
> I haven't completed the rebase of my series yet, but I guess the referenced
> patch "https://lkml.org/lkml/2020/4/3/63" will be removed in the next version.

Thanks a lot for the info, Johannes & Joonsoo! A long history for an interesting idea. :)

> 
> Before the I/O cost model, a new anonymous page contributed to the LRU reclaim
> balance. But now a new anonymous page doesn't contribute to the I/O cost,
> so this adjusting patch would not be needed anymore.
> 
> If anyone wants to change this part,
> "/* XXX: Move to lru_cache_add() when it supports new vs putback */", feel free
> to do it.


^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2020-06-11  3:29 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20200603230303.kSkT62Lb5%akpm@linux-foundation.org>
2020-06-09  9:15 ` [patch 113/131] mm: balance LRU lists based on relative thrashing Alex Shi
2020-06-09 14:45   ` Johannes Weiner
2020-06-10  5:23     ` Joonsoo Kim
2020-06-11  3:28       ` Alex Shi
