* [patch] mm: release swap slots for actively used pages
@ 2009-05-27  1:47 Johannes Weiner
  2009-05-27 23:15 ` Andrew Morton
  2009-06-04 21:22 ` Hugh Dickins
From: Johannes Weiner @ 2009-05-27  1:47 UTC
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, KAMEZAWA Hiroyuki, Rik van Riel, Hugh Dickins

For anonymous pages activated by the reclaim scan or faulted from an
evicted page table entry we should always try to free up swap space.

Both events indicate that the page is in active use and a possible
change in the working set.  Thus removing the slot association from
the page increases the chance of the page being placed near its new
LRU buddies on the next eviction and helps keep the number of stale
swap cache entries low.

try_to_free_swap() inherently only succeeds when the last user of the
swap slot vanishes so it is safe to use from places where that single
mapping just brought the page back to life.
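(For readers outside mm/: the last-user rule above can be restated as a
small userspace model.  This is only a sketch of the logic of
try_to_free_swap() -- struct page_model and its fields are simplified
stand-ins for illustration, not kernel code.)

```c
#include <stdbool.h>

/* Illustrative stand-in for the few page bits that matter here. */
struct page_model {
	bool in_swap_cache;   /* page still owns a swap slot */
	bool under_writeback; /* slot contents being written out */
	int  swap_count;      /* remaining references (PTEs) to the slot */
};

/*
 * Model of try_to_free_swap(): release the swap slot only when no
 * other user of the slot remains.  Returns true if the slot was freed.
 */
static bool model_try_to_free_swap(struct page_model *page)
{
	if (!page->in_swap_cache)
		return false;
	if (page->under_writeback)
		return false;
	if (page->swap_count > 0)     /* slot still mapped elsewhere */
		return false;
	page->in_swap_cache = false;  /* delete_from_swap_cache() */
	return true;
}
```

In do_swap_page() the preceding swap_free(entry) drops the faulting
mapping's reference, so for a singly-mapped page the count reaches zero
and the call succeeds; with any other user left it is a no-op.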

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
---
 mm/memory.c |    3 +--
 mm/vmscan.c |    2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 8b4e40e..407ebf7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2671,8 +2671,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	mem_cgroup_commit_charge_swapin(page, ptr);
 
 	swap_free(entry);
-	if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
-		try_to_free_swap(page);
+	try_to_free_swap(page);
 	unlock_page(page);
 
 	if (write_access) {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 621708f..2f0549d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -788,7 +788,7 @@ cull_mlocked:
 
 activate_locked:
 		/* Not a candidate for swapping, so reclaim swap space. */
-		if (PageSwapCache(page) && vm_swap_full())
+		if (PageSwapCache(page))
 			try_to_free_swap(page);
 		VM_BUG_ON(PageActive(page));
 		SetPageActive(page);
-- 
1.6.3
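For context, the vm_swap_full() gate removed above is just an occupancy
heuristic: in this era's include/linux/swap.h it is defined as
nr_swap_pages * 2 < total_swap_pages, i.e. more than half of swap is in
use.  A userspace restatement (the model_ function name is ours, the
variable names are the kernel's):

```c
/*
 * Sketch of the vm_swap_full() heuristic: nr_swap_pages counts free
 * swap slots, total_swap_pages counts all slots.  "Full" here means
 * more than half used.
 */
static int model_vm_swap_full(long nr_swap_pages, long total_swap_pages)
{
	return nr_swap_pages * 2 < total_swap_pages;
}
```

The patch drops this gate so slots are released on activation regardless
of occupancy, which is exactly what the reviewers go on to question.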



* Re: [patch] mm: release swap slots for actively used pages
  2009-05-27  1:47 [patch] mm: release swap slots for actively used pages Johannes Weiner
@ 2009-05-27 23:15 ` Andrew Morton
  2009-05-28  0:23   ` KAMEZAWA Hiroyuki
  2009-06-04 21:22 ` Hugh Dickins
From: Andrew Morton @ 2009-05-27 23:15 UTC
  To: Johannes Weiner
  Cc: linux-mm, linux-kernel, kamezawa.hiroyu, riel, hugh.dickins

On Wed, 27 May 2009 03:47:39 +0200
Johannes Weiner <hannes@cmpxchg.org> wrote:

> For anonymous pages activated by the reclaim scan or faulted from an
> evicted page table entry we should always try to free up swap space.
> 
> Both events indicate that the page is in active use and a possible
> change in the working set.  Thus removing the slot association from
> the page increases the chance of the page being placed near its new
> LRU buddies on the next eviction and helps keep the number of stale
> swap cache entries low.
> 
> try_to_free_swap() inherently only succeeds when the last user of the
> swap slot vanishes so it is safe to use from places where that single
> mapping just brought the page back to life.
> 

Seems that this has a risk of worsening swap fragmentation for some
situations.  Or not, I have no way of knowing, really.

> diff --git a/mm/memory.c b/mm/memory.c
> index 8b4e40e..407ebf7 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2671,8 +2671,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	mem_cgroup_commit_charge_swapin(page, ptr);
>  
>  	swap_free(entry);
> -	if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
> -		try_to_free_swap(page);
> +	try_to_free_swap(page);
>  	unlock_page(page);
>  
>  	if (write_access) {
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 621708f..2f0549d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -788,7 +788,7 @@ cull_mlocked:
>  
>  activate_locked:
>  		/* Not a candidate for swapping, so reclaim swap space. */
> -		if (PageSwapCache(page) && vm_swap_full())
> +		if (PageSwapCache(page))
>  			try_to_free_swap(page);
>  		VM_BUG_ON(PageActive(page));
>  		SetPageActive(page);

How are we to know that this is a desirable patch for Linux??


* Re: [patch] mm: release swap slots for actively used pages
  2009-05-27 23:15 ` Andrew Morton
@ 2009-05-28  0:23   ` KAMEZAWA Hiroyuki
  2009-05-28  0:34     ` Johannes Weiner
From: KAMEZAWA Hiroyuki @ 2009-05-28  0:23 UTC
  To: Andrew Morton; +Cc: Johannes Weiner, linux-mm, linux-kernel, riel, hugh.dickins

On Wed, 27 May 2009 16:15:35 -0700
Andrew Morton <akpm@linux-foundation.org> wrote:

> On Wed, 27 May 2009 03:47:39 +0200
> Johannes Weiner <hannes@cmpxchg.org> wrote:
> 
> > For anonymous pages activated by the reclaim scan or faulted from an
> > evicted page table entry we should always try to free up swap space.
> > 
> > Both events indicate that the page is in active use and a possible
> > change in the working set.  Thus removing the slot association from
> > the page increases the chance of the page being placed near its new
> > LRU buddies on the next eviction and helps keep the number of stale
> > swap cache entries low.
> > 
> > try_to_free_swap() inherently only succeeds when the last user of the
> > swap slot vanishes so it is safe to use from places where that single
> > mapping just brought the page back to life.
> > 
> 
> Seems that this has a risk of worsening swap fragmentation for some
> situations.  Or not, I have no way of knowing, really.
> 
I'm afraid of that, too.

> > diff --git a/mm/memory.c b/mm/memory.c
> > index 8b4e40e..407ebf7 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2671,8 +2671,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  	mem_cgroup_commit_charge_swapin(page, ptr);
> >  
> >  	swap_free(entry);
> > -	if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
> > -		try_to_free_swap(page);
> > +	try_to_free_swap(page);
> >  	unlock_page(page);
> >  
> >  	if (write_access) {
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 621708f..2f0549d 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -788,7 +788,7 @@ cull_mlocked:
> >  
> >  activate_locked:
> >  		/* Not a candidate for swapping, so reclaim swap space. */
> > -		if (PageSwapCache(page) && vm_swap_full())
> > +		if (PageSwapCache(page))
> >  			try_to_free_swap(page);
> >  		VM_BUG_ON(PageActive(page));
> >  		SetPageActive(page);
> 
> How are we to know that this is a desirable patch for Linux??

I'm not sure what the "purpose/benefit" of this patch is...
The patch description says
"we should always try to free up swap space" ...but why "should"?

Thanks,
-Kame



* Re: [patch] mm: release swap slots for actively used pages
  2009-05-28  0:23   ` KAMEZAWA Hiroyuki
@ 2009-05-28  0:34     ` Johannes Weiner
From: Johannes Weiner @ 2009-05-28  0:34 UTC
  To: KAMEZAWA Hiroyuki
  Cc: Andrew Morton, linux-mm, linux-kernel, riel, hugh.dickins

On Thu, May 28, 2009 at 09:23:45AM +0900, KAMEZAWA Hiroyuki wrote:
> On Wed, 27 May 2009 16:15:35 -0700
> Andrew Morton <akpm@linux-foundation.org> wrote:
> 
> > On Wed, 27 May 2009 03:47:39 +0200
> > Johannes Weiner <hannes@cmpxchg.org> wrote:
> > 
> > > For anonymous pages activated by the reclaim scan or faulted from an
> > > evicted page table entry we should always try to free up swap space.
> > > 
> > > Both events indicate that the page is in active use and a possible
> > > change in the working set.  Thus removing the slot association from
> > > the page increases the chance of the page being placed near its new
> > > LRU buddies on the next eviction and helps keep the number of stale
> > > swap cache entries low.
> > > 
> > > try_to_free_swap() inherently only succeeds when the last user of the
> > > swap slot vanishes so it is safe to use from places where that single
> > > mapping just brought the page back to life.
> > > 
> > 
> > Seems that this has a risk of worsening swap fragmentation for some
> > situations.  Or not, I have no way of knowing, really.
> > 
> I'm afraid, too.
> 
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 8b4e40e..407ebf7 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -2671,8 +2671,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
> > >  	mem_cgroup_commit_charge_swapin(page, ptr);
> > >  
> > >  	swap_free(entry);
> > > -	if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
> > > -		try_to_free_swap(page);
> > > +	try_to_free_swap(page);
> > >  	unlock_page(page);
> > >  
> > >  	if (write_access) {
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 621708f..2f0549d 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -788,7 +788,7 @@ cull_mlocked:
> > >  
> > >  activate_locked:
> > >  		/* Not a candidate for swapping, so reclaim swap space. */
> > > -		if (PageSwapCache(page) && vm_swap_full())
> > > +		if (PageSwapCache(page))
> > >  			try_to_free_swap(page);
> > >  		VM_BUG_ON(PageActive(page));
> > >  		SetPageActive(page);
> > 
> > How are we to know that this is a desirable patch for Linux??
> 
> I'm not sure what is the "purpose/benefit" of this patch...
> In patch description,
> "we should always try to free up swap space" ...then, why "should" ?

I wrote the reason in the next paragraph:

	Both events indicate that the page is in active use and a
	possible change in the working set.  Thus removing the slot
	association from the page increases the chance of the page
	being placed near its new LRU buddies on the next eviction and
	helps keep the number of stale swap cache entries low.

I did some investigation into the average swap distance of slots used
by one reclaimer and noticed that the highest distances occurred when
most of the pages were already in swap cache.

The conclusion for me was that the pages had been rotated on the lists
but still clung to their old swap cache entries, which led to LRU
buddies being scattered all over the swap device.

Sorry that I didn't mention that beforehand.  And I will try and see
if I can get some hard numbers on this.
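(The metric can be sketched in plain C; avg_swap_distance is a
hypothetical helper for illustration, not the instrumentation actually
used:)

```c
/*
 * Average gap, in slots, between consecutively used swap offsets of
 * one reclaim batch.  Stale swap-cache slots kept from an earlier
 * working set inflate this: pages that are neighbours on the LRU end
 * up far apart on the swap device.
 */
static double avg_swap_distance(const unsigned long *offsets, int n)
{
	long total = 0;
	int i;

	if (n < 2)
		return 0.0;
	for (i = 1; i < n; i++) {
		unsigned long a = offsets[i - 1], b = offsets[i];
		total += (long)(a > b ? a - b : b - a);
	}
	return (double)total / (n - 1);
}
```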

Thanks,

	Hannes


* Re: [patch] mm: release swap slots for actively used pages
  2009-05-27  1:47 [patch] mm: release swap slots for actively used pages Johannes Weiner
  2009-05-27 23:15 ` Andrew Morton
@ 2009-06-04 21:22 ` Hugh Dickins
From: Hugh Dickins @ 2009-06-04 21:22 UTC
  To: Johannes Weiner
  Cc: Andrew Morton, linux-mm, linux-kernel, KAMEZAWA Hiroyuki,
	Rik van Riel, Hugh Dickins

On Wed, 27 May 2009, Johannes Weiner wrote:

> For anonymous pages activated by the reclaim scan or faulted from an
> evicted page table entry we should always try to free up swap space.
> 
> Both events indicate that the page is in active use and a possible
> change in the working set.  Thus removing the slot association from
> the page increases the chance of the page being placed near its new
> LRU buddies on the next eviction and helps keep the number of stale
> swap cache entries low.
> 
> try_to_free_swap() inherently only succeeds when the last user of the
> swap slot vanishes so it is safe to use from places where that single
> mapping just brought the page back to life.
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>

You're absolutely right to question these now ancient vm_swap_full()
tests.  But I'm not convinced that you're right in this patch.  You
seem to be overlooking the non-dirty cases: e.g. at process startup,
data is read in from a file, perhaps modified, or otherwise constructed
in a large anonymous buffer, then never subsequently changed, but under
later memory pressure written out to swap.

With your patch, we keep freeing that swap, so it has to get written
to swap again each time there's memory pressure; whereas without your
patch, it's already there on swap, no subsequent writes needed.
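(The cost difference can be made concrete with a toy accounting model --
the function name and the cycle structure are illustrative assumptions,
not measurements:)

```c
/*
 * Toy model: swap writes incurred by one clean, never-changing
 * anonymous page over a number of memory-pressure cycles (evict,
 * then fault back in).  free_slot_on_fault models the patch's
 * unconditional try_to_free_swap() in do_swap_page().
 */
static int model_swap_writes(int cycles, int free_slot_on_fault)
{
	int writes = 0;
	int has_slot = 0;

	for (int i = 0; i < cycles; i++) {
		if (!has_slot) {
			writes++;       /* eviction must write the page */
			has_slot = 1;
		}
		/* page faults back in; clean contents are unchanged */
		if (free_slot_on_fault)
			has_slot = 0;
	}
	return writes;
}
```

With the slot kept, five pressure cycles cost one swap write; with the
slot freed on every fault, every cycle writes the same unchanged data
again.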

Yes, access patterns may change, and it may sometimes be advantageous
to rewrite even the unchanged pages, to somewhere near their related
pages; but I don't think we can ever be sure of winning at that game.

So the do_swap_page() part of your patch looks plain wrong to me:
if it's a page which isn't going to be modified, it ought to remain
in swap (unless swap getting full or page now locked); and if it's
going to be modified, then do_wp_page()'s reuse_swap_page() should
already be dealing with it appropriately.

And the vmscan.c activate test should be checking PageDirty?

Hugh

> ---
>  mm/memory.c |    3 +--
>  mm/vmscan.c |    2 +-
>  2 files changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 8b4e40e..407ebf7 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2671,8 +2671,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  	mem_cgroup_commit_charge_swapin(page, ptr);
>  
>  	swap_free(entry);
> -	if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
> -		try_to_free_swap(page);
> +	try_to_free_swap(page);
>  	unlock_page(page);
>  
>  	if (write_access) {
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 621708f..2f0549d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -788,7 +788,7 @@ cull_mlocked:
>  
>  activate_locked:
>  		/* Not a candidate for swapping, so reclaim swap space. */
> -		if (PageSwapCache(page) && vm_swap_full())
> +		if (PageSwapCache(page))
>  			try_to_free_swap(page);
>  		VM_BUG_ON(PageActive(page));
>  		SetPageActive(page);
> -- 
> 1.6.3

