linux-mm.kvack.org archive mirror
* [PATCH] mm: move swap-in anonymous page into active list
@ 2016-07-29  3:25 Minchan Kim
  2016-07-29  3:30 ` Minchan Kim
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Minchan Kim @ 2016-07-29  3:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Mel Gorman, Johannes Weiner,
	Rik van Riel, Minchan Kim

Every swap-in anonymous page starts at the head of the inactive LRU list.
It will be activated unconditionally when the VM decides to reclaim it,
because the page table entry for the page usually has the accessed bit
set. Thus, its window for picking up a new reference is
2 * NR_inactive + NR_active, while others' is NR_active + NR_active.

It is not fair that such a page has a better chance of being referenced
than other newly allocated pages, which start at the head of the active
LRU list.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memory.c b/mm/memory.c
index 4425b6059339..3a730b920242 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2642,6 +2642,7 @@ int do_swap_page(struct fault_env *fe, pte_t orig_pte)
 	if (page == swapcache) {
 		do_page_add_anon_rmap(page, vma, fe->address, exclusive);
 		mem_cgroup_commit_charge(page, memcg, true, false);
+		activate_page(page);
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, fe->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-- 
1.9.1

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


* Re: [PATCH] mm: move swap-in anonymous page into active list
  2016-07-29  3:25 [PATCH] mm: move swap-in anonymous page into active list Minchan Kim
@ 2016-07-29  3:30 ` Minchan Kim
  2016-07-29 13:30 ` Johannes Weiner
  2016-07-29 16:55 ` Rik van Riel
  2 siblings, 0 replies; 5+ messages in thread
From: Minchan Kim @ 2016-07-29  3:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Mel Gorman, Johannes Weiner, Rik van Riel

On Fri, Jul 29, 2016 at 12:25:40PM +0900, Minchan Kim wrote:
> Every swap-in anonymous page starts from inactive lru list's head.
> It should be activated unconditionally when VM decide to reclaim
> because page table entry for the page always usually has marked
> accessed bit. Thus, their window size for getting a new referece
> is 2 * NR_inactive + NR_active while others is NR_active + NR_active.

                                                 NR_inactive

typo



* Re: [PATCH] mm: move swap-in anonymous page into active list
  2016-07-29  3:25 [PATCH] mm: move swap-in anonymous page into active list Minchan Kim
  2016-07-29  3:30 ` Minchan Kim
@ 2016-07-29 13:30 ` Johannes Weiner
  2016-07-29 16:55 ` Rik van Riel
  2 siblings, 0 replies; 5+ messages in thread
From: Johannes Weiner @ 2016-07-29 13:30 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-kernel, linux-mm, Mel Gorman, Rik van Riel,
	Hugh Dickins

On Fri, Jul 29, 2016 at 12:25:40PM +0900, Minchan Kim wrote:
> Every swap-in anonymous page starts from inactive lru list's head.
> It should be activated unconditionally when VM decide to reclaim
> because page table entry for the page always usually has marked
> accessed bit. Thus, their window size for getting a new referece
> is 2 * NR_inactive + NR_active while others is NR_active + NR_active.
> 
> It's not fair that it has more chance to be referenced compared
> to other newly allocated page which starts from active lru list's
> head.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

That behavior stood out to me as well recently, but I couldn't
convince myself that activation is the right thing.

The page can still have a valid copy on the swap device, so preferring
to reclaim that page over a fresh one could make sense. But as you
point out, having it start inactive instead of active actually ends up
giving it *more* LRU time, and that seems to be without justification.

So this change makes sense to me. Maybe somebody else remembers a good
reason for why the behavior is the way it is, but likely it has always
been an oversight.

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

> ---
>  mm/memory.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 4425b6059339..3a730b920242 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2642,6 +2642,7 @@ int do_swap_page(struct fault_env *fe, pte_t orig_pte)
>  	if (page == swapcache) {
>  		do_page_add_anon_rmap(page, vma, fe->address, exclusive);
>  		mem_cgroup_commit_charge(page, memcg, true, false);
> +		activate_page(page);
>  	} else { /* ksm created a completely new copy */
>  		page_add_new_anon_rmap(page, vma, fe->address, false);
>  		mem_cgroup_commit_charge(page, memcg, false, false);
> -- 
> 1.9.1



* Re: [PATCH] mm: move swap-in anonymous page into active list
  2016-07-29  3:25 [PATCH] mm: move swap-in anonymous page into active list Minchan Kim
  2016-07-29  3:30 ` Minchan Kim
  2016-07-29 13:30 ` Johannes Weiner
@ 2016-07-29 16:55 ` Rik van Riel
  2016-07-29 18:08   ` Nadav Amit
  2 siblings, 1 reply; 5+ messages in thread
From: Rik van Riel @ 2016-07-29 16:55 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: linux-kernel, linux-mm, Mel Gorman, Johannes Weiner


On Fri, 2016-07-29 at 12:25 +0900, Minchan Kim wrote:
> Every swap-in anonymous page starts from inactive lru list's head.
> It should be activated unconditionally when VM decide to reclaim
> because page table entry for the page always usually has marked
> accessed bit. Thus, their window size for getting a new referece
> is 2 * NR_inactive + NR_active while others is NR_active + NR_active.
> 
> It's not fair that it has more chance to be referenced compared
> to other newly allocated page which starts from active lru list's
> head.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Rik van Riel <riel@redhat.com>

The reason newly read in swap cache pages start on the
inactive list is that we do some amount of read-around,
and do not know which pages will get used.

However, immediately activating the ones that DO get
used, like your patch does, is the right thing to do.

-- 
All Rights Reversed.



* Re: [PATCH] mm: move swap-in anonymous page into active list
  2016-07-29 16:55 ` Rik van Riel
@ 2016-07-29 18:08   ` Nadav Amit
  0 siblings, 0 replies; 5+ messages in thread
From: Nadav Amit @ 2016-07-29 18:08 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Minchan Kim, Andrew Morton, LKML, open list:MEMORY MANAGEMENT,
	Mel Gorman, Johannes Weiner

Rik van Riel <riel@redhat.com> wrote:

> On Fri, 2016-07-29 at 12:25 +0900, Minchan Kim wrote:
>> Every swap-in anonymous page starts from inactive lru list's head.
>> It should be activated unconditionally when VM decide to reclaim
>> because page table entry for the page always usually has marked
>> accessed bit. Thus, their window size for getting a new referece
>> is 2 * NR_inactive + NR_active while others is NR_active + NR_active.
>> 
>> It's not fair that it has more chance to be referenced compared
>> to other newly allocated page which starts from active lru list's
>> head.
>> 
>> Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> Acked-by: Rik van Riel <riel@redhat.com>
> 
> The reason newly read in swap cache pages start on the
> inactive list is that we do some amount of read-around,
> and do not know which pages will get used.
> 
> However, immediately activating the ones that DO get
> used, like your patch does, is the right thing to do.

Can it cause the swap clusters to lose spatial locality?

For instance, suppose a process writes sequentially to memory multiple
times, and its pages are swapped out, in, and back out. In such a case,
doesn't it increase the probability that the swap cluster will hold
irrelevant data and make swap prefetch less efficient?

Regards,
Nadav



end of thread, other threads:[~2016-07-29 18:08 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-29  3:25 [PATCH] mm: move swap-in anonymous page into active list Minchan Kim
2016-07-29  3:30 ` Minchan Kim
2016-07-29 13:30 ` Johannes Weiner
2016-07-29 16:55 ` Rik van Riel
2016-07-29 18:08   ` Nadav Amit
