* [PATCH] mm: try to free swap only for writing swap fault
@ 2017-11-02 12:35 ` zhouxianrong
  0 siblings, 0 replies; 6+ messages in thread
From: zhouxianrong @ 2017-11-02 12:35 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, akpm, jack, kirill.shutemov, ross.zwisler, mhocko,
	dave.jiang, aneesh.kumar, minchan, mingo, jglisse, willy, hughd,
	zhouxianrong, zhouxiyu, weidu.du, fanghua3, hutj, won.ho.park

From: zhouxianrong <zhouxianrong@huawei.com>

The purpose of this patch: when a read swap fault happens on a
clean swap cache page whose swap count is one, try_to_free_swap()
can remove the page from the swap cache and mark it dirty. If
that page is later reclaimed, shrink_page_list() has to page it
out again only because of that dirty bit. So allow this behaviour
only for write swap faults, and keep swap cache pages brought in
by read faults clean.
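
For illustration, here is an annotated excerpt of how the check in
do_swap_page() reads with this change applied (copied from the hunk
below, comments added, surrounding code omitted):

	swap_free(entry);
	/*
	 * With this change, mem_cgroup_swap_full() only leads to
	 * try_to_free_swap() on a write fault; a read fault reaches
	 * it only through the VM_LOCKED / PageMlocked path, so a
	 * clean swap cache page brought in by a read fault stays
	 * clean.
	 */
	if (((vmf->flags & FAULT_FLAG_WRITE) && mem_cgroup_swap_full(page)) ||
	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
		try_to_free_swap(page);
	unlock_page(page);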

In shrink_page_list() I sampled the number of non-dirty anonymous
pages (which need no pageout) and the total number of anonymous pages.

the results are:

        non-dirty anonymous pages     total anonymous pages
before  26343                         635218
after   36907                         634312
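
As a rough sketch of how such numbers could be collected, assume two
hypothetical debug counters bumped in shrink_page_list()'s scan loop
(illustration only, not part of this patch):

	/* hypothetical counters, for illustration only */
	static unsigned long total_anon, clean_anon;

	if (PageAnon(page)) {
		total_anon++;		/* every anonymous page scanned */
		if (!PageDirty(page))
			clean_anon++;	/* would need no pageout() */
	}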

Signed-off-by: zhouxianrong <zhouxianrong@huawei.com>
---
 mm/memory.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index a728bed..5a944fe 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2999,7 +2999,7 @@ int do_swap_page(struct vm_fault *vmf)
 	}
 
 	swap_free(entry);
-	if (mem_cgroup_swap_full(page) ||
+	if (((vmf->flags & FAULT_FLAG_WRITE) && mem_cgroup_swap_full(page)) ||
 	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
 		try_to_free_swap(page);
 	unlock_page(page);
-- 
1.7.9.5


* Re: [PATCH] mm: try to free swap only for writing swap fault
  2017-11-02 12:35 ` zhouxianrong
@ 2017-11-02 13:22   ` Michal Hocko
  -1 siblings, 0 replies; 6+ messages in thread
From: Michal Hocko @ 2017-11-02 13:22 UTC (permalink / raw)
  To: zhouxianrong
  Cc: linux-mm, linux-kernel, akpm, jack, kirill.shutemov,
	ross.zwisler, dave.jiang, aneesh.kumar, minchan, mingo, jglisse,
	willy, hughd, zhouxiyu, weidu.du, fanghua3, hutj, won.ho.park

On Thu 02-11-17 20:35:19, zhouxianrong@huawei.com wrote:
> From: zhouxianrong <zhouxianrong@huawei.com>
> 
> The purpose of this patch: when a read swap fault happens on a
> clean swap cache page whose swap count is one, try_to_free_swap()
> can remove the page from the swap cache and mark it dirty. If
> that page is later reclaimed, shrink_page_list() has to page it
> out again only because of that dirty bit. So allow this behaviour
> only for write swap faults, and keep swap cache pages brought in
> by read faults clean.
> 
> In shrink_page_list() I sampled the number of non-dirty anonymous
> pages (which need no pageout) and the total number of anonymous pages.
> 
> the results are:
> 
>         non-dirty anonymous pages     total anonymous pages
> before  26343                         635218
> after   36907                         634312

This data is absolutely pointless without describing the workload.
Your patch also still fails to explain which workloads are going to
benefit/suffer from the change and why it is a good thing to do in
general.

> Signed-off-by: zhouxianrong <zhouxianrong@huawei.com>
> ---
>  mm/memory.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index a728bed..5a944fe 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2999,7 +2999,7 @@ int do_swap_page(struct vm_fault *vmf)
>  	}
>  
>  	swap_free(entry);
> -	if (mem_cgroup_swap_full(page) ||
> +	if (((vmf->flags & FAULT_FLAG_WRITE) && mem_cgroup_swap_full(page)) ||
>  	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
>  		try_to_free_swap(page);
>  	unlock_page(page);
> -- 
> 1.7.9.5
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm: try to free swap only for writing swap fault
  2017-11-02 13:22   ` Michal Hocko
@ 2017-11-03  3:31     ` zhouxianrong
  -1 siblings, 0 replies; 6+ messages in thread
From: zhouxianrong @ 2017-11-03  3:31 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, akpm, jack, kirill.shutemov,
	ross.zwisler, dave.jiang, aneesh.kumar, minchan, mingo, jglisse,
	willy, hughd, zhouxiyu, weidu.du, fanghua3, hutj, won.ho.park

What I mean is that, for a read swap fault, try_to_free_swap() in
do_swap_page() can hurt clean swap cache pages by making them dirty.
This affects the reclaim path in shrink_page_list() and makes it
write out many more of these now-dirty anonymous pages, even though
they could otherwise have stayed clean.
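
To make the reclaim effect concrete, here is a rough sketch of the
two cases (simplified pseudo-code, not the actual shrink_page_list()
source):

	if (PageAnon(page) && PageDirty(page)) {
		/*
		 * Dirtied by try_to_free_swap() at fault time: reclaim
		 * must allocate swap and write the page out again
		 * before it can be freed, i.e. extra swap-out I/O.
		 */
	} else if (PageAnon(page) && PageSwapCache(page) && !PageDirty(page)) {
		/*
		 * Still clean in the swap cache: the data is already
		 * on swap, so the page can simply be dropped.
		 */
	}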

On 2017/11/2 21:22, Michal Hocko wrote:
> On Thu 02-11-17 20:35:19, zhouxianrong@huawei.com wrote:
>> From: zhouxianrong <zhouxianrong@huawei.com>
>>
>> The purpose of this patch: when a read swap fault happens on a
>> clean swap cache page whose swap count is one, try_to_free_swap()
>> can remove the page from the swap cache and mark it dirty. If
>> that page is later reclaimed, shrink_page_list() has to page it
>> out again only because of that dirty bit. So allow this behaviour
>> only for write swap faults, and keep swap cache pages brought in
>> by read faults clean.
>>
>> In shrink_page_list() I sampled the number of non-dirty anonymous
>> pages (which need no pageout) and the total number of anonymous pages.
>>
>> the results are:
>>
>>         non-dirty anonymous pages     total anonymous pages
>> before  26343                         635218
>> after   36907                         634312
>
> This data is absolutely pointless without describing the workload.
> Your patch also still fails to explain which workloads are going to
> benefit/suffer from the change and why it is a good thing to do in
> general.
>
>> Signed-off-by: zhouxianrong <zhouxianrong@huawei.com>
>> ---
>>  mm/memory.c |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index a728bed..5a944fe 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2999,7 +2999,7 @@ int do_swap_page(struct vm_fault *vmf)
>>  	}
>>
>>  	swap_free(entry);
>> -	if (mem_cgroup_swap_full(page) ||
>> +	if (((vmf->flags & FAULT_FLAG_WRITE) && mem_cgroup_swap_full(page)) ||
>>  	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
>>  		try_to_free_swap(page);
>>  	unlock_page(page);
>> --
>> 1.7.9.5
>>
>

