All of lore.kernel.org
* [PATCH] mm: extend reuse_swap_page range as much as possible
@ 2017-11-01 10:51 ` zhouxianrong
  0 siblings, 0 replies; 9+ messages in thread
From: zhouxianrong @ 2017-11-01 10:51 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, akpm, ying.huang, tim.c.chen, mhocko, rientjes,
	mingo, vegard.nossum, minchan, aaron.lu, zhouxianrong, zhouxiyu,
	weidu.du, fanghua3, hutj, won.ho.park

From: zhouxianrong <zhouxianrong@huawei.com>

Originally, reuse_swap_page() required that the sum of the page's
mapcount and swapcount be less than or equal to one; only in that
case could the page be reused and the COW avoided.

Now reuse_swap_page() requires only that the page's mapcount be less
than or equal to one and that the page not be dirty in the swap
cache; in that case its swap count does not matter.

A page that is clean in the swap cache has already been written to
the swap device during an earlier reclaim and then been read back in
on a swap fault. Such a page can be reused even though its swap
count is greater than one, postponing the COW to later accesses to
the swap-cache page rather than performing it now.

I tested this patch on kernel 4.4.23 with arm64 and no huge pages.
It works fine.

Signed-off-by: zhouxianrong <zhouxianrong@huawei.com>
---
 mm/swapfile.c |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index bf91dc9..c21cf07 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1543,22 +1543,27 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
 bool reuse_swap_page(struct page *page, int *total_map_swapcount)
 {
 	int count, total_mapcount, total_swapcount;
+	int dirty;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	if (unlikely(PageKsm(page)))
 		return false;
+	dirty = PageDirty(page);
 	count = page_trans_huge_map_swapcount(page, &total_mapcount,
 					      &total_swapcount);
 	if (total_map_swapcount)
 		*total_map_swapcount = total_mapcount + total_swapcount;
-	if (count == 1 && PageSwapCache(page) &&
+	if ((total_mapcount <= 1 && !dirty) ||
+		(count == 1 && PageSwapCache(page) &&
 	    (likely(!PageTransCompound(page)) ||
 	     /* The remaining swap count will be freed soon */
-	     total_swapcount == page_swapcount(page))) {
+	     total_swapcount == page_swapcount(page)))) {
 		if (!PageWriteback(page)) {
 			page = compound_head(page);
 			delete_from_swap_cache(page);
 			SetPageDirty(page);
+			if (!dirty)
+				return true;
 		} else {
 			swp_entry_t entry;
 			struct swap_info_struct *p;
-- 
1.7.9.5
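The condition change in the patch can be read as a simple predicate: a page mapped at most once whose swap-cache copy is clean may be reused in place, because the copy on the swap device is still valid for any other swap references. A minimal user-space sketch of that predicate (an illustration of the changelog's logic, not the kernel code itself):

```c
#include <stdbool.h>

/*
 * Illustration of the relaxed reuse test described in the changelog
 * above (simplified; the real change is in the mm/swapfile.c diff).
 * A page mapped at most once whose swap-cache copy is clean can be
 * reused without COW: the data on the swap device is still valid, so
 * any other swap entries referencing it can fault the page back in
 * later, postponing the COW until (and unless) it is actually needed.
 */
bool can_reuse_clean_swapcache_page(int total_mapcount, bool dirty)
{
	return total_mapcount <= 1 && !dirty;
}
```

Under the old rule, a swap count above one always forced a COW; the patch drops that requirement for clean swap-cache pages and keeps the original test as a fallback.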

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] mm: extend reuse_swap_page range as much as possible
  2017-11-01 10:51 ` zhouxianrong
@ 2017-11-02  1:42   ` Huang, Ying
  -1 siblings, 0 replies; 9+ messages in thread
From: Huang, Ying @ 2017-11-02  1:42 UTC (permalink / raw)
  To: zhouxianrong
  Cc: linux-mm, linux-kernel, akpm, ying.huang, tim.c.chen, mhocko,
	rientjes, mingo, vegard.nossum, minchan, aaron.lu, zhouxiyu,
	weidu.du, fanghua3, hutj, won.ho.park

<zhouxianrong@huawei.com> writes:

> From: zhouxianrong <zhouxianrong@huawei.com>
>
> Originally, reuse_swap_page() required that the sum of the page's
> mapcount and swapcount be less than or equal to one; only in that
> case could the page be reused and the COW avoided.
>
> Now reuse_swap_page() requires only that the page's mapcount be less
> than or equal to one and that the page not be dirty in the swap
> cache; in that case its swap count does not matter.
>
> A page that is clean in the swap cache has already been written to
> the swap device during an earlier reclaim and then been read back in
> on a swap fault. Such a page can be reused even though its swap
> count is greater than one, postponing the COW to later accesses to
> the swap-cache page rather than performing it now.
>
> I tested this patch on kernel 4.4.23 with arm64 and no huge pages.
> It works fine.

Why do you need this?  You saved copying one page from memory to memory
(COW) now, at the cost of reading a page from disk to memory later?

Best Regards,
Huang, Ying

> Signed-off-by: zhouxianrong <zhouxianrong@huawei.com>
> ---
>  mm/swapfile.c |    9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index bf91dc9..c21cf07 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1543,22 +1543,27 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
>  bool reuse_swap_page(struct page *page, int *total_map_swapcount)
>  {
>  	int count, total_mapcount, total_swapcount;
> +	int dirty;
>  
>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
>  	if (unlikely(PageKsm(page)))
>  		return false;
> +	dirty = PageDirty(page);
>  	count = page_trans_huge_map_swapcount(page, &total_mapcount,
>  					      &total_swapcount);
>  	if (total_map_swapcount)
>  		*total_map_swapcount = total_mapcount + total_swapcount;
> -	if (count == 1 && PageSwapCache(page) &&
> +	if ((total_mapcount <= 1 && !dirty) ||
> +		(count == 1 && PageSwapCache(page) &&
>  	    (likely(!PageTransCompound(page)) ||
>  	     /* The remaining swap count will be freed soon */
> -	     total_swapcount == page_swapcount(page))) {
> +	     total_swapcount == page_swapcount(page)))) {
>  		if (!PageWriteback(page)) {
>  			page = compound_head(page);
>  			delete_from_swap_cache(page);
>  			SetPageDirty(page);
> +			if (!dirty)
> +				return true;
>  		} else {
>  			swp_entry_t entry;
>  			struct swap_info_struct *p;

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Reply: [PATCH] mm: extend reuse_swap_page range as much as possible
  2017-11-02  1:42   ` Huang, Ying
  (?)
@ 2017-11-02  2:09   ` zhouxianrong
  2017-11-02  4:22       ` Minchan Kim
  -1 siblings, 1 reply; 9+ messages in thread
From: zhouxianrong @ 2017-11-02  2:09 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-mm, linux-kernel, akpm, tim.c.chen, mhocko, rientjes,
	mingo, vegard.nossum, minchan, aaron.lu, Zhouxiyu,
	Duwei (Device OS),
	fanghua, hutj, Won Ho Park

<zhouxianrong@huawei.com> writes:

> From: zhouxianrong <zhouxianrong@huawei.com>
>
> Originally, reuse_swap_page() required that the sum of the page's
> mapcount and swapcount be less than or equal to one; only in that
> case could the page be reused and the COW avoided.
>
> Now reuse_swap_page() requires only that the page's mapcount be less
> than or equal to one and that the page not be dirty in the swap
> cache; in that case its swap count does not matter.
>
> A page that is clean in the swap cache has already been written to
> the swap device during an earlier reclaim and then been read back in
> on a swap fault. Such a page can be reused even though its swap
> count is greater than one, postponing the COW to later accesses to
> the swap-cache page rather than performing it now.
>
> I tested this patch on kernel 4.4.23 with arm64 and no huge pages.
> It works fine.

> Why do you need this?  You saved copying one page from memory to memory
> (COW) now, at the cost of reading a page from disk to memory later?

Yes. The later access does not always happen; it only occurs with some probability, so it is worth postponing the COW now.

> Best Regards,
> Huang, Ying

> Signed-off-by: zhouxianrong <zhouxianrong@huawei.com>
> ---
>  mm/swapfile.c |    9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c index bf91dc9..c21cf07 
> 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1543,22 +1543,27 @@ static int 
> page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,  
> bool reuse_swap_page(struct page *page, int *total_map_swapcount)  {
>  	int count, total_mapcount, total_swapcount;
> +	int dirty;
>  
>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
>  	if (unlikely(PageKsm(page)))
>  		return false;
> +	dirty = PageDirty(page);
>  	count = page_trans_huge_map_swapcount(page, &total_mapcount,
>  					      &total_swapcount);
>  	if (total_map_swapcount)
>  		*total_map_swapcount = total_mapcount + total_swapcount;
> -	if (count == 1 && PageSwapCache(page) &&
> +	if ((total_mapcount <= 1 && !dirty) ||
> +		(count == 1 && PageSwapCache(page) &&
>  	    (likely(!PageTransCompound(page)) ||
>  	     /* The remaining swap count will be freed soon */
> -	     total_swapcount == page_swapcount(page))) {
> +	     total_swapcount == page_swapcount(page)))) {
>  		if (!PageWriteback(page)) {
>  			page = compound_head(page);
>  			delete_from_swap_cache(page);
>  			SetPageDirty(page);
> +			if (!dirty)
> +				return true;
>  		} else {
>  			swp_entry_t entry;
>  			struct swap_info_struct *p;

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Reply: [PATCH] mm: extend reuse_swap_page range as much as possible
  2017-11-02  2:09   ` Reply: " zhouxianrong
@ 2017-11-02  4:22       ` Minchan Kim
  0 siblings, 0 replies; 9+ messages in thread
From: Minchan Kim @ 2017-11-02  4:22 UTC (permalink / raw)
  To: zhouxianrong
  Cc: Huang, Ying, linux-mm, linux-kernel, akpm, tim.c.chen, mhocko,
	rientjes, mingo, vegard.nossum, aaron.lu, Zhouxiyu,
	Duwei (Device OS),
	fanghua, hutj, Won Ho Park

On Thu, Nov 02, 2017 at 02:09:57AM +0000, zhouxianrong wrote:
> <zhouxianrong@huawei.com> writes:
> 
> > From: zhouxianrong <zhouxianrong@huawei.com>
> >
> > Originally, reuse_swap_page() required that the sum of the page's
> > mapcount and swapcount be less than or equal to one; only in that
> > case could the page be reused and the COW avoided.
> >
> > Now reuse_swap_page() requires only that the page's mapcount be less
> > than or equal to one and that the page not be dirty in the swap
> > cache; in that case its swap count does not matter.
> >
> > A page that is clean in the swap cache has already been written to
> > the swap device during an earlier reclaim and then been read back in
> > on a swap fault. Such a page can be reused even though its swap
> > count is greater than one, postponing the COW to later accesses to
> > the swap-cache page rather than performing it now.
> >
> > I tested this patch on kernel 4.4.23 with arm64 and no huge pages.
> > It works fine.
> 
> Why do you need this?  You saved copying one page from memory to memory
> (COW) now, at the cost of reading a page from disk to memory later?
> 
> Yes. The later access does not always happen; it only occurs with some probability, so it is worth postponing the COW now.

So it's a trade-off. That means we need some numbers, from some
representative scenarios, to prove it's better than the status quo.
That would help convince the reviewers and the maintainer.

Thanks.
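To see why numbers matter here, the trade-off can be written as a toy expected-cost model (all values below are illustrative assumptions, not measurements): reusing the page saves one page copy now, at the cost of a possible swap-in later if another sharer of the swap entry faults.

```c
/*
 * Toy cost model for the trade-off discussed above.  All parameters
 * are hypothetical; real numbers would have to come from benchmarks
 * such as the ones requested in this thread.
 *
 *   COW now:    always pay one page copy.
 *   Reuse now:  pay nothing up front; pay one swap-in later with
 *               probability p (another user of the swap entry faults).
 */
double cost_cow_now(double copy_cost)
{
	return copy_cost;
}

double cost_reuse_now(double p_later_fault, double swapin_cost)
{
	return p_later_fault * swapin_cost;
}

/* Reuse is the better bet when p * swapin_cost < copy_cost. */
int reuse_wins(double p, double copy_cost, double swapin_cost)
{
	return cost_reuse_now(p, swapin_cost) < cost_cow_now(copy_cost);
}
```

Since a swap-in from disk is typically orders of magnitude slower than an in-memory page copy, reuse only pays off when the later-fault probability is very small, which is exactly why measurements across workloads are needed.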

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Reply: [PATCH] mm: extend reuse_swap_page range as much as possible
  2017-11-02  4:22       ` Minchan Kim
@ 2017-11-02  7:49         ` Michal Hocko
  -1 siblings, 0 replies; 9+ messages in thread
From: Michal Hocko @ 2017-11-02  7:49 UTC (permalink / raw)
  To: Minchan Kim
  Cc: zhouxianrong, Huang, Ying, linux-mm, linux-kernel, akpm,
	tim.c.chen, rientjes, mingo, vegard.nossum, aaron.lu, Zhouxiyu,
	Duwei (Device OS),
	fanghua, hutj, Won Ho Park

On Thu 02-11-17 13:22:23, Minchan Kim wrote:
> On Thu, Nov 02, 2017 at 02:09:57AM +0000, zhouxianrong wrote:
> > <zhouxianrong@huawei.com> writes:
> > 
> > > From: zhouxianrong <zhouxianrong@huawei.com>
> > >
> > > Originally, reuse_swap_page() required that the sum of the page's
> > > mapcount and swapcount be less than or equal to one; only in that
> > > case could the page be reused and the COW avoided.
> > >
> > > Now reuse_swap_page() requires only that the page's mapcount be less
> > > than or equal to one and that the page not be dirty in the swap
> > > cache; in that case its swap count does not matter.
> > >
> > > A page that is clean in the swap cache has already been written to
> > > the swap device during an earlier reclaim and then been read back in
> > > on a swap fault. Such a page can be reused even though its swap
> > > count is greater than one, postponing the COW to later accesses to
> > > the swap-cache page rather than performing it now.
> > >
> > > I tested this patch on kernel 4.4.23 with arm64 and no huge pages.
> > > It works fine.

This is not an appropriate justification.

> > Why do you need this?  You saved copying one page from memory to memory
> > (COW) now, at the cost of reading a page from disk to memory later?
> > 
> > Yes. The later access does not always happen; it only occurs with some probability, so it is worth postponing the COW now.
> 
> So it's a trade-off. That means we need some numbers, from some
> representative scenarios, to prove it's better than the status quo.
> That would help convince the reviewers and the maintainer.

Absolutely agreed. We definitely need some numbers for different sets
of workloads.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2017-11-02  7:49 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-01 10:51 [PATCH] mm: extend reuse_swap_page range as much as possible zhouxianrong
2017-11-02  1:42 ` Huang, Ying
2017-11-02  2:09   ` Reply: " zhouxianrong
2017-11-02  4:22     ` Minchan Kim
2017-11-02  7:49       ` Michal Hocko
