linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC] ksm: Assist buddy allocator to assemble 1-order pages
@ 2018-10-11 10:52 Kirill Tkhai
  2018-10-11 17:07 ` Andy Shevchenko
  2018-10-15 15:41 ` Kirill A. Shutemov
  0 siblings, 2 replies; 5+ messages in thread
From: Kirill Tkhai @ 2018-10-11 10:52 UTC (permalink / raw)
  To: akpm, kirill.shutemov, andriy.shevchenko, mhocko, rppt, imbrenda,
	corbet, ndesaulniers, ktkhai, dave.jiang, jglisse, jia.he,
	paulmck, colin.king, jiang.biao2, linux-mm, linux-kernel

try_to_merge_two_pages() merges two pages: one of them
is a page of the currently scanned mm, the other is a
page with an identical hash from the unstable tree.
Currently, we merge the page from the unstable tree into
the first one, and then free it.

The idea of this patch is to prefer freeing whichever of
the two pages has a free neighbour (i.e., a neighbour
with zero page_count()). This allows the buddy allocator
to assemble at least an order-1 block from the freed page
and its neighbour; this is a kind of cheap passive
compaction.

An order-1 page block consists of the pages with PFNs
[2n, 2n+1] (even, odd), so the neighbour's PFN is
calculated by XOR-ing with 1. We check that the resulting
PFN is valid, look at its page_count(), and prefer merging
into @tree_page if the neighbour's usage count is zero.
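
(A minimal sketch of the pairing relied on here; the helper name is
made up, but the relation is the same as __find_buddy_pfn(pfn, 0)
in mm/internal.h:)

	/* PFN of the only page @pfn can pair with into an order-1 block */
	static inline unsigned long order1_partner_pfn(unsigned long pfn)
	{
		return pfn ^ 1;		/* 2n <-> 2n+1 */
	}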

There is a small difference from the current behaviour
on the error path: if the second
try_to_merge_with_ksm_page() fails, we return from
try_to_merge_two_pages() with @tree_page removed from
the unstable tree. This does not seem to matter, but if
we want no change at all, it is not a problem to move
remove_rmap_item_from_tree() from
try_to_merge_with_ksm_page() to its callers.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/ksm.c |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/mm/ksm.c b/mm/ksm.c
index 5b0894b45ee5..b83ca37e28f0 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1321,6 +1321,21 @@ static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item,
 {
 	int err;
 
+	if (IS_ENABLED(CONFIG_COMPACTION)) {
+		unsigned long pfn;
+		/*
+		 * Find neighbour of @page containing 1-order pair
+		 * in buddy-allocator and check whether it is free.
+		 * If it is so, try to use @tree_page as ksm page
+		 * and to free @page.
+		 */
+		pfn = (page_to_pfn(page) ^ 1);
+		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
+			swap(rmap_item, tree_rmap_item);
+			swap(page, tree_page);
+		}
+	}
+
 	err = try_to_merge_with_ksm_page(rmap_item, page, NULL);
 	if (!err) {
 		err = try_to_merge_with_ksm_page(tree_rmap_item,



* Re: [PATCH RFC] ksm: Assist buddy allocator to assemble 1-order pages
  2018-10-11 10:52 [PATCH RFC] ksm: Assist buddy allocator to assemble 1-order pages Kirill Tkhai
@ 2018-10-11 17:07 ` Andy Shevchenko
  2018-10-15 15:41 ` Kirill A. Shutemov
  1 sibling, 0 replies; 5+ messages in thread
From: Andy Shevchenko @ 2018-10-11 17:07 UTC (permalink / raw)
  To: Kirill Tkhai
  Cc: akpm, kirill.shutemov, mhocko, rppt, imbrenda, corbet,
	ndesaulniers, dave.jiang, jglisse, jia.he, paulmck, colin.king,
	jiang.biao2, linux-mm, linux-kernel

On Thu, Oct 11, 2018 at 01:52:22PM +0300, Kirill Tkhai wrote:
> try_to_merge_two_pages() merges two pages: one of them
> is a page of the currently scanned mm, the other is a
> page with an identical hash from the unstable tree.
> Currently, we merge the page from the unstable tree into
> the first one, and then free it.
> 
> The idea of this patch is to prefer freeing whichever of
> the two pages has a free neighbour (i.e., a neighbour
> with zero page_count()). This allows the buddy allocator
> to assemble at least an order-1 block from the freed page
> and its neighbour; this is a kind of cheap passive
> compaction.
> 
> An order-1 page block consists of the pages with PFNs
> [2n, 2n+1] (even, odd), so the neighbour's PFN is
> calculated by XOR-ing with 1. We check that the resulting
> PFN is valid, look at its page_count(), and prefer merging
> into @tree_page if the neighbour's usage count is zero.
> 
> There is a small difference from the current behaviour
> on the error path: if the second
> try_to_merge_with_ksm_page() fails, we return from
> try_to_merge_two_pages() with @tree_page removed from
> the unstable tree. This does not seem to matter, but if
> we want no change at all, it is not a problem to move
> remove_rmap_item_from_tree() from
> try_to_merge_with_ksm_page() to its callers.
> 
> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
> ---
>  mm/ksm.c |   15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 5b0894b45ee5..b83ca37e28f0 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1321,6 +1321,21 @@ static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item,
>  {
>  	int err;
>  
> +	if (IS_ENABLED(CONFIG_COMPACTION)) {
> +		unsigned long pfn;

Perhaps add a blank line after the declaration?

> +		/*
> +		 * Find neighbour of @page containing 1-order pair
> +		 * in buddy-allocator and check whether it is free.
> +		 * If it is so, try to use @tree_page as ksm page
> +		 * and to free @page.
> +		 */
> +		pfn = (page_to_pfn(page) ^ 1);

Parens are not needed.


> +		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
> +			swap(rmap_item, tree_rmap_item);
> +			swap(page, tree_page);
> +		}
> +	}
> +
>  	err = try_to_merge_with_ksm_page(rmap_item, page, NULL);
>  	if (!err) {
>  		err = try_to_merge_with_ksm_page(tree_rmap_item,
> 

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH RFC] ksm: Assist buddy allocator to assemble 1-order pages
  2018-10-11 10:52 [PATCH RFC] ksm: Assist buddy allocator to assemble 1-order pages Kirill Tkhai
  2018-10-11 17:07 ` Andy Shevchenko
@ 2018-10-15 15:41 ` Kirill A. Shutemov
  2018-10-16  9:34   ` Kirill Tkhai
  1 sibling, 1 reply; 5+ messages in thread
From: Kirill A. Shutemov @ 2018-10-15 15:41 UTC (permalink / raw)
  To: Kirill Tkhai
  Cc: akpm, kirill.shutemov, andriy.shevchenko, mhocko, rppt, imbrenda,
	corbet, ndesaulniers, dave.jiang, jglisse, jia.he, paulmck,
	colin.king, jiang.biao2, linux-mm, linux-kernel

On Thu, Oct 11, 2018 at 01:52:22PM +0300, Kirill Tkhai wrote:
> try_to_merge_two_pages() merges two pages: one of them
> is a page of the currently scanned mm, the other is a
> page with an identical hash from the unstable tree.
> Currently, we merge the page from the unstable tree into
> the first one, and then free it.
> 
> The idea of this patch is to prefer freeing whichever of
> the two pages has a free neighbour (i.e., a neighbour
> with zero page_count()). This allows the buddy allocator
> to assemble at least an order-1 block from the freed page
> and its neighbour; this is a kind of cheap passive
> compaction.
> 
> An order-1 page block consists of the pages with PFNs
> [2n, 2n+1] (even, odd), so the neighbour's PFN is
> calculated by XOR-ing with 1. We check that the resulting
> PFN is valid, look at its page_count(), and prefer merging
> into @tree_page if the neighbour's usage count is zero.
> 
> There is a small difference from the current behaviour
> on the error path: if the second
> try_to_merge_with_ksm_page() fails, we return from
> try_to_merge_two_pages() with @tree_page removed from
> the unstable tree. This does not seem to matter, but if
> we want no change at all, it is not a problem to move
> remove_rmap_item_from_tree() from
> try_to_merge_with_ksm_page() to its callers.
> 
> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
> ---
>  mm/ksm.c |   15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 5b0894b45ee5..b83ca37e28f0 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1321,6 +1321,21 @@ static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item,
>  {
>  	int err;
>  
> +	if (IS_ENABLED(CONFIG_COMPACTION)) {
> +		unsigned long pfn;
> +		/*
> +		 * Find neighbour of @page containing 1-order pair
> +		 * in buddy-allocator and check whether it is free.

You cannot really check whether the page is free. There are paths that
make the refcount zero temporarily but do not free the page.
See page_ref_freeze(), for instance.

It should be fine for this use case, but the comment should state that
we speculate about the page's usage rather than having a definitive answer.
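
For illustration (a sketch only; 'expected' is a hypothetical local),
the kind of transient state meant here:

	/* e.g. a migration-style path operating on a page it still owns */
	if (page_ref_freeze(page, expected)) {
		/*
		 * page_count(page) is 0 from here on, yet the page is
		 * still in use and will not enter the buddy allocator.
		 */
		page_ref_unfreeze(page, expected);
	}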

[ I don't know enough about KSM to ack the patch in general, but it looks
fine to me at first glance. ]

> +		 * If it is so, try to use @tree_page as ksm page
> +		 * and to free @page.
> +		 */
> +		pfn = (page_to_pfn(page) ^ 1);
> +		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
> +			swap(rmap_item, tree_rmap_item);
> +			swap(page, tree_page);
> +		}
> +	}
> +
>  	err = try_to_merge_with_ksm_page(rmap_item, page, NULL);
>  	if (!err) {
>  		err = try_to_merge_with_ksm_page(tree_rmap_item,
> 

-- 
 Kirill A. Shutemov


* Re: [PATCH RFC] ksm: Assist buddy allocator to assemble 1-order pages
  2018-10-15 15:41 ` Kirill A. Shutemov
@ 2018-10-16  9:34   ` Kirill Tkhai
  2018-10-16 11:43     ` Kirill A. Shutemov
  0 siblings, 1 reply; 5+ messages in thread
From: Kirill Tkhai @ 2018-10-16  9:34 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: akpm, kirill.shutemov, andriy.shevchenko, mhocko, rppt, imbrenda,
	corbet, ndesaulniers, dave.jiang, jglisse, jia.he, paulmck,
	colin.king, jiang.biao2, linux-mm, linux-kernel

On 15.10.2018 18:41, Kirill A. Shutemov wrote:
> On Thu, Oct 11, 2018 at 01:52:22PM +0300, Kirill Tkhai wrote:
>> try_to_merge_two_pages() merges two pages: one of them
>> is a page of the currently scanned mm, the other is a
>> page with an identical hash from the unstable tree.
>> Currently, we merge the page from the unstable tree into
>> the first one, and then free it.
>>
>> The idea of this patch is to prefer freeing whichever of
>> the two pages has a free neighbour (i.e., a neighbour
>> with zero page_count()). This allows the buddy allocator
>> to assemble at least an order-1 block from the freed page
>> and its neighbour; this is a kind of cheap passive
>> compaction.
>>
>> An order-1 page block consists of the pages with PFNs
>> [2n, 2n+1] (even, odd), so the neighbour's PFN is
>> calculated by XOR-ing with 1. We check that the resulting
>> PFN is valid, look at its page_count(), and prefer merging
>> into @tree_page if the neighbour's usage count is zero.
>>
>> There is a small difference from the current behaviour
>> on the error path: if the second
>> try_to_merge_with_ksm_page() fails, we return from
>> try_to_merge_two_pages() with @tree_page removed from
>> the unstable tree. This does not seem to matter, but if
>> we want no change at all, it is not a problem to move
>> remove_rmap_item_from_tree() from
>> try_to_merge_with_ksm_page() to its callers.
>>
>> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
>> ---
>>  mm/ksm.c |   15 +++++++++++++++
>>  1 file changed, 15 insertions(+)
>>
>> diff --git a/mm/ksm.c b/mm/ksm.c
>> index 5b0894b45ee5..b83ca37e28f0 100644
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -1321,6 +1321,21 @@ static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item,
>>  {
>>  	int err;
>>  
>> +	if (IS_ENABLED(CONFIG_COMPACTION)) {
>> +		unsigned long pfn;
>> +		/*
>> +		 * Find neighbour of @page containing 1-order pair
>> +		 * in buddy-allocator and check whether it is free.
> 
> You cannot really check whether the page is free. There are paths that
> make the refcount zero temporarily but do not free the page.
> See page_ref_freeze(), for instance.

Thanks. Does this look better?

  Find the neighbour of @page that completes its order-1 buddy pair
  and check whether its count is 0. If it is, we consider the neighbour
  as likely free (this is more probable than it being frozen via
  page_ref_freeze()), and we try to use @tree_page as the ksm page
  and to free @page.
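
For reference, a quick sketch (not a resend) of how the hunk could read
with the reworded comment, also folding in Andy's notes about the blank
line and the parentheses:

	if (IS_ENABLED(CONFIG_COMPACTION)) {
		unsigned long pfn;

		/*
		 * Find the neighbour of @page that completes its order-1
		 * buddy pair and check whether its count is 0. If it is,
		 * we consider the neighbour as likely free (this is more
		 * probable than it being frozen via page_ref_freeze()),
		 * and we try to use @tree_page as the ksm page and to
		 * free @page.
		 */
		pfn = page_to_pfn(page) ^ 1;
		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
			swap(rmap_item, tree_rmap_item);
			swap(page, tree_page);
		}
	}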
 
> It should be fine for this use case, but the comment should state that
> we speculate about the page's usage rather than having a definitive answer.
> 
> [ I don't know enough about KSM to ack the patch in general, but it looks
> fine to me at first glance. ]
> 
>> +		 * If it is so, try to use @tree_page as ksm page
>> +		 * and to free @page.
>> +		 */
>> +		pfn = (page_to_pfn(page) ^ 1);
>> +		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
>> +			swap(rmap_item, tree_rmap_item);
>> +			swap(page, tree_page);
>> +		}
>> +	}
>> +
>>  	err = try_to_merge_with_ksm_page(rmap_item, page, NULL);
>>  	if (!err) {
>>  		err = try_to_merge_with_ksm_page(tree_rmap_item,
>>
> 


* Re: [PATCH RFC] ksm: Assist buddy allocator to assemble 1-order pages
  2018-10-16  9:34   ` Kirill Tkhai
@ 2018-10-16 11:43     ` Kirill A. Shutemov
  0 siblings, 0 replies; 5+ messages in thread
From: Kirill A. Shutemov @ 2018-10-16 11:43 UTC (permalink / raw)
  To: Kirill Tkhai
  Cc: Kirill A. Shutemov, akpm, andriy.shevchenko, mhocko, rppt,
	imbrenda, corbet, ndesaulniers, dave.jiang, jglisse, jia.he,
	paulmck, colin.king, jiang.biao2, linux-mm, linux-kernel

On Tue, Oct 16, 2018 at 09:34:11AM +0000, Kirill Tkhai wrote:
> On 15.10.2018 18:41, Kirill A. Shutemov wrote:
> > On Thu, Oct 11, 2018 at 01:52:22PM +0300, Kirill Tkhai wrote:
> >> try_to_merge_two_pages() merges two pages: one of them
> >> is a page of the currently scanned mm, the other is a
> >> page with an identical hash from the unstable tree.
> >> Currently, we merge the page from the unstable tree into
> >> the first one, and then free it.
> >>
> >> The idea of this patch is to prefer freeing whichever of
> >> the two pages has a free neighbour (i.e., a neighbour
> >> with zero page_count()). This allows the buddy allocator
> >> to assemble at least an order-1 block from the freed page
> >> and its neighbour; this is a kind of cheap passive
> >> compaction.
> >>
> >> An order-1 page block consists of the pages with PFNs
> >> [2n, 2n+1] (even, odd), so the neighbour's PFN is
> >> calculated by XOR-ing with 1. We check that the resulting
> >> PFN is valid, look at its page_count(), and prefer merging
> >> into @tree_page if the neighbour's usage count is zero.
> >>
> >> There is a small difference from the current behaviour
> >> on the error path: if the second
> >> try_to_merge_with_ksm_page() fails, we return from
> >> try_to_merge_two_pages() with @tree_page removed from
> >> the unstable tree. This does not seem to matter, but if
> >> we want no change at all, it is not a problem to move
> >> remove_rmap_item_from_tree() from
> >> try_to_merge_with_ksm_page() to its callers.
> >>
> >> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
> >> ---
> >>  mm/ksm.c |   15 +++++++++++++++
> >>  1 file changed, 15 insertions(+)
> >>
> >> diff --git a/mm/ksm.c b/mm/ksm.c
> >> index 5b0894b45ee5..b83ca37e28f0 100644
> >> --- a/mm/ksm.c
> >> +++ b/mm/ksm.c
> >> @@ -1321,6 +1321,21 @@ static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item,
> >>  {
> >>  	int err;
> >>  
> >> +	if (IS_ENABLED(CONFIG_COMPACTION)) {
> >> +		unsigned long pfn;
> >> +		/*
> >> +		 * Find neighbour of @page containing 1-order pair
> >> +		 * in buddy-allocator and check whether it is free.
> > 
> > You cannot really check whether the page is free. There are paths that
> > make the refcount zero temporarily but do not free the page.
> > See page_ref_freeze(), for instance.
> 
> Thanks. Does this look better?
> 
>   Find the neighbour of @page that completes its order-1 buddy pair
>   and check whether its count is 0. If it is, we consider the neighbour
>   as likely free (this is more probable than it being frozen via
>   page_ref_freeze()), and we try to use @tree_page as the ksm page
>   and to free @page.

Looks fine to me.

-- 
 Kirill A. Shutemov

