From: Mel Gorman <mgorman@techsingularity.net>
To: Jérôme Glisse <jglisse@redhat.com>
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, John Hubbard <jhubbard@nvidia.com>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	David Nellans <dnellans@nvidia.com>
Subject: Re: [HMM 06/16] mm/migrate: add new boolean copy flag to migratepage() callback
Date: Sun, 19 Mar 2017 20:09:12 +0000	[thread overview]
Message-ID: <20170319200912.GF2774@techsingularity.net> (raw)
In-Reply-To: <1489680335-6594-7-git-send-email-jglisse@redhat.com>

On Thu, Mar 16, 2017 at 12:05:25PM -0400, Jérôme Glisse wrote:
> Allow migration without a copy in case the destination page already
> has the source page content. This is useful for the new DMA-capable
> migration, where a device DMA engine is used to copy pages.
> 
> This feature needs a careful audit of filesystem code to make sure
> that no one can write to the source page while it is unmapped and
> locked. It should be safe for most filesystems, but as a precaution
> return an error until support for device migration is added to them.
> 
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>

I really dislike the amount of boilerplate code this creates and the fact
that additional headers are needed for that boilerplate. As it's only of
relevance to DMA-capable migration, why not simply infer from that whether
a copy is needed, instead of updating every supporter of migration?

If that is unsuitable, create a new migrate_mode for a no-copy
migration. You'll need to alter some sites that check the migrate_mode,
and it *may* be easier to convert migrate_mode to a bitmask, but overall
it would be less boilerplate and confined to just the migration code.
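
Something like the following is what I have in mind for the bitmask
option. It is a rough, untested sketch only, and the MIGRATE_NO_COPY
name is purely illustrative:

enum migrate_mode {
	MIGRATE_ASYNC		= 0,
	MIGRATE_SYNC_LIGHT	= 1 << 0,
	MIGRATE_SYNC		= 1 << 1,

	/* Destination already holds the data, e.g. a device DMA copy */
	MIGRATE_NO_COPY		= 1 << 2,
};

With that, migrate_page() only needs

	if (mode & MIGRATE_NO_COPY)
		migrate_page_states(newpage, page);
	else
		migrate_page_copy(newpage, page);

and none of the migratepage() implementations need a new parameter.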

> diff --git a/mm/migrate.c b/mm/migrate.c
> index 9a0897a..cb911ce 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -596,18 +596,10 @@ static void copy_huge_page(struct page *dst, struct page *src)
>  	}
>  }
>  
> -/*
> - * Copy the page to its new location
> - */
> -void migrate_page_copy(struct page *newpage, struct page *page)
> +static void migrate_page_states(struct page *newpage, struct page *page)
>  {
>  	int cpupid;
>  
> -	if (PageHuge(page) || PageTransHuge(page))
> -		copy_huge_page(newpage, page);
> -	else
> -		copy_highpage(newpage, page);
> -
>  	if (PageError(page))
>  		SetPageError(newpage);
>  	if (PageReferenced(page))

> @@ -661,6 +653,19 @@ void migrate_page_copy(struct page *newpage, struct page *page)
>  
>  	mem_cgroup_migrate(page, newpage);
>  }
> +
> +/*
> + * Copy the page to its new location
> + */
> +void migrate_page_copy(struct page *newpage, struct page *page)
> +{
> +	if (PageHuge(page) || PageTransHuge(page))
> +		copy_huge_page(newpage, page);
> +	else
> +		copy_highpage(newpage, page);
> +
> +	migrate_page_states(newpage, page);
> +}
>  EXPORT_SYMBOL(migrate_page_copy);
>  
>  /************************************************************
> @@ -674,8 +679,8 @@ EXPORT_SYMBOL(migrate_page_copy);
>   * Pages are locked upon entry and exit.
>   */
>  int migrate_page(struct address_space *mapping,
> -		struct page *newpage, struct page *page,
> -		enum migrate_mode mode)
> +		 struct page *newpage, struct page *page,
> +		 enum migrate_mode mode, bool copy)
>  {
>  	int rc;
>  
> @@ -686,7 +691,11 @@ int migrate_page(struct address_space *mapping,
>  	if (rc != MIGRATEPAGE_SUCCESS)
>  		return rc;
>  
> -	migrate_page_copy(newpage, page);
> +	if (copy)
> +		migrate_page_copy(newpage, page);
> +	else
> +		migrate_page_states(newpage, page);
> +
>  	return MIGRATEPAGE_SUCCESS;
>  }
>  EXPORT_SYMBOL(migrate_page);

Other than some reshuffling, this is the place where the new copy
parameter is used, and it already has the mode parameter. At worst you
end up creating a helper that checks two potential migrate modes to get
either ASYNC, SYNC or SYNC_LIGHT semantics. I expect you want SYNC
semantics.
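
If you go with a new mode rather than the bitmask, that helper need be
nothing more than the following (untested, and MIGRATE_SYNC_NO_COPY is
an illustrative name for the new mode):

static bool migrate_mode_sync(enum migrate_mode mode)
{
	/* The no-copy variant should behave as a full MIGRATE_SYNC */
	return mode == MIGRATE_SYNC || mode == MIGRATE_SYNC_NO_COPY;
}

Existing "mode == MIGRATE_SYNC" checks then become
migrate_mode_sync(mode) and everything else is untouched.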

This patch is huge relative to the small thing it actually requires.
