From: Michal Hocko <mhocko@kernel.org>
To: js1304@gmail.com
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@lge.com, Vlastimil Babka <vbabka@suse.cz>,
	Christoph Hellwig <hch@infradead.org>,
	Roman Gushchin <guro@fb.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: Re: [PATCH v4 11/11] mm/memory_hotplug: remove a wrapper for alloc_migration_target()
Date: Tue, 7 Jul 2020 13:52:31 +0200
Message-ID: <20200707115231.GM5913@dhcp22.suse.cz>
In-Reply-To: <1594107889-32228-12-git-send-email-iamjoonsoo.kim@lge.com>

On Tue 07-07-20 16:44:49, Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> To calculate the correct node to migrate each page to during hotplug, we
> need to check the node id of the page. A wrapper around
> alloc_migration_target() exists for this purpose.
> 
> However, as Vlastimil points out, all migration source pages come from
> a single node. In that case, we don't need to check the node id for each
> page, nor re-set the target nodemask for each page via the wrapper. Set up
> the migration_target_control once and use it for all pages.

Yes, memory offlining only operates on a single zone. Have a look at
test_pages_in_a_zone().
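
Since the whole range is guaranteed to be within one zone (and therefore one
node), the control structure can be built a single time and handed to the
generic callback through the opaque "private" argument of migrate_pages().
As a toy userspace analogy of that pattern (the names below are illustrative
stand-ins, not the kernel API; it only shows "set up the control once, let
the generic per-page callback consume it"):

#include <stdio.h>

/* Toy stand-ins for the kernel structures; illustrative only. */
struct target_control {
	int nid;			/* preferred target node */
	unsigned int gfp_mask;		/* allocation flags */
};

struct page_stub {
	unsigned long pfn;
};

/*
 * Generic per-page callback: it does no per-page policy work of its own,
 * it just reads the control structure passed through the opaque argument,
 * much like alloc_migration_target(page, (unsigned long)&mtc).
 */
static struct page_stub *alloc_target(struct page_stub *page, unsigned long private)
{
	struct target_control *ctl = (struct target_control *)private;

	printf("pfn %lu -> allocate target on node %d (gfp %#x)\n",
	       page->pfn, ctl->nid, ctl->gfp_mask);
	return page;	/* pretend the allocation succeeded */
}

static void migrate_all(struct page_stub *pages, int nr,
			struct page_stub *(*get_new)(struct page_stub *, unsigned long),
			unsigned long private)
{
	for (int i = 0; i < nr; i++)
		get_new(&pages[i], private);
}

int main(void)
{
	struct page_stub pages[] = { { 100 }, { 101 }, { 102 } };
	/* Built once for the whole range, not rebuilt per page. */
	struct target_control ctl = { .nid = 1, .gfp_mask = 0x10 };

	migrate_all(pages, 3, alloc_target, (unsigned long)&ctl);
	return 0;
}

That is exactly what the refactor buys: the per-page callback no longer has
to recompute the node and nodemask for every page.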

> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memory_hotplug.c | 46 ++++++++++++++++++++++------------------------
>  1 file changed, 22 insertions(+), 24 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 86bc2ad..269e8ca 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1265,27 +1265,6 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
>  	return 0;
>  }
>  
> -static struct page *new_node_page(struct page *page, unsigned long private)
> -{
> -	nodemask_t nmask = node_states[N_MEMORY];
> -	struct migration_target_control mtc = {
> -		.nid = page_to_nid(page),
> -		.nmask = &nmask,
> -		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> -	};
> -
> -	/*
> -	 * try to allocate from a different node but reuse this node if there
> -	 * are no other online nodes to be used (e.g. we are offlining a part
> -	 * of the only existing node)
> -	 */
> -	node_clear(mtc.nid, *mtc.nmask);
> -	if (nodes_empty(*mtc.nmask))
> -		node_set(mtc.nid, *mtc.nmask);
> -
> -	return alloc_migration_target(page, (unsigned long)&mtc);
> -}
> -
>  static int
>  do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  {
> @@ -1345,9 +1324,28 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>  		put_page(page);
>  	}
>  	if (!list_empty(&source)) {
> -		/* Allocate a new page from the nearest neighbor node */
> -		ret = migrate_pages(&source, new_node_page, NULL, 0,
> -					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> +		nodemask_t nmask = node_states[N_MEMORY];
> +		struct migration_target_control mtc = {
> +			.nmask = &nmask,
> +			.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> +		};
> +
> +		/*
> +		 * We have checked that the migration range is within a single
> +		 * zone, so we can use the nid of the first page for all the others.
> +		 */
> +		mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
> +
> +		/*
> +		 * try to allocate from a different node but reuse this node
> +		 * if there are no other online nodes to be used (e.g. we are
> +		 * offlining a part of the only existing node)
> +		 */
> +		node_clear(mtc.nid, *mtc.nmask);
> +		if (nodes_empty(*mtc.nmask))
> +			node_set(mtc.nid, *mtc.nmask);
> +		ret = migrate_pages(&source, alloc_migration_target, NULL,
> +			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
>  		if (ret) {
>  			list_for_each_entry(page, &source, lru) {
>  				pr_warn("migrating pfn %lx failed ret:%d ",
> -- 
> 2.7.4
> 
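One more note on the hunk above, for readers less used to nodemask handling:
the node_clear()/nodes_empty()/node_set() sequence implements "prefer any
other node that has memory, but reuse the source node if it is the only one
left (e.g. when offlining part of the only node with memory)". A minimal
standalone sketch of that selection rule, with a plain bitmask standing in
for the kernel's nodemask_t (the names are illustrative, not the kernel API):

#include <stdio.h>

/* Bitmask of nodes that currently have memory (the kernel's N_MEMORY set). */
static unsigned long nodes_with_memory = 0x1;	/* only node 0 has memory */

/*
 * Mirror of the node_clear()/nodes_empty()/node_set() sequence above:
 * prefer any node other than the one being offlined, but fall back to it
 * when it is the only node with memory.
 */
static unsigned long pick_target_mask(int source_nid)
{
	unsigned long mask = nodes_with_memory;

	mask &= ~(1UL << source_nid);		/* try other nodes first */
	if (mask == 0)				/* no other node has memory */
		mask = 1UL << source_nid;	/* reuse the source node */
	return mask;
}

int main(void)
{
	/* Only node 0 has memory: we must fall back to node 0 itself. */
	printf("target mask: %#lx\n", pick_target_mask(0));

	/* Nodes 0 and 1 have memory: node 1 becomes the preferred target. */
	nodes_with_memory = 0x3;
	printf("target mask: %#lx\n", pick_target_mask(0));
	return 0;
}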

-- 
Michal Hocko
SUSE Labs


Thread overview: 52+ messages
2020-07-07  7:44 [PATCH v4 00/11] clean-up the migration target allocation functions js1304
2020-07-07  7:44 ` [PATCH v4 01/11] mm/page_isolation: prefer the node of the source page js1304
2020-07-07  7:44 ` [PATCH v4 02/11] mm/migrate: move migration helper from .h to .c js1304
2020-07-07  7:44 ` [PATCH v4 03/11] mm/hugetlb: unify migration callbacks js1304
2020-07-07 11:05   ` Vlastimil Babka
2020-07-07 11:19   ` Michal Hocko
2020-07-07  7:44 ` [PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA aware js1304
2020-07-07 11:22   ` Vlastimil Babka
2020-07-08  7:16     ` Joonsoo Kim
2020-07-08  7:41       ` Michal Hocko
2020-07-08  9:26         ` Vlastimil Babka
2020-07-08 10:57           ` Aneesh Kumar K.V
2020-07-08 11:32             ` Michal Hocko
2020-07-09  6:43         ` Michal Hocko
2020-07-09  7:03           ` Joonsoo Kim
2020-07-09  0:27       ` Mike Kravetz
2020-07-07 11:31   ` Michal Hocko
2020-07-08  6:48     ` Michal Hocko
2020-07-08  7:12     ` Joonsoo Kim
2020-07-07  7:44 ` [PATCH v4 05/11] mm/migrate: clear __GFP_RECLAIM for THP allocation for migration js1304
2020-07-07 11:40   ` Michal Hocko
2020-07-08  7:19     ` Joonsoo Kim
2020-07-08  7:48       ` Michal Hocko
2020-07-09  3:26         ` Joonsoo Kim
2020-07-07 12:17   ` Vlastimil Babka
2020-07-08  7:17     ` Joonsoo Kim
2020-07-09  7:17     ` Joonsoo Kim
2020-07-07  7:44 ` [PATCH v4 06/11] mm/migrate: make a standard migration target allocation function js1304
2020-07-07 11:43   ` Michal Hocko
2020-07-07 14:49   ` Vlastimil Babka
2020-07-07 19:00     ` Michal Hocko
2020-07-09  7:15       ` Joonsoo Kim
2020-07-09 10:28         ` Michal Hocko
2020-07-07  7:44 ` [PATCH v4 07/11] mm/gup: use a standard migration target allocation callback js1304
2020-07-07 11:46   ` Michal Hocko
2020-07-08  7:21     ` Joonsoo Kim
2020-07-07  7:44 ` [PATCH v4 08/11] mm/mempolicy: " js1304
2020-07-07  7:44 ` [PATCH v4 09/11] mm/page_alloc: remove a wrapper for alloc_migration_target() js1304
2020-07-07  7:44 ` [PATCH v4 10/11] mm/memory-failure: " js1304
2020-07-07 11:48   ` Michal Hocko
2020-07-07 15:03     ` Vlastimil Babka
2020-07-07 18:55       ` Michal Hocko
2020-07-07 15:00   ` Vlastimil Babka
2020-07-07  7:44 ` [PATCH v4 11/11] mm/memory_hotplug: " js1304
2020-07-07 11:52   ` Michal Hocko [this message]
2020-07-07 15:09   ` Vlastimil Babka
2020-07-09  3:25     ` Joonsoo Kim
