From: Yisheng Xie <xieyisheng1@huawei.com>
To: Vlastimil Babka <vbabka@suse.cz>, <linux-mm@kvack.org>,
	Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	David Rientjes <rientjes@google.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	<linux-kernel@vger.kernel.org>, <kernel-team@fb.com>,
	Hanjun Guo <guohanjun@huawei.com>
Subject: Re: [RFC v2 10/10] mm, page_alloc: introduce MIGRATE_MIXED migratetype
Date: Wed, 8 Mar 2017 10:16:44 +0800
Message-ID: <2743b3d4-743a-33db-fdbd-fa95edd35611@huawei.com>
In-Reply-To: <20170210172343.30283-11-vbabka@suse.cz>

Hi Vlastimil,

On 2017/2/11 1:23, Vlastimil Babka wrote:
> @@ -1977,7 +1978,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	unsigned int current_order = page_order(page);
>  	struct free_area *area;
>  	int free_pages, good_pages;
> -	int old_block_type;
> +	int old_block_type, new_block_type;
>  
>  	/* Take ownership for orders >= pageblock_order */
>  	if (current_order >= pageblock_order) {
> @@ -1991,11 +1992,27 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	if (!whole_block) {
>  		area = &zone->free_area[current_order];
>  		list_move(&page->lru, &area->free_list[start_type]);
> -		return;
> +		free_pages = 1 << current_order;
> +		/* TODO: We didn't scan the block, so be pessimistic */
> +		good_pages = 0;
> +	} else {
> +		free_pages = move_freepages_block(zone, page, start_type,
> +							&good_pages);
> +		/*
> +		 * good_pages is now the number of movable pages, but if we
> +		 * want UNMOVABLE or RECLAIMABLE, we consider all non-movable
> +		 * as good (but we can't fully distinguish them)
> +		 */
> +		if (start_type != MIGRATE_MOVABLE)
> +			good_pages = pageblock_nr_pages - free_pages -
> +								good_pages;
>  	}
>  
>  	free_pages = move_freepages_block(zone, page, start_type,
>  						&good_pages);
It seems this second move_freepages_block() call should be removed: both branches
above already compute free_pages and good_pages, so if we can steal the whole block
we just do it there, and if not we can still check whether to mark the block as
MIGRATE_MIXED, right? A rough sketch of what I mean follows. Please let me know if
I missed something.
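
Just to illustrate, a rough sketch of the flow I have in mind (based only on the
hunks quoted above; the helper calls and the MIXED decision are taken from your
patch unchanged):

	if (!whole_block) {
		/* Steal only the fallback page itself */
		area = &zone->free_area[current_order];
		list_move(&page->lru, &area->free_list[start_type]);
		free_pages = 1 << current_order;
		/* We didn't scan the block, so be pessimistic */
		good_pages = 0;
	} else {
		/* Steal all free pages of the block in one pass */
		free_pages = move_freepages_block(zone, page, start_type,
							&good_pages);
		/* For !MOVABLE requests, count all non-movable pages as good */
		if (start_type != MIGRATE_MOVABLE)
			good_pages = pageblock_nr_pages - free_pages -
								good_pages;
	}

	/* No second move_freepages_block() call here. */
	new_block_type = old_block_type = get_pageblock_migratetype(page);

With that, the new_block_type selection below (including the MIGRATE_MIXED cases)
could stay exactly as in your patch.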

Thanks
Yisheng Xie

> +
> +	new_block_type = old_block_type = get_pageblock_migratetype(page);
> +
>  	/*
>  	 * good_pages is now the number of movable pages, but if we
>  	 * want UNMOVABLE or RECLAIMABLE allocation, it's more tricky
> @@ -2007,7 +2024,6 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  		 * falling back to RECLAIMABLE or vice versa, be conservative
>  		 * as we can't distinguish the exact migratetype.
>  		 */
> -		old_block_type = get_pageblock_migratetype(page);
>  		if (old_block_type == MIGRATE_MOVABLE)
>  			good_pages = pageblock_nr_pages
>  						- free_pages - good_pages;
> @@ -2015,10 +2031,34 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  			good_pages = 0;
>  	}
>  
> -	/* Claim the whole block if over half of it is free or good type */
> -	if (free_pages + good_pages >= (1 << (pageblock_order-1)) ||
> -			page_group_by_mobility_disabled)
> -		set_pageblock_migratetype(page, start_type);
> +	if (page_group_by_mobility_disabled) {
> +		new_block_type = start_type;
> +	} else if (free_pages + good_pages >= (1 << (pageblock_order-1))) {
> +		/*
> +		 * Claim the whole block if over half of it is free or good
> +		 * type. The exception is the transition to MIGRATE_MOVABLE
> +		 * where we require it to be fully free so that MIGRATE_MOVABLE
> +		 * pageblocks consist of purely movable pages. So if we steal
> +		 * less than whole pageblock, mark it as MIGRATE_MIXED.
> +		 */
> +		if ((start_type == MIGRATE_MOVABLE) &&
> +				free_pages + good_pages < pageblock_nr_pages)
> +			new_block_type = MIGRATE_MIXED;
> +		else
> +			new_block_type = start_type;
> +	} else {
> +		/*
> +		 * We didn't steal enough to change the block's migratetype.
> +		 * But if we are stealing from a MOVABLE block for a
> +		 * non-MOVABLE allocation, mark the block as MIXED.
> +		 */
> +		if (old_block_type == MIGRATE_MOVABLE
> +					&& start_type != MIGRATE_MOVABLE)
> +			new_block_type = MIGRATE_MIXED;
> +	}
> +
> +	if (new_block_type != old_block_type)
> +		set_pageblock_migratetype(page, new_block_type);
>  }
>  
>  /*
> @@ -2560,16 +2600,18 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  	rmv_page_order(page);
>  
>  	/*
> -	 * Set the pageblock if the isolated page is at least half of a
> -	 * pageblock
> +	 * Set the pageblock's migratetype to MIXED if the isolated page is
> +	 * at least half of a pageblock, MOVABLE if at least whole pageblock
>  	 */
>  	if (order >= pageblock_order - 1) {
>  		struct page *endpage = page + (1 << order) - 1;
> +		int new_mt = (order >= pageblock_order) ?
> +					MIGRATE_MOVABLE : MIGRATE_MIXED;
>  		for (; page < endpage; page += pageblock_nr_pages) {
>  			int mt = get_pageblock_migratetype(page);
> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> -				set_pageblock_migratetype(page,
> -							  MIGRATE_MOVABLE);
> +
> +			if (!is_migrate_isolate(mt) && !is_migrate_movable(mt))
> +				set_pageblock_migratetype(page, new_mt);
>  		}
>  	}
>  
> @@ -4252,6 +4294,7 @@ static void show_migration_types(unsigned char type)
>  		[MIGRATE_MOVABLE]	= 'M',
>  		[MIGRATE_RECLAIMABLE]	= 'E',
>  		[MIGRATE_HIGHATOMIC]	= 'H',
> +		[MIGRATE_MIXED]		= 'M',
>  #ifdef CONFIG_CMA
>  		[MIGRATE_CMA]		= 'C',
>  #endif
> 

Thread overview: 92+ messages
2017-02-10 17:23 [PATCH v2 00/10] try to reduce fragmenting fallbacks Vlastimil Babka
2017-02-10 17:23 ` [PATCH v2 01/10] mm, compaction: reorder fields in struct compact_control Vlastimil Babka
2017-02-13 10:49   ` Mel Gorman
2017-02-14 16:33   ` Johannes Weiner
2017-02-10 17:23 ` [PATCH v2 02/10] mm, compaction: remove redundant watermark check in compact_finished() Vlastimil Babka
2017-02-13 10:49   ` Mel Gorman
2017-02-14 16:34   ` Johannes Weiner
2017-02-10 17:23 ` [PATCH v2 03/10] mm, page_alloc: split smallest stolen page in fallback Vlastimil Babka
2017-02-13 10:51   ` Mel Gorman
2017-02-13 10:54     ` Vlastimil Babka
2017-02-14 16:59   ` Johannes Weiner
2017-02-10 17:23 ` [PATCH v2 04/10] mm, page_alloc: count movable pages when stealing from pageblock Vlastimil Babka
2017-02-13 10:53   ` Mel Gorman
2017-02-14 10:07   ` Xishi Qiu
2017-02-15 10:47     ` Vlastimil Babka
2017-02-15 11:56       ` Xishi Qiu
2017-02-17 16:21         ` Vlastimil Babka
2017-02-14 18:10   ` Johannes Weiner
2017-02-17 16:09     ` Vlastimil Babka
2017-02-10 17:23 ` [PATCH v2 05/10] mm, compaction: change migrate_async_suitable() to suitable_migration_source() Vlastimil Babka
2017-02-13 10:53   ` Mel Gorman
2017-02-14 18:12   ` Johannes Weiner
2017-02-10 17:23 ` [PATCH v2 06/10] mm, compaction: add migratetype to compact_control Vlastimil Babka
2017-02-13 10:53   ` Mel Gorman
2017-02-14 18:15   ` Johannes Weiner
2017-02-10 17:23 ` [PATCH v2 07/10] mm, compaction: restrict async compaction to pageblocks of same migratetype Vlastimil Babka
2017-02-13 10:56   ` Mel Gorman
2017-02-14 20:10   ` Johannes Weiner
2017-02-17 16:32     ` Vlastimil Babka
2017-02-17 17:39       ` Johannes Weiner
2017-02-10 17:23 ` [PATCH v2 08/10] mm, compaction: finish whole pageblock to reduce fragmentation Vlastimil Babka
2017-02-13 10:57   ` Mel Gorman
2017-02-16 11:44   ` Johannes Weiner
2017-02-10 17:23 ` [RFC v2 09/10] mm, page_alloc: disallow migratetype fallback in fastpath Vlastimil Babka
2017-02-10 17:23 ` [RFC v2 10/10] mm, page_alloc: introduce MIGRATE_MIXED migratetype Vlastimil Babka
2017-03-08  2:16   ` Yisheng Xie [this message]
2017-03-08  7:07     ` Vlastimil Babka
2017-03-13  2:16       ` Yisheng Xie
2017-02-13 11:07 ` [PATCH v2 00/10] try to reduce fragmenting fallbacks Mel Gorman
2017-02-15 14:29   ` Vlastimil Babka
2017-02-15 16:11     ` Vlastimil Babka
2017-02-15 20:11       ` Vlastimil Babka
2017-02-16 15:12     ` Vlastimil Babka
2017-02-17 15:24       ` Vlastimil Babka
2017-02-20 12:30   ` Vlastimil Babka
2017-02-23 16:01     ` Mel Gorman
