* [PATCH] mm: delete duplicate order check when stealing whole pageblock
@ 2021-06-11  6:38 chengkaitao
  2021-06-12  0:00 ` Andrew Morton
  0 siblings, 1 reply; 2+ messages in thread
From: chengkaitao @ 2021-06-11  6:38 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, smcdef, chengkaitao

From: chengkaitao <pilgrimtao@gmail.com>

1. The (order >= pageblock_order / 2) check below already covers this
case, so the separate (order >= pageblock_order) check is not needed.
2. Mark can_steal_fallback() as inline.

Signed-off-by: chengkaitao <pilgrimtao@gmail.com>
---
 mm/page_alloc.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ded02d867491..180081fe711b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2619,18 +2619,8 @@ static void change_pageblock_range(struct page *pageblock_page,
  * is worse than movable allocations stealing from unmovable and reclaimable
  * pageblocks.
  */
-static bool can_steal_fallback(unsigned int order, int start_mt)
+static inline bool can_steal_fallback(unsigned int order, int start_mt)
 {
-	/*
-	 * Leaving this order check is intended, although there is
-	 * relaxed order check in next check. The reason is that
-	 * we can actually steal whole pageblock if this condition met,
-	 * but, below check doesn't guarantee it and that is just heuristic
-	 * so could be changed anytime.
-	 */
-	if (order >= pageblock_order)
-		return true;
-
 	if (order >= pageblock_order / 2 ||
 		start_mt == MIGRATE_RECLAIMABLE ||
 		start_mt == MIGRATE_UNMOVABLE ||
-- 
2.14.1



* Re: [PATCH] mm: delete duplicate order check when stealing whole pageblock
  2021-06-11  6:38 [PATCH] mm: delete duplicate order check when stealing whole pageblock chengkaitao
@ 2021-06-12  0:00 ` Andrew Morton
  0 siblings, 0 replies; 2+ messages in thread
From: Andrew Morton @ 2021-06-12  0:00 UTC (permalink / raw)
  To: chengkaitao; +Cc: linux-mm, linux-kernel, smcdef, Joonsoo Kim

On Fri, 11 Jun 2021 14:38:34 +0800 chengkaitao <pilgrimtao@gmail.com> wrote:

> From: chengkaitao <pilgrimtao@gmail.com>
> 
> 1. The (order >= pageblock_order / 2) check below already covers this
> case, so the separate (order >= pageblock_order) check is not needed.
> 2. Mark can_steal_fallback() as inline.
> 
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2619,18 +2619,8 @@ static void change_pageblock_range(struct page *pageblock_page,
>   * is worse than movable allocations stealing from unmovable and reclaimable
>   * pageblocks.
>   */
> -static bool can_steal_fallback(unsigned int order, int start_mt)
> +static inline bool can_steal_fallback(unsigned int order, int start_mt)
>  {
> -	/*
> -	 * Leaving this order check is intended, although there is
> -	 * relaxed order check in next check. The reason is that
> -	 * we can actually steal whole pageblock if this condition met,
> -	 * but, below check doesn't guarantee it and that is just heuristic
> -	 * so could be changed anytime.
> -	 */
> -	if (order >= pageblock_order)
> -		return true;
> -
>  	if (order >= pageblock_order / 2 ||
>  		start_mt == MIGRATE_RECLAIMABLE ||
>  		start_mt == MIGRATE_UNMOVABLE ||

Well, that redundant check was put there deliberately, as the comment
explains.

The reasoning is perhaps a little dubious, but it seems that the
compiler has optimized away the redundant check anyway (your patch
doesn't alter code size).
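
For reference, a minimal sketch of can_steal_fallback() as it would read
after this patch. The hunk above stops at the diff context lines, so the
tail of the function (the page_group_by_mobility_disabled test and the
returns) is filled in here from the surrounding upstream code:

static inline bool can_steal_fallback(unsigned int order, int start_mt)
{
	/*
	 * order >= pageblock_order implies order >= pageblock_order / 2,
	 * so dropping the early return does not change behaviour -- which
	 * is also why the generated object code size stays the same.
	 */
	if (order >= pageblock_order / 2 ||
		start_mt == MIGRATE_RECLAIMABLE ||
		start_mt == MIGRATE_UNMOVABLE ||
		page_group_by_mobility_disabled)
		return true;

	return false;
}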


