* [PATCH] mm, page_alloc: only use per-cpu allocator for irq-safe requests -fix v2
@ 2017-02-08 15:22 ` Mel Gorman
  0 siblings, 0 replies; 3 messages in thread
From: Mel Gorman @ 2017-02-08 15:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Thomas Gleixner, Peter Zijlstra, Michal Hocko, Vlastimil Babka,
	linux-mm, linux-kernel, Ingo Molnar

preempt_enable_no_resched() was used based on review feedback that drew
no strong objection at the time. The thinking was that it avoided adding
a preemption point where one didn't exist before, so the feedback was
applied. That reasoning was wrong.

As Thomas Gleixner explained, there was an indirect preemption point:
an interrupt can set_need_resched() while preemption is disabled, and
the subsequent preempt_enable() then becomes a preemption point that
matters. This use of preempt_enable_no_resched() is bad from both a
mainline and an RT perspective and violates the preemption mechanism.
Peter Zijlstra noted that "the only acceptable use of
preempt_enable_no_resched() is if the next statement is a schedule()
variant".
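
For reference, this is a simplified sketch of the two operations as
they are defined in include/linux/preempt.h on a CONFIG_PREEMPT kernel
(the exact definitions vary by configuration):

	/*
	 * preempt_enable() drops the preempt count and, if it reaches
	 * zero while TIF_NEED_RESCHED is set (e.g. by an interrupt
	 * that fired inside the critical section), reschedules
	 * immediately.
	 */
	#define preempt_enable() \
	do { \
		barrier(); \
		if (unlikely(preempt_count_dec_and_test())) \
			__preempt_schedule(); \
	} while (0)

	/*
	 * preempt_enable_no_resched() only drops the count. A wakeup
	 * signalled while preemption was disabled is not acted on
	 * until the next scheduling point, which is acceptable only
	 * when the very next statement is a schedule() variant.
	 */
	#define preempt_enable_no_resched() \
	do { \
		barrier(); \
		preempt_count_dec(); \
	} while (0)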

The usage was outright broken and I should have stuck with
preempt_enable() as originally developed. Previous tests showed no
detectable performance difference from using
preempt_enable_no_resched().

This is a fix to the mmotm patch
mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests.patch

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eaecb4b145e6..2a36dad03dac 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2520,7 +2520,7 @@ void free_hot_cold_page(struct page *page, bool cold)
 	}
 
 out:
-	preempt_enable_no_resched();
+	preempt_enable();
 }
 
 /*
@@ -2686,7 +2686,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 		zone_statistics(preferred_zone, zone);
 	}
-	preempt_enable_no_resched();
+	preempt_enable();
 	return page;
 }
 

-- 
Mel Gorman
SUSE Labs

* Re: [PATCH] mm, page_alloc: only use per-cpu allocator for irq-safe requests -fix v2
  2017-02-08 15:22 ` Mel Gorman
@ 2017-02-08 15:27   ` Thomas Gleixner
  -1 siblings, 0 replies; 3 messages in thread
From: Thomas Gleixner @ 2017-02-08 15:27 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Peter Zijlstra, Michal Hocko, Vlastimil Babka,
	linux-mm, linux-kernel, Ingo Molnar

On Wed, 8 Feb 2017, Mel Gorman wrote:

> preempt_enable_no_resched() was used based on review feedback that drew
> no strong objection at the time. The thinking was that it avoided adding
> a preemption point where one didn't exist before, so the feedback was
> applied. That reasoning was wrong.
> 
> As Thomas Gleixner explained, there was an indirect preemption point:
> an interrupt can set_need_resched() while preemption is disabled, and
> the subsequent preempt_enable() then becomes a preemption point that
> matters. This use of preempt_enable_no_resched() is bad from both a
> mainline and an RT perspective and violates the preemption mechanism.
> Peter Zijlstra noted that "the only acceptable use of
> preempt_enable_no_resched() is if the next statement is a schedule()
> variant".
> 
> The usage was outright broken and I should have stuck with
> preempt_enable() as originally developed. Previous tests showed no
> detectable performance difference from using
> preempt_enable_no_resched().
> 
> This is a fix to the mmotm patch
> mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests.patch
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

> ---
>  mm/page_alloc.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index eaecb4b145e6..2a36dad03dac 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2520,7 +2520,7 @@ void free_hot_cold_page(struct page *page, bool cold)
>  	}
>  
>  out:
> -	preempt_enable_no_resched();
> +	preempt_enable();
>  }
>  
>  /*
> @@ -2686,7 +2686,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
>  		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
>  		zone_statistics(preferred_zone, zone);
>  	}
> -	preempt_enable_no_resched();
> +	preempt_enable();
>  	return page;
>  }
>  
> 
> -- 
> Mel Gorman
> SUSE Labs
> 

* Re: [PATCH] mm, page_alloc: only use per-cpu allocator for irq-safe requests -fix v2
  2017-02-08 15:22 ` Mel Gorman
@ 2017-02-09  6:22   ` Hillf Danton
  -1 siblings, 0 replies; 3 messages in thread
From: Hillf Danton @ 2017-02-09  6:22 UTC (permalink / raw)
  To: 'Mel Gorman', 'Andrew Morton'
  Cc: 'Thomas Gleixner', 'Peter Zijlstra',
	'Michal Hocko', 'Vlastimil Babka',
	linux-mm, linux-kernel, 'Ingo Molnar'


On February 08, 2017 11:22 PM Mel Gorman wrote: 
> 
> preempt_enable_no_resched() was used based on review feedback that drew
> no strong objection at the time. The thinking was that it avoided adding
> a preemption point where one didn't exist before, so the feedback was
> applied. That reasoning was wrong.
> 
> As Thomas Gleixner explained, there was an indirect preemption point:
> an interrupt can set_need_resched() while preemption is disabled, and
> the subsequent preempt_enable() then becomes a preemption point that
> matters. This use of preempt_enable_no_resched() is bad from both a
> mainline and an RT perspective and violates the preemption mechanism.
> Peter Zijlstra noted that "the only acceptable use of
> preempt_enable_no_resched() is if the next statement is a schedule()
> variant".
> 
> The usage was outright broken and I should have stuck with
> preempt_enable() as originally developed. Previous tests showed no
> detectable performance difference from using
> preempt_enable_no_resched().
> 
> This is a fix to the mmotm patch
> mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests.patch
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
Thanks for fixing it.

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

>  mm/page_alloc.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index eaecb4b145e6..2a36dad03dac 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2520,7 +2520,7 @@ void free_hot_cold_page(struct page *page, bool cold)
>  	}
> 
>  out:
> -	preempt_enable_no_resched();
> +	preempt_enable();
>  }
> 
>  /*
> @@ -2686,7 +2686,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
>  		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
>  		zone_statistics(preferred_zone, zone);
>  	}
> -	preempt_enable_no_resched();
> +	preempt_enable();
>  	return page;
>  }
> 

end of thread, other threads:[~2017-02-09  6:31 UTC | newest]

Thread overview: 3 messages
2017-02-08 15:22 [PATCH] mm, page_alloc: only use per-cpu allocator for irq-safe requests -fix v2 Mel Gorman
2017-02-08 15:27 ` Thomas Gleixner
2017-02-09  6:22 ` Hillf Danton
