mm/hotplug: Optimize clear_hwpoisoned_pages

Message ID 20181102120001.4526-1-bsingharora@gmail.com
State In Next
Commit dc8fc7082345025452fbea3034cfac223f19f730
Series
  • mm/hotplug: Optimize clear_hwpoisoned_pages

Commit Message

Balbir Singh Nov. 2, 2018, noon UTC
During hot remove we try to clear poisoned pages. A small
optimization, checking first whether num_poisoned_pages is 0,
lets us skip the iteration over nr_pages entirely.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 mm/sparse.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

Comments

Michal Hocko Nov. 2, 2018, 12:32 p.m. UTC | #1
On Fri 02-11-18 23:00:01, Balbir Singh wrote:
> During hot remove we try to clear poisoned pages. A small
> optimization, checking first whether num_poisoned_pages is 0,
> lets us skip the iteration over nr_pages entirely.
> 
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>

Makes sense to me. It would be great to actually have some number but
the optimization for the normal case is quite obvious.

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/sparse.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 33307fc05c4d..16219c7ddb5f 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -724,6 +724,16 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
>  	if (!memmap)
>  		return;
>  
> +	/*
> +	 * A further optimization is to have per section
> +	 * ref counted num_poisoned_pages, but that is going
> +	 * to need more space per memmap, for now just do
> +	 * a quick global check, this should speed up this
> +	 * routine in the absence of bad pages.
> +	 */
> +	if (atomic_long_read(&num_poisoned_pages) == 0)
> +		return;
> +
>  	for (i = 0; i < nr_pages; i++) {
>  		if (PageHWPoison(&memmap[i])) {
>  			atomic_long_sub(1, &num_poisoned_pages);
> -- 
> 2.17.1
>
Naoya Horiguchi Nov. 6, 2018, 11:32 p.m. UTC | #2
On Fri, Nov 02, 2018 at 11:00:01PM +1100, Balbir Singh wrote:
> During hot remove we try to clear poisoned pages. A small
> optimization, checking first whether num_poisoned_pages is 0,
> lets us skip the iteration over nr_pages entirely.
> 
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>

Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

Thanks!

Patch

diff --git a/mm/sparse.c b/mm/sparse.c
index 33307fc05c4d..16219c7ddb5f 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -724,6 +724,16 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
 	if (!memmap)
 		return;
 
+	/*
+	 * A further optimization is to have per section
+	 * ref counted num_poisoned_pages, but that is going
+	 * to need more space per memmap, for now just do
+	 * a quick global check, this should speed up this
+	 * routine in the absence of bad pages.
+	 */
+	if (atomic_long_read(&num_poisoned_pages) == 0)
+		return;
+
 	for (i = 0; i < nr_pages; i++) {
 		if (PageHWPoison(&memmap[i])) {
 			atomic_long_sub(1, &num_poisoned_pages);