* [PATCH] mm: vmscan: decrease cma pages from nr_reclaimed
From: Haojian Zhuang @ 2013-08-12 15:51 UTC
  To: m.szyprowski, linux-mm, akpm, mgorman; +Cc: Haojian Zhuang

shrink_page_list() reclaims pages, but the statistic may be inaccurate
because some of the reclaimed pages are CMA pages. If the kernel needs
to reclaim unmovable memory (a GFP_KERNEL allocation), freed CMA pages
should not be counted in nr_reclaimed, since an unmovable allocation
cannot be satisfied from MIGRATE_CMA pageblocks.

Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
---
 mm/vmscan.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2cff0d4..0cbe393 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -720,6 +720,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	unsigned long nr_reclaimed = 0;
 	unsigned long nr_writeback = 0;
 	unsigned long nr_immediate = 0;
+#ifdef CONFIG_CMA
+	/* Number of pages freed with MIGRATE_CMA type */
+	unsigned long nr_reclaimed_cma = 0;
+#endif
 
 	cond_resched();
 
@@ -987,6 +991,11 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 					 * leave it off the LRU).
 					 */
 					nr_reclaimed++;
+#ifdef CONFIG_CMA
+					if (get_pageblock_migratetype(page) ==
+						MIGRATE_CMA)
+						nr_reclaimed_cma++;
+#endif
 					continue;
 				}
 			}
@@ -1005,6 +1014,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		__clear_page_locked(page);
 free_it:
 		nr_reclaimed++;
+#ifdef CONFIG_CMA
+		if (get_pageblock_migratetype(page) == MIGRATE_CMA)
+			nr_reclaimed_cma++;
+#endif
 
 		/*
 		 * Is there need to periodically free_page_list? It would
@@ -1044,6 +1057,10 @@ keep:
 	*ret_nr_unqueued_dirty += nr_unqueued_dirty;
 	*ret_nr_writeback += nr_writeback;
 	*ret_nr_immediate += nr_immediate;
+#ifdef CONFIG_CMA
+	if (allocflags_to_migratetype(sc->gfp_mask) == MIGRATE_UNMOVABLE)
+		nr_reclaimed -= nr_reclaimed_cma;
+#endif
 	return nr_reclaimed;
 }
 
-- 
1.8.1.2


* Re: [PATCH] mm: vmscan: decrease cma pages from nr_reclaimed
From: Dave Hansen @ 2013-08-12 18:55 UTC
  To: Haojian Zhuang; +Cc: m.szyprowski, linux-mm, akpm, mgorman

On 08/12/2013 08:51 AM, Haojian Zhuang wrote:
> @@ -987,6 +991,11 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  					 * leave it off the LRU).
>  					 */
>  					nr_reclaimed++;
> +#ifdef CONFIG_CMA
> +					if (get_pageblock_migratetype(page) ==
> +						MIGRATE_CMA)
> +						nr_reclaimed_cma++;
> +#endif
>  					continue;
>  				}
>  			}

Throwing four #ifdefs like that into any function is pretty mean.  Doing
it to shrink_page_list() is just cruel. :)

Can you think of a way to do this without so many explicit #ifdefs in a
.c file?
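
For reference, one way to do that is the IS_ENABLED() idiom that v2
below adopts: it evaluates to a compile-time constant, so the optimizer
drops the dead branch while the code still gets type-checked when
CONFIG_CMA is off. A minimal sketch using the counter from the patch:

	/*
	 * Sketch of the IS_ENABLED() idiom: no #ifdef needed, and the
	 * whole branch is eliminated at compile time when CONFIG_CMA=n.
	 * (is_migrate_cma() already folds to false without CONFIG_CMA,
	 * so the explicit IS_ENABLED() test is shown only for clarity.)
	 */
	if (IS_ENABLED(CONFIG_CMA) &&
	    is_migrate_cma(get_pageblock_migratetype(page)))
		nr_reclaimed_cma++;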


* Re: [PATCH] mm: vmscan: decrease cma pages from nr_reclaimed
From: Haojian Zhuang @ 2013-08-12 23:43 UTC
  To: Dave Hansen; +Cc: Marek Szyprowski, linux-mm, Andrew Morton, mgorman

On Tue, Aug 13, 2013 at 2:55 AM, Dave Hansen <dave.hansen@intel.com> wrote:
> On 08/12/2013 08:51 AM, Haojian Zhuang wrote:
>> @@ -987,6 +991,11 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>>                                        * leave it off the LRU).
>>                                        */
>>                                       nr_reclaimed++;
>> +#ifdef CONFIG_CMA
>> +                                     if (get_pageblock_migratetype(page) ==
>> +                                             MIGRATE_CMA)
>> +                                             nr_reclaimed_cma++;
>> +#endif
>>                                       continue;
>>                               }
>>                       }
>
> Throwing four #ifdefs like that into any function is pretty mean.  Doing
> it to shrink_page_list() is just cruel. :)
>
> Can you think of a way to do this without so many explicit #ifdefs in a
> .c file?

OK. I'll use IS_ENABLED() instead.


* [PATCH v2] mm: vmscan: decrease cma pages from nr_reclaimed
From: Haojian Zhuang @ 2013-08-13  1:07 UTC
  To: dave.hansen, m.szyprowski, akpm, mgorman, linux-mm; +Cc: Haojian Zhuang

shrink_page_list() reclaims pages, but the statistic may be inaccurate
because some of the reclaimed pages are CMA pages. If the kernel needs
to reclaim unmovable memory (a GFP_KERNEL allocation), freed CMA pages
should not be counted in nr_reclaimed, since an unmovable allocation
cannot be satisfied from MIGRATE_CMA pageblocks.

v2:
* Remove #ifdef CONFIG_CMA. Use IS_ENABLED() & is_migrate_cma() instead.

Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
---
 mm/vmscan.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2cff0d4..414f74f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -720,6 +720,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	unsigned long nr_reclaimed = 0;
 	unsigned long nr_writeback = 0;
 	unsigned long nr_immediate = 0;
+	/* Number of pages freed with MIGRATE_CMA type */
+	unsigned long nr_reclaimed_cma = 0;
+	int mt = 0;
 
 	cond_resched();
 
@@ -987,6 +990,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 					 * leave it off the LRU).
 					 */
 					nr_reclaimed++;
+					mt = get_pageblock_migratetype(page);
+					if (is_migrate_cma(mt))
+						nr_reclaimed_cma++;
 					continue;
 				}
 			}
@@ -1005,6 +1011,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		__clear_page_locked(page);
 free_it:
 		nr_reclaimed++;
+		mt = get_pageblock_migratetype(page);
+		if (is_migrate_cma(mt))
+			nr_reclaimed_cma++;
 
 		/*
 		 * Is there need to periodically free_page_list? It would
@@ -1044,6 +1053,11 @@ keep:
 	*ret_nr_unqueued_dirty += nr_unqueued_dirty;
 	*ret_nr_writeback += nr_writeback;
 	*ret_nr_immediate += nr_immediate;
+	if (IS_ENABLED(CONFIG_CMA)) {
+		mt = allocflags_to_migratetype(sc->gfp_mask);
+		if (mt == MIGRATE_UNMOVABLE)
+			nr_reclaimed -= nr_reclaimed_cma;
+	}
 	return nr_reclaimed;
 }
 
-- 
1.8.1.2


* Re: [PATCH v2] mm: vmscan: decrease cma pages from nr_reclaimed
From: Minchan Kim @ 2013-08-13  1:35 UTC
  To: Haojian Zhuang; +Cc: dave.hansen, m.szyprowski, akpm, mgorman, linux-mm

Hello,

On Tue, Aug 13, 2013 at 09:07:42AM +0800, Haojian Zhuang wrote:
> shrink_page_list() reclaims pages, but the statistic may be inaccurate
> because some of the reclaimed pages are CMA pages. If the kernel needs
> to reclaim unmovable memory (a GFP_KERNEL allocation), freed CMA pages
> should not be counted in nr_reclaimed, since an unmovable allocation
> cannot be satisfied from MIGRATE_CMA pageblocks.

Please write the description as follows:

1. What's the problem?
2. So what's the user effect?
3. How to fix it?

I will try.

Now, the VM reclaims CMA pages even when the memory pressure comes from
a kernel memory request (e.g. GFP_KERNEL). The problem is that the VM
cannot allocate a new page from the just-freed CMA area to satisfy a
kernel memory request, so reclaiming CMA pages when kernel memory is
short is pointless: it reclaims excessive numbers of CMA pages without
making any progress.

This patch fixes ....

> 
> v2:
> * Remove #ifdef CONFIG_CMA. Use IS_ENABLED() & is_migrate_cma() instead.

But I don't like your approach.

IMHO, a better fix is to filter them out from the beginning. Look at
isolate_lru_pages() with isolate_mode: when we select victim pages, we
shouldn't select CMA pages at all if the memory pressure comes from
GFP_KERNEL. That would avoid unnecessary CPU overhead and reclaim.
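
For illustration, a minimal sketch of that filtering approach, assuming
a hypothetical ISOLATE_SKIP_CMA flag (the flag name and the gating below
are illustrative assumptions, not from this thread or from mainline):

	/*
	 * Hypothetical sketch: refuse to isolate CMA pages when reclaim
	 * was triggered by an unmovable request. ISOLATE_SKIP_CMA is an
	 * invented flag name, used here for illustration only.
	 */
	static int __isolate_lru_page(struct page *page, isolate_mode_t mode)
	{
		/* ... existing isolation checks ... */
		if ((mode & ISOLATE_SKIP_CMA) &&
		    is_migrate_cma(get_pageblock_migratetype(page)))
			return -EBUSY;	/* leave CMA pages on the LRU */
		/* ... */
	}

	/* in the caller (e.g. shrink_inactive_list()), gate on the request: */
	if (IS_ENABLED(CONFIG_CMA) &&
	    allocflags_to_migratetype(sc->gfp_mask) == MIGRATE_UNMOVABLE)
		isolate_mode |= ISOLATE_SKIP_CMA;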

Thanks.

> 
> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
> ---
>  mm/vmscan.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 2cff0d4..414f74f 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -720,6 +720,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  	unsigned long nr_reclaimed = 0;
>  	unsigned long nr_writeback = 0;
>  	unsigned long nr_immediate = 0;
> +	/* Number of pages freed with MIGRATE_CMA type */
> +	unsigned long nr_reclaimed_cma = 0;
> +	int mt = 0;
>  
>  	cond_resched();
>  
> @@ -987,6 +990,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  					 * leave it off the LRU).
>  					 */
>  					nr_reclaimed++;
> +					mt = get_pageblock_migratetype(page);
> +					if (is_migrate_cma(mt))
> +						nr_reclaimed_cma++;
>  					continue;
>  				}
>  			}
> @@ -1005,6 +1011,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  		__clear_page_locked(page);
>  free_it:
>  		nr_reclaimed++;
> +		mt = get_pageblock_migratetype(page);
> +		if (is_migrate_cma(mt))
> +			nr_reclaimed_cma++;
>  
>  		/*
>  		 * Is there need to periodically free_page_list? It would
> @@ -1044,6 +1053,11 @@ keep:
>  	*ret_nr_unqueued_dirty += nr_unqueued_dirty;
>  	*ret_nr_writeback += nr_writeback;
>  	*ret_nr_immediate += nr_immediate;
> +	if (IS_ENABLED(CONFIG_CMA)) {
> +		mt = allocflags_to_migratetype(sc->gfp_mask);
> +		if (mt == MIGRATE_UNMOVABLE)
> +			nr_reclaimed -= nr_reclaimed_cma;
> +	}
>  	return nr_reclaimed;
>  }
>  
> -- 
> 1.8.1.2
> 

-- 
Kind regards,
Minchan Kim

