* [PATCH] mm: fix calculation accounting dirtyable highmem
From: Minchan Kim @ 2016-07-13  2:23 UTC
To: Andrew Morton; +Cc: Mel Gorman, linux-kernel, linux-mm, Minchan Kim

When I tested vmscale from mmtests on a 32-bit system, I found the
benchmark slowed down by roughly 50%.

                   base          node
                   1             global-1
User               12.98         16.04
System             147.61        166.42
Elapsed            26.48         38.08

With vmstat, I saw that the average IO wait was much higher than in
the base run.

The reason is that highmem_dirtyable_memory() accumulates free pages
and highmem_file_pages over the zones from HIGHMEM to MOVABLE as one
running total, re-adding that total for every zone, which is wrong.
With that, dirty_thresh in throttle_vm_writeout() is always 0, so it
calls congestion_wait() frequently once writeback starts.
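
To illustrate the arithmetic, here is a minimal user-space sketch (not
kernel code; the zone sizes, watermark and file-page count are made-up
example numbers) comparing the old running-total accumulation with the
per-zone calculation this patch switches to:

#include <stdio.h>

#define NR_ZONES 3

/* made-up example numbers, not taken from a real machine */
static unsigned long zone_free[NR_ZONES]  = { 50000, 30000, 20000 };
static unsigned long zone_wmark[NR_ZONES] = {  1000,  1000,  1000 };
static unsigned long highmem_file_pages   = 10000;

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned long x, dirtyable;
	int i;

	/*
	 * Old scheme: the watermark is subtracted from the running
	 * total, and the whole running total is added to x again on
	 * every iteration, so earlier zones are counted repeatedly.
	 */
	x = 0;
	dirtyable = highmem_file_pages;
	for (i = 0; i < NR_ZONES; i++) {
		dirtyable += zone_free[i];
		dirtyable -= min_ul(dirtyable, zone_wmark[i]);
		x += dirtyable;
	}
	printf("old scheme: %lu\n", x);		/* 254000 */

	/*
	 * New scheme: each zone contributes its own free pages minus
	 * its own watermark, and file pages are added exactly once.
	 */
	dirtyable = 0;
	for (i = 0; i < NR_ZONES; i++) {
		unsigned long nr_pages = zone_free[i];

		nr_pages -= min_ul(nr_pages, zone_wmark[i]);
		dirtyable += nr_pages;
	}
	x = dirtyable + highmem_file_pages;
	printf("new scheme: %lu\n", x);		/* 107000 */

	return 0;
}

With these made-up numbers the old scheme reports more than double the
real figure; an over-estimated highmem dirtyable count is what ends up
collapsing dirty_thresh as described above.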

With this patch, most of the regression is recovered.

                   base          node          node
                   1             global-1      fix
User               12.98         16.04         13.78
System             147.61        166.42        143.92
Elapsed            26.48         38.08         29.64

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
mm/page-writeback.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 8db1db2..bf27594 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -307,27 +307,31 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
 {
 #ifdef CONFIG_HIGHMEM
 	int node;
-	unsigned long x = 0;
+	unsigned long x;
 	int i;
-	unsigned long dirtyable = highmem_file_pages;
+	unsigned long dirtyable = 0;
 	for_each_node_state(node, N_HIGH_MEMORY) {
 		for (i = ZONE_NORMAL + 1; i < MAX_NR_ZONES; i++) {
 			struct zone *z;
+			unsigned long nr_pages;
 			if (!is_highmem_idx(i))
 				continue;
 			z = &NODE_DATA(node)->node_zones[i];
-			dirtyable += zone_page_state(z, NR_FREE_PAGES);
+			if (!populated_zone(z))
+				continue;
+			nr_pages = zone_page_state(z, NR_FREE_PAGES);
 			/* watch for underflows */
-			dirtyable -= min(dirtyable, high_wmark_pages(z));
-
-			x += dirtyable;
+			nr_pages -= min(nr_pages, high_wmark_pages(z));
+			dirtyable += nr_pages;
 		}
 	}
+	x = dirtyable + highmem_file_pages;
+
 	/*
 	 * Unreclaimable memory (kernel memory or anonymous memory
 	 * without swap) can bring down the dirtyable pages below
--
1.9.1
* Re: [PATCH] mm: fix calculation accounting dirtyable highmem
From: Mel Gorman @ 2016-07-13  9:17 UTC
To: Minchan Kim; +Cc: Andrew Morton, linux-kernel, linux-mm
On Wed, Jul 13, 2016 at 11:23:13AM +0900, Minchan Kim wrote:
> When I tested vmscale from mmtests on a 32-bit system, I found the
> benchmark slowed down by roughly 50%.
>
>                    base          node
>                    1             global-1
> User               12.98         16.04
> System             147.61        166.42
> Elapsed            26.48         38.08
>
> With vmstat, I saw that the average IO wait was much higher than in
> the base run.
>
> The reason is that highmem_dirtyable_memory() accumulates free pages
> and highmem_file_pages over the zones from HIGHMEM to MOVABLE as one
> running total, re-adding that total for every zone, which is wrong.
> With that, dirty_thresh in throttle_vm_writeout() is always 0, so it
> calls congestion_wait() frequently once writeback starts.
>
> With this patch, most of the regression is recovered.
>
>                    base          node          node
>                    1             global-1      fix
> User               12.98         16.04         13.78
> System             147.61        166.42        143.92
> Elapsed            26.48         38.08         29.64
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
Thanks. I'll pick this up and send a follow-on series to Andrew with
this included.
--
Mel Gorman
SUSE Labs