* [PATCH 0/5] Improve sequential read throughput v3
@ 2014-06-27  8:14 ` Mel Gorman
  0 siblings, 0 replies; 22+ messages in thread
From: Mel Gorman @ 2014-06-27  8:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Linux Kernel, Linux-MM, Linux-FSDevel, Johannes Weiner, Mel Gorman

Changelog since V2
o Simplify fair zone policy cost reduction
o Drop CFQ patch

Changelog since v1
o Rebase to v3.16-rc2
o Move CFQ patch to end of series where it can be rejected easier if necessary
o Introduce page-reclaim patch covering kswapd/fairzone interactions
o Rework fast zone policy patch

IO performance since 3.0 has been a mixed bag. In many respects we are
better, in some we are worse, and one of those places is sequential
read throughput. This is visible in a number of benchmarks but I looked
at tiobench the closest. These results are from ext3 on a mid-range
desktop with the series applied.

                                      3.16.0-rc2            3.16.0-rc2
                                         vanilla             lessdirty
Min    SeqRead-MB/sec-1         120.92 (  0.00%)      140.73 ( 16.38%)
Min    SeqRead-MB/sec-2         100.25 (  0.00%)      117.43 ( 17.14%)
Min    SeqRead-MB/sec-4          96.27 (  0.00%)      109.01 ( 13.23%)
Min    SeqRead-MB/sec-8          83.55 (  0.00%)       90.86 (  8.75%)
Min    SeqRead-MB/sec-16         66.77 (  0.00%)       74.12 ( 11.01%)

Overall system CPU usage is reduced

          3.16.0-rc2  3.16.0-rc2
             vanilla lessdirty-v3
User          390.13      390.20
System        404.41      379.08
Elapsed      5412.45     5123.74
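The bracketed gains in the throughput table and the overall savings in the
CPU table both fall out of the same relative-change formula,
(patched - vanilla) / vanilla. As a quick sanity check of a few of the
numbers above (plain Python, not part of the series):

```python
# Relative change between vanilla and the patched kernel, as a percentage.
def pct_change(vanilla, patched):
    return (patched - vanilla) / vanilla * 100

# Single-threaded sequential read: matches the 16.38% shown in the table.
print(f"SeqRead-MB/sec-1: {pct_change(120.92, 140.73):+.2f}%")

# System CPU and elapsed time: a negative value is a reduction.
print(f"System:  {pct_change(404.41, 379.08):+.2f}%")
print(f"Elapsed: {pct_change(5412.45, 5123.74):+.2f}%")
```

The roughly 6% drop in system time alongside a shorter elapsed time is
consistent with the series doing less work per page, not just shifting it.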

This series does not fully restore throughput performance to 3.0 levels,
but it brings it close for lower thread counts. Higher thread counts are
known to be worse than 3.0 due to CFQ changes, but there is no appetite
for changing the defaults there.

 include/linux/mmzone.h         | 210 ++++++++++++++++++++++-------------------
 include/linux/writeback.h      |   1 +
 include/trace/events/pagemap.h |  16 ++--
 mm/internal.h                  |   1 +
 mm/mm_init.c                   |   4 +-
 mm/page-writeback.c            |  23 +++--
 mm/page_alloc.c                | 173 ++++++++++++++++++++-------------
 mm/swap.c                      |   4 +-
 mm/vmscan.c                    |  16 ++--
 mm/vmstat.c                    |   4 +-
 10 files changed, 258 insertions(+), 194 deletions(-)

-- 
1.8.4.5



Thread overview: 22+ messages
2014-06-27  8:14 [PATCH 0/5] Improve sequential read throughput v3 Mel Gorman
2014-06-27  8:14 ` [PATCH 1/5] mm: pagemap: Avoid unnecessary overhead when tracepoints are deactivated Mel Gorman
2014-06-27  8:14 ` [PATCH 2/5] mm: Rearrange zone fields into read-only, page alloc, statistics and page reclaim lines Mel Gorman
2014-06-27  8:14 ` [PATCH 3/5] mm: vmscan: Do not reclaim from lower zones if they are balanced Mel Gorman
2014-06-27 17:26   ` Johannes Weiner
2014-06-27 18:42     ` Mel Gorman
2014-06-27  8:14 ` [PATCH 4/5] mm: page_alloc: Reduce cost of the fair zone allocation policy Mel Gorman
2014-06-27 18:57   ` Johannes Weiner
2014-06-27 19:25     ` Mel Gorman
2014-06-30 14:41       ` Johannes Weiner
2014-06-27  8:14 ` [PATCH 5/5] mm: page_alloc: Reduce cost of dirty zone balancing Mel Gorman
