From: Mel Gorman <mgorman@suse.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Linux Kernel <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>,
	Linux-FSDevel <linux-fsdevel@vger.kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Mel Gorman <mgorman@suse.de>
Subject: [PATCH 0/5] Improve sequential read throughput v3
Date: Fri, 27 Jun 2014 09:14:35 +0100
Message-ID: <1403856880-12597-1-git-send-email-mgorman@suse.de>

Changelog since v2
o Simplify fair zone policy cost reduction
o Drop CFQ patch

Changelog since v1
o Rebase to v3.16-rc2
o Move CFQ patch to the end of the series where it can be rejected more easily if necessary
o Introduce a page-reclaim patch related to kswapd/fair zone interactions
o Rework fast zone policy patch

IO performance since 3.0 has been a mixed bag. In many respects we are
better, in some we are worse, and one of those places is sequential
read throughput. This is visible in a number of benchmarks but I looked
at tiobench the closest. The figures below are from tiobench on ext3 on
a mid-range desktop with the series applied.

                                      3.16.0-rc2            3.16.0-rc2
                                         vanilla             lessdirty
Min    SeqRead-MB/sec-1         120.92 (  0.00%)      140.73 ( 16.38%)
Min    SeqRead-MB/sec-2         100.25 (  0.00%)      117.43 ( 17.14%)
Min    SeqRead-MB/sec-4          96.27 (  0.00%)      109.01 ( 13.23%)
Min    SeqRead-MB/sec-8          83.55 (  0.00%)       90.86 (  8.75%)
Min    SeqRead-MB/sec-16         66.77 (  0.00%)       74.12 ( 11.01%)

Overall system CPU usage is reduced:

          3.16.0-rc2  3.16.0-rc2
             vanilla lessdirty-v3
User          390.13      390.20
System        404.41      379.08
Elapsed      5412.45     5123.74
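
As a quick cross-check of the numbers above (not part of the series,
just arithmetic on the values quoted in the two tables), the relative
changes can be recomputed with a few lines of Python:

    # Recompute the percentage changes quoted in the tables above.
    def pct_change(vanilla, patched):
        return (patched - vanilla) / vanilla * 100.0

    seqread = {  # threads: (vanilla MB/sec, lessdirty MB/sec)
        1:  (120.92, 140.73),
        2:  (100.25, 117.43),
        4:  ( 96.27, 109.01),
        8:  ( 83.55,  90.86),
        16: ( 66.77,  74.12),
    }
    for threads, (vanilla, patched) in sorted(seqread.items()):
        print("SeqRead %2d threads: %+6.2f%%"
              % (threads, pct_change(vanilla, patched)))

    # CPU usage table: negative values mean the patched kernel uses less.
    for name, vanilla, patched in [("System",  404.41,  379.08),
                                   ("Elapsed", 5412.45, 5123.74)]:
        print("%-8s: %+6.2f%%" % (name, pct_change(vanilla, patched)))

This reproduces the quoted SeqRead gains (16.38%, 17.14%, ...) and puts
the System and Elapsed reductions at roughly 6.3% and 5.3% respectively.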

This series does not fully restore throughput to 3.0 levels, but it
brings it close for lower thread counts. Higher thread counts are known
to be worse than 3.0 due to CFQ changes, but there is no appetite for
changing the defaults there.

 include/linux/mmzone.h         | 210 ++++++++++++++++++++++-------------------
 include/linux/writeback.h      |   1 +
 include/trace/events/pagemap.h |  16 ++--
 mm/internal.h                  |   1 +
 mm/mm_init.c                   |   4 +-
 mm/page-writeback.c            |  23 +++--
 mm/page_alloc.c                | 173 ++++++++++++++++++++-------------
 mm/swap.c                      |   4 +-
 mm/vmscan.c                    |  16 ++--
 mm/vmstat.c                    |   4 +-
 10 files changed, 258 insertions(+), 194 deletions(-)

-- 
1.8.4.5


Thread overview: 22+ messages
2014-06-27  8:14 [PATCH 0/5] Improve sequential read throughput v3 Mel Gorman [this message]
2014-06-27  8:14 ` [PATCH 1/5] mm: pagemap: Avoid unnecessary overhead when tracepoints are deactivated Mel Gorman
2014-06-27  8:14 ` [PATCH 2/5] mm: Rearrange zone fields into read-only, page alloc, statistics and page reclaim lines Mel Gorman
2014-06-27  8:14 ` [PATCH 3/5] mm: vmscan: Do not reclaim from lower zones if they are balanced Mel Gorman
2014-06-27 17:26   ` Johannes Weiner
2014-06-27 18:42     ` Mel Gorman
2014-06-27  8:14 ` [PATCH 4/5] mm: page_alloc: Reduce cost of the fair zone allocation policy Mel Gorman
2014-06-27 18:57   ` Johannes Weiner
2014-06-27 19:25     ` Mel Gorman
2014-06-30 14:41       ` Johannes Weiner
2014-06-27  8:14 ` [PATCH 5/5] mm: page_alloc: Reduce cost of dirty zone balancing Mel Gorman
