From: Mel Gorman <mgorman@suse.de>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	xfs@oss.sgi.com
Subject: [MMTests] Threaded IO Performance on xfs
Date: Mon, 23 Jul 2012 22:25:33 +0100	[thread overview]
Message-ID: <20120723212533.GJ9222@suse.de> (raw)
In-Reply-To: <20120629111932.GA14154@suse.de>

Configuration:	global-dhp__io-threaded-xfs
Result: 	http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__io-threaded-xfs
Benchmarks:	tiobench

Summary
=======

There have been many improvements in the sequential read/write cases, but
3.4 is noticeably worse than 3.3 in a number of cases.

Benchmark notes
===============

mkfs was run on system startup.
mkfs parameters -f -d agcount=8
mount options inode64,delaylog,logbsize=262144,nobarrier for the most part.
        On kernels too old to support delaylog, that option was removed. On
        kernels where it was the default, it was specified anyway and the
        resulting warning was ignored.

The size parameter for tiobench was 2*RAM. This is barely sufficient for
	this particular test, where the size parameter should ideally be
	several times the size of memory, but the running time of the
	benchmark is already excessive and this is unlikely to be changed.
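
As a rough, hand-written sketch, sizing and invoking the benchmark would
	look something like the following. The invocation is illustrative
	only and the thread count is arbitrary; MMTests generates its own
	tiobench command line from the configuration file.

	# Size the dataset at 2*RAM (in MB) as described above
	MEMTOTAL_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
	./tiobench.pl --dir /mnt/tiobench --size $((MEMTOTAL_MB * 2)) --threads 8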

===========================================================
Machine:	arnold
Result:		http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__io-threaded-xfs/arnold/comparison.html
Arch:		x86
CPUs:		1 socket, 2 threads
Model:		Pentium 4
Disk:		Single Rotary Disk
==========================================================

tiobench
--------
  This is a mixed bag. For low numbers of clients, throughput on
  sequential reads has improved. For larger numbers of clients there are
  many regressions, but the results are not consistent. This could be due
  to a weakness in the methodology, namely the combination of a small
  filesize and a small number of iterations.

  Random read is generally bad.

  For many kernels sequential write is good, with the notable exceptions
  of the 2.6.39 and 3.0 kernels.

  There was unexpected swapping on 3.1 and 3.2 kernels.
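
  As a minimal check (assuming access to the machine during a run), swap
  activity can be confirmed by comparing the cumulative swap counters in
  /proc/vmstat before and after the benchmark:

	# pswpin/pswpout only ever increase; a delta across the run
	# indicates that swapping took place
	grep -E '^pswp(in|out)' /proc/vmstat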

==========================================================
Machine:	hydra
Result:		http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__io-threaded-xfs/hydra/comparison.html
Arch:		x86-64
CPUs:		1 socket, 4 threads
Model:		AMD Phenom II X4 940
Disk:		Single Rotary Disk
==========================================================

tiobench
--------

  Like arnold, sequential read performance is good for low numbers
  of clients.

  Random read looks good.

  With the exception of 3.0 in general and of single-threaded writes on
  all kernels, sequential writes have generally improved.

  Random write has a number of regressions.

  Kernels 3.1 and 3.2 had unexpected swapping.

==========================================================
Machine:	sandy
Result:		http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__io-threaded-xfs/sandy/comparison.html
Arch:		x86-64
CPUs:		1 socket, 8 threads
Model:		Intel Core i7-2600
Disk:		Single Rotary Disk
==========================================================

tiobench
--------

  Like hydra, sequential reads were generally better for low numbers of
  clients. 3.4 is notable in that it regressed, and 3.1 was also bad,
  which is roughly similar to what was seen on ext3. The machines differ
  in memory size and therefore in filesize, which implies that there is
  not a single cause of the regression.

  Random read has generally improved, with the obvious exception of the
  single-threaded case.

  Sequential writes have generally improved, but it is interesting to
  note that 3.4 is worse than 3.3; this was also seen on ext3.

  Random write is a mixed bag, but again 3.4 is worse than 3.3.

  Like the other machines, 3.1 and 3.2 saw unexpected swapping.

-- 
Mel Gorman
SUSE Labs
