From: Mel Gorman <mgorman@suse.de>
To: Jan Kara <jack@suse.cz>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: Re: [MMTests] Interactivity during IO on ext3
Date: Tue, 10 Jul 2012 12:30:36 +0100
Message-ID: <20120710113036.GE14154@suse.de>
In-Reply-To: <20120710094940.GC13539@quack.suse.cz>

On Tue, Jul 10, 2012 at 11:49:40AM +0200, Jan Kara wrote:
> > ===========================================================
> > Machine:	arnold
> > Result:		http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__io-interactive-performance-ext3/arnold/comparison.html
> > Arch:		x86
> > CPUs:		1 socket, 2 threads
> > Model:		Pentium 4
> > Disk:		Single Rotary Disk
> > ===========================================================
> > 
> > fsmark-single
> > -------------
> >   Completion times since 3.2 have been badly affected, which coincides
> >   with the introduction of IO-less dirty page throttling. 3.3 was
> >   particularly bad.
> > 
> >   2.6.32 was TERRIBLE in terms of read latencies, with both the average
> >   and max latencies looking awful. The 90th percentile was close to 4
> >   seconds and as a result the graphs are even more of a complete mess than
> >   they might have been otherwise.
> > 
> >   Otherwise it's worth looking closely at 3.0 and 3.2. In 3.0, 95% of the
> >   reads were below 206ms but in 3.2 this had grown to 273ms. The latency
> >   of the remaining 5% of reads increased from 481ms to 774ms.
> > 
> >   3.4 is looking better at least.
>
>   Yeah, 3.4 looks OK and I'd be interested in 3.5 results since I've merged
> one more fix which should help the read latency.

When 3.5 comes out, I'll queue up the same tests. Ideally I would run
against each rc, but the machines are used for other tests as well and
these take too long for continual testing to be practical.

> But all in all it's hard
> to tackle the latency problems with ext3 - we have a journal which
> synchronizes all the writes so we write to it with a high priority
> (we use WRITE_SYNC when there's some contention on the journal). But that
> naturally competes with reads and creates higher read latency.
>  

Thanks for the good explanation. Now I know to look out for this in
interactivity-related or IO-latency bugs.
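
For anyone following along later, the mechanism is roughly this (an
illustrative sketch of the idea only, not the actual jbd/jbd2 commit
code; journal_commit_has_waiters() is a hypothetical helper):

    /*
     * Journal block writes go out as ordinary asynchronous writes until
     * another transaction is blocked waiting on the commit.  At that
     * point the writes are upgraded to WRITE_SYNC so the IO scheduler
     * treats them as synchronous IO -- which is also what puts them in
     * direct competition with reads for disk time.
     */
    int write_op = WRITE;

    if (journal_commit_has_waiters(journal))    /* hypothetical check */
        write_op = WRITE_SYNC;

    submit_bh(write_op, bh);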

> > <SNIP>
> > ==========================================================
> > Machine:	hydra
> > Result:		http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__io-interactive-performance-ext3/hydra/comparison.html
> > Arch:		x86-64
> > CPUs:		1 socket, 4 threads
> > Model:		AMD Phenom II X4 940
> > Disk:		Single Rotary Disk
> > ==========================================================
> > 
> > fsmark-single
> > -------------
> >   Completion times are all over the place, with a big increase in 3.2
> >   that has improved a bit since but is still not as good as the 3.1
> >   kernels.
> > 
> >   Unlike arnold, 2.6.32 is not a complete mess, which makes a comparison
> >   more meaningful. Our maximum latencies have jumped around a lot with 3.2
> >   being particularly bad and 3.4 not being much better. 3.1 and 3.3 were
> >   both good in terms of maximum latency.
> > 
> >   Average latency is shot to hell. In 2.6.32 it was 349ms and it's now 781ms.
> >   3.2 was really bad but it's not like 3.0 or 3.1 were fantastic either.
>
>   So I wonder what makes a difference between this machine and the previous
> one. The results seem completely different. Is it the amount of memory? Is
> it the difference in the disk? Or even the difference in the CPU?
> 

Two big differences are 32-bit versus 64-bit, with the 32-bit machine
having 4G of RAM and the 64-bit machine having 8G. On the 32-bit machine,
bounce buffering may have been an issue, but as -S0 was specified (no
sync) there would also be differences in when dirty page balancing took
place.
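
For reference, -S0 selects fs_mark's "No Sync" mode, so nothing forces
the dirty data out and writeback only starts once balance_dirty_pages()
decides the dirty thresholds have been crossed. Something along these
lines (parameters other than -S0 are illustrative, not the exact
mmtests configuration):

    # -d: target directory, -n: number of files, -s: file size in bytes
    # -S0: no sync/fsync, so writeback timing is left to dirty throttling
    fs_mark -d /mnt/test/fsmark -S0 -n 10000 -s 65536

    # the tunables that decide when dirty throttling kicks in
    cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio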

-- 
Mel Gorman
SUSE Labs
