* Very poor latency when using hard drive (raid1)
From: lkml @ 2013-04-15  9:59 UTC (permalink / raw)
  To: linux-kernel

There are two hard drives (normal, magnetic) in software RAID 1
on a 3.2.41 kernel.

When I write to them, e.g. using dd from /dev/zero to a local file
(ext4 with default settings), running two dd processes at once (writing
two files) starves all other programs that try to use the disk.

Running ls on any directory on the same disk (same fs, btw) takes over
half a minute to execute; the same goes for any other disk-touching
action.

Has anyone seen such a problem? Where to look, what to test?

What could solve it (other than ionice on the applications that I
expect to use the hard drive)?
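Roughly, this is how I reproduce it (the paths are just examples):

```shell
# Two sequential writers saturating the array (example target paths):
dd if=/dev/zero of=/srv/test1.bin bs=1M count=10240 &
dd if=/dev/zero of=/srv/test2.bin bs=1M count=10240 &

# Meanwhile, in another shell, this takes over half a minute:
time ls /srv
```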


* Re: Very poor latency when using hard drive (raid1)
From: lkml @ 2013-04-16  6:49 UTC (permalink / raw)
  To: linux-kernel

On 15/04/13 11:59, lkml@tigusoft.pl wrote:
> There are two hard drives (normal, magnetic) in software RAID 1
> on a 3.2.41 kernel.
> 
> When I write to them, e.g. using dd from /dev/zero to a local file
> (ext4 with default settings), running two dd processes at once (writing
> two files) starves all other programs that try to use the disk.
> 
> Running ls on any directory on the same disk (same fs, btw) takes over
> half a minute to execute; the same goes for any other disk-touching
> action.
> 
> Has anyone seen such a problem? Where to look, what to test?
> 
> What could solve it (other than ionice on the applications that I
> expect to use the hard drive)?

I got a reply (by e-mail) suggesting XFS.
Thanks, that is an option for another/future server.

But I feel this should work correctly on ext4 as well.


* Re: Very poor latency when using hard drive (raid1)
From: Mike Galbraith @ 2013-04-16  7:24 UTC (permalink / raw)
  To: lkml; +Cc: linux-kernel

On Tue, 2013-04-16 at 08:49 +0200, lkml@tigusoft.pl wrote: 
> On 15/04/13 11:59, lkml@tigusoft.pl wrote:
> > There are two hard drives (normal, magnetic) in software RAID 1
> > on a 3.2.41 kernel.
> > 
> > When I write to them, e.g. using dd from /dev/zero to a local file
> > (ext4 with default settings), running two dd processes at once (writing
> > two files) starves all other programs that try to use the disk.
> > 
> > Running ls on any directory on the same disk (same fs, btw) takes over
> > half a minute to execute; the same goes for any other disk-touching
> > action.
> > 
> > Has anyone seen such a problem? Where to look, what to test?
> > 
> > What could solve it (other than ionice on the applications that I
> > expect to use the hard drive)?
> 
> I got a reply (by e-mail) suggesting XFS.
> Thanks, that is an option for another/future server.
> 
> But I feel this should work correctly on ext4 as well.

It should not starve readers that badly, something is wrong.

You can try setting low_latency for the devices if you're using CFQ
I/O scheduler.  For my box, that would be:

echo 1 > /sys/devices/pci0000:00/0000:00:1f.2/ata2/host0/target0:0:0/0:0:0:0/block/sda/queue/iosched/low_latency

Or, you can try a different scheduler.  cat (blabla)/sda/queue/scheduler
to see which choices are available, and echo your choice back to the
file to select it.
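For example (sda here stands in for whichever device backs your array):

```shell
# Show the available schedulers; the active one is in brackets:
cat /sys/block/sda/queue/scheduler

# Switch to deadline, which often gives better read latency under
# heavy buffered writes:
echo deadline > /sys/block/sda/queue/scheduler
```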

-Mike



* Re: Very poor latency when using hard drive (raid1)
From: Michael Tokarev @ 2013-04-16 11:23 UTC (permalink / raw)
  To: lkml; +Cc: linux-kernel

On 15.04.2013 13:59, lkml@tigusoft.pl wrote:
> There are two hard drives (normal, magnetic) in software RAID 1
> on a 3.2.41 kernel.
> 
> When I write to them, e.g. using dd from /dev/zero to a local file
> (ext4 with default settings), running two dd processes at once (writing
> two files) starves all other programs that try to use the disk.
> 
> Running ls on any directory on the same disk (same fs, btw) takes over
> half a minute to execute; the same goes for any other disk-touching
> action.
> 
> Has anyone seen such a problem? Where to look, what to test?

This is a typical issue, known for many years.

Your dd runs go through the buffer cache, the same cache used by all
other regular accesses.  So once it fills up, cached directories and the
like are thrown away to make room for new cache pages.  Then, when you
need something else, it has to be read back from a disk that is already
busy servicing the writeback.

> What could solve it (other than ionice on the applications that I
> expect to use the hard drive)?

Just don't mix these two workloads.  Or, if you really need to transfer
a large amount of data, use direct I/O (O_DIRECT) -- for dd that is
iflag=direct or oflag=direct (depending on the I/O direction).
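For example, for the write case (the target path is just an example):

```shell
# oflag=direct bypasses the page cache, so the bulk write no longer
# evicts cached directories and file data:
dd if=/dev/zero of=/srv/test1.bin bs=1M count=10240 oflag=direct
```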

ionice won't help much.

Thanks,

/mjt


* Re: Very poor latency when using hard drive (raid1)
From: Jan Kara @ 2013-04-19 15:01 UTC (permalink / raw)
  To: lkml; +Cc: linux-kernel

On Mon 15-04-13 11:59:59, lkml@tigusoft.pl wrote:
> There are two hard drives (normal, magnetic) in software RAID 1
> on a 3.2.41 kernel.
  Any possibility of trying a newer kernel, like 3.8 / 3.9?

> When I write to them, e.g. using dd from /dev/zero to a local file
> (ext4 with default settings), running two dd processes at once (writing
> two files) starves all other programs that try to use the disk.
  That shouldn't really happen.  I presume you use the default IO scheduler,
i.e. CFQ?

> Running ls on any directory on the same disk (same fs, btw) takes over
> half a minute to execute; the same goes for any other disk-touching
> action.
> 
> Has anyone seen such a problem? Where to look, what to test?
  Not in such a simple setting as yours.

> What could solve it (other than ionice on the applications that I
> expect to use the hard drive)?
  ionice wouldn't help because dd writes the data into the page cache, and
from there the flusher thread writes it to disk.  So the IO happens in the
flusher's context, which won't be ioniced.
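(If the writes are switched to O_DIRECT, the dd process itself issues the
IO, so ionice does take effect there; a sketch, with an example path:)

```shell
# Idle-class IO issued by the dd process itself -- effective only with
# the CFQ scheduler, and only because O_DIRECT bypasses the flusher:
ionice -c3 dd if=/dev/zero of=/srv/test1.bin bs=1M count=10240 oflag=direct
```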

									Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

