From: Arnd Bergmann <arnd@arndb.de>
To: ksummit-discuss@lists.linuxfoundation.org
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
	ksummit-discuss@lists.linux-foundation.org,
	Greg KH <gregkh@linuxfoundation.org>, Jens Axboe <axboe@fb.com>,
	hare@suse.de, Tejun Heo <tj@kernel.org>,
	Bart Van Assche <bart.vanassche@sandisk.com>,
	osandov@osandov.com, Christoph Hellwig <hch@lst.de>
Subject: Re: [Ksummit-discuss] [TECH TOPIC] Addressing long-standing high-latency problems related to I/O
Date: Fri, 16 Sep 2016 13:46:14 +0200
Message-ID: <4604090.shkXlmNYJ6@wuerfel>
In-Reply-To: <CACRpkdb5+mw5CafFLWKr4vwcTVpO6FWUSiDifDOZ+oXAu_h7+A@mail.gmail.com>

On Friday, September 16, 2016 1:24:07 PM CEST Linus Walleij wrote:

> It is not super-useful on MMC/SD cards, because the load
> will simply bog down everything and your typical embedded
> system will start to behave like an updating Android phone
> "optimizing applications" which is a known issue that is
> caused by the slowness of eMMC. It also eats memory
> quickly and that way just kills any embedded system because
> of OOM before you can run any meaningful tests. But it
> can spawn any number of readers & writers and stress out
> your device very efficiently if you have enough memory
> and CPU. (It is apparently designed to test systems with
> lots of memory and CPU power.)

I think it's more complex than "the slowness of eMMC": I would
expect that in a read-only scenario, eMMC (or SD cards and
most USB sticks) doesn't do that badly. It may be an order of
magnitude slower than a hard drive, but it doesn't suffer
nearly as much from seeks during reads.

For writes, the situation is completely different on these
devices: you can hit extremely long delays (up to a second) on
a single write whenever the device goes into garbage collection
mode, during which no other I/O is serviced, and that ends up
stalling any process that is waiting for a read request.
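
One (untested) way to provoke this with fio is to run a small random
writer next to a reader on the same device and watch the reader's
maximum completion latency. Note that this writes to the raw device,
so only run it on a scratch device; /dev/mmcblk0 is again just a
placeholder:

fio --filename=/dev/mmcblk0 --direct=1 --runtime=60 --time_based \
    --size=1G --bs=4K \
    --name=gc_writer --rw=randwrite \
    --name=stalled_reader --rw=randread

The writer should eventually push the device into garbage collection,
and the reader's 'clat' max in the fio output then shows how long a
single read can get stalled.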

> I mainly used fio on NAS type devices.
> For example on Marvell Kirkwood Pogoplug 4 with SATA, I
> can do a test like this to test a dm-crypt device-mapper thing:
> 
> fio --filename=/dev/dm-0 --direct=1 --iodepth=1 --rw=read --bs=64K \
> --size=1G --group_reporting --numjobs=1 --name=test_read
> 
> > Which I/O scheduler was used when measuring
> > performance with the traditional block layer?
> 
> I used CFQ, deadline, noop, and of course the BFQ patches.
> With BFQ I reproduced the figures reported by Paolo on a
> laptop but since his test cases use fio to stress the system
> and eMMC/SD are so slow, I couldn't come up with any good
> use case using fio.
> 
> Any hints on better tests are welcome!
> In the kernel logs I only see people doing a lot of dd
> tests, which I think is silly; you need more serious
> test cases, so it's good if we can build some consensus
> there.

My guess is that the impact of the file system is much greater
than the I/O scheduler. If the file system is well tuned
to the storage device (e.g. f2fs should be near ideal),
you can avoid most of the stalls regardless of the scheduler,
while with file systems that are not aware of flash geometry
at all (e.g. the now-removed ext3 code, especially with
journaling), the scheduler won't be able to help that much
either.
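
To give an idea of what I mean by "well tuned": an (untested) minimal
setup for f2fs, with the partition name just a placeholder, would be

mkfs.f2fs /dev/mmcblk0p2
mount -t f2fs -o discard /dev/mmcblk0p2 /mnt/test

where '-o discard' lets the file system trim erase blocks as files
are deleted, so the device has an easier time with garbage collection.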

What file system did you use for testing, and which tuning
did you do for your storage devices?

Maybe a better long-term strategy is to improve the important
file systems (ext4, xfs, btrfs) further to work well with
flash storage through blk-mq.

	Arnd

Thread overview: 20+ messages
2016-09-16  7:55 [Ksummit-discuss] [TECH TOPIC] Addressing long-standing high-latency problems related to I/O Paolo Valente
2016-09-16  8:24 ` Greg KH
2016-09-16  8:59   ` Linus Walleij
2016-09-16  9:10     ` Bart Van Assche
2016-09-16 11:24       ` Linus Walleij
2016-09-16 11:46         ` Arnd Bergmann [this message]
2016-09-16 13:10           ` Paolo Valente
2016-09-16 13:36           ` Linus Walleij
2016-09-16 11:53         ` Bart Van Assche
2016-09-22  9:18     ` Ulf Hansson
2016-09-22 11:06       ` Linus Walleij
2016-09-16 15:15   ` James Bottomley
2016-09-16 18:48     ` Paolo Valente
2016-09-16 19:36       ` James Bottomley
2016-09-16 20:13         ` Paolo Valente
2016-09-19  8:17           ` Jan Kara
2016-09-17 10:31         ` Linus Walleij
2016-09-21 13:51         ` Grant Likely
2016-09-21 14:30 ` Bart Van Assche
2016-09-21 14:37   ` Paolo Valente
