From: Praveen G K <praveen.gk@gmail.com>
To: "Andrei E. Warkentin" <andrey.warkentin@gmail.com>
Cc: "J Freyensee" <james_p_freyensee@linux.intel.com>,
	"Andrei Warkentin" <awarkentin@vmware.com>,
	"Per Förlin" <per.forlin@stericsson.com>,
	"Linus Walleij" <linus.walleij@linaro.org>,
	linux-mmc@vger.kernel.org, "Arnd Bergmann" <arnd@arndb.de>,
	"Jon Medhurst" <tixy@linaro.org>
Subject: Re: slow eMMC write speed
Date: Wed, 19 Oct 2011 16:27:02 -0700
Message-ID: <CAHzg1A9swGoYy8fE6ULw8WprJQvURQNWHhOZ9vywgrONT9RDpw@mail.gmail.com>
In-Reply-To: <CANz0V+5HGJpNkoHMmsAupCcCK3tG74Jza=rmZQxjv3pwMWKSuA@mail.gmail.com>

Also, can somebody please tell me the significance of blk_end_request?
Why do we call it after every block read or write? Thanks.
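
From reading drivers/mmc/card/block.c, my (possibly wrong) understanding
is that blk_end_request() reports back to the block layer how many bytes
of the current struct request have actually completed, and that it keeps
returning true while the request still has sectors outstanding, which is
why the driver calls it once per transferred chunk. Roughly this pattern
(paraphrased for illustration, not the exact driver code):

	do {
		/* issue the next chunk of the request to the card */
		mmc_wait_for_req(card->host, &brq.mrq);

		spin_lock_irq(&md->lock);
		/* complete the bytes that reached the card; returns
		 * true while part of the request is still pending */
		ret = __blk_end_request(req, 0, brq.data.bytes_xfered);
		spin_unlock_irq(&md->lock);
	} while (ret);

Is that roughly right?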

2011/10/4 Andrei E. Warkentin <andrey.warkentin@gmail.com>
>
> Hi James,
>
> 2011/10/3 J Freyensee <james_p_freyensee@linux.intel.com>:
> >
> > The idea is that the page cache is too generic for hand-held (i.e. Android)
> > workloads.  The page cache handles regular files, directories, user-swappable
> > processes, etc., and all of that has to contend for the resources available
> > to the page cache.  This is specific to eMMC workloads.  Namely, for games
> > and even .pdf files on an Android system (ARM or Intel), there are a lot of
> > 1-2 sector writes and almost no reads.
> >
> > But by no means am I an expert on the page cache area either.
> >
>
> I misspoke, sorry; I really meant the buffer cache, which caches block
> access. It may contend with other resources, but it is practically
> boundless and responds well to memory pressure (which is otherwise
> something you would need to consider yourself).
>
> As to Android workloads, what you're really trying to say is that
> you're dealing with a tumult of SQLite accesses, and coupled with ext4
> these don't look so good when it comes down to MMC performance and
> reliability, right? When I investigated this problem in my previous
> life, it came down to figuring out whether it was worth putting vendor
> hacks into the MMC driver that purportedly avoid a drastic reduction
> in reliability/life-span, while also improving performance for
> accesses smaller than the flash page size.
>
> The problem being, of course, that you have many small random
> accesses, which:
> a) Chew through a fixed number of erase-block (AU, allocation unit)
> slots in the internal (non-volatile) cache on the MMC.
> b) As a consequence of (a), cause a lot of thrashing, since erase-block
> slot evictions turn into (small) writes, which result in extra erases.
> c) Could also end up spanning erase-blocks, which further multiplies
> the performance and life-span damage.
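
To put rough numbers on (b) and (c): if the card's allocation unit is
4 MiB and an eviction forces the whole AU to be rewritten, then 8 KiB
database writes that keep missing the cached AUs can each cost on the
order of 4 MiB of internal flash traffic, i.e. write amplification of
roughly 512x, and a write that straddles two AUs can double that again.
(Illustrative figures only; AU sizes and GC behavior vary per card.)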
>
> The hacks I was investigating actually made things worse
> performance-wise, and there was no way to measure reliability. I did
> realize that you could, under some circumstances, and with some
> insight into the GC behavior of MMCs and their flash parameters,
> devise an I/O scheduler that would optimize accesses by grouping them
> by AU and trying to defer writing AUs which are being actively written
> to. Of course this would be in no way generic, and would involve
> fine-tuning on a per-card basis, making it useful for eMMC/eSD.
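
If I understand the grouping idea, the heart of such a scheduler would
just be bucketing requests by which AU their start sector falls into,
and holding back AUs that have seen a write very recently in the hope
that more writes to the same AU show up and can be dispatched together.
A toy sketch (all names and the AU size are made up for illustration):

	#include <stdbool.h>
	#include <stdint.h>

	/* Assume a 4 MiB allocation unit and 512-byte sectors. */
	#define AU_SIZE_SECTORS	(4 * 1024 * 1024 / 512)

	/* Which allocation unit does this sector belong to? */
	static inline uint64_t au_of(uint64_t sector)
	{
		return sector / AU_SIZE_SECTORS;
	}

	/*
	 * Defer dispatching an AU that was written to within the last
	 * few ticks, so further writes to it can be grouped together.
	 */
	static bool au_is_hot(uint64_t last_write_tick, uint64_t now_tick)
	{
		return now_tick - last_write_tick < 10;
	}

Whether deferring "hot" AUs helps or hurts would obviously depend on the
card's cache size and GC policy, so this is only the shape of the idea.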
>
> Caching by itself might save you some of the trouble caused by many
> writes to similar places, but you can already tune the buffer cache to
> delay writes (/proc/sys/vm/dirty_writeback_centisecs), and it's not
> going to help with the fixed number of AUs and the preference for a
> particular write size (i.e. the garbage collection mechanism inside
> the MMC and the flash technology in it). On the other hand, caching
> brings another set of problems - data loss, and the occasional need to
> flush all data to disk, with a larger delay.
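
For reference, that knob takes hundredths of a second, so something like
"echo 3000 > /proc/sys/vm/dirty_writeback_centisecs" makes the flusher
threads wake up only every 30 seconds; whether delayed writeback actually
helps here also depends on dirty_expire_centisecs and on memory pressure.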
>
> Speaking of reducing flash traffic... you might be interested in
> bumping the journal commit interval (ext3/ext4), but that also has
> data-loss implications.
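
That would be the commit= mount option, e.g. "mount -o remount,commit=60
<mountpoint>" to let the journal commit every 60 seconds instead of the
default 5, with the obvious caveat that anything written inside that
window can be lost on a power cut.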
>
> Anyway, the point I want to make is that you should ask yourself what
> you're trying to achieve and what the real problem is - and why the
> existing solutions don't work. If you think caching is your problem,
> then you should probably answer the question of why the buffer cache
> isn't sufficient - and if it isn't, how it should adapt to fit the
> scenario. I would want to say that the real fix should be to the
> I/O-happy SQLite usage on Android... but there may be some value in
> trying to alleviate it by grouping writes by AU and deferring
> "hot" AUs.
>
> A

Thread overview: 32+ messages
2011-09-23  5:05 slow eMMC write speed Praveen G K
2011-09-28  5:42 ` Linus Walleij
2011-09-28 19:06   ` Praveen G K
2011-09-28 19:59     ` J Freyensee
2011-09-28 20:34       ` Praveen G K
2011-09-28 21:01         ` J Freyensee
2011-09-28 21:03           ` Praveen G K
2011-09-28 21:34             ` J Freyensee
2011-09-28 22:24               ` Praveen G K
2011-09-28 22:59                 ` J Freyensee
2011-09-28 23:16                   ` Praveen G K
2011-09-29  0:57                     ` Philip Rakity
2011-09-29  2:24                       ` Praveen G K
2011-09-29  3:30                         ` Philip Rakity
2011-09-29  7:24               ` Linus Walleij
2011-09-29  8:17                 ` Per Förlin
2011-09-29 20:16                   ` J Freyensee
2011-09-30  8:22                     ` Andrei E. Warkentin
2011-10-01  0:33                       ` J Freyensee
2011-10-02  6:20                         ` Andrei E. Warkentin
2011-10-03 18:01                           ` J Freyensee
2011-10-03 20:19                             ` Andrei Warkentin
2011-10-03 21:00                               ` J Freyensee
2011-10-04  7:59                                 ` Andrei E. Warkentin
2011-10-19 23:27                                   ` Praveen G K [this message]
2011-10-20 15:01                                     ` Andrei E. Warkentin
2011-10-20 15:10                                       ` Praveen G K
2011-10-20 15:26                                         ` Andrei Warkentin
2011-10-20 16:07                                           ` Praveen G K
2011-10-21  4:45                                             ` Andrei E. Warkentin
2011-09-29  7:05         ` Linus Walleij
2011-09-29  7:33           ` Linus Walleij
