linux-mtd.lists.infradead.org archive mirror
From: Tim Harvey <tharvey@gateworks.com>
To: Steve deRosier <derosier@gmail.com>
Cc: Richard Weinberger <richard@nod.at>, linux-mtd@lists.infradead.org
Subject: Re: ubi/ubifs performance comparison on two NAND devices
Date: Fri, 1 Mar 2019 08:41:06 -0800	[thread overview]
Message-ID: <CAJ+vNU12C4va7-cCKzxHgVftZ-omODefsnzmMMjFnQtR-+ZWbQ@mail.gmail.com> (raw)
In-Reply-To: <CALLGbRL6m1P8uoWVrAzfKzqem-pL6q5bbwhEWyHS0iCiuSLyQQ@mail.gmail.com>

On Thu, Feb 28, 2019 at 9:22 AM Steve deRosier <derosier@gmail.com> wrote:
>
> On Thu, Feb 28, 2019 at 8:41 AM Tim Harvey <tharvey@gateworks.com> wrote:
> >
> > Ok, thanks for the pointer. Is there a sysfs node that contains all of
> > those? I didn't see anything obvious. I can printk them for comparison
> > but I don't see this as a raw nand issue, I see it as a ubi/ubifs
> > issue. There is something going on at the ubi/ubifs layer that makes
> > the Cypress FLASH very slow for the ubi scan that occurs on attach and
> > the UBIFS resize that occurs on first mount.
>
> I don't follow your assertion. IMHO, if it were at the ubi/ubifs
> layer, the time it takes should be symmetrical for both of your
> flashes. Perhaps you're saying "I don't believe it has to do with the
> hardware layer"? Maybe. Still, I suggest looking at all layers and
> proving things beyond a reasonable doubt; otherwise you're likely to
> spend an inordinate amount of time looking in the wrong place.
>
> UBIFS sits on top of UBI. Which sits on top of the raw flash driver.
> Which sits on top of whatever bus or SoC driver that may be necessary
> (maybe there, maybe not). Which then sits on the actual hardware.
> Unless you have another method of testing the raw flash driver,
> through the exact same pathway UBI uses, I don't think you've
> eliminated it. The most likely scenario is that it's doing something
> pathological with your flash. Looking at the timing parameters it
> chooses is a good start, since IIRC you've said you're not choosing
> them yourself, you're letting the driver do so.
>
> Let's give an example - maybe with the new flash you've got a
> write-protect GPIO setup and that wasn't in the old configuration. And
> let's say it takes too long to toggle due to some really bad
> setup/hold times set by default because they're not configured. And
> the NAND driver writer implemented it with gpiolib, and toggles it
> even on a read, and there's some horrible timing bug in gpiolib... And
> since UBIFS touches every page during the scan...  boom - crazy extra
> time. This is a totally made up example, but it illustrates the type
> of odd non-obvious interaction that could happen even if you think
> everything is fine with the raw nand.
>
> Personally, I'd shove a bus analyzer on your NAND and take a look if
> the bus sequences it does on the "good" vs the "bad" chip case are
> similar. Likely that will tell you exactly why it takes so long which
> in turn will lead you to exactly what the problem is.
>
> If I had to guess, either there is a configuration error OR the nand
> driver you're using is choosing bad defaults OR there's a particular
> pathway in the driver that the UBI is exercising that isn't what a raw
> access would exercise and there's a funky bug there.
>
> Remember though - I'm not saying it isn't a bug in the UBI or UBIFS
> code, I just don't feel you've eliminated the more likely spot first:
> the code that actually deals with the chip in question. Go examine and
> understand the NAND driver and printk those timing parameters.
>

Steve,

I've compared erase/read/write speeds for both flashes, and the
Cypress flash is 2x slower than the Micron on a per-byte basis. That
is what I would expect: the datasheets list pretty much the same
timings per block, but the Micron's blocks are 2x larger and the chips
are the same overall size, so the Cypress needs 2x as many block
operations to cover the same amount of data.

So, at the raw erase/read/write level the Cypress is 2x slower than
the Micron, but the UBI scan is 7x slower (4s vs 28s) and the UBIFS
space-fixup is 100x slower (0.5s vs 50s).

I guess I've made a mess of describing the issue. I can dig in and
find the basic flash timings the kernel is using, but when I test
erase/read/write with flash_erase and dd over, say, 60M, I see the
expected 2x slowdown. I just don't understand why performance is so
much worse at the ubi and ubifs layers.
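To make the arithmetic concrete, here is a small sketch of the
expected-vs-observed slowdowns. The 2 GiB chip size and the 128 KiB /
256 KiB block sizes are illustrative assumptions, not datasheet values;
only the 2:1 block-size ratio and the measured times above come from
the actual setup:

```python
# Assumed geometry: same overall chip size, same per-block timings,
# but the Micron's blocks are 2x the size of the Cypress's.
chip_size = 2 * 1024**3          # 2 GiB (illustrative assumption)
micron_block = 256 * 1024        # 256 KiB (illustrative)
cypress_block = 128 * 1024       # 128 KiB (illustrative)

# Half-size blocks on the same-size chip mean twice as many block
# operations, hence twice the time at the same per-block cost.
expected_slowdown = (chip_size // cypress_block) / (chip_size // micron_block)
print(expected_slowdown)         # 2.0

# Slowdowns actually measured above.
ubi_scan = 28 / 4                # 7x   (4s -> 28s for the UBI attach scan)
space_fixup = 50 / 0.5           # 100x (0.5s -> 50s for UBIFS space-fixup)
print(ubi_scan, space_fixup)     # 7.0 100.0
```

The gap between the 2x predicted by block geometry and the 7x/100x
observed is exactly what the raw erase/read/write numbers cannot
explain.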

Tim

______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/

Thread overview: 8 messages
2019-02-27 20:39 ubi/ubifs performance comparison on two NAND devices Tim Harvey
2019-02-27 22:12 ` Richard Weinberger
2019-02-27 22:43   ` Tim Harvey
2019-02-27 22:59     ` Richard Weinberger
2019-02-28 16:40       ` Tim Harvey
2019-02-28 17:21         ` Steve deRosier
2019-03-01 16:41           ` Tim Harvey [this message]
2019-03-01 16:44             ` Richard Weinberger
