linux-mtd.lists.infradead.org archive mirror
* Near full problem (old version)
@ 2014-03-24  7:08 Karsten Jeppesen
  2014-03-26  8:22 ` Andrea Adami
  0 siblings, 1 reply; 2+ messages in thread
From: Karsten Jeppesen @ 2014-03-24  7:08 UTC (permalink / raw)
  To: linux-mtd

Hi Guys,

I've been using UBIFS for a long time now. Needless to say, it works very well.
However, as every corner of such embedded solutions tends to fill up, I am faced with a serious problem that I don't know how to work around:

We are talking about the 2.6.32 kernel backport, but the problem may well be general.
I format the NOR flash, then populate it.
At the end of populating the flash I reach 100% fill, and I can actually see UBIFS packing the file system, eventually reaching 93%.
"umount" and "detach" both return OK, but once the target has been powered off and on again, UBIFS suddenly reports errors where there were none before:

[   38.100000] UBIFS error (pid 1935): ubifs_read_node: bad node type (255 but expected 1)
[   38.140000] UBIFS error (pid 1935): ubifs_read_node: bad node at LEB 410:96096, LEB mapping status 0
[   38.140000] UBIFS error (pid 1935): do_readpage: cannot read page 0 of inode 3090, error -22
[   46.670000] UBIFS error (pid 2575): ubifs_read_node: bad node type (255 but expected 1)
[   46.700000] UBIFS error (pid 2575): ubifs_read_node: bad node at LEB 410:96096, LEB mapping status 0
[   46.700000] UBIFS error (pid 2575): do_readpage: cannot read page 0 of inode 3090, error -22
[   52.370000] UBIFS error (pid 2686): ubifs_read_node: bad node type (255 but expected 1)
[   52.410000] UBIFS error (pid 2686): ubifs_read_node: bad node at LEB 410:96096, LEB mapping status 0
[   52.410000] UBIFS error (pid 2686): do_readpage: cannot read page 0 of inode 3090, error -22

Inserting delays after population does seem to influence the problem, but as of now I have no real solution.
Can I somehow tell when UBI has finished repacking and is in a safe-to-power-off state? (since "detach" is not the answer)


Med venlig hilsen / Sincerely yours,

Karsten Jeppesen, D.D.S;  Bsc. CS
Senior Software Engineer
Ext.: +45 72 17 56 69
Mobile phone: +45 25 66 00 23
______________________________________
SKOV A/S
Hedelund 4, Glyngoere, 7870 Roslev, Denmark
Tel.: +45 72 17 55 55 - Fax: +45 72 17 59 59
www.skov.com


* Re: Near full problem (old version)
  2014-03-24  7:08 Near full problem (old version) Karsten Jeppesen
@ 2014-03-26  8:22 ` Andrea Adami
  0 siblings, 0 replies; 2+ messages in thread
From: Andrea Adami @ 2014-03-26  8:22 UTC (permalink / raw)
  To: Karsten Jeppesen; +Cc: linux-mtd

On Mon, Mar 24, 2014 at 8:08 AM, Karsten Jeppesen <kjp@skov.dk> wrote:
> Hi Guys,
>
> I've been using UBIFS for a long time now. Needless to say, it works very well.
> However, as every corner of such embedded solutions tends to fill up, I am faced with a serious problem that I don't know how to work around:
>
> We are talking about the 2.6.32 kernel backport, but the problem may well be general.
> I format the NOR flash, then populate it.
> At the end of populating the flash I reach 100% fill, and I can actually see UBIFS packing the file system, eventually reaching 93%.
> "umount" and "detach" both return OK, but once the target has been powered off and on again, UBIFS suddenly reports errors where there were none before:
>
> [   38.100000] UBIFS error (pid 1935): ubifs_read_node: bad node type (255 but expected 1)
> [   38.140000] UBIFS error (pid 1935): ubifs_read_node: bad node at LEB 410:96096, LEB mapping status 0
> [   38.140000] UBIFS error (pid 1935): do_readpage: cannot read page 0 of inode 3090, error -22
> [   46.670000] UBIFS error (pid 2575): ubifs_read_node: bad node type (255 but expected 1)
> [   46.700000] UBIFS error (pid 2575): ubifs_read_node: bad node at LEB 410:96096, LEB mapping status 0
> [   46.700000] UBIFS error (pid 2575): do_readpage: cannot read page 0 of inode 3090, error -22
> [   52.370000] UBIFS error (pid 2686): ubifs_read_node: bad node type (255 but expected 1)
> [   52.410000] UBIFS error (pid 2686): ubifs_read_node: bad node at LEB 410:96096, LEB mapping status 0
> [   52.410000] UBIFS error (pid 2686): do_readpage: cannot read page 0 of inode 3090, error -22
>
> Inserting delays after population does seem to influence the problem, but as of now I have no real solution.
> Can I somehow tell when UBI has finished repacking and is in a safe-to-power-off state? (since "detach" is not the answer)
>
>
> Med venlig hilsen / Sincerely yours,
>
> Karsten Jeppesen, D.D.S;  Bsc. CS
> Senior Software Engineer
> Ext.: +45 72 17 56 69
> Mobile phone: +45 25 66 00 23
> ______________________________________
> SKOV A/S
> Hedelund 4, Glyngoere, 7870 Roslev, Denmark
> Tel.: +45 72 17 55 55 - Fax: +45 72 17 59 59
> www.skov.com
>
> ______________________________________________________
> Linux MTD discussion mailing list
> http://lists.infradead.org/mailman/listinfo/linux-mtd/


I'm using a more recent kernel with a very old and small NOR chip, and I
see corruption after mount/umount cycles.
Unfortunately I couldn't fully debug it; sometimes it repairs itself
after more cycles... I suspect a race between the background
threads.

I have some debug logs on GitHub:
https://github.com/andrea-adami/collie-nor-flash/blob/master/debug-20140203/ubi-repair.txt

In my specific case, the definitive fix is to disable both features in the chip's CFI extended query table:

    "Suspend erase on write":  extp->FeatureSupport &= ~512;
    "Simultaneous Operations": extp->SuspendCmdSupport &= ~1;

Suboptimal but stable.

Regards

Andrea
