* UBIFS Panic
@ 2014-06-26 20:28 Akshay Bhat
  2014-06-27  2:36 ` hujianyang
  0 siblings, 1 reply; 19+ messages in thread
From: Akshay Bhat @ 2014-06-26 20:28 UTC (permalink / raw)
  To: linux-mtd

Hi,

I was performing a stress test on the UBI file system on an ARM-based embedded
board (BeagleBone). We have an SLC NAND flash and MTD_UBI_WL_THRESHOLD is set
to 4096. I have 2 scripts running in the background in an infinite while loop:
Script1: dd if=/dev/zero of=/var/db/test bs=2M count=1
Script2: dd if=/dev/urandom of=/var/log/test bs=2M count=1 2> /dev/null
Note: The above directories are mounted as (sync,relatime).

After running the scripts for 5 days, max_ec reached MTD_UBI_WL_THRESHOLD. At
this point I got panic 1 (see below) and the UBI volume switched to read-only
mode. I rebooted the board, changed the transfer size in the script from 2M to
140K, ran the scripts for 2+ days, and then got panic 2 (see below).
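
For future runs, the wear-levelling state can be watched from userspace via
UBI's sysfs attributes while the scripts run. A minimal sketch (the ubi0
device matches the report; the sysfs node only exists on a board with UBI
attached):

```shell
# Print a few ubi0 wear/space counters if the device is present.
# max_ec is the value that hit MTD_UBI_WL_THRESHOLD in this report.
for f in max_ec avail_eraseblocks bad_peb_count; do
    p=/sys/class/ubi/ubi0/$f
    if [ -r "$p" ]; then
        echo "$f: $(cat "$p")"
    fi
done
```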

What is the cause of these panics?

Thanks,
Akshay

Software versions
Kernel version: 3.8.13 (I have not tried the latest kernel or any patches)
MTD-utils version: 1.5.0

# ubinfo /dev/ubi0
ubi0
Volumes count:                           6
Logical eraseblock size:                 126976 bytes, 124.0 KiB
Total amount of logical eraseblocks:     1939 (246206464 bytes, 234.8 MiB)
Amount of available logical eraseblocks: 0 (0 bytes)
Maximum count of volumes:                128
Count of bad physical eraseblocks:       0
Count of reserved physical eraseblocks:  40
Current maximum erase counter value:     4736
Minimum input/output unit size:          2048 bytes
Character device major/minor:            244:0
Present volumes:                         0, 1, 2, 3, 4, 5

# mtdinfo -a
Count of MTD devices:           12
Present MTD devices:            mtd0, mtd1, mtd2, mtd3, mtd4, mtd5, mtd6, mtd7, mtd8, mtd9, mtd10, mtd11
Sysfs interface supported:      yes

mtd0
Name:                           SPL1
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          1 (131072 bytes, 128.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:0
Bad blocks are allowed:         true
Device is writable:             true

mtd1
Name:                           SPL2
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          1 (131072 bytes, 128.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:2
Bad blocks are allowed:         true
Device is writable:             true

mtd2
Name:                           SPL3
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          1 (131072 bytes, 128.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:4
Bad blocks are allowed:         true
Device is writable:             true

mtd3
Name:                           SPL4
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          1 (131072 bytes, 128.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:6
Bad blocks are allowed:         true
Device is writable:             true

mtd4
Name:                           U-boot
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          8 (1048576 bytes, 1024.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:8
Bad blocks are allowed:         true
Device is writable:             true

mtd5
Name:                           U-boot Backup
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          8 (1048576 bytes, 1024.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:10
Bad blocks are allowed:         true
Device is writable:             true

mtd6
Name:                           U-Boot Environment
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          1 (131072 bytes, 128.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:12
Bad blocks are allowed:         true
Device is writable:             true

mtd7
Name:                           Kernel
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          40 (5242880 bytes, 5.0 MiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:14
Bad blocks are allowed:         true
Device is writable:             true

mtd8
Name:                           Kernel Backup
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          40 (5242880 bytes, 5.0 MiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:16
Bad blocks are allowed:         true
Device is writable:             true

mtd9
Name:                           Device Tree
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          4 (524288 bytes, 512.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:18
Bad blocks are allowed:         true
Device is writable:             true

mtd10
Name:                           Device Tree Backup
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          4 (524288 bytes, 512.0 KiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:20
Bad blocks are allowed:         true
Device is writable:             true

mtd11
Name:                           RFS
Type:                           nand
Eraseblock size:                131072 bytes, 128.0 KiB
Amount of eraseblocks:          1939 (254148608 bytes, 242.4 MiB)
Minimum input/output unit size: 2048 bytes
Sub-page size:                  512 bytes
OOB size:                       64 bytes
Character device major/minor:   90:22
Bad blocks are allowed:         true
Device is writable:             true

Panic 1:
[    7.307987] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[250780.000679] UBIFS error (pid 29853): ubifs_leb_map: mapping LEB 46 failed, error -28
[250780.000709] UBIFS warning (pid 29853): ubifs_ro_mode: switched to read-only mode, error -28
[250780.000770] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d321>]
(ubifs_leb_map+0x7d/0xb4)
[250780.000807] [<c014d321>] (ubifs_leb_map+0x7d/0xb4) from [<c01531df>]
(ubifs_add_bud_to_log+0x1bf/0x214)
[250780.000851] [<c01531df>] (ubifs_add_bud_to_log+0x1bf/0x214) from
[<c01455fd>] (make_reservation+0x12d/0x274)
[250780.000884] [<c01455fd>] (make_reservation+0x12d/0x274) from 
[<c0145d65>]
(ubifs_jnl_write_data+0xf5/0x1a4)
[250780.000918] [<c0145d65>] (ubifs_jnl_write_data+0xf5/0x1a4) from 
[<c01479b5>]
(do_writepage+0x73/0x12e)
[250780.000960] [<c01479b5>] (do_writepage+0x73/0x12e) from [<c007780f>]
(__writepage+0xb/0x26)
[250780.000993] [<c007780f>] (__writepage+0xb/0x26) from [<c0077b0b>]
(write_cache_pages+0x151/0x1e8)
[250780.001026] [<c0077b0b>] (write_cache_pages+0x151/0x1e8) from 
[<c0077bc3>]
(generic_writepages+0x21/0x36)
[250780.001057] [<c0077bc3>] (generic_writepages+0x21/0x36) from 
[<c00736bb>]
(__filemap_fdatawrite_range+0x3b/0x42)
[250780.001087] [<c00736bb>] (__filemap_fdatawrite_range+0x3b/0x42) from
[<c0073741>] (filemap_write_and_wait_range+0x21/0x4a)
[250780.001118] [<c0073741>] (filemap_write_and_wait_range+0x21/0x4a) from
[<c0147bd9>] (ubifs_fsync+0x29/0x6c)
[250780.001153] [<c0147bd9>] (ubifs_fsync+0x29/0x6c) from [<c00ac21b>]
(vfs_fsync_range+0x1b/0x24)
[250780.001184] [<c00ac21b>] (vfs_fsync_range+0x1b/0x24) from [<c00ac28d>]
(generic_write_sync+0x4d/0x54)
[250780.001214] [<c00ac28d>] (generic_write_sync+0x4d/0x54) from 
[<c0073bcd>]
(generic_file_aio_write+0x71/0x8a)
[250780.001245] [<c0073bcd>] (generic_file_aio_write+0x71/0x8a) from
[<c01471a3>] (ubifs_aio_write+0xff/0x10c)
[250780.001289] [<c01471a3>] (ubifs_aio_write+0xff/0x10c) from [<c00945ed>]
(do_sync_write+0x61/0x8c)
[250780.001324] [<c00945ed>] (do_sync_write+0x61/0x8c) from [<c0094a8f>]
(vfs_write+0x5f/0x100)
[250780.001355] [<c0094a8f>] (vfs_write+0x5f/0x100) from [<c0094c9b>]
(sys_write+0x27/0x44)
[250780.001394] [<c0094c9b>] (sys_write+0x27/0x44) from [<c000c681>]
(ret_fast_syscall+0x1/0x46)
[250780.001424] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d325>]
(ubifs_leb_map+0x81/0xb4)
[250780.001455] [<c014d325>] (ubifs_leb_map+0x81/0xb4) from [<c01531df>]
(ubifs_add_bud_to_log+0x1bf/0x214)
[250780.001487] [<c01531df>] (ubifs_add_bud_to_log+0x1bf/0x214) from
[<c01455fd>] (make_reservation+0x12d/0x274)
[250780.001519] [<c01455fd>] (make_reservation+0x12d/0x274) from 
[<c0145d65>]
(ubifs_jnl_write_data+0xf5/0x1a4)
[250780.001550] [<c0145d65>] (ubifs_jnl_write_data+0xf5/0x1a4) from 
[<c01479b5>]
(do_writepage+0x73/0x12e)
[250780.001582] [<c01479b5>] (do_writepage+0x73/0x12e) from [<c007780f>]
(__writepage+0xb/0x26)
[250780.001613] [<c007780f>] (__writepage+0xb/0x26) from [<c0077b0b>]
(write_cache_pages+0x151/0x1e8)
[250780.001644] [<c0077b0b>] (write_cache_pages+0x151/0x1e8) from 
[<c0077bc3>]
(generic_writepages+0x21/0x36)
[250780.001675] [<c0077bc3>] (generic_writepages+0x21/0x36) from 
[<c00736bb>]
(__filemap_fdatawrite_range+0x3b/0x42)
[250780.001704] [<c00736bb>] (__filemap_fdatawrite_range+0x3b/0x42) from
[<c0073741>] (filemap_write_and_wait_range+0x21/0x4a)
[250780.001735] [<c0073741>] (filemap_write_and_wait_range+0x21/0x4a) from
[<c0147bd9>] (ubifs_fsync+0x29/0x6c)
[250780.001765] [<c0147bd9>] (ubifs_fsync+0x29/0x6c) from [<c00ac21b>]
(vfs_fsync_range+0x1b/0x24)
[250780.001794] [<c00ac21b>] (vfs_fsync_range+0x1b/0x24) from [<c00ac28d>]
(generic_write_sync+0x4d/0x54)
[250780.001824] [<c00ac28d>] (generic_write_sync+0x4d/0x54) from 
[<c0073bcd>]
(generic_file_aio_write+0x71/0x8a)
[250780.001854] [<c0073bcd>] (generic_file_aio_write+0x71/0x8a) from
[<c01471a3>] (ubifs_aio_write+0xff/0x10c)
[250780.001886] [<c01471a3>] (ubifs_aio_write+0xff/0x10c) from [<c00945ed>]
(do_sync_write+0x61/0x8c)
[250780.001918] [<c00945ed>] (do_sync_write+0x61/0x8c) from [<c0094a8f>]
(vfs_write+0x5f/0x100)
[250780.001950] [<c0094a8f>] (vfs_write+0x5f/0x100) from [<c0094c9b>]
(sys_write+0x27/0x44)
[250780.001980] [<c0094c9b>] (sys_write+0x27/0x44) from [<c000c681>]
(ret_fast_syscall+0x1/0x46)
[250780.008005] UBIFS error (pid 29853): do_commit: commit failed, error -30
[250780.008037] UBIFS error (pid 29853): do_writepage: cannot write page 501 of inode 72, error -30
[250780.153476] UBIFS error (pid 722): ubifs_leb_map: mapping LEB 18 failed, error -28
[250780.153505] UBIFS warning (pid 722): ubifs_ro_mode: switched to read-only mode, error -28
[250780.153566] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d321>]
(ubifs_leb_map+0x7d/0xb4)
[250780.153604] [<c014d321>] (ubifs_leb_map+0x7d/0xb4) from [<c01531df>]
(ubifs_add_bud_to_log+0x1bf/0x214)
[250780.153649] [<c01531df>] (ubifs_add_bud_to_log+0x1bf/0x214) from
[<c01455fd>] (make_reservation+0x12d/0x274)
[250780.153682] [<c01455fd>] (make_reservation+0x12d/0x274) from 
[<c0145d65>]
(ubifs_jnl_write_data+0xf5/0x1a4)
[250780.153715] [<c0145d65>] (ubifs_jnl_write_data+0xf5/0x1a4) from 
[<c01479b5>]
(do_writepage+0x73/0x12e)
[250780.153756] [<c01479b5>] (do_writepage+0x73/0x12e) from [<c007780f>]
(__writepage+0xb/0x26)
[250780.153790] [<c007780f>] (__writepage+0xb/0x26) from [<c0077b0b>]
(write_cache_pages+0x151/0x1e8)
[250780.153822] [<c0077b0b>] (write_cache_pages+0x151/0x1e8) from 
[<c0077bc3>]
(generic_writepages+0x21/0x36)
[250780.153854] [<c0077bc3>] (generic_writepages+0x21/0x36) from 
[<c00736bb>]
(__filemap_fdatawrite_range+0x3b/0x42)
[250780.153884] [<c00736bb>] (__filemap_fdatawrite_range+0x3b/0x42) from
[<c0073741>] (filemap_write_and_wait_range+0x21/0x4a)
[250780.153915] [<c0073741>] (filemap_write_and_wait_range+0x21/0x4a) from
[<c0147bd9>] (ubifs_fsync+0x29/0x6c)
[250780.153950] [<c0147bd9>] (ubifs_fsync+0x29/0x6c) from [<c00ac21b>]
(vfs_fsync_range+0x1b/0x24)
[250780.153980] [<c00ac21b>] (vfs_fsync_range+0x1b/0x24) from [<c00ac28d>]
(generic_write_sync+0x4d/0x54)
[250780.154009] [<c00ac28d>] (generic_write_sync+0x4d/0x54) from 
[<c0073bcd>]
(generic_file_aio_write+0x71/0x8a)
[250780.154040] [<c0073bcd>] (generic_file_aio_write+0x71/0x8a) from
[<c01471a3>] (ubifs_aio_write+0xff/0x10c)
[250780.154084] [<c01471a3>] (ubifs_aio_write+0xff/0x10c) from [<c00945ed>]
(do_sync_write+0x61/0x8c)
[250780.154119] [<c00945ed>] (do_sync_write+0x61/0x8c) from [<c0094a8f>]
(vfs_write+0x5f/0x100)
[250780.154150] [<c0094a8f>] (vfs_write+0x5f/0x100) from [<c0094c9b>]
(sys_write+0x27/0x44)
[250780.154188] [<c0094c9b>] (sys_write+0x27/0x44) from [<c000c681>]
(ret_fast_syscall+0x1/0x46)
[250780.154218] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d325>]
(ubifs_leb_map+0x81/0xb4)
[250780.154248] [<c014d325>] (ubifs_leb_map+0x81/0xb4) from [<c01531df>]
(ubifs_add_bud_to_log+0x1bf/0x214)
[250780.154281] [<c01531df>] (ubifs_add_bud_to_log+0x1bf/0x214) from
[<c01455fd>] (make_reservation+0x12d/0x274)
[250780.154312] [<c01455fd>] (make_reservation+0x12d/0x274) from 
[<c0145d65>]
(ubifs_jnl_write_data+0xf5/0x1a4)
[250780.154344] [<c0145d65>] (ubifs_jnl_write_data+0xf5/0x1a4) from 
[<c01479b5>]
(do_writepage+0x73/0x12e)
[250780.154478] [<c01479b5>] (do_writepage+0x73/0x12e) from [<c007780f>]
(__writepage+0xb/0x26)
[250780.154512] [<c007780f>] (__writepage+0xb/0x26) from [<c0077b0b>]
(write_cache_pages+0x151/0x1e8)
[250780.154543] [<c0077b0b>] (write_cache_pages+0x151/0x1e8) from 
[<c0077bc3>]
(generic_writepages+0x21/0x36)
[250780.154575] [<c0077bc3>] (generic_writepages+0x21/0x36) from 
[<c00736bb>]
(__filemap_fdatawrite_range+0x3b/0x42)
[250780.154605] [<c00736bb>] (__filemap_fdatawrite_range+0x3b/0x42) from
[<c0073741>] (filemap_write_and_wait_range+0x21/0x4a)
[250780.154636] [<c0073741>] (filemap_write_and_wait_range+0x21/0x4a) from
[<c0147bd9>] (ubifs_fsync+0x29/0x6c)
[250780.154666] [<c0147bd9>] (ubifs_fsync+0x29/0x6c) from [<c00ac21b>]
(vfs_fsync_range+0x1b/0x24)
[250780.154695] [<c00ac21b>] (vfs_fsync_range+0x1b/0x24) from [<c00ac28d>]
(generic_write_sync+0x4d/0x54)
[250780.154724] [<c00ac28d>] (generic_write_sync+0x4d/0x54) from 
[<c0073bcd>]
(generic_file_aio_write+0x71/0x8a)
[250780.154755] [<c0073bcd>] (generic_file_aio_write+0x71/0x8a) from
[<c01471a3>] (ubifs_aio_write+0xff/0x10c)
[250780.154788] [<c01471a3>] (ubifs_aio_write+0xff/0x10c) from [<c00945ed>]
(do_sync_write+0x61/0x8c)
[250780.154820] [<c00945ed>] (do_sync_write+0x61/0x8c) from [<c0094a8f>]
(vfs_write+0x5f/0x100)
[250780.154851] [<c0094a8f>] (vfs_write+0x5f/0x100) from [<c0094c9b>]
(sys_write+0x27/0x44)
[250780.154883] [<c0094c9b>] (sys_write+0x27/0x44) from [<c000c681>]
(ret_fast_syscall+0x1/0x46)
[250780.158234] UBIFS error (pid 29885): make_reservation: cannot reserve 216 bytes in jhead 1, error -30
[250780.158443] UBIFS error (pid 722): do_commit: commit failed, error -30
[250780.158473] UBIFS error (pid 722): do_writepage: cannot write page 0 of
inode 77, error -30
[250799.438458] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250799.438494] UBIFS error (pid 620): do_writepage: cannot write page 502 of inode 72, error -30
[250804.437920] UBIFS error (pid 620): make_reservation: cannot reserve 450 bytes in jhead 2, error -30
[250804.437955] UBIFS error (pid 620): do_writepage: cannot write page 48 of inode 5566, error -30
[250829.438391] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250829.438427] UBIFS error (pid 620): do_writepage: cannot write page 503 of inode 72, error -30
[250834.438388] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250834.438424] UBIFS error (pid 620): do_writepage: cannot write page 504 of inode 72, error -30
[250839.438350] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250839.438386] UBIFS error (pid 620): do_writepage: cannot write page 505 of inode 72, error -30
[250844.438359] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250844.438396] UBIFS error (pid 620): do_writepage: cannot write page 506 of inode 72, error -30
[250849.438449] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250849.438485] UBIFS error (pid 620): do_writepage: cannot write page 507 of inode 72, error -30
[250854.438453] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250854.438488] UBIFS error (pid 620): do_writepage: cannot write page 508 of inode 72, error -30
[250859.438390] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250859.438426] UBIFS error (pid 620): do_writepage: cannot write page 509 of inode 72, error -30
[250864.438373] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250864.438410] UBIFS error (pid 620): do_writepage: cannot write page 510 of inode 72, error -30
[250869.438371] UBIFS error (pid 620): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[250869.438407] UBIFS error (pid 620): do_writepage: cannot write page 511 of inode 72, error -30
[251162.922358] UBIFS error (pid 14262): ubifs_leb_map: mapping LEB 816 failed, error -28
[251162.922389] UBIFS warning (pid 14262): ubifs_ro_mode: switched to read-only mode, error -28
[251162.922449] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d321>]
(ubifs_leb_map+0x7d/0xb4)
[251162.922486] [<c014d321>] (ubifs_leb_map+0x7d/0xb4) from [<c01531df>]
(ubifs_add_bud_to_log+0x1bf/0x214)
[251162.922529] [<c01531df>] (ubifs_add_bud_to_log+0x1bf/0x214) from
[<c01455fd>] (make_reservation+0x12d/0x274)
[251162.922562] [<c01455fd>] (make_reservation+0x12d/0x274) from 
[<c01467ab>]
(ubifs_jnl_truncate+0x243/0x420)
[251162.922597] [<c01467ab>] (ubifs_jnl_truncate+0x243/0x420) from 
[<c0147e67>]
(ubifs_setattr+0x197/0x2cc)
[251162.922637] [<c0147e67>] (ubifs_setattr+0x197/0x2cc) from [<c00a37df>]
(notify_change+0x18b/0x240)
[251162.922681] [<c00a37df>] (notify_change+0x18b/0x240) from [<c0093a5b>]
(do_truncate+0x4d/0x62)
[251162.922724] [<c0093a5b>] (do_truncate+0x4d/0x62) from [<c009cc0f>]
(do_last.isra.29+0x637/0x700)
[251162.922758] [<c009cc0f>] (do_last.isra.29+0x637/0x700) from [<c009cd4f>]
(path_openat+0x77/0x2b0)
[251162.922790] [<c009cd4f>] (path_openat+0x77/0x2b0) from [<c009d12d>]
(do_filp_open+0x1b/0x4a)
[251162.922823] [<c009d12d>] (do_filp_open+0x1b/0x4a) from [<c0094453>]
(do_sys_open+0xbd/0x126)
[251162.922863] [<c0094453>] (do_sys_open+0xbd/0x126) from [<c000c681>]
(ret_fast_syscall+0x1/0x46)
[251162.922894] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d325>]
(ubifs_leb_map+0x81/0xb4)
[251162.922980] [<c014d325>] (ubifs_leb_map+0x81/0xb4) from [<c01531df>]
(ubifs_add_bud_to_log+0x1bf/0x214)
[251162.923013] [<c01531df>] (ubifs_add_bud_to_log+0x1bf/0x214) from
[<c01455fd>] (make_reservation+0x12d/0x274)
[251162.923046] [<c01455fd>] (make_reservation+0x12d/0x274) from 
[<c01467ab>]
(ubifs_jnl_truncate+0x243/0x420)
[251162.923078] [<c01467ab>] (ubifs_jnl_truncate+0x243/0x420) from 
[<c0147e67>]
(ubifs_setattr+0x197/0x2cc)
[251162.923111] [<c0147e67>] (ubifs_setattr+0x197/0x2cc) from [<c00a37df>]
(notify_change+0x18b/0x240)
[251162.923143] [<c00a37df>] (notify_change+0x18b/0x240) from [<c0093a5b>]
(do_truncate+0x4d/0x62)
[251162.923176] [<c0093a5b>] (do_truncate+0x4d/0x62) from [<c009cc0f>]
(do_last.isra.29+0x637/0x700)
[251162.923209] [<c009cc0f>] (do_last.isra.29+0x637/0x700) from [<c009cd4f>]
(path_openat+0x77/0x2b0)
[251162.923241] [<c009cd4f>] (path_openat+0x77/0x2b0) from [<c009d12d>]
(do_filp_open+0x1b/0x4a)
[251162.923273] [<c009d12d>] (do_filp_open+0x1b/0x4a) from [<c0094453>]
(do_sys_open+0xbd/0x126)
[251162.923304] [<c0094453>] (do_sys_open+0xbd/0x126) from [<c000c681>]
(ret_fast_syscall+0x1/0x46)
[251162.925980] UBIFS error (pid 14262): do_commit: commit failed, error -30

Panic 2:
[81258.398060] UBIFS error (pid 292): ubifs_leb_write: writing 2048 bytes to LEB 4:0 failed, error -28
[81258.398089] UBIFS warning (pid 292): ubifs_ro_mode: switched to read-only mode, error -28
[81258.398164] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d0df>]
(ubifs_leb_write+0x8b/0xd8)
[81258.398205] [<c014d0df>] (ubifs_leb_write+0x8b/0xd8) from [<c0153339>]
(ubifs_log_start_commit+0x105/0x268)
[81258.398238] [<c0153339>] (ubifs_log_start_commit+0x105/0x268) from
[<c0153d4d>] (do_commit+0x147/0x3da)
[81258.398267] [<c0153d4d>] (do_commit+0x147/0x3da) from [<c015415d>]
(ubifs_bg_thread+0xd7/0x106)
[81258.398307] [<c015415d>] (ubifs_bg_thread+0xd7/0x106) from [<c003cd5b>]
(kthread+0x61/0x72)
[81258.398347] [<c003cd5b>] (kthread+0x61/0x72) from [<c000c73d>]
(ret_from_fork+0x11/0x34)
[81258.398380] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d0e3>]
(ubifs_leb_write+0x8f/0xd8)
[81258.398412] [<c014d0e3>] (ubifs_leb_write+0x8f/0xd8) from [<c0153339>]
(ubifs_log_start_commit+0x105/0x268)
[81258.398441] [<c0153339>] (ubifs_log_start_commit+0x105/0x268) from
[<c0153d4d>] (do_commit+0x147/0x3da)
[81258.398471] [<c0153d4d>] (do_commit+0x147/0x3da) from [<c015415d>]
(ubifs_bg_thread+0xd7/0x106)
[81258.398502] [<c015415d>] (ubifs_bg_thread+0xd7/0x106) from [<c003cd5b>]
(kthread+0x61/0x72)
[81258.398595] [<c003cd5b>] (kthread+0x61/0x72) from [<c000c73d>]
(ret_from_fork+0x11/0x34)
[81258.398619] UBIFS error (pid 292): do_commit: commit failed, error -28
[81260.895318] UBIFS error (pid 621): make_reservation: cannot reserve 250 bytes in jhead 2, error -30
[81260.895350] UBIFS error (pid 621): do_writepage: cannot write page 2 of inode 11356, error -30
[81285.894788] UBIFS error (pid 621): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[81285.894820] UBIFS error (pid 621): ubifs_write_inode: can't write inode 5586, error -30
[81438.783604] UBIFS error (pid 31441): ubifs_leb_map: mapping LEB 27 failed, error -28
[81438.783634] UBIFS warning (pid 31441): ubifs_ro_mode: switched to read-only mode, error -28
[81438.783693] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d321>]
(ubifs_leb_map+0x7d/0xb4)
[81438.783731] [<c014d321>] (ubifs_leb_map+0x7d/0xb4) from [<c01531df>]
(ubifs_add_bud_to_log+0x1bf/0x214)
[81438.783775] [<c01531df>] (ubifs_add_bud_to_log+0x1bf/0x214) from 
[<c01455fd>]
(make_reservation+0x12d/0x274)
[81438.783808] [<c01455fd>] (make_reservation+0x12d/0x274) from [<c0145e6b>]
(ubifs_jnl_write_inode+0x57/0x138)
[81438.783842] [<c0145e6b>] (ubifs_jnl_write_inode+0x57/0x138) from 
[<c014a225>]
(ubifs_write_inode+0x69/0xcc)
[81438.783875] [<c014a225>] (ubifs_write_inode+0x69/0xcc) from [<c0147b43>]
(ubifs_writepage+0xd3/0x140)
[81438.783916] [<c0147b43>] (ubifs_writepage+0xd3/0x140) from [<c007780f>]
(__writepage+0xb/0x26)
[81438.783950] [<c007780f>] (__writepage+0xb/0x26) from [<c0077b0b>]
(write_cache_pages+0x151/0x1e8)
[81438.783982] [<c0077b0b>] (write_cache_pages+0x151/0x1e8) from 
[<c0077bc3>]
(generic_writepages+0x21/0x36)
[81438.784014] [<c0077bc3>] (generic_writepages+0x21/0x36) from [<c00736bb>]
(__filemap_fdatawrite_range+0x3b/0x42)
[81438.784044] [<c00736bb>] (__filemap_fdatawrite_range+0x3b/0x42) from
[<c0073741>] (filemap_write_and_wait_range+0x21/0x4a)
[81438.784075] [<c0073741>] (filemap_write_and_wait_range+0x21/0x4a) from
[<c0147bd9>] (ubifs_fsync+0x29/0x6c)
[81438.784109] [<c0147bd9>] (ubifs_fsync+0x29/0x6c) from [<c00ac21b>]
(vfs_fsync_range+0x1b/0x24)
[81438.784140] [<c00ac21b>] (vfs_fsync_range+0x1b/0x24) from [<c00ac28d>]
(generic_write_sync+0x4d/0x54)
[81438.784170] [<c00ac28d>] (generic_write_sync+0x4d/0x54) from [<c0073bcd>]
(generic_file_aio_write+0x71/0x8a)
[81438.784201] [<c0073bcd>] (generic_file_aio_write+0x71/0x8a) from 
[<c01471a3>]
(ubifs_aio_write+0xff/0x10c)
[81438.784245] [<c01471a3>] (ubifs_aio_write+0xff/0x10c) from [<c00945ed>]
(do_sync_write+0x61/0x8c)
[81438.784280] [<c00945ed>] (do_sync_write+0x61/0x8c) from [<c0094a8f>]
(vfs_write+0x5f/0x100)
[81438.784312] [<c0094a8f>] (vfs_write+0x5f/0x100) from [<c0094c9b>]
(sys_write+0x27/0x44)
[81438.784350] [<c0094c9b>] (sys_write+0x27/0x44) from [<c000c681>]
(ret_fast_syscall+0x1/0x46)
[81438.784380] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c014d325>]
(ubifs_leb_map+0x81/0xb4)
[81438.784411] [<c014d325>] (ubifs_leb_map+0x81/0xb4) from [<c01531df>]
(ubifs_add_bud_to_log+0x1bf/0x214)
[81438.784442] [<c01531df>] (ubifs_add_bud_to_log+0x1bf/0x214) from 
[<c01455fd>]
(make_reservation+0x12d/0x274)
[81438.784474] [<c01455fd>] (make_reservation+0x12d/0x274) from [<c0145e6b>]
(ubifs_jnl_write_inode+0x57/0x138)
[81438.784505] [<c0145e6b>] (ubifs_jnl_write_inode+0x57/0x138) from 
[<c014a225>]
(ubifs_write_inode+0x69/0xcc)
[81438.784537] [<c014a225>] (ubifs_write_inode+0x69/0xcc) from [<c0147b43>]
(ubifs_writepage+0xd3/0x140)
[81438.784569] [<c0147b43>] (ubifs_writepage+0xd3/0x140) from [<c007780f>]
(__writepage+0xb/0x26)
[81438.784600] [<c007780f>] (__writepage+0xb/0x26) from [<c0077b0b>]
(write_cache_pages+0x151/0x1e8)
[81438.784632] [<c0077b0b>] (write_cache_pages+0x151/0x1e8) from 
[<c0077bc3>]
(generic_writepages+0x21/0x36)
[81438.784662] [<c0077bc3>] (generic_writepages+0x21/0x36) from [<c00736bb>]
(__filemap_fdatawrite_range+0x3b/0x42)
[81438.784693] [<c00736bb>] (__filemap_fdatawrite_range+0x3b/0x42) from
[<c0073741>] (filemap_write_and_wait_range+0x21/0x4a)
[81438.784724] [<c0073741>] (filemap_write_and_wait_range+0x21/0x4a) from
[<c0147bd9>] (ubifs_fsync+0x29/0x6c)
[81438.784754] [<c0147bd9>] (ubifs_fsync+0x29/0x6c) from [<c00ac21b>]
(vfs_fsync_range+0x1b/0x24)
[81438.784783] [<c00ac21b>] (vfs_fsync_range+0x1b/0x24) from [<c00ac28d>]
(generic_write_sync+0x4d/0x54)
[81438.784812] [<c00ac28d>] (generic_write_sync+0x4d/0x54) from [<c0073bcd>]
(generic_file_aio_write+0x71/0x8a)
[81438.784843] [<c0073bcd>] (generic_file_aio_write+0x71/0x8a) from 
[<c01471a3>]
(ubifs_aio_write+0xff/0x10c)
[81438.784876] [<c01471a3>] (ubifs_aio_write+0xff/0x10c) from [<c00945ed>]
(do_sync_write+0x61/0x8c)
[81438.784909] [<c00945ed>] (do_sync_write+0x61/0x8c) from [<c0094a8f>]
(vfs_write+0x5f/0x100)
[81438.784940] [<c0094a8f>] (vfs_write+0x5f/0x100) from [<c0094c9b>]
(sys_write+0x27/0x44)
[81438.784972] [<c0094c9b>] (sys_write+0x27/0x44) from [<c000c681>]
(ret_fast_syscall+0x1/0x46)
[81438.785011] UBIFS error (pid 31441): do_commit: commit failed, error -30
[81438.785034] UBIFS error (pid 31441): ubifs_write_inode: can't write inode 79, error -30


* Re: UBIFS Panic
  2014-06-26 20:28 UBIFS Panic Akshay Bhat
@ 2014-06-27  2:36 ` hujianyang
  2014-06-30 13:01   ` Akshay Bhat
  0 siblings, 1 reply; 19+ messages in thread
From: hujianyang @ 2014-06-27  2:36 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd

On 2014/6/27 4:28, Akshay Bhat wrote:
> Hi,
> 
> I was performing a stress test on the UBI file system on an ARM-based embedded board
> (BeagleBone). We have an SLC NAND flash and MTD_UBI_WL_THRESHOLD is set to
> 4096. I have 2 scripts running in the background in an infinite while loop:
> Script1: dd if=/dev/zero of=/var/db/test bs=2M count=1
> Script2: dd if=/dev/urandom of=/var/log/test bs=2M count=1 2> /dev/null
> Note: The above directories are mounted as (sync,relatime).
> 

How did you write the data to the flash? What are the partitions on your system?
Did you use MTD_UBI_FASTMAP?

> After running the scripts for 5 days, max_ec reached MTD_UBI_WL_THRESHOLD.
> At this point I got panic 1 (see below) and the UBI volume switched to
> read-only mode. I rebooted the board, changed the transfer size in the
> script from 2M to 140K, ran the scripts for 2+ days, and then got panic 2
> (see below).

Did you try unmounting after this error happened and mounting the partition
again, then re-running your scripts to see what happens?

> [81438.785011] UBIFS error (pid 31441): do_commit: commit failed, error -30
> [81438.785034] UBIFS error (pid 31441): ubifs_write_inode: can't write inode 79, error -30
>

The later error -30 is caused by the earlier error -28, which is reported by
the UBI layer. Did you run df to see how much space is left on your device?

I think each time you get an error -28 from the UBI layer you should also see
an ubi_err, but I don't see one in your log. Does anyone else know something
about this?
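
The negative codes in these logs are negated Linux errno values, which can be
decoded on any Linux host (this assumes python3 is available; it is not part
of the original report):

```shell
# -28 and -30 from the log are -ENOSPC and -EROFS respectively.
python3 -c 'import errno, os
for e in (errno.ENOSPC, errno.EROFS):
    print(e, os.strerror(e))'
# prints:
# 28 No space left on device
# 30 Read-only file system
```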


* Re: UBIFS Panic
  2014-06-27  2:36 ` hujianyang
@ 2014-06-30 13:01   ` Akshay Bhat
  2014-06-30 14:48     ` Richard Weinberger
  2014-07-01  0:58     ` hujianyang
  0 siblings, 2 replies; 19+ messages in thread
From: Akshay Bhat @ 2014-06-30 13:01 UTC (permalink / raw)
  To: hujianyang; +Cc: linux-mtd

Thanks for your response. Answers in-line.

On Thu 26 Jun 2014 10:36:00 PM EDT, hujianyang wrote:
> How did you release data on the flash? What's the partitions on your system?
> Did you use MTD_UBI_FASTMAP?

Image was flashed using the below command:
ubiformat /dev/mtd11 -f rootfs.ubi -s 512 -O 2048

UBI fastmap is enabled.
CONFIG_MTD_UBI_FASTMAP=y

mtd11 holds the root file system and has 6 volumes. Contents of ubinize.cfg:
[rootfs]
mode=ubi
#image=
vol_id=0
vol_size=100MiB
vol_type=dynamic
vol_name=rootfs

[rootfs2]
mode=ubi
vol_id=1
vol_size=100MiB
vol_type=dynamic
vol_name=rootfs2

[database]
mode=ubi
vol_id=2
vol_size=7MiB
vol_type=dynamic
vol_name=database

[database2]
mode=ubi
vol_id=3
vol_size=7MiB
vol_type=dynamic
vol_name=database2

[logging]
mode=ubi
vol_id=4
vol_size=7MiB
vol_type=dynamic
vol_name=logging

[firmware]
mode=ubi
vol_id=5
vol_size=7MiB
vol_type=dynamic
vol_name=firmware
vol_flags=autoresize
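
For reference, a config like the above is normally fed to ubinize and the
result flashed with ubiformat. A hedged sketch (the -m/-p/-s values mirror
the flash geometry from the mtdinfo output above, but the exact original
build command is an assumption):

```shell
# Build the UBI image from ubinize.cfg: min I/O 2048 bytes, PEB 128 KiB,
# sub-page 512 bytes, matching the NAND geometry reported by mtdinfo.
ubinize -o rootfs.ubi -m 2048 -p 128KiB -s 512 ubinize.cfg

# Flash it to the RFS partition (this command is from the reply above).
ubiformat /dev/mtd11 -f rootfs.ubi -s 512 -O 2048
```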

> Did you try umount after this error happen and mount partition again, then
> re-run your scripts to see what will happen?

I am not able to mount the partition again after getting the error:

######At the time of panic############
# df
Filesystem           1024-blocks    Used Available Use% Mounted on
rootfs                   92780     32792     59988  35% /
ubi0:rootfs              92780     32792     59988  35% /
tmpfs                   125800        36    125764   0% /tmp
tmpfs                   125800         0    125800   0% /dev/shm
tmpfs                   125800        68    125732   0% /var/run
ubi0:logging              4816      1984      2548  44% /var/log
ubi0:database             4816       456      4080  10% /var/db
tmpfs                   125800         4    125796   0% /var/spool/cron
tmpfs                   125800         0    125800   0% /var/sftp

# mount
rootfs on / type rootfs (rw)
ubi0:rootfs on / type ubifs (ro,relatime)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
none on /dev/pts type devpts (rw,relatime,mode=600)
tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
tmpfs on /var/run type tmpfs (rw,relatime,mode=777)
ubi0:logging on /var/log type ubifs (ro,sync,relatime)
ubi0:database on /var/db type ubifs (ro,sync,relatime)
tmpfs on /var/spool/cron type tmpfs (rw,relatime,mode=755)
tmpfs on /var/sftp type tmpfs (rw,relatime,mode=755)

# umount /var/log
# umount /var/db
# mount -t ubifs ubi0:logging /var/log
mount: mounting ubi0:logging on /var/log failed: No space left on device
# mount -t ubifs ubi0:database /var/db
mount: mounting ubi0:database on /var/db failed: No space left on device

# mount
rootfs on / type rootfs (rw)
ubi0:rootfs on / type ubifs (ro,relatime)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
none on /dev/pts type devpts (rw,relatime,mode=600)
tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
tmpfs on /var/run type tmpfs (rw,relatime,mode=777)
tmpfs on /var/spool/cron type tmpfs (rw,relatime,mode=755)
tmpfs on /var/sftp type tmpfs (rw,relatime,mode=755)

# df
Filesystem           1024-blocks    Used Available Use% Mounted on
rootfs                   92780     32792     59988  35% /
ubi0:rootfs              92780     32792     59988  35% /
tmpfs                   125800        36    125764   0% /tmp
tmpfs                   125800         0    125800   0% /dev/shm
tmpfs                   125800        60    125740   0% /var/run
tmpfs                   125800         4    125796   0% /var/spool/cron
tmpfs                   125800         0    125800   0% /var/sftp
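When mounts start failing with ENOSPC like this, it helps to check what the UBI layer beneath UBIFS reports. A sketch reading two standard UBI sysfs attributes; the SYSFS_UBI variable is an assumption added here so the snippet can also be pointed at saved copies of the files:

```shell
# Sketch: inspect UBI-level free space when UBIFS reports ENOSPC.
# avail_eraseblocks and max_ec are standard /sys/class/ubi attributes;
# SYSFS_UBI is a parameter added for illustration.
SYSFS_UBI=${SYSFS_UBI:-/sys/class/ubi/ubi0}
if [ ! -r "$SYSFS_UBI/max_ec" ]; then
    echo "no UBI device at $SYSFS_UBI"
    exit 0
fi
avail=$(cat "$SYSFS_UBI/avail_eraseblocks")
maxec=$(cat "$SYSFS_UBI/max_ec")
echo "free PEBs: $avail, max erase counter: $maxec"
```

With the ubinfo output from the start of this thread ("Amount of available logical eraseblocks: 0"), this would show 0 free PEBs even though df still shows free space inside the filesystems.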

>> [81438.785011] UBIFS error (pid 31441): do_commit: commit failed, error -30
>> [81438.785034] UBIFS error (pid 31441): ubifs_write_inode: can't write inode 79, error -30
>>
>
> Later error -30 is caused by former error -28 which is reported by UBI layer.
> Did you run df to see how much space left on your device?

Before running the scripts (right after boot)
# df
Filesystem           1024-blocks    Used Available Use% Mounted on
rootfs                   92780     32780     60000  35% /
ubi0:rootfs              92780     32780     60000  35% /
tmpfs                   125800        36    125764   0% /tmp
tmpfs                   125800         0    125800   0% /dev/shm
tmpfs                   125800        68    125732   0% /var/run
ubi0:logging              4816       324      4212   7% /var/log
ubi0:database             4816       248      4288   5% /var/db
tmpfs                   125800         4    125796   0% /var/spool/cron
tmpfs                   125800         0    125800   0% /var/sftp

At the time of the panic:
# df
Filesystem           1024-blocks    Used Available Use% Mounted on
rootfs                   92780     32792     59988  35% /
ubi0:rootfs              92780     32792     59988  35% /
tmpfs                   125800        36    125764   0% /tmp
tmpfs                   125800         0    125800   0% /dev/shm
tmpfs                   125800        68    125732   0% /var/run
ubi0:logging              4816      1984      2548  44% /var/log
ubi0:database             4816       456      4080  10% /var/db
tmpfs                   125800         4    125796   0% /var/spool/cron
tmpfs                   125800         0    125800   0% /var/sftp


>
> I think each time you get an error -28 from UBI layer, you will see an ubi_err.
> But I didn't see it in your log. Does anyone else know something about it?
>
> .
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-06-30 13:01   ` Akshay Bhat
@ 2014-06-30 14:48     ` Richard Weinberger
  2014-06-30 17:23       ` Akshay Bhat
  2014-07-01  0:58     ` hujianyang
  1 sibling, 1 reply; 19+ messages in thread
From: Richard Weinberger @ 2014-06-30 14:48 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd, hujianyang

On Mon, Jun 30, 2014 at 3:01 PM, Akshay Bhat <abhat@lutron.com> wrote:
> Thanks for your response. Answers in-line.
>
>
> On Thu 26 Jun 2014 10:36:00 PM EDT, hujianyang wrote:
>>
>> How did you release data on the flash? What's the partitions on your
>> system?
>> Did you use MTD_UBI_FASTMAP?
>
>
> Image was flashed using the below command:
> ubiformat /dev/mtd11 -f rootfs.ubi -s 512 -O 2048
>
> UBI fastmap is enabled.
> CONFIG_MTD_UBI_FASTMAP=y

Do you also use it or is it just enabled?

> mtd11 is for Rootfs and it has 6 volumes. Contents of ubinize.cfg:
> [rootfs]
> mode=ubi
> #image=
> vol_id=0
> vol_size=100MiB
> vol_type=dynamic
> vol_name=rootfs
>
> [rootfs2]
> mode=ubi
> vol_id=1
> vol_size=100MiB
> vol_type=dynamic
> vol_name=rootfs2
>
> [database]
> mode=ubi
> vol_id=2
> vol_size=7MiB
> vol_type=dynamic
> vol_name=database
>
> [database2]
> mode=ubi
> vol_id=3
> vol_size=7MiB
> vol_type=dynamic
> vol_name=database2
>
> [logging]
> mode=ubi
> vol_id=4
> vol_size=7MiB
> vol_type=dynamic
> vol_name=logging
>
> [firmware]
> mode=ubi
> vol_id=5
> vol_size=7MiB
> vol_type=dynamic
> vol_name=firmware
> vol_flags=autoresize
>
>
>> Did you try umount after this error happen and mount partition again, then
>> re-run your scripts to see what will happen?
>
>
> I am not able mount the partition again after getting the error
>
> ######At the time of panic############
> # df
> Filesystem           1024-blocks    Used Available Use% Mounted on
> rootfs                   92780     32792     59988  35% /
> ubi0:rootfs              92780     32792     59988  35% /
> tmpfs                   125800        36    125764   0% /tmp
> tmpfs                   125800         0    125800   0% /dev/shm
> tmpfs                   125800        68    125732   0% /var/run
> ubi0:logging              4816      1984      2548  44% /var/log
> ubi0:database             4816       456      4080  10% /var/db
> tmpfs                   125800         4    125796   0% /var/spool/cron
> tmpfs                   125800         0    125800   0% /var/sftp
>
> # mount
> rootfs on / type rootfs (rw)
> ubi0:rootfs on / type ubifs (ro,relatime)
> proc on /proc type proc (rw,relatime)
> sysfs on /sys type sysfs (rw,relatime)
> tmpfs on /tmp type tmpfs (rw,relatime)
> none on /dev/pts type devpts (rw,relatime,mode=600)
> tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
> tmpfs on /var/run type tmpfs (rw,relatime,mode=777)
> ubi0:logging on /var/log type ubifs (ro,sync,relatime)
> ubi0:database on /var/db type ubifs (ro,sync,relatime)
> tmpfs on /var/spool/cron type tmpfs (rw,relatime,mode=755)
> tmpfs on /var/sftp type tmpfs (rw,relatime,mode=755)
>
> # umount /var/log
> # umount /var/db
> # mount -t ubifs ubi0:logging /var/log
> mount: mounting ubi0:logging on /var/log failed: No space left on device
> # mount -t ubifs ubi0:database /var/db
> mount: mounting ubi0:database on /var/db failed: No space left on device
>
> # mount
> rootfs on / type rootfs (rw)
> ubi0:rootfs on / type ubifs (ro,relatime)
> proc on /proc type proc (rw,relatime)
> sysfs on /sys type sysfs (rw,relatime)
> tmpfs on /tmp type tmpfs (rw,relatime)
> none on /dev/pts type devpts (rw,relatime,mode=600)
> tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
> tmpfs on /var/run type tmpfs (rw,relatime,mode=777)
> tmpfs on /var/spool/cron type tmpfs (rw,relatime,mode=755)
> tmpfs on /var/sftp type tmpfs (rw,relatime,mode=755)
>
> # df
> Filesystem           1024-blocks    Used Available Use% Mounted on
> rootfs                   92780     32792     59988  35% /
> ubi0:rootfs              92780     32792     59988  35% /
> tmpfs                   125800        36    125764   0% /tmp
> tmpfs                   125800         0    125800   0% /dev/shm
> tmpfs                   125800        60    125740   0% /var/run
> tmpfs                   125800         4    125796   0% /var/spool/cron
> tmpfs                   125800         0    125800   0% /var/sftp
>
>
>>> [81438.785011] UBIFS error (pid 31441): do_commit: commit failed, error
>>> -30
>>> [81438.785034] UBIFS error (pid 31441): ubifs_write_inode: can't write
>>> inode 79,
>>> error -30
>>>
>>
>> Later error -30 is caused by former error -28 which is reported by UBI
>> layer.
>> Did you run df to see how much space left on your device?
>
>
> Before running the scripts (right after boot)
> # df
> Filesystem           1024-blocks    Used Available Use% Mounted on
> rootfs                   92780     32780     60000  35% /
> ubi0:rootfs              92780     32780     60000  35% /
> tmpfs                   125800        36    125764   0% /tmp
> tmpfs                   125800         0    125800   0% /dev/shm
> tmpfs                   125800        68    125732   0% /var/run
> ubi0:logging              4816       324      4212   7% /var/log
> ubi0:database             4816       248      4288   5% /var/db
> tmpfs                   125800         4    125796   0% /var/spool/cron
> tmpfs                   125800         0    125800   0% /var/sftp
>
> At the time of the panic:
> # df
> Filesystem           1024-blocks    Used Available Use% Mounted on
> rootfs                   92780     32792     59988  35% /
> ubi0:rootfs              92780     32792     59988  35% /
> tmpfs                   125800        36    125764   0% /tmp
> tmpfs                   125800         0    125800   0% /dev/shm
> tmpfs                   125800        68    125732   0% /var/run
> ubi0:logging              4816      1984      2548  44% /var/log
> ubi0:database             4816       456      4080  10% /var/db
> tmpfs                   125800         4    125796   0% /var/spool/cron
> tmpfs                   125800         0    125800   0% /var/sftp
>
>
>
>>
>> I think each time you get an error -28 from UBI layer, you will see an
>> ubi_err.
>> But I didn't see it in your log. Does anyone else know something about it?
>>
>> .
>>
>
> ______________________________________________________
> Linux MTD discussion mailing list
> http://lists.infradead.org/mailman/listinfo/linux-mtd/



-- 
Thanks,
//richard

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-06-30 14:48     ` Richard Weinberger
@ 2014-06-30 17:23       ` Akshay Bhat
  2014-06-30 17:34         ` Richard Weinberger
  2014-07-01  1:09         ` hujianyang
  0 siblings, 2 replies; 19+ messages in thread
From: Akshay Bhat @ 2014-06-30 17:23 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd, hujianyang



On Mon 30 Jun 2014 10:48:01 AM EDT, Richard Weinberger wrote:
> On Mon, Jun 30, 2014 at 3:01 PM, Akshay Bhat <abhat@lutron.com> wrote:
>> Thanks for your response. Answers in-line.
>>
>>
>> On Thu 26 Jun 2014 10:36:00 PM EDT, hujianyang wrote:
>>>
>>> How did you release data on the flash? What's the partitions on your
>>> system?
>>> Did you use MTD_UBI_FASTMAP?
>>
>>
>> Image was flashed using the below command:
>> ubiformat /dev/mtd11 -f rootfs.ubi -s 512 -O 2048
>>
>> UBI fastmap is enabled.
>> CONFIG_MTD_UBI_FASTMAP=y
>
> Do you also use it or is it just enabled?

We do not need/use the fastmap feature. (fm_autoconvert set to 0).

Is enabling UBI_FASTMAP the cause for the panic? If so I can disable
the feature and re-test. Do you see any compatibility issue going from:
Current config -> New config -> Failsafe config

Current config: CONFIG_MTD_UBI_FASTMAP = y; fm_autoconvert = 0
New config:  CONFIG_MTD_UBI_FASTMAP = n; fm_autoconvert = 0
Failsafe kernel config (if the above kernel does not boot):
 CONFIG_MTD_UBI_FASTMAP = y; fm_autoconvert = 0

Snippet of dmesg boot log:
[    0.000000] Kernel command line: console=ttyO0,115200n8 noinitrd mem=256M root=ubi0:rootfs rw ubi.mtd=11,2048 rootfstype=ubifs rootwait=1 ip=none quiet loglevel=3 panic=3
............
[    0.483696] UBI: default fastmap pool size: 95
[    0.483712] UBI: default fastmap WL pool size: 25
[    0.483728] UBI: attaching mtd11 to ubi0
[    1.699309] UBI: scanning is finished
[    1.711816] UBI: attached mtd11 (name "RFS", size 242 MiB) to ubi0
[    1.711842] UBI: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    1.711858] UBI: min./max. I/O unit sizes: 2048/2048, sub-page size 512
[    1.711875] UBI: VID header offset: 2048 (aligned 2048), data offset: 4096
[    1.711891] UBI: good PEBs: 1939, bad PEBs: 0, corrupted PEBs: 0
[    1.711907] UBI: user volume: 6, internal volumes: 1, max. volumes count: 128
[    1.711926] UBI: max/mean erase counter: 1/0, WL threshold: 4096, image sequence number: 1426503060
[    1.711943] UBI: available PEBs: 0, total reserved PEBs: 1939, PEBs reserved for bad PEB handling: 40
[    1.726701] UBI: background thread "ubi_bgt0d" started, PID 55

>> mtd11 is for Rootfs and it has 6 volumes. Contents of ubinize.cfg:
>> [rootfs]
>> mode=ubi
>> #image=
>> vol_id=0
>> vol_size=100MiB
>> vol_type=dynamic
>> vol_name=rootfs
>>
>> [rootfs2]
>> mode=ubi
>> vol_id=1
>> vol_size=100MiB
>> vol_type=dynamic
>> vol_name=rootfs2
>>
>> [database]
>> mode=ubi
>> vol_id=2
>> vol_size=7MiB
>> vol_type=dynamic
>> vol_name=database
>>
>> [database2]
>> mode=ubi
>> vol_id=3
>> vol_size=7MiB
>> vol_type=dynamic
>> vol_name=database2
>>
>> [logging]
>> mode=ubi
>> vol_id=4
>> vol_size=7MiB
>> vol_type=dynamic
>> vol_name=logging
>>
>> [firmware]
>> mode=ubi
>> vol_id=5
>> vol_size=7MiB
>> vol_type=dynamic
>> vol_name=firmware
>> vol_flags=autoresize
>>
>>
>>> Did you try umount after this error happen and mount partition again, then
>>> re-run your scripts to see what will happen?
>>
>>
>> I am not able mount the partition again after getting the error
>>
>> ######At the time of panic############
>> # df
>> Filesystem           1024-blocks    Used Available Use% Mounted on
>> rootfs                   92780     32792     59988  35% /
>> ubi0:rootfs              92780     32792     59988  35% /
>> tmpfs                   125800        36    125764   0% /tmp
>> tmpfs                   125800         0    125800   0% /dev/shm
>> tmpfs                   125800        68    125732   0% /var/run
>> ubi0:logging              4816      1984      2548  44% /var/log
>> ubi0:database             4816       456      4080  10% /var/db
>> tmpfs                   125800         4    125796   0% /var/spool/cron
>> tmpfs                   125800         0    125800   0% /var/sftp
>>
>> # mount
>> rootfs on / type rootfs (rw)
>> ubi0:rootfs on / type ubifs (ro,relatime)
>> proc on /proc type proc (rw,relatime)
>> sysfs on /sys type sysfs (rw,relatime)
>> tmpfs on /tmp type tmpfs (rw,relatime)
>> none on /dev/pts type devpts (rw,relatime,mode=600)
>> tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
>> tmpfs on /var/run type tmpfs (rw,relatime,mode=777)
>> ubi0:logging on /var/log type ubifs (ro,sync,relatime)
>> ubi0:database on /var/db type ubifs (ro,sync,relatime)
>> tmpfs on /var/spool/cron type tmpfs (rw,relatime,mode=755)
>> tmpfs on /var/sftp type tmpfs (rw,relatime,mode=755)
>>
>> # umount /var/log
>> # umount /var/db
>> # mount -t ubifs ubi0:logging /var/log
>> mount: mounting ubi0:logging on /var/log failed: No space left on device
>> # mount -t ubifs ubi0:database /var/db
>> mount: mounting ubi0:database on /var/db failed: No space left on device
>>
>> # mount
>> rootfs on / type rootfs (rw)
>> ubi0:rootfs on / type ubifs (ro,relatime)
>> proc on /proc type proc (rw,relatime)
>> sysfs on /sys type sysfs (rw,relatime)
>> tmpfs on /tmp type tmpfs (rw,relatime)
>> none on /dev/pts type devpts (rw,relatime,mode=600)
>> tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
>> tmpfs on /var/run type tmpfs (rw,relatime,mode=777)
>> tmpfs on /var/spool/cron type tmpfs (rw,relatime,mode=755)
>> tmpfs on /var/sftp type tmpfs (rw,relatime,mode=755)
>>
>> # df
>> Filesystem           1024-blocks    Used Available Use% Mounted on
>> rootfs                   92780     32792     59988  35% /
>> ubi0:rootfs              92780     32792     59988  35% /
>> tmpfs                   125800        36    125764   0% /tmp
>> tmpfs                   125800         0    125800   0% /dev/shm
>> tmpfs                   125800        60    125740   0% /var/run
>> tmpfs                   125800         4    125796   0% /var/spool/cron
>> tmpfs                   125800         0    125800   0% /var/sftp
>>
>>
>>>> [81438.785011] UBIFS error (pid 31441): do_commit: commit failed, error
>>>> -30
>>>> [81438.785034] UBIFS error (pid 31441): ubifs_write_inode: can't write
>>>> inode 79,
>>>> error -30
>>>>
>>>
>>> Later error -30 is caused by former error -28 which is reported by UBI
>>> layer.
>>> Did you run df to see how much space left on your device?
>>
>>
>> Before running the scripts (right after boot)
>> # df
>> Filesystem           1024-blocks    Used Available Use% Mounted on
>> rootfs                   92780     32780     60000  35% /
>> ubi0:rootfs              92780     32780     60000  35% /
>> tmpfs                   125800        36    125764   0% /tmp
>> tmpfs                   125800         0    125800   0% /dev/shm
>> tmpfs                   125800        68    125732   0% /var/run
>> ubi0:logging              4816       324      4212   7% /var/log
>> ubi0:database             4816       248      4288   5% /var/db
>> tmpfs                   125800         4    125796   0% /var/spool/cron
>> tmpfs                   125800         0    125800   0% /var/sftp
>>
>> At the time of the panic:
>> # df
>> Filesystem           1024-blocks    Used Available Use% Mounted on
>> rootfs                   92780     32792     59988  35% /
>> ubi0:rootfs              92780     32792     59988  35% /
>> tmpfs                   125800        36    125764   0% /tmp
>> tmpfs                   125800         0    125800   0% /dev/shm
>> tmpfs                   125800        68    125732   0% /var/run
>> ubi0:logging              4816      1984      2548  44% /var/log
>> ubi0:database             4816       456      4080  10% /var/db
>> tmpfs                   125800         4    125796   0% /var/spool/cron
>> tmpfs                   125800         0    125800   0% /var/sftp
>>
>>
>>
>>>
>>> I think each time you get an error -28 from UBI layer, you will see an
>>> ubi_err.
>>> But I didn't see it in your log. Does anyone else know something about it?
>>>
>>> .
>>>
>>
>> ______________________________________________________
>> Linux MTD discussion mailing list
>> http://lists.infradead.org/mailman/listinfo/linux-mtd/
>
>
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-06-30 17:23       ` Akshay Bhat
@ 2014-06-30 17:34         ` Richard Weinberger
  2014-07-01  1:09         ` hujianyang
  1 sibling, 0 replies; 19+ messages in thread
From: Richard Weinberger @ 2014-06-30 17:34 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd, hujianyang

Am 30.06.2014 19:23, schrieb Akshay Bhat:
> 
> 
> On Mon 30 Jun 2014 10:48:01 AM EDT, Richard Weinberger wrote:
>> On Mon, Jun 30, 2014 at 3:01 PM, Akshay Bhat <abhat@lutron.com> wrote:
>>> Thanks for your response. Answers in-line.
>>>
>>>
>>> On Thu 26 Jun 2014 10:36:00 PM EDT, hujianyang wrote:
>>>>
>>>> How did you release data on the flash? What's the partitions on your
>>>> system?
>>>> Did you use MTD_UBI_FASTMAP?
>>>
>>>
>>> Image was flashed using the below command:
>>> ubiformat /dev/mtd11 -f rootfs.ubi -s 512 -O 2048
>>>
>>> UBI fastmap is enabled.
>>> CONFIG_MTD_UBI_FASTMAP=y
>>
>> Do you also use it or is it just enabled?
> 
> We do not need/use the fastmap feature. (fm_autoconvert set to 0).
> 
> Is enabling UBI_FASTMAP the cause for the panic? If so I can disabled the
> feature and re-test. Do you see any compatibility issue going from:
> Current config -> New config -> Failsafe config

I hope not. :)
Please retry with CONFIG_MTD_UBI_FASTMAP=n.

> Current config: CONFIG_MTD_UBI_FASTMAP = y; fm_autoconvert = 0
> New config:  CONFIG_MTD_UBI_FASTMAP = n; fm_autoconvert = 0
> Failsafe kernel config (if the above kernel does not boot):
> CONFIG_MTD_UBI_FASTMAP = y; fm_autoconvert = 0
> 
> Snippet of dmesg boot log:
> [    0.000000] Kernel command line: console=ttyO0,115200n8 noinitrd mem=256M
> root=ubi0:rootfs rw ubi.mtd=11,2048 rootfstype=ubifs rootwait=1 ip=none quiet
> loglevel=3 panic=3
> ............
> [    0.483696] UBI: default fastmap pool size: 95
> [    0.483712] UBI: default fastmap WL pool size: 25

This indicates CONFIG_MTD_UBI_FASTMAP=y.
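This check can be scripted; a sketch, assuming only that these boot lines appear iff fastmap is compiled in, as in the dmesg snippets above (DMESG_LOG and its default path are illustrative assumptions):

```shell
# Sketch: detect a fastmap-enabled kernel from its boot log. The
# "default fastmap pool size" line is printed only with
# CONFIG_MTD_UBI_FASTMAP=y, per the dmesg snippets in this thread.
LOG=${DMESG_LOG:-/var/log/dmesg}   # hypothetical default path
if grep -q "default fastmap pool size" "$LOG" 2> /dev/null; then
    echo "fastmap compiled in"
else
    echo "fastmap not compiled in"
fi
```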

Thanks,
//richard

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-06-30 13:01   ` Akshay Bhat
  2014-06-30 14:48     ` Richard Weinberger
@ 2014-07-01  0:58     ` hujianyang
  1 sibling, 0 replies; 19+ messages in thread
From: hujianyang @ 2014-07-01  0:58 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd

> 
> # umount /var/log
> # umount /var/db
> # mount -t ubifs ubi0:logging /var/log
> mount: mounting ubi0:logging on /var/log failed: No space left on device
> # mount -t ubifs ubi0:database /var/db
> mount: mounting ubi0:database on /var/db failed: No space left on device
> 

It's interesting.

Can you re-mount these partitions after a reboot? I would like to see
the kernel messages from when this mount failure happens.

Also, could you try mounting another partition after this failure, for
example database2 on your system? I suspect some accounting information
goes wrong at the global UBI layer, so that you can't mount any
partition on device ubi0 any more.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-06-30 17:23       ` Akshay Bhat
  2014-06-30 17:34         ` Richard Weinberger
@ 2014-07-01  1:09         ` hujianyang
  2014-07-01  7:48           ` Richard Weinberger
  1 sibling, 1 reply; 19+ messages in thread
From: hujianyang @ 2014-07-01  1:09 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: Richard Weinberger, linux-mtd

> 
> Is enabling UBI_FASTMAP the cause for the panic? If so I can disabled the
> feature and re-test. Do you see any compatibility issue going from:
> Current config -> New config -> Failsafe config
> 

I don't know. But I found that ubi_wl_get_peb() in drivers/mtd/ubi/wl.c
can return -ENOSPC without printing any error message, and this function
seems to be related to the fastmap feature.

I don't have much experience in this area.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-01  1:09         ` hujianyang
@ 2014-07-01  7:48           ` Richard Weinberger
  2014-07-01 14:18             ` Akshay Bhat
  0 siblings, 1 reply; 19+ messages in thread
From: Richard Weinberger @ 2014-07-01  7:48 UTC (permalink / raw)
  To: hujianyang, Akshay Bhat; +Cc: linux-mtd

Am 01.07.2014 03:09, schrieb hujianyang:
>>
>> Is enabling UBI_FASTMAP the cause for the panic? If so I can disabled the
>> feature and re-test. Do you see any compatibility issue going from:
>> Current config -> New config -> Failsafe config
>>
> 
> I don't know. But I found func ubi_wl_get_peb() in drivers/mtd/ubi/wl.c
> will return an error ENOSPC without any error messages. And this func
> seems related with fastmap feature.

I have sort of an idea of what could be going on.
Akshay, can you please confirm that you face the issue only with UBI_FASTMAP=y?

Thanks,
//richard

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-01  7:48           ` Richard Weinberger
@ 2014-07-01 14:18             ` Akshay Bhat
  2014-07-01 14:32               ` Richard Weinberger
  0 siblings, 1 reply; 19+ messages in thread
From: Akshay Bhat @ 2014-07-01 14:18 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd, hujianyang



On Tue 01 Jul 2014 03:48:54 AM EDT, Richard Weinberger wrote:
> Am 01.07.2014 03:09, schrieb hujianyang:
>>>
>>> Is enabling UBI_FASTMAP the cause for the panic? If so I can disabled the
>>> feature and re-test. Do you see any compatibility issue going from:
>>> Current config -> New config -> Failsafe config
>>>
>>
>> I don't know. But I found func ubi_wl_get_peb() in drivers/mtd/ubi/wl.c
>> will return an error ENOSPC without any error messages. And this func
>> seems related with fastmap feature.
>
> I have sort of an idea what could going on.
> Akshay, can you please confirm that you face the issue only with UBI_FASTMAP=y?

I have not been able to recreate the issue with UBI_FASTMAP=n.

To expedite reproducing the problem, the WL threshold was set to 128
instead of 4096.
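For reference, the stress load from the start of the thread (with the reduced 140K transfer size) can be written as one loop. DB_DIR, LOG_DIR and ITERATIONS are parameters added here for illustration; the original scripts ran forever against /var/db and /var/log:

```shell
# Sketch of the stress load: repeatedly overwrite one file per volume.
# The original ran as two unbounded while loops; ITERATIONS bounds it
# here so the sketch terminates.
DB_DIR=${DB_DIR:-/var/db}
LOG_DIR=${LOG_DIR:-/var/log}
N=${ITERATIONS:-1}
i=0
while [ "$i" -lt "$N" ]; do
    dd if=/dev/zero    of="$DB_DIR/test"  bs=140K count=1 2> /dev/null
    dd if=/dev/urandom of="$LOG_DIR/test" bs=140K count=1 2> /dev/null
    i=$((i + 1))
done
echo "completed $i passes"
```

With the sync mount option in use, each dd forces the data out to flash, which is why the erase counters climb quickly.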

Unit 1: UBI_FASTMAP=n; No panic
# dmesg |grep -i ubi
[    0.000000] Kernel command line: console=ttyO0,115200n8 noinitrd mem=256M root=ubi0:rootfs rw ubi.mtd=11,2048 rootfstype=ubifs rootwait=1 ip=none quiet loglevel=3 panic=3
[    0.478610] TCP: cubic registered
[    0.482445] UBI: attaching mtd11 to ubi0
[    1.735485] UBI: scanning is finished
[    1.747719] UBI: attached mtd11 (name "RFS", size 242 MiB) to ubi0
[    1.747746] UBI: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    1.747763] UBI: min./max. I/O unit sizes: 2048/2048, sub-page size 512
[    1.747779] UBI: VID header offset: 2048 (aligned 2048), data offset: 4096
[    1.747795] UBI: good PEBs: 1939, bad PEBs: 0, corrupted PEBs: 0
[    1.747811] UBI: user volume: 6, internal volumes: 1, max. volumes count: 128
[    1.747831] UBI: max/mean erase counter: 2811/2558, WL threshold: 128, image sequence number: 1541329669
[    1.747848] UBI: available PEBs: 2, total reserved PEBs: 1937, PEBs reserved for bad PEB handling: 40
[    1.762580] UBI: background thread "ubi_bgt0d" started, PID 55
[    1.843707] UBIFS: background thread "ubifs_bgt0_0" started, PID 58
[    1.882095] UBIFS: mounted UBI device 0, volume 0, name "rootfs"(null)
[    1.882120] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    1.882144] UBIFS: FS size: 103485440 bytes (98 MiB, 815 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
[    1.882159] UBIFS: reserved for root: 0 bytes (0 KiB)
[    1.882185] UBIFS: media format: w4/r0 (latest is w4/r0), UUID 4E0D06C8-F441-4E9D-9BF6-4EC7D576A269, small LPT model
[    1.883217] VFS: Mounted root (ubifs filesystem) on device 0:11.
[    4.556647] UBIFS: background thread "ubifs_bgt0_4" started, PID 292
[    4.612160] UBIFS: mounted UBI device 0, volume 4, name "logging"(null)
[    4.612191] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    4.612213] UBIFS: FS size: 6094848 bytes (5 MiB, 48 LEBs), journal size 1015809 bytes (0 MiB, 6 LEBs)
[    4.612229] UBIFS: reserved for root: 287874 bytes (281 KiB)
[    4.612254] UBIFS: media format: w4/r0 (latest is w4/r0), UUID AB9F5353-2682-409D-8070-63E40E0108E1, small LPT model
[    4.637764] UBIFS: background thread "ubifs_bgt0_2" started, PID 294
[    4.664091] UBIFS: recovery needed
[    4.755545] UBIFS: recovery completed
[    4.755789] UBIFS: mounted UBI device 0, volume 2, name "database"(null)
[    4.755810] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    4.755832] UBIFS: FS size: 6094848 bytes (5 MiB, 48 LEBs), journal size 1015809 bytes (0 MiB, 6 LEBs)
[    4.755848] UBIFS: reserved for root: 287874 bytes (281 KiB)
[    4.755872] UBIFS: media format: w4/r0 (latest is w4/r0), UUID 7243AE4C-AAEB-491F-9006-40B6EE8DCED2, small LPT model
# cat /sys/class/ubi/ubi0/max_ec
2811


Unit 2: UBI_FASTMAP=y; Panic at max_ec = 165

[48173.891239] UBIFS error (pid 516): ubifs_write_inode: can't write inode 66, error -30
[48203.906863] UBIFS error (pid 516): make_reservation: cannot reserve 160 bytes in jhead 1, error -30

#########REBOOT the device##############
# dmesg |grep -i ubi
[    0.000000] Kernel command line: console=ttyO0,115200n8 noinitrd mem=256M root=ubi0:rootfs rw ubi.mtd=11,2048 rootfstype=ubifs rootwait=1 ip=none quiet loglevel=3 panic=3
[    0.477729] TCP: cubic registered
[    0.481541] UBI: default fastmap pool size: 95
[    0.481555] UBI: default fastmap WL pool size: 25
[    0.481570] UBI: attaching mtd11 to ubi0
[    1.697741] UBI: scanning is finished
[    1.710069] UBI: attached mtd11 (name "RFS", size 242 MiB) to ubi0
[    1.710096] UBI: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    1.710113] UBI: min./max. I/O unit sizes: 2048/2048, sub-page size 512
[    1.710129] UBI: VID header offset: 2048 (aligned 2048), data offset: 4096
[    1.710145] UBI: good PEBs: 1939, bad PEBs: 0, corrupted PEBs: 0
[    1.710161] UBI: user volume: 6, internal volumes: 1, max. volumes count: 128
[    1.710180] UBI: max/mean erase counter: 165/126, WL threshold: 128, image sequence number: 119408202
[    1.710197] UBI: available PEBs: 0, total reserved PEBs: 1939, PEBs reserved for bad PEB handling: 40
[    1.724904] UBI: background thread "ubi_bgt0d" started, PID 55
[    1.806465] UBIFS: background thread "ubifs_bgt0_0" started, PID 58
[    1.807324] UBIFS: recovery needed
[    2.166247] UBIFS: recovery completed
[    2.166539] UBIFS: mounted UBI device 0, volume 0, name "rootfs"(null)
[    2.166560] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    2.166583] UBIFS: FS size: 103485440 bytes (98 MiB, 815 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
[    2.166598] UBIFS: reserved for root: 0 bytes (0 KiB)
[    2.166623] UBIFS: media format: w4/r0 (latest is w4/r0), UUID 4E0D06C8-F441-4E9D-9BF6-4EC7D576A269, small LPT model
[    2.168698] VFS: Mounted root (ubifs filesystem) on device 0:11.
[    4.826134] UBIFS: background thread "ubifs_bgt0_4" started, PID 292
[    4.836046] UBIFS: recovery needed
[    4.980129] UBIFS: recovery completed
[    4.981698] UBIFS: mounted UBI device 0, volume 4, name "logging"(null)
[    4.981722] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    4.981743] UBIFS: FS size: 6094848 bytes (5 MiB, 48 LEBs), journal size 1015809 bytes (0 MiB, 6 LEBs)
[    4.981759] UBIFS: reserved for root: 287874 bytes (281 KiB)
[    4.981783] UBIFS: media format: w4/r0 (latest is w4/r0), UUID 91E2E462-16AE-4DAB-8DD4-03095DA66B01, small LPT model
[    5.021855] UBIFS: background thread "ubifs_bgt0_2" started, PID 294
[    5.033250] UBIFS: recovery needed
[    5.202984] UBIFS: recovery completed
[    5.203230] UBIFS: mounted UBI device 0, volume 2, name "database"(null)
[    5.203251] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    5.203272] UBIFS: FS size: 6094848 bytes (5 MiB, 48 LEBs), journal size 1015809 bytes (0 MiB, 6 LEBs)
[    5.203288] UBIFS: reserved for root: 287874 bytes (281 KiB)
[    5.203313] UBIFS: media format: w4/r0 (latest is w4/r0), UUID D3BF3ADD-4A86-4ACD-BF4C-C86ABEF571EE, small LPT model
# cat /sys/class/ubi/ubi0/max_ec
165

> Thanks,
> //richard
>
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-01 14:18             ` Akshay Bhat
@ 2014-07-01 14:32               ` Richard Weinberger
  2014-07-01 14:46                 ` Akshay Bhat
  0 siblings, 1 reply; 19+ messages in thread
From: Richard Weinberger @ 2014-07-01 14:32 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd, hujianyang

Am 01.07.2014 16:18, schrieb Akshay Bhat:
> 
> 
> On Tue 01 Jul 2014 03:48:54 AM EDT, Richard Weinberger wrote:
>> Am 01.07.2014 03:09, schrieb hujianyang:
>>>>
>>>> Is enabling UBI_FASTMAP the cause for the panic? If so I can disabled the
>>>> feature and re-test. Do you see any compatibility issue going from:
>>>> Current config -> New config -> Failsafe config
>>>>
>>>
>>> I don't know. But I found func ubi_wl_get_peb() in drivers/mtd/ubi/wl.c
>>> will return an error ENOSPC without any error messages. And this func
>>> seems related with fastmap feature.
>>
>> I have sort of an idea what could going on.
>> Akshay, can you please confirm that you face the issue only with UBI_FASTMAP=y?
> 
> I have not been able to recreate the issue with UBI_FASTMAP=n

Okay, your test case basically fills the filesystem over and over,
and after some time you hit the said issue.
Is this correct?

Thanks,
//richard

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-01 14:32               ` Richard Weinberger
@ 2014-07-01 14:46                 ` Akshay Bhat
  2014-07-01 14:56                   ` Richard Weinberger
  0 siblings, 1 reply; 19+ messages in thread
From: Akshay Bhat @ 2014-07-01 14:46 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd, hujianyang



On Tue 01 Jul 2014 10:32:34 AM EDT, Richard Weinberger wrote:
> Am 01.07.2014 16:18, schrieb Akshay Bhat:
>>
>>
>> On Tue 01 Jul 2014 03:48:54 AM EDT, Richard Weinberger wrote:
>>> Am 01.07.2014 03:09, schrieb hujianyang:
>>>>>
>>>>> Is enabling UBI_FASTMAP the cause for the panic? If so I can disable the
>>>>> feature and re-test. Do you see any compatibility issue going from:
>>>>> Current config -> New config -> Failsafe config
>>>>>
>>>>
>>>> I don't know. But I found that func ubi_wl_get_peb() in drivers/mtd/ubi/wl.c
>>>> will return -ENOSPC without any error message. And this func
>>>> seems related to the fastmap feature.
>>>
>>> I have sort of an idea of what could be going on.
>>> Akshay, can you please confirm that you face the issue only with UBI_FASTMAP=y?
>>
>> I have not been able to recreate the issue with UBI_FASTMAP=n
>
> Okay, your test case basically fills the filesystem over and over?
> And after some time you face the said issue.
> Is this correct?

Yes. From testing, the "some time" is typically after crossing the WL threshold.

> Thanks,
> //richard

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-01 14:46                 ` Akshay Bhat
@ 2014-07-01 14:56                   ` Richard Weinberger
  2014-07-10 21:38                     ` Akshay Bhat
  0 siblings, 1 reply; 19+ messages in thread
From: Richard Weinberger @ 2014-07-01 14:56 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd, hujianyang

Am 01.07.2014 16:46, schrieb Akshay Bhat:
> 
> 
> On Tue 01 Jul 2014 10:32:34 AM EDT, Richard Weinberger wrote:
>> Am 01.07.2014 16:18, schrieb Akshay Bhat:
>>>
>>>
>>> On Tue 01 Jul 2014 03:48:54 AM EDT, Richard Weinberger wrote:
>>>> Am 01.07.2014 03:09, schrieb hujianyang:
>>>>>>
>>>>>> Is enabling UBI_FASTMAP the cause for the panic? If so I can disable the
>>>>>> feature and re-test. Do you see any compatibility issue going from:
>>>>>> Current config -> New config -> Failsafe config
>>>>>>
>>>>>
>>>>> I don't know. But I found that func ubi_wl_get_peb() in drivers/mtd/ubi/wl.c
>>>>> will return -ENOSPC without any error message. And this func
>>>>> seems related to the fastmap feature.
>>>>
>>>> I have sort of an idea of what could be going on.
>>>> Akshay, can you please confirm that you face the issue only with UBI_FASTMAP=y?
>>>
>>> I have not been able to recreate the issue with UBI_FASTMAP=n
>>
>> Okay, your test case basically fills the filesystem over and over?
>> And after some time you face the said issue.
>> Is this correct?
> 
> Yes. From testing, the "some time" is typically after crossing the WL threshold.

Good. I'll dig into that by the end of the week.

Thanks,
//richard

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-01 14:56                   ` Richard Weinberger
@ 2014-07-10 21:38                     ` Akshay Bhat
  2014-07-10 21:42                       ` Richard Weinberger
  2014-07-11 20:45                       ` Richard Weinberger
  0 siblings, 2 replies; 19+ messages in thread
From: Akshay Bhat @ 2014-07-10 21:38 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd, hujianyang



On Tue 01 Jul 2014 10:56:06 AM EDT, Richard Weinberger wrote:
> Am 01.07.2014 16:46, schrieb Akshay Bhat:
>>
>>
>> On Tue 01 Jul 2014 10:32:34 AM EDT, Richard Weinberger wrote:
>>> Am 01.07.2014 16:18, schrieb Akshay Bhat:
>>>>
>>>>
>>>> On Tue 01 Jul 2014 03:48:54 AM EDT, Richard Weinberger wrote:
>>>>> Am 01.07.2014 03:09, schrieb hujianyang:
>>>>>>>
>>>>>>> Is enabling UBI_FASTMAP the cause for the panic? If so I can disable the
>>>>>>> feature and re-test. Do you see any compatibility issue going from:
>>>>>>> Current config -> New config -> Failsafe config
>>>>>>>
>>>>>>
>>>>>> I don't know. But I found that func ubi_wl_get_peb() in drivers/mtd/ubi/wl.c
>>>>>> will return -ENOSPC without any error message. And this func
>>>>>> seems related to the fastmap feature.
>>>>>
>>>>> I have sort of an idea of what could be going on.
>>>>> Akshay, can you please confirm that you face the issue only with UBI_FASTMAP=y?
>>>>
>>>> I have not been able to recreate the issue with UBI_FASTMAP=n
>>>
>>> Okay, your test case basically fills the filesystem over and over?
>>> And after some time you face the said issue.
>>> Is this correct?
>>
>> Yes. From testing, the "some time" is typically after crossing the WL threshold.
>
> Good. I'll dig into that by the end of the week.

Hi Richard, wanted to check if you got a chance to dig into this? Thanks.

> Thanks,
> //richard
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-10 21:38                     ` Akshay Bhat
@ 2014-07-10 21:42                       ` Richard Weinberger
  2014-07-11 20:45                       ` Richard Weinberger
  1 sibling, 0 replies; 19+ messages in thread
From: Richard Weinberger @ 2014-07-10 21:42 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd, hujianyang

Am 10.07.2014 23:38, schrieb Akshay Bhat:
>> Good. I'll dig into that by the end of the week.
> 
> Hi Richard, wanted to check if you got a chance to dig into this? Thanks.

Yes, I am.
So far I have not been able to reproduce it on my testbed.
But I forgot to tell you. :(

Can I send you a debug patch?
First we need to find out where the ENOSPC comes from.

Thanks,
//richard

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-10 21:38                     ` Akshay Bhat
  2014-07-10 21:42                       ` Richard Weinberger
@ 2014-07-11 20:45                       ` Richard Weinberger
  2014-07-16 17:31                         ` Akshay Bhat
  1 sibling, 1 reply; 19+ messages in thread
From: Richard Weinberger @ 2014-07-11 20:45 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd, hujianyang

[-- Attachment #1: Type: text/plain, Size: 1548 bytes --]

Am 10.07.2014 23:38, schrieb Akshay Bhat:
> 
> 
> On Tue 01 Jul 2014 10:56:06 AM EDT, Richard Weinberger wrote:
>> Am 01.07.2014 16:46, schrieb Akshay Bhat:
>>>
>>>
>>> On Tue 01 Jul 2014 10:32:34 AM EDT, Richard Weinberger wrote:
>>>> Am 01.07.2014 16:18, schrieb Akshay Bhat:
>>>>>
>>>>>
>>>>> On Tue 01 Jul 2014 03:48:54 AM EDT, Richard Weinberger wrote:
>>>>>> Am 01.07.2014 03:09, schrieb hujianyang:
>>>>>>>>
>>>>>>>> Is enabling UBI_FASTMAP the cause for the panic? If so I can disable the
>>>>>>>> feature and re-test. Do you see any compatibility issue going from:
>>>>>>>> Current config -> New config -> Failsafe config
>>>>>>>>
>>>>>>>
>>>>>>> I don't know. But I found that func ubi_wl_get_peb() in drivers/mtd/ubi/wl.c
>>>>>>> will return -ENOSPC without any error message. And this func
>>>>>>> seems related to the fastmap feature.
>>>>>>
>>>>>> I have sort of an idea of what could be going on.
>>>>>> Akshay, can you please confirm that you face the issue only with UBI_FASTMAP=y?
>>>>>
>>>>> I have not been able to recreate the issue with UBI_FASTMAP=n
>>>>
>>>> Okay, your test case basically fills the filesystem over and over?
>>>> And after some time you face the said issue.
>>>> Is this correct?
>>>
>>> Yes. From testing, the "some time" is typically after crossing the WL threshold.
>>
>> Good. I'll dig into that by the end of the week.
> 
> Hi Richard, wanted to check if you got a chance to dig into this? Thanks.

Can you please rerun with the attached patch applied?
Maybe it can give us a hint. :)

Thanks,
//richard

[-- Attachment #2: debug.diff --]
[-- Type: text/x-patch, Size: 1637 bytes --]

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 0f3425d..90588d4 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -575,8 +575,10 @@ static void refill_wl_pool(struct ubi_device *ubi)
 
 	for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
 		if (!ubi->free.rb_node ||
-		   (ubi->free_count - ubi->beb_rsvd_pebs < 5))
+		   (ubi->free_count - ubi->beb_rsvd_pebs < 5)) {
+			ubi_err("didn't get all pebs I wanted!");
 			break;
+		}
 
 		e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF);
 		self_check_in_wl_tree(ubi, e, &ubi->free);
@@ -600,8 +602,10 @@ static void refill_wl_user_pool(struct ubi_device *ubi)
 
 	for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
 		pool->pebs[pool->size] = __wl_get_peb(ubi);
-		if (pool->pebs[pool->size] < 0)
+		if (pool->pebs[pool->size] < 0) {
+			ubi_err("didn't get all pebs I wanted!");
 			break;
+		}
 	}
 	pool->used = 0;
 }
@@ -632,9 +636,10 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
 		ubi_update_fastmap(ubi);
 
 	/* we got not a single free PEB */
-	if (!pool->size)
+	if (!pool->size) {
+		ubi_err("User WL pool is empty!");
 		ret = -ENOSPC;
-	else {
+	} else {
 		spin_lock(&ubi->wl_lock);
 		ret = pool->pebs[pool->used++];
 		prot_queue_add(ubi, ubi->lookuptbl[ret]);
@@ -654,6 +659,7 @@ static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi)
 	int pnum;
 
 	if (pool->used == pool->size || !pool->size) {
+		ubi_err("WL pool is empty!");
 		/* We cannot update the fastmap here because this
 		 * function is called in atomic context.
 		 * Let's fail here and refill/update it as soon as possible. */
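The failure mode this patch instruments can be sketched with a toy model. The snippet below is an illustrative simulation, not the kernel code: the names (free_count, beb_rsvd_pebs, the pool, the log strings) mirror drivers/mtd/ubi/wl.c, but the logic is a deliberately simplified approximation of how a fastmap WL pool refill can come up empty and surface as -ENOSPC on the next allocation.

```python
# Toy model of UBI fastmap pool behavior (simplified, NOT the kernel code).

class WLPool:
    """Stand-in for a fastmap wear-leveling pool."""
    def __init__(self, max_size):
        self.max_size = max_size
        self.pebs = []
        self.used = 0

def refill_wl_pool(pool, free_count, beb_rsvd_pebs):
    """Mirrors the patched refill loop: stop early (and log) once the
    number of free PEBs above the bad-PEB reserve drops below 5."""
    pool.pebs = []
    pool.used = 0
    for _ in range(pool.max_size):
        if free_count - beb_rsvd_pebs < 5:
            print("UBI error: refill_wl_pool: didn't get all pebs I wanted!")
            break
        pool.pebs.append(free_count)  # stand-in for a real PEB number
        free_count -= 1
    return free_count

def ubi_wl_get_peb(pool):
    """Mirrors the patched allocator: an empty pool surfaces as -ENOSPC."""
    if pool.used == len(pool.pebs):
        print("UBI error: User WL pool is empty!")
        return -28  # -ENOSPC
    peb = pool.pebs[pool.used]
    pool.used += 1
    return peb

# Plenty of headroom above the reserve: the refill fills the whole pool.
pool = WLPool(max_size=25)
refill_wl_pool(pool, free_count=100, beb_rsvd_pebs=40)
assert len(pool.pebs) == 25

# Free PEBs barely above the reserve: the refill bails immediately and
# the next allocation fails with ENOSPC.
pool = WLPool(max_size=25)
refill_wl_pool(pool, free_count=43, beb_rsvd_pebs=40)
assert ubi_wl_get_peb(pool) == -28
```

Under this model, the dmesg pattern in the follow-up reply falls out naturally: as free space approaches the bad-PEB reserve, refills log "didn't get all pebs I wanted!", and once a pool drains completely, allocations report "WL pool is empty!" and return -ENOSPC.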

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* RE: UBIFS Panic
  2014-07-11 20:45                       ` Richard Weinberger
@ 2014-07-16 17:31                         ` Akshay Bhat
  2014-07-16 21:00                           ` Richard Weinberger
  0 siblings, 1 reply; 19+ messages in thread
From: Akshay Bhat @ 2014-07-16 17:31 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd, hujianyang


>>
>> Hi Richard, wanted to check if you got a chance to dig into this? Thanks.

> Can you please rerun with the attached patch applied?
> Maybe it can give us a hint. :)

I ran the tests with the patch applied; below is the dmesg log. (Note: the first kernel panic resulted in the log overrunning since I wasn't around, so I had to reboot and re-run the test to capture a new panic.)

# dmesg
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.8.13-004-ts-armv7l (abhat@PC0008690) (gcc version 4.7.3 (Timesys 20130916) ) #3 SMP Tue Jul 15 16:01:55 EDT 2014
[    0.000000] CPU: ARMv7 Processor [413fc082] revision 2 (ARMv7), cr=50c5387d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[    0.000000] Machine: Generic AM33XX (Flattened Device Tree), model: Lutron Ethernet Bridge
[    0.000000] Memory policy: ECC disabled, Data cache writeback
[    0.000000] On node 0 totalpages: 65280
[    0.000000] free_area_init_node: node 0, pgdat c04a3c80, node_mem_map c04f2000
[    0.000000]   Normal zone: 512 pages used for memmap
[    0.000000]   Normal zone: 0 pages reserved
[    0.000000]   Normal zone: 64768 pages, LIFO batch:15
[    0.000000] AM335X ES1.0 (neon )
[    0.000000] PERCPU: Embedded 8 pages/cpu @c06fd000 s8896 r8192 d15680 u32768
[    0.000000] pcpu-alloc: s8896 r8192 d15680 u32768 alloc=8*4096
[    0.000000] pcpu-alloc: [0] 0
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 64768
[    0.000000] Kernel command line: console=ttyO0,115200n8 noinitrd mem=256M root=ubi0:rootfs rw ubi.mtd=11,2048 rootfstype=ubifs rootwait=1 ip=none quiet loglevel=3 panic=3
[    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
[    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
[    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] allocated 524288 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[    0.000000] Memory: 255MB = 255MB total
[    0.000000] Memory: 253196k/253196k available, 8948k reserved, 0K highmem
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
[    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
[    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
[    0.000000]     modules : 0xbf800000 - 0xbfe00000   (   6 MB)
[    0.000000]       .text : 0xc0008000 - 0xc04157fc   (4150 kB)
[    0.000000]       .init : 0xc0416000 - 0xc04442c0   ( 185 kB)
[    0.000000]       .data : 0xc0446000 - 0xc04a4b40   ( 379 kB)
[    0.000000]        .bss : 0xc04a4b40 - 0xc04f1b1c   ( 308 kB)
[    0.000000] Hierarchical RCU implementation.
[    0.000000]  RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=1.
[    0.000000] NR_IRQS:16 nr_irqs:16 16
[    0.000000] IRQ: Found an INTC at 0xfa200000 (revision 5.0) with 128 interrupts
[    0.000000] Total of 128 interrupts on 1 active controller
[    0.000000] OMAP clockevent source: GPTIMER1 at 26000000 Hz
[    0.000000] sched_clock: 32 bits at 26MHz, resolution 38ns, wraps every 165191ms
[    0.000000] OMAP clocksource: GPTIMER2 at 26000000 Hz
[    0.000000] Console: colour dummy device 80x30
[    0.000355] Calibrating delay loop... 545.07 BogoMIPS (lpj=531968)
[    0.015437] pid_max: default: 32768 minimum: 301
[    0.015667] Security Framework initialized
[    0.015762] Mount-cache hash table entries: 512
[    0.024043] Initializing cgroup subsys cpuacct
[    0.024078] Initializing cgroup subsys memory
[    0.024141] Initializing cgroup subsys blkio
[    0.024278] CPU: Testing write buffer coherency: ok
[    0.024773] CPU0: thread -1, cpu 0, socket -1, mpidr 0
[    0.024847] Setting up static identity map for 0x80273580 - 0x802735cc
[    0.026239] Brought up 1 CPUs
[    0.026263] SMP: Total of 1 processors activated (545.07 BogoMIPS).
[    0.052585] omap_hwmod: wd_timer2: _wait_target_disable failed
[    0.107753] pinctrl core: initialized pinctrl subsystem
[    0.107976] rstctl core: initialized rstctl subsystem
[    0.108461] regulator-dummy: no parameters
[    0.108950] NET: Registered protocol family 16
[    0.109807] DMA: preallocated 256 KiB pool for atomic coherent allocations
[    0.119591] pinctrl-single 44e10800.pinmux: 142 pins at pa f9e10800 size 568
[    0.120352] platform 49000000.edma: alias fck already exists
[    0.120387] platform 49000000.edma: alias fck already exists
[    0.120416] platform 49000000.edma: alias fck already exists
[    0.121590] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
[    0.121768] OMAP GPIO hardware version 0.1
[    0.123185] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
[    0.124427] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
[    0.125742] gpiochip_add: registered GPIOs 96 to 127 on device: gpio
[    0.129563] omap-gpmc 50000000.gpmc: unable to select pin group
[    0.130407] omap-gpmc 50000000.gpmc: GPMC revision 6.0
[    0.130941] omap-gpmc 50000000.gpmc: loaded OK
[    0.133390] hw-breakpoint: debug architecture 0x4 unsupported.
[    0.135240] cpsw.0: No hwaddr in dt. Using 1c:ba:8c:9d:62:23 from efuse
[    0.135272] cpsw.1: No hwaddr in dt. Using 1c:ba:8c:9d:62:25 from efuse
[    0.143652] bio: create slab <bio-0> at 0
[    0.155361] edma-dma-engine edma-dma-engine.0: TI EDMA DMA engine driver
[    0.158058] usbcore: registered new interface driver usbfs
[    0.158184] usbcore: registered new interface driver hub
[    0.158444] usbcore: registered new device driver usb
[    0.159413] pps_core: LinuxPPS API ver. 1 registered
[    0.159432] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    0.160960] Switching to clocksource gp_timer
[    0.173382] NET: Registered protocol family 2
[    0.174361] TCP established hash table entries: 2048 (order: 2, 16384 bytes)
[    0.174451] TCP bind hash table entries: 2048 (order: 3, 40960 bytes)
[    0.174544] TCP: Hash tables configured (established 2048 bind 2048)
[    0.174637] TCP: reno registered
[    0.174664] UDP hash table entries: 256 (order: 1, 12288 bytes)
[    0.174786] UDP-Lite hash table entries: 256 (order: 1, 12288 bytes)
[    0.175215] NET: Registered protocol family 1
[    0.175798] RPC: Registered named UNIX socket transport module.
[    0.175819] RPC: Registered udp transport module.
[    0.175834] RPC: Registered tcp transport module.
[    0.175849] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.176982] CPU PMU: probing PMU on CPU 0
[    0.177012] hw perfevents: enabled with ARMv7 Cortex-A8 PMU driver, 5 counters available
[    0.177470] omap2_mbox_probe: platform not supported
[    0.181673] VFS: Disk quotas dquot_6.5.2
[    0.181906] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
[    0.182745] NFS: Registering the id_resolver key type
[    0.182837] Key type id_resolver registered
[    0.182855] Key type id_legacy registered
[    0.182954] msgmni has been set to 494
[    0.185794] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[    0.185820] io scheduler noop registered
[    0.185837] io scheduler deadline registered
[    0.185889] io scheduler cfq registered (default)
[    0.187357] ecap 48300100.ecap: unable to select pin group
[    0.188100] ehrpwm 48300200.ehrpwm: unable to select pin group
[    0.188985] ecap 48302100.ecap: unable to select pin group
[    0.189653] ehrpwm 48302200.ehrpwm: unable to select pin group
[    0.190625] ecap 48304100.ecap: unable to select pin group
[    0.191343] ehrpwm 48304200.ehrpwm: unable to select pin group
[    0.191951] pwm_test pwm_test.5: unable to request PWM
[    0.192005] pwm_test: probe of pwm_test.5 failed with error -2
[    0.192804] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.195080] 44e09000.serial: ttyO0 at MMIO 0x44e09000 (irq = 88) is a OMAP UART0
[    0.195878] console [ttyO0] enabled
[    0.196730] 48022000.serial: ttyO1 at MMIO 0x48022000 (irq = 89) is a OMAP UART1
[    0.209845] brd: module loaded
[    0.216494] loop: module loaded
[    0.218721] ONFI param page 0 valid
[    0.218744] ONFI flash detected
[    0.218772] NAND device: Manufacturer ID: 0x2c, Chip ID: 0xaa (Micron MT29F2G08ABBEAH4), 256MiB, page size: 2048, OOB size: 64
[    0.218789] nand: using OMAP_ECC_BCH8_CODE_HW ECC scheme
[    0.218909] 12 ofpart partitions found on MTD device omap2-nand.0
[    0.218927] Creating 12 MTD partitions on "omap2-nand.0":
[    0.218953] 0x000000000000-0x000000020000 : "SPL1"
[    0.220496] 0x000000020000-0x000000040000 : "SPL2"
[    0.221962] 0x000000040000-0x000000060000 : "SPL3"
[    0.223336] 0x000000060000-0x000000080000 : "SPL4"
[    0.224841] 0x000000080000-0x000000180000 : "U-boot"
[    0.227046] 0x000000180000-0x000000280000 : "U-boot Backup"
[    0.229270] 0x000000280000-0x0000002a0000 : "U-Boot Environment"
[    0.230794] 0x0000002a0000-0x0000007a0000 : "Kernel"
[    0.236489] 0x0000007a0000-0x000000ca0000 : "Kernel Backup"
[    0.242060] 0x000000ca0000-0x000000d20000 : "Device Tree"
[    0.243819] 0x000000d20000-0x000000da0000 : "Device Tree Backup"
[    0.245576] 0x000000da0000-0x000010000000 : "RFS"
[    0.452697] edma-dma-engine edma-dma-engine.0: allocated channel for 0:17
[    0.452788] edma-dma-engine edma-dma-engine.0: allocated channel for 0:16
[    0.454697] usbcore: registered new interface driver asix
[    0.454823] usbcore: registered new interface driver cdc_ether
[    0.454945] usbcore: registered new interface driver smsc95xx
[    0.455031] usbcore: registered new interface driver net1080
[    0.455117] usbcore: registered new interface driver cdc_subset
[    0.455275] usbcore: registered new interface driver cdc_ncm
[    0.456135] musb-hdrc: version 6.0, ?dma?, otg (peripheral+host)
[    0.456901] omap_rtc 44e3e000.rtc: rtc core: registered 44e3e000.rtc as rtc0
[    0.456965] 44e3e000.rtc: already running
[    0.457245] i2c /dev entries driver
[    0.457585] pps_ldisc: PPS line discipline registered
[    0.458707] omap_wdt: OMAP Watchdog Timer Rev 0x01: initial timeout 60 sec
[    0.459040] cpuidle: using governor ladder
[    0.459061] cpuidle: using governor menu
[    0.459257] ledtrig-cpu: registered to indicate activity on CPUs
[    0.459662] edma-dma-engine edma-dma-engine.0: allocated channel for 0:36
[    0.459735] omap-sham 53100000.sham: hw accel on OMAP rev 4.3
[    0.461924] omap-aes 53500000.aes: OMAP AES hw accel rev: 3.2
[    0.462053] edma-dma-engine edma-dma-engine.0: allocated channel for 0:5
[    0.462134] edma-dma-engine edma-dma-engine.0: allocated channel for 0:6
[    0.464080] TCP: cubic registered
[    0.464100] Initializing XFRM netlink socket
[    0.464146] NET: Registered protocol family 17
[    0.464226] NET: Registered protocol family 15
[    0.464364] Key type dns_resolver registered
[    0.464594] VFP support v0.3: implementor 41 architecture 3 part 30 variant c rev 3
[    0.464630] ThumbEE CPU extension supported.
[    0.464673] Registering SWP/SWPB emulation handler
[    0.465335] registered taskstats version 1
[    0.466851] UBI: default fastmap pool size: 95
[    0.466872] UBI: default fastmap WL pool size: 25
[    0.466889] UBI: attaching mtd11 to ubi0
[    1.683659] UBI: scanning is finished
[    1.696028] UBI: attached mtd11 (name "RFS", size 242 MiB) to ubi0
[    1.696054] UBI: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    1.696072] UBI: min./max. I/O unit sizes: 2048/2048, sub-page size 512
[    1.696089] UBI: VID header offset: 2048 (aligned 2048), data offset: 4096
[    1.696105] UBI: good PEBs: 1939, bad PEBs: 0, corrupted PEBs: 0
[    1.696122] UBI: user volume: 6, internal volumes: 1, max. volumes count: 128
[    1.696143] UBI: max/mean erase counter: 159/97, WL threshold: 64, image sequence number: 1213918928
[    1.696160] UBI: available PEBs: 0, total reserved PEBs: 1939, PEBs reserved for bad PEB handling: 40
[    1.710847] UBI: background thread "ubi_bgt0d" started, PID 54
[    1.710955] UBI error: get_peb_for_wl: WL pool is empty!
[    1.745110] davinci_mdio 4a101000.mdio: davinci mdio revision 1.6
[    1.745137] davinci_mdio 4a101000.mdio: detected phy mask fffffffe
[    1.746108] libphy: 4a101000.mdio: probed
[    1.746140] davinci_mdio 4a101000.mdio: phy[0]: device 4a101000.mdio:00, driver unknown
[    1.746340] Detected MACID = 1c:ba:8c:9d:62:23
[    1.746444] cpsw 4a100000.ethernet: NAPI disabled
[    1.747976] of_get_named_gpio_flags exited with status 13
[    1.748626] input: gpio-keys.6 as /devices/ocp.2/gpio-keys.6/input/input0
[    1.749327] omap_rtc 44e3e000.rtc: setting system clock to 2000-01-01 00:00:00 UTC (946684800)
[    1.774884] UBIFS: background thread "ubifs_bgt0_0" started, PID 57
[    1.826833] UBIFS: mounted UBI device 0, volume 0, name "rootfs"(null)
[    1.826864] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    1.826888] UBIFS: FS size: 103485440 bytes (98 MiB, 815 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
[    1.826904] UBIFS: reserved for root: 0 bytes (0 KiB)
[    1.826929] UBIFS: media format: w4/r0 (latest is w4/r0), UUID 44CF8C71-DD52-400D-920E-D4385BF511A6, small LPT model
[    1.828014] VFS: Mounted root (ubifs filesystem) on device 0:11.
[    1.828413] Freeing init memory: 184K
[    2.459362] UBIFS: background thread "ubifs_bgt0_4" started, PID 74
[    2.478501] UBIFS: recovery needed
[    2.689992] UBIFS: recovery completed
[    2.690225] UBIFS: mounted UBI device 0, volume 4, name "logging"(null)
[    2.690248] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    2.690270] UBIFS: FS size: 6094848 bytes (5 MiB, 48 LEBs), journal size 1015809 bytes (0 MiB, 6 LEBs)
[    2.690286] UBIFS: reserved for root: 287874 bytes (281 KiB)
[    2.690311] UBIFS: media format: w4/r0 (latest is w4/r0), UUID C61E74EF-7167-4498-B4F4-493D82776AF3, small LPT model
[    2.733136] UBIFS: background thread "ubifs_bgt0_2" started, PID 76
[    2.757661] UBIFS: recovery needed
[    3.180476] UBIFS: recovery completed
[    3.180794] UBIFS: mounted UBI device 0, volume 2, name "database"(null)
[    3.180817] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    3.180839] UBIFS: FS size: 6094848 bytes (5 MiB, 48 LEBs), journal size 1015809 bytes (0 MiB, 6 LEBs)
[    3.180855] UBIFS: reserved for root: 287874 bytes (281 KiB)
[    3.180881] UBIFS: media format: w4/r0 (latest is w4/r0), UUID EBDF7BE1-E887-40A6-9CE3-AC4DBCE47745, small LPT model
[    5.496941] net eth0: initializing cpsw version 1.12 (0)
[    5.499227] net eth0: phy found : id is : 0x221560
[    5.499261] libphy: PHY 4a101000.mdio:01 not found
[    5.499283] net eth0: phy 4a101000.mdio:01 not found on slave 1
[    6.073996] NET: Registered protocol family 10
[    6.078839] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[    7.505536] libphy: 4a101000.mdio:00 - Link is Up - 100/Full
[    7.505601] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  985.959083] UBI error: get_peb_for_wl: WL pool is empty!
[ 1003.199570] UBI error: get_peb_for_wl: WL pool is empty!
[ 1036.210666] UBI error: get_peb_for_wl: WL pool is empty!
[ 1054.821942] UBI error: get_peb_for_wl: WL pool is empty!
[ 1093.320619] UBI error: get_peb_for_wl: WL pool is empty!
[ 1121.904358] UBI error: get_peb_for_wl: WL pool is empty!
[ 1214.922847] UBI error: get_peb_for_wl: WL pool is empty!
[ 1255.932050] UBI error: get_peb_for_wl: WL pool is empty!
[ 1288.346109] UBI error: get_peb_for_wl: WL pool is empty!
[ 1314.810886] UBI error: get_peb_for_wl: WL pool is empty!
[ 1324.222522] UBI error: get_peb_for_wl: WL pool is empty!
[ 1409.435290] UBI error: get_peb_for_wl: WL pool is empty!
[ 1443.234302] UBI error: get_peb_for_wl: WL pool is empty!
[ 1461.921623] UBI error: get_peb_for_wl: WL pool is empty!
[ 1508.373858] UBI error: get_peb_for_wl: WL pool is empty!
[ 1520.062908] UBI error: get_peb_for_wl: WL pool is empty!
[ 1531.391419] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1531.533398] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1531.880750] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1532.280971] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1532.637379] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1533.006858] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1533.377138] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1533.720499] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1534.102098] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1534.463482] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1534.836984] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1535.189263] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1535.559109] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1535.931087] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1536.290380] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1536.650297] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1537.106535] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1537.121018] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1537.627255] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1539.003036] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1539.204165] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1539.411251] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1539.656223] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1539.795953] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1540.053808] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1540.335017] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1540.809342] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1541.249917] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1541.512227] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1541.879117] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1542.194330] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1542.596129] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1542.954970] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1543.317284] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1543.681753] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1544.034243] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1544.404248] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1544.761860] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1544.968691] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1545.337708] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1545.875617] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1546.151633] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1546.304892] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1546.459485] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1546.618697] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1546.845616] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1546.993368] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1547.168828] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1547.325286] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1547.491399] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1547.844732] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1548.366688] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1548.715912] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1549.030371] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1549.366938] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1549.678733] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1550.190265] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1550.388567] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[... "UBI error: refill_wl_pool: didn't get all pebs I wanted!" repeated 30 more times, 1550.75 through 1559.54 ...]
[ 1559.572526] UBI error: get_peb_for_wl: WL pool is empty!
[... "refill_wl_pool: didn't get all pebs I wanted!" and "get_peb_for_wl: WL pool is empty!" errors repeat, interleaved, 1559.57 through 1571.37 ...]
[ 1571.367159] UBI error: ubi_wl_get_peb: User WL pool is empty!
[ 1571.367182] UBIFS error (pid 792): ubifs_leb_map: mapping LEB 35 failed, error -28
[ 1571.367201] UBIFS warning (pid 792): ubifs_ro_mode: switched to read-only mode, error -28
[ 1571.367269] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c00ff329>] (ubifs_leb_map+0x7d/0xb4)
[ 1571.367306] [<c00ff329>] (ubifs_leb_map+0x7d/0xb4) from [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214)
[ 1571.367339] [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214) from [<c00f7605>] (make_reservation+0x12d/0x274)
[ 1571.367372] [<c00f7605>] (make_reservation+0x12d/0x274) from [<c00f7e73>] (ubifs_jnl_write_inode+0x57/0x138)
[ 1571.367404] [<c00f7e73>] (ubifs_jnl_write_inode+0x57/0x138) from [<c00fc22d>] (ubifs_write_inode+0x69/0xcc)
[ 1571.367435] [<c00fc22d>] (ubifs_write_inode+0x69/0xcc) from [<c00f9b95>] (ubifs_writepage+0x11d/0x140)
[ 1571.367476] [<c00f9b95>] (ubifs_writepage+0x11d/0x140) from [<c0077117>] (__writepage+0xb/0x26)
[ 1571.367510] [<c0077117>] (__writepage+0xb/0x26) from [<c0077413>] (write_cache_pages+0x151/0x1e8)
[ 1571.367543] [<c0077413>] (write_cache_pages+0x151/0x1e8) from [<c00774cb>] (generic_writepages+0x21/0x36)
[ 1571.367576] [<c00774cb>] (generic_writepages+0x21/0x36) from [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42)
[ 1571.367607] [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42) from [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a)
[ 1571.367638] [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a) from [<c00f9be1>] (ubifs_fsync+0x29/0x6c)
[ 1571.367671] [<c00f9be1>] (ubifs_fsync+0x29/0x6c) from [<c00abb23>] (vfs_fsync_range+0x1b/0x24)
[ 1571.367701] [<c00abb23>] (vfs_fsync_range+0x1b/0x24) from [<c00abb95>] (generic_write_sync+0x4d/0x54)
[ 1571.367731] [<c00abb95>] (generic_write_sync+0x4d/0x54) from [<c00734d5>] (generic_file_aio_write+0x71/0x8a)
[ 1571.367762] [<c00734d5>] (generic_file_aio_write+0x71/0x8a) from [<c00f91ab>] (ubifs_aio_write+0xff/0x10c)
[ 1571.367805] [<c00f91ab>] (ubifs_aio_write+0xff/0x10c) from [<c0093ef5>] (do_sync_write+0x61/0x8c)
[ 1571.367840] [<c0093ef5>] (do_sync_write+0x61/0x8c) from [<c0094397>] (vfs_write+0x5f/0x100)
[ 1571.367871] [<c0094397>] (vfs_write+0x5f/0x100) from [<c00945a3>] (sys_write+0x27/0x44)
[ 1571.367911] [<c00945a3>] (sys_write+0x27/0x44) from [<c000c681>] (ret_fast_syscall+0x1/0x46)
[... second, near-identical backtrace (unwind_backtrace -> ubifs_leb_map -> ... -> ret_fast_syscall) snipped ...]
[ 1571.372140] UBIFS error (pid 6493): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1571.372172] UBIFS error (pid 6493): do_writepage: cannot write page 106 of inode 82, error -30
[ 1571.380069] UBIFS error (pid 792): do_commit: commit failed, error -30
[ 1571.380104] UBIFS error (pid 792): ubifs_write_inode: can't write inode 81, error -30
[ 1571.454203] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 1571.454236] UBI error: ubi_wl_get_peb: User WL pool is empty!
[ 1571.454262] UBIFS error (pid 6464): ubifs_leb_write: writing 2048 bytes to LEB 3:0 failed, error -28
[ 1571.454282] UBIFS warning (pid 6464): ubifs_ro_mode: switched to read-only mode, error -28
[ 1571.454351] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c00ff0e7>] (ubifs_leb_write+0x8b/0xd8)
[ 1571.454388] [<c00ff0e7>] (ubifs_leb_write+0x8b/0xd8) from [<c0105341>] (ubifs_log_start_commit+0x105/0x268)
[ 1571.454420] [<c0105341>] (ubifs_log_start_commit+0x105/0x268) from [<c0105d55>] (do_commit+0x147/0x3da)
[ 1571.454451] [<c0105d55>] (do_commit+0x147/0x3da) from [<c00f76d3>] (make_reservation+0x1fb/0x274)
[ 1571.454484] [<c00f76d3>] (make_reservation+0x1fb/0x274) from [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4)
[ 1571.454516] [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4) from [<c00f99bd>] (do_writepage+0x73/0x12e)
[ 1571.454556] [<c00f99bd>] (do_writepage+0x73/0x12e) from [<c0077117>] (__writepage+0xb/0x26)
[ 1571.454590] [<c0077117>] (__writepage+0xb/0x26) from [<c0077413>] (write_cache_pages+0x151/0x1e8)
[ 1571.454623] [<c0077413>] (write_cache_pages+0x151/0x1e8) from [<c00774cb>] (generic_writepages+0x21/0x36)
[ 1571.454655] [<c00774cb>] (generic_writepages+0x21/0x36) from [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42)
[ 1571.454687] [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42) from [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a)
[ 1571.454719] [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a) from [<c00f9be1>] (ubifs_fsync+0x29/0x6c)
[ 1571.454752] [<c00f9be1>] (ubifs_fsync+0x29/0x6c) from [<c00abb23>] (vfs_fsync_range+0x1b/0x24)
[ 1571.454784] [<c00abb23>] (vfs_fsync_range+0x1b/0x24) from [<c00abb95>] (generic_write_sync+0x4d/0x54)
[ 1571.454814] [<c00abb95>] (generic_write_sync+0x4d/0x54) from [<c00734d5>] (generic_file_aio_write+0x71/0x8a)
[ 1571.454846] [<c00734d5>] (generic_file_aio_write+0x71/0x8a) from [<c00f91ab>] (ubifs_aio_write+0xff/0x10c)
[ 1571.454888] [<c00f91ab>] (ubifs_aio_write+0xff/0x10c) from [<c0093ef5>] (do_sync_write+0x61/0x8c)
[ 1571.454923] [<c0093ef5>] (do_sync_write+0x61/0x8c) from [<c0094397>] (vfs_write+0x5f/0x100)
[ 1571.454955] [<c0094397>] (vfs_write+0x5f/0x100) from [<c00945a3>] (sys_write+0x27/0x44)
[ 1571.454995] [<c00945a3>] (sys_write+0x27/0x44) from [<c000c681>] (ret_fast_syscall+0x1/0x46)
[... second, near-identical backtrace (unwind_backtrace -> ubifs_leb_write -> ... -> ret_fast_syscall) snipped ...]
[ 1571.455703] UBIFS error (pid 6464): do_commit: commit failed, error -28
[ 1571.455729] UBIFS error (pid 6464): do_writepage: cannot write page 435 of inode 71, error -28
[ 1598.409896] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1598.409987] UBIFS error (pid 324): do_writepage: cannot write page 508 of inode 71, error -30
[ 1603.409961] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1603.409997] UBIFS error (pid 324): do_writepage: cannot write page 509 of inode 71, error -30
[ 1603.410087] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1603.410112] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[... the same error -30 pattern repeats every ~5 s: make_reservation (4144 bytes, jhead 2) / do_writepage failures for inode 71 (pid 324) and make_reservation (160 bytes, jhead 1) / ubifs_write_inode failures for inode 73, plus make_reservation (76 bytes, jhead 2) / do_writepage failures for inode 82 (pid 529) every ~30 s, 1603.41 through 1713.41 ...]
[ 1718.413867] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1718.413904] UBIFS error (pid 324): do_writepage: cannot write page 482 of inode 71, error -30
[ 1718.413990] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1718.414013] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1718.414402] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1718.414426] UBIFS error (pid 324): do_writepage: cannot write page 483 of inode 71, error -30
[ 1723.413820] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1723.413857] UBIFS error (pid 324): do_writepage: cannot write page 484 of inode 71, error -30
[ 1723.413962] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1723.413985] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1723.414391] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1723.414414] UBIFS error (pid 324): do_writepage: cannot write page 485 of inode 71, error -30
[ 1723.414816] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1723.414838] UBIFS error (pid 324): do_writepage: cannot write page 486 of inode 71, error -30
[ 1723.876189] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1723.876224] UBIFS error (pid 529): do_writepage: cannot write page 111 of inode 82, error -30
[ 1728.413877] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1728.413914] UBIFS error (pid 324): do_writepage: cannot write page 487 of inode 71, error -30
[ 1728.413997] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1728.414020] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1728.414467] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1728.414491] UBIFS error (pid 324): do_writepage: cannot write page 488 of inode 71, error -30
[ 1733.413852] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1733.413889] UBIFS error (pid 324): do_writepage: cannot write page 489 of inode 71, error -30
[ 1733.413967] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1733.413989] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1733.414487] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1733.414512] UBIFS error (pid 324): do_writepage: cannot write page 490 of inode 71, error -30
[ 1738.413808] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1738.413846] UBIFS error (pid 324): do_writepage: cannot write page 491 of inode 71, error -30
[ 1738.413989] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1738.414013] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1738.414445] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1738.414469] UBIFS error (pid 324): do_writepage: cannot write page 492 of inode 71, error -30
[ 1743.413775] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1743.413815] UBIFS error (pid 324): do_writepage: cannot write page 493 of inode 71, error -30
[ 1743.413901] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1743.413923] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1743.414474] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1743.414499] UBIFS error (pid 324): do_writepage: cannot write page 494 of inode 71, error -30
[ 1748.413860] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1748.413897] UBIFS error (pid 324): do_writepage: cannot write page 495 of inode 71, error -30
[ 1748.413976] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1748.413998] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1748.414393] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1748.414418] UBIFS error (pid 324): do_writepage: cannot write page 496 of inode 71, error -30
[ 1753.413839] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1753.413876] UBIFS error (pid 324): do_writepage: cannot write page 497 of inode 71, error -30
[ 1753.413979] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1753.414002] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1753.414382] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1753.414406] UBIFS error (pid 324): do_writepage: cannot write page 498 of inode 71, error -30
[ 1753.414809] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1753.414833] UBIFS error (pid 324): do_writepage: cannot write page 499 of inode 71, error -30
[ 1753.876220] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1753.876255] UBIFS error (pid 529): do_writepage: cannot write page 112 of inode 82, error -30
[ 1758.413756] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1758.413793] UBIFS error (pid 324): do_writepage: cannot write page 500 of inode 71, error -30
[ 1758.413866] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1758.413889] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1758.414419] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1758.414445] UBIFS error (pid 324): do_writepage: cannot write page 501 of inode 71, error -30
[ 1763.413831] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1763.413868] UBIFS error (pid 324): do_writepage: cannot write page 502 of inode 71, error -30
[ 1763.413959] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1763.413982] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1763.414373] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1763.414397] UBIFS error (pid 324): do_writepage: cannot write page 503 of inode 71, error -30
[ 1768.413821] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1768.413858] UBIFS error (pid 324): do_writepage: cannot write page 504 of inode 71, error -30
[ 1768.413935] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1768.413958] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1768.414352] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1768.414376] UBIFS error (pid 324): do_writepage: cannot write page 505 of inode 71, error -30
[ 1773.413814] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1773.413850] UBIFS error (pid 324): do_writepage: cannot write page 506 of inode 71, error -30
[ 1773.413937] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1773.413960] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1773.414345] UBIFS error (pid 324): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 1773.414369] UBIFS error (pid 324): do_writepage: cannot write page 507 of inode 71, error -30
[ 1783.876195] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1783.876230] UBIFS error (pid 529): do_writepage: cannot write page 113 of inode 82, error -30
[ 1803.422949] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1803.422985] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1813.876257] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1813.876291] UBIFS error (pid 529): do_writepage: cannot write page 114 of inode 82, error -30
[ 1833.438582] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1833.438617] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1843.876181] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1843.876216] UBIFS error (pid 529): do_writepage: cannot write page 115 of inode 82, error -30
[ 1863.454188] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1863.454223] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1873.876173] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1873.876209] UBIFS error (pid 529): do_writepage: cannot write page 116 of inode 82, error -30
[ 1893.469808] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1893.469842] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1903.876160] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1903.876194] UBIFS error (pid 529): do_writepage: cannot write page 117 of inode 82, error -30
[ 1923.485482] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1923.485518] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1933.876180] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1933.876216] UBIFS error (pid 529): do_writepage: cannot write page 118 of inode 82, error -30
[ 1953.501084] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1953.501120] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30
[ 1963.876133] UBIFS error (pid 529): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 1963.876168] UBIFS error (pid 529): do_writepage: cannot write page 119 of inode 82, error -30
[ 1983.516694] UBIFS error (pid 324): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 1983.516730] UBIFS error (pid 324): ubifs_write_inode: can't write inode 73, error -30

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-16 17:31                         ` Akshay Bhat
@ 2014-07-16 21:00                           ` Richard Weinberger
  2014-07-22 18:39                             ` Akshay Bhat
  0 siblings, 1 reply; 19+ messages in thread
From: Richard Weinberger @ 2014-07-16 21:00 UTC (permalink / raw)
  To: Akshay Bhat; +Cc: linux-mtd, hujianyang

[-- Attachment #1: Type: text/plain, Size: 1121 bytes --]

Akshay,

On 16.07.2014 19:31, Akshay Bhat wrote:
> 
>>>
>>> Hi Richard, wanted to check if you got a chance to dig into this? Thanks.
> 
>> Can you please rerun with the attached patch applied?
>> Maybe it can give us a hint. :)
> 
> I ran the tests with the patch, below is the dmesg log (note: the first kernel panic resulted in the log running over since I wasn't around, so I had to reboot and re-run the test to capture a new panic).

Thanks a lot for the log!

> # dmesg
> [ 1571.338966] UBI error: refill_wl_pool: didn't get all pebs I wanted!
> [ 1571.367128] UBI error: refill_wl_pool: didn't get all pebs I wanted!
> [ 1571.367159] UBI error: ubi_wl_get_peb: User WL pool is empty!

This is interesting.
It looks like a race in ubi_wl_get_peb(): two threads enter the function while the pool needs
refilling. While T1 triggers ubi_refill_pools(), T2 goes straight to the " if (!pool->size) {" check,
which fires because T1 resets pool->size to 0 while refilling.

Please test the updated debug patch.

Can you also please share your test script with me?
So far none of my boards run into that issue. :(

Thanks,
//richard

[-- Attachment #2: debug.diff --]
[-- Type: text/x-patch, Size: 1788 bytes --]

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 0f3425d..d02b5f9 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -575,8 +575,10 @@ static void refill_wl_pool(struct ubi_device *ubi)
 
 	for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
 		if (!ubi->free.rb_node ||
-		   (ubi->free_count - ubi->beb_rsvd_pebs < 5))
+		   (ubi->free_count - ubi->beb_rsvd_pebs < 5)) {
+			ubi_err("didn't get all pebs I wanted!");
 			break;
+		}
 
 		e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF);
 		self_check_in_wl_tree(ubi, e, &ubi->free);
@@ -600,8 +602,10 @@ static void refill_wl_user_pool(struct ubi_device *ubi)
 
 	for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
 		pool->pebs[pool->size] = __wl_get_peb(ubi);
-		if (pool->pebs[pool->size] < 0)
+		if (pool->pebs[pool->size] < 0) {
+			ubi_err("didn't get all pebs I wanted!");
 			break;
+		}
 	}
 	pool->used = 0;
 }
@@ -631,15 +635,16 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
 	    wl_pool->used == wl_pool->size)
 		ubi_update_fastmap(ubi);
 
+	spin_lock(&ubi->wl_lock);
 	/* we got not a single free PEB */
-	if (!pool->size)
+	if (!pool->size) {
+		ubi_err("User WL pool is empty!");
 		ret = -ENOSPC;
-	else {
-		spin_lock(&ubi->wl_lock);
+	} else {
 		ret = pool->pebs[pool->used++];
 		prot_queue_add(ubi, ubi->lookuptbl[ret]);
-		spin_unlock(&ubi->wl_lock);
 	}
+	spin_unlock(&ubi->wl_lock);
 
 	return ret;
 }
@@ -654,6 +659,7 @@ static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi)
 	int pnum;
 
 	if (pool->used == pool->size || !pool->size) {
+		ubi_err("WL pool is empty!");
 		/* We cannot update the fastmap here because this
 		 * function is called in atomic context.
 		 * Let's fail here and refill/update it as soon as possible. */

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: UBIFS Panic
  2014-07-16 21:00                           ` Richard Weinberger
@ 2014-07-22 18:39                             ` Akshay Bhat
  0 siblings, 0 replies; 19+ messages in thread
From: Akshay Bhat @ 2014-07-22 18:39 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd, hujianyang



On Wed 16 Jul 2014 05:00:48 PM EDT, Richard Weinberger wrote:
> Akshay,
>
> On 16.07.2014 19:31, Akshay Bhat wrote:
>>
>>>>
>>>> Hi Richard, wanted to check if you got a chance to dig into this? Thanks.
>>
>>> Can you please rerun with the attached patch applied?
>>> Maybe it can give us a hint. :)
>>
>> I ran the tests with the patch, below is the dmesg log (note: the first kernel panic resulted in the log running over since I wasn't around, so I had to reboot and re-run the test to capture a new panic).
>
> Thanks a lot for the log!
>
>> # dmesg
>> [ 1571.338966] UBI error: refill_wl_pool: didn't get all pebs I wanted!
>> [ 1571.367128] UBI error: refill_wl_pool: didn't get all pebs I wanted!
>> [ 1571.367159] UBI error: ubi_wl_get_peb: User WL pool is empty!
>
> This is interesting.
> It looks like a race in ubi_wl_get_peb(): two threads enter the function while the pool needs
> refilling. While T1 triggers ubi_refill_pools(), T2 goes straight to the " if (!pool->size) {" check,
> which fires because T1 resets pool->size to 0 while refilling.
>
> Please test the updated debug patch.

Still see an error:

# dmesg
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.8.13-004-ts-armv7l (abhat@PC0008690) 
(gcc version 4.7.3 (Timesys 20130916) ) #4 SMP Thu Jul 17 14:37:18 EDT 
2014
[    0.000000] CPU: ARMv7 Processor [413fc082] revision 2 (ARMv7), 
cr=50c5387d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing 
instruction cache
[    0.000000] Machine: Generic AM33XX (Flattened Device Tree), model: 
Lutron Ethernet Bridge
[    0.000000] Memory policy: ECC disabled, Data cache writeback
[    0.000000] On node 0 totalpages: 65280
[    0.000000] free_area_init_node: node 0, pgdat c04a3c80, 
node_mem_map c04f2000
[    0.000000]   Normal zone: 512 pages used for memmap
[    0.000000]   Normal zone: 0 pages reserved
[    0.000000]   Normal zone: 64768 pages, LIFO batch:15
[    0.000000] AM335X ES1.0 (neon )
[    0.000000] PERCPU: Embedded 8 pages/cpu @c06fd000 s8896 r8192 
d15680 u32768
[    0.000000] pcpu-alloc: s8896 r8192 d15680 u32768 alloc=8*4096
[    0.000000] pcpu-alloc: [0] 0
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  
Total pages: 64768
[    0.000000] Kernel command line: console=ttyO0,115200n8 noinitrd 
mem=256M root=ubi0:rootfs rw ubi.mtd=11,2048 rootfstype=ubifs 
rootwait=1 ip=none quiet loglevel=3 panic=3
[    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
[    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072 
bytes)
[    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536 
bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] allocated 524288 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't 
want memory cgroups
[    0.000000] Memory: 255MB = 255MB total
[    0.000000] Memory: 253196k/253196k available, 8948k reserved, 0K 
highmem
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
[    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
[    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
[    0.000000]     modules : 0xbf800000 - 0xbfe00000   (   6 MB)
[    0.000000]       .text : 0xc0008000 - 0xc04157fc   (4150 kB)
[    0.000000]       .init : 0xc0416000 - 0xc04442c0   ( 185 kB)
[    0.000000]       .data : 0xc0446000 - 0xc04a4b40   ( 379 kB)
[    0.000000]        .bss : 0xc04a4b40 - 0xc04f1b1c   ( 308 kB)
[    0.000000] Hierarchical RCU implementation.
[    0.000000]  RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=1.
[    0.000000] NR_IRQS:16 nr_irqs:16 16
[    0.000000] IRQ: Found an INTC at 0xfa200000 (revision 5.0) with 128 
interrupts
[    0.000000] Total of 128 interrupts on 1 active controller
[    0.000000] OMAP clockevent source: GPTIMER1 at 26000000 Hz
[    0.000000] sched_clock: 32 bits at 26MHz, resolution 38ns, wraps 
every 165191ms
[    0.000000] OMAP clocksource: GPTIMER2 at 26000000 Hz
[    0.000000] Console: colour dummy device 80x30
[    0.000355] Calibrating delay loop... 545.07 BogoMIPS (lpj=531968)
[    0.015440] pid_max: default: 32768 minimum: 301
[    0.015669] Security Framework initialized
[    0.015764] Mount-cache hash table entries: 512
[    0.024064] Initializing cgroup subsys cpuacct
[    0.024096] Initializing cgroup subsys memory
[    0.024160] Initializing cgroup subsys blkio
[    0.024294] CPU: Testing write buffer coherency: ok
[    0.024793] CPU0: thread -1, cpu 0, socket -1, mpidr 0
[    0.024867] Setting up static identity map for 0x80273580 - 
0x802735cc
[    0.026260] Brought up 1 CPUs
[    0.026284] SMP: Total of 1 processors activated (545.07 BogoMIPS).
[    0.052593] omap_hwmod: wd_timer2: _wait_target_disable failed
[    0.107693] pinctrl core: initialized pinctrl subsystem
[    0.107916] rstctl core: initialized rstctl subsystem
[    0.108403] regulator-dummy: no parameters
[    0.108894] NET: Registered protocol family 16
[    0.109746] DMA: preallocated 256 KiB pool for atomic coherent 
allocations
[    0.119532] pinctrl-single 44e10800.pinmux: 142 pins at pa f9e10800 
size 568
[    0.120292] platform 49000000.edma: alias fck already exists
[    0.120327] platform 49000000.edma: alias fck already exists
[    0.120356] platform 49000000.edma: alias fck already exists
[    0.121530] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
[    0.121706] OMAP GPIO hardware version 0.1
[    0.123120] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
[    0.124370] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
[    0.125686] gpiochip_add: registered GPIOs 96 to 127 on device: gpio
[    0.129500] omap-gpmc 50000000.gpmc: unable to select pin group
[    0.130346] omap-gpmc 50000000.gpmc: GPMC revision 6.0
[    0.130878] omap-gpmc 50000000.gpmc: loaded OK
[    0.133321] hw-breakpoint: debug architecture 0x4 unsupported.
[    0.135169] cpsw.0: No hwaddr in dt. Using 1c:ba:8c:9d:55:29 from 
efuse
[    0.135201] cpsw.1: No hwaddr in dt. Using 1c:ba:8c:9d:55:2b from 
efuse
[    0.143580] bio: create slab <bio-0> at 0
[    0.155298] edma-dma-engine edma-dma-engine.0: TI EDMA DMA engine 
driver
[    0.157950] usbcore: registered new interface driver usbfs
[    0.158123] usbcore: registered new interface driver hub
[    0.158389] usbcore: registered new device driver usb
[    0.159350] pps_core: LinuxPPS API ver. 1 registered
[    0.159370] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 
Rodolfo Giometti <giometti@linux.it>
[    0.160895] Switching to clocksource gp_timer
[    0.173329] NET: Registered protocol family 2
[    0.174305] TCP established hash table entries: 2048 (order: 2, 
16384 bytes)
[    0.174393] TCP bind hash table entries: 2048 (order: 3, 40960 bytes)
[    0.174485] TCP: Hash tables configured (established 2048 bind 2048)
[    0.174577] TCP: reno registered
[    0.174605] UDP hash table entries: 256 (order: 1, 12288 bytes)
[    0.174727] UDP-Lite hash table entries: 256 (order: 1, 12288 bytes)
[    0.175152] NET: Registered protocol family 1
[    0.175735] RPC: Registered named UNIX socket transport module.
[    0.175755] RPC: Registered udp transport module.
[    0.175771] RPC: Registered tcp transport module.
[    0.175786] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.176915] CPU PMU: probing PMU on CPU 0
[    0.176945] hw perfevents: enabled with ARMv7 Cortex-A8 PMU driver, 
5 counters available
[    0.177406] omap2_mbox_probe: platform not supported
[    0.181589] VFS: Disk quotas dquot_6.5.2
[    0.181819] Dquot-cache hash table entries: 1024 (order 0, 4096 
bytes)
[    0.182653] NFS: Registering the id_resolver key type
[    0.182747] Key type id_resolver registered
[    0.182765] Key type id_legacy registered
[    0.182863] msgmni has been set to 494
[    0.185706] Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 250)
[    0.185731] io scheduler noop registered
[    0.185748] io scheduler deadline registered
[    0.185800] io scheduler cfq registered (default)
[    0.187267] ecap 48300100.ecap: unable to select pin group
[    0.188018] ehrpwm 48300200.ehrpwm: unable to select pin group
[    0.188891] ecap 48302100.ecap: unable to select pin group
[    0.189562] ehrpwm 48302200.ehrpwm: unable to select pin group
[    0.190538] ecap 48304100.ecap: unable to select pin group
[    0.191253] ehrpwm 48304200.ehrpwm: unable to select pin group
[    0.191865] pwm_test pwm_test.5: unable to request PWM
[    0.191919] pwm_test: probe of pwm_test.5 failed with error -2
[    0.192717] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.194993] 44e09000.serial: ttyO0 at MMIO 0x44e09000 (irq = 88) is 
a OMAP UART0
[    0.195793] console [ttyO0] enabled
[    0.196641] 48022000.serial: ttyO1 at MMIO 0x48022000 (irq = 89) is 
a OMAP UART1
[    0.209751] brd: module loaded
[    0.216399] loop: module loaded
[    0.218631] ONFI param page 0 valid
[    0.218653] ONFI flash detected
[    0.218680] NAND device: Manufacturer ID: 0x2c, Chip ID: 0xaa 
(Micron MT29F2G08ABBEAH4), 256MiB, page size: 2048, OOB size: 64
[    0.218698] nand: using OMAP_ECC_BCH8_CODE_HW ECC scheme
[    0.218816] 12 ofpart partitions found on MTD device omap2-nand.0
[    0.218835] Creating 12 MTD partitions on "omap2-nand.0":
[    0.218860] 0x000000000000-0x000000020000 : "SPL1"
[    0.220395] 0x000000020000-0x000000040000 : "SPL2"
[    0.221864] 0x000000040000-0x000000060000 : "SPL3"
[    0.223244] 0x000000060000-0x000000080000 : "SPL4"
[    0.224748] 0x000000080000-0x000000180000 : "U-boot"
[    0.226952] 0x000000180000-0x000000280000 : "U-boot Backup"
[    0.229178] 0x000000280000-0x0000002a0000 : "U-Boot Environment"
[    0.230701] 0x0000002a0000-0x0000007a0000 : "Kernel"
[    0.236392] 0x0000007a0000-0x000000ca0000 : "Kernel Backup"
[    0.241960] 0x000000ca0000-0x000000d20000 : "Device Tree"
[    0.243710] 0x000000d20000-0x000000da0000 : "Device Tree Backup"
[    0.245468] 0x000000da0000-0x000010000000 : "RFS"
[    0.452874] edma-dma-engine edma-dma-engine.0: allocated channel for 
0:17
[    0.452965] edma-dma-engine edma-dma-engine.0: allocated channel for 
0:16
[    0.454862] usbcore: registered new interface driver asix
[    0.454986] usbcore: registered new interface driver cdc_ether
[    0.455107] usbcore: registered new interface driver smsc95xx
[    0.455193] usbcore: registered new interface driver net1080
[    0.455279] usbcore: registered new interface driver cdc_subset
[    0.455437] usbcore: registered new interface driver cdc_ncm
[    0.456306] musb-hdrc: version 6.0, ?dma?, otg (peripheral+host)
[    0.457055] omap_rtc 44e3e000.rtc: rtc core: registered 44e3e000.rtc 
as rtc0
[    0.457384] i2c /dev entries driver
[    0.457727] pps_ldisc: PPS line discipline registered
[    0.458835] omap_wdt: OMAP Watchdog Timer Rev 0x01: initial timeout 
60 sec
[    0.459159] cpuidle: using governor ladder
[    0.459179] cpuidle: using governor menu
[    0.459378] ledtrig-cpu: registered to indicate activity on CPUs
[    0.459791] edma-dma-engine edma-dma-engine.0: allocated channel for 
0:36
[    0.459998] omap-sham 53100000.sham: hw accel on OMAP rev 4.3
[    0.462062] omap-aes 53500000.aes: OMAP AES hw accel rev: 3.2
[    0.462186] edma-dma-engine edma-dma-engine.0: allocated channel for 
0:5
[    0.462267] edma-dma-engine edma-dma-engine.0: allocated channel for 
0:6
[    0.464167] TCP: cubic registered
[    0.464189] Initializing XFRM netlink socket
[    0.464235] NET: Registered protocol family 17
[    0.464315] NET: Registered protocol family 15
[    0.464463] Key type dns_resolver registered
[    0.464699] VFP support v0.3: implementor 41 architecture 3 part 30 
variant c rev 3
[    0.464735] ThumbEE CPU extension supported.
[    0.464780] Registering SWP/SWPB emulation handler
[    0.465446] registered taskstats version 1
[    0.466886] UBI: default fastmap pool size: 95
[    0.466906] UBI: default fastmap WL pool size: 25
[    0.466923] UBI: attaching mtd11 to ubi0
[    1.683740] UBI: scanning is finished
[    1.696100] UBI: attached mtd11 (name "RFS", size 242 MiB) to ubi0
[    1.696182] UBI: PEB size: 131072 bytes (128 KiB), LEB size: 126976 
bytes
[    1.696202] UBI: min./max. I/O unit sizes: 2048/2048, sub-page size 
512
[    1.696219] UBI: VID header offset: 2048 (aligned 2048), data 
offset: 4096
[    1.696235] UBI: good PEBs: 1939, bad PEBs: 0, corrupted PEBs: 0
[    1.696253] UBI: user volume: 6, internal volumes: 1, max. volumes 
count: 128
[    1.696273] UBI: max/mean erase counter: 152/101, WL threshold: 64, 
image sequence number: 366298915
[    1.696291] UBI: available PEBs: 0, total reserved PEBs: 1939, PEBs 
reserved for bad PEB handling: 40
[    1.710932] UBI: background thread "ubi_bgt0d" started, PID 54
[    1.711080] UBI error: get_peb_for_wl: WL pool is empty!
[    1.745041] davinci_mdio 4a101000.mdio: davinci mdio revision 1.6
[    1.745066] davinci_mdio 4a101000.mdio: detected phy mask fffffffe
[    1.746060] libphy: 4a101000.mdio: probed
[    1.746092] davinci_mdio 4a101000.mdio: phy[0]: device 
4a101000.mdio:00, driver unknown
[    1.746297] Detected MACID = 1c:ba:8c:9d:55:29
[    1.746405] cpsw 4a100000.ethernet: NAPI disabled
[    1.747937] of_get_named_gpio_flags exited with status 13
[    1.748583] input: gpio-keys.6 as 
/devices/ocp.2/gpio-keys.6/input/input0
[    1.749285] omap_rtc 44e3e000.rtc: setting system clock to 
2000-01-01 00:00:00 UTC (946684800)
[    1.774832] UBIFS: background thread "ubifs_bgt0_0" started, PID 57
[    1.792271] UBIFS: recovery needed
[    1.983513] UBIFS: recovery completed
[    1.983758] UBIFS: mounted UBI device 0, volume 0, name 
"rootfs"(null)
[    1.983781] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O 
unit sizes: 2048 bytes/2048 bytes
[    1.983805] UBIFS: FS size: 103485440 bytes (98 MiB, 815 LEBs), 
journal size 9023488 bytes (8 MiB, 72 LEBs)
[    1.983821] UBIFS: reserved for root: 0 bytes (0 KiB)
[    1.983846] UBIFS: media format: w4/r0 (latest is w4/r0), UUID 
44CF8C71-DD52-400D-920E-D4385BF511A6, small LPT model
[    1.984590] VFS: Mounted root (ubifs filesystem) on device 0:11.
[    1.984988] Freeing init memory: 184K
[    2.619389] UBIFS: background thread "ubifs_bgt0_4" started, PID 74
[    2.638503] UBIFS: recovery needed
[    2.875910] UBIFS: recovery completed
[    2.876153] UBIFS: mounted UBI device 0, volume 4, name 
"logging"(null)
[    2.876175] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O 
unit sizes: 2048 bytes/2048 bytes
[    2.876197] UBIFS: FS size: 6094848 bytes (5 MiB, 48 LEBs), journal 
size 1015809 bytes (0 MiB, 6 LEBs)
[    2.876213] UBIFS: reserved for root: 287874 bytes (281 KiB)
[    2.876238] UBIFS: media format: w4/r0 (latest is w4/r0), UUID 
D886D287-9A0B-466C-87A2-4C7F014783E3, small LPT model
[    2.916494] UBIFS: background thread "ubifs_bgt0_2" started, PID 76
[    2.927778] UBIFS: recovery needed
[    3.063093] UBIFS: recovery completed
[    3.063403] UBIFS: mounted UBI device 0, volume 2, name 
"database"(null)
[    3.063426] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O 
unit sizes: 2048 bytes/2048 bytes
[    3.063448] UBIFS: FS size: 6094848 bytes (5 MiB, 48 LEBs), journal 
size 1015809 bytes (0 MiB, 6 LEBs)
[    3.063464] UBIFS: reserved for root: 287874 bytes (281 KiB)
[    3.063490] UBIFS: media format: w4/r0 (latest is w4/r0), UUID 
94414355-A36D-405B-A9CC-0C43B5EE2F32, small LPT model
[    5.381843] net eth0: initializing cpsw version 1.12 (0)
[    5.387460] net eth0: phy found : id is : 0x221560
[    5.387503] libphy: PHY 4a101000.mdio:01 not found
[    5.387610] net eth0: phy 4a101000.mdio:01 not found on slave 1
[    5.962950] NET: Registered protocol family 10
[    5.967845] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[    7.392197] libphy: 4a101000.mdio:00 - Link is Up - 100/Full
[    7.392264] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  226.871952] UBI error: get_peb_for_wl: WL pool is empty!
[  948.507808] UBI error: get_peb_for_wl: WL pool is empty!
[  964.267421] UBI error: get_peb_for_wl: WL pool is empty!
[ 2713.598888] UBI error: get_peb_for_wl: WL pool is empty!
[ 3163.587848] UBI: scrubbed PEB 562 (LEB 2:53), data moved to PEB 1101
[ 3169.387175] UBI error: get_peb_for_wl: WL pool is empty!
[ 3171.366744] UBI error: get_peb_for_wl: WL pool is empty!
[ 3171.850681] UBI: scrubbed PEB 542 (LEB 2:22), data moved to PEB 1926
[ 3179.179937] UBI error: get_peb_for_wl: WL pool is empty!
[ 3179.467549] UBI: scrubbed PEB 1858 (LEB 4:29), data moved to PEB 1797
[ 3181.503910] UBI error: get_peb_for_wl: WL pool is empty!
[ 3181.738543] UBI: scrubbed PEB 1933 (LEB 4:13), data moved to PEB 1925
[ 3182.681096] UBI: scrubbed PEB 544 (LEB 2:38), data moved to PEB 1176
[ 3186.414570] UBI error: get_peb_for_wl: WL pool is empty!
[ 3186.833616] UBI: scrubbed PEB 501 (LEB 4:25), data moved to PEB 1921
[ 3188.021276] UBI error: get_peb_for_wl: WL pool is empty!
[ 3188.462113] UBI: scrubbed PEB 331 (LEB 4:55), data moved to PEB 1860
[ 3189.117887] UBI: scrubbed PEB 400 (LEB 4:30), data moved to PEB 1917
[ 3189.496218] UBI error: get_peb_for_wl: WL pool is empty!
[ 3189.755589] UBI: scrubbed PEB 1890 (LEB 2:53), data moved to PEB 1840
[ 3191.326605] UBI error: get_peb_for_wl: WL pool is empty!
[ 3191.526829] UBI: scrubbed PEB 223 (LEB 4:29), data moved to PEB 1840
[ 3192.478363] UBI: scrubbed PEB 396 (LEB 4:13), data moved to PEB 1391
[ 3192.637519] UBI: scrubbed PEB 221 (LEB 4:22), data moved to PEB 1174
[ 3196.000079] UBI error: get_peb_for_wl: WL pool is empty!
[ 3196.583955] UBI: scrubbed PEB 1207 (LEB 4:25), data moved to PEB 1772
[ 3197.063390] UBI error: get_peb_for_wl: WL pool is empty!
[ 3197.493860] UBI: scrubbed PEB 331 (LEB 4:19), data moved to PEB 1178
[ 3198.605318] UBI: scrubbed PEB 546 (LEB 2:34), data moved to PEB 1867
[ 3202.529169] UBI: scrubbed PEB 1330 (LEB 4:29), data moved to PEB 1508
[ 3203.107524] UBI error: get_peb_for_wl: WL pool is empty!
[ 3209.398545] UBI: scrubbed PEB 183 (LEB 4:10), data moved to PEB 1481
[ 3210.299449] UBI error: get_peb_for_wl: WL pool is empty!
[ 3210.662360] UBI: scrubbed PEB 1196 (LEB 4:55), data moved to PEB 1176
[ 3213.646517] UBI error: get_peb_for_wl: WL pool is empty!
[ 3214.953139] UBI error: get_peb_for_wl: WL pool is empty!
[ 3215.442068] UBI: scrubbed PEB 1207 (LEB 2:53), data moved to PEB 1925
[ 3223.488744] UBI: scrubbed PEB 876 (LEB 4:55), data moved to PEB 1144
[ 3223.706006] UBI error: get_peb_for_wl: WL pool is empty!
[ 3225.010216] UBI: scrubbed PEB 1207 (LEB 4:46), data moved to PEB 1359
[ 3226.377734] UBI error: get_peb_for_wl: WL pool is empty!
[ 3226.695611] UBI: scrubbed PEB 1196 (LEB 4:30), data moved to PEB 1798
[ 3227.445865] UBI: scrubbed PEB 1394 (LEB 2:30), data moved to PEB 1456
[ 3228.508325] UBI error: get_peb_for_wl: WL pool is empty!
[ 3228.985518] UBI: scrubbed PEB 1190 (LEB 4:19), data moved to PEB 1926
[ 3232.322614] UBI error: get_peb_for_wl: WL pool is empty!
[ 3234.855286] UBI: scrubbed PEB 869 (LEB 4:30), data moved to PEB 1116
[ 3235.176539] UBI error: get_peb_for_wl: WL pool is empty!
[ 3236.711453] UBI: scrubbed PEB 1190 (LEB 4:22), data moved to PEB 1125
[ 3238.735118] UBI: scrubbed PEB 679 (LEB 4:10), data moved to PEB 1466
[ 3241.768587] UBI error: get_peb_for_wl: WL pool is empty!
[ 3244.513696] UBI error: get_peb_for_wl: WL pool is empty!
[ 3244.835041] UBI: scrubbed PEB 870 (LEB 4:22), data moved to PEB 1925
[ 3246.380082] UBI: scrubbed PEB 873 (LEB 4:21), data moved to PEB 1792
[ 3246.899732] UBI error: get_peb_for_wl: WL pool is empty!
[ 3248.132762] UBI: scrubbed PEB 1145 (LEB 2:15), data moved to PEB 1171
[ 3250.153584] UBI error: get_peb_for_wl: WL pool is empty!
[ 3252.751397] UBI error: get_peb_for_wl: WL pool is empty!
[ 3253.245123] UBI error: get_peb_for_wl: WL pool is empty!
[ 3253.361413] UBI: scrubbed PEB 868 (LEB 4:13), data moved to PEB 1797
[ 3255.672294] UBI error: get_peb_for_wl: WL pool is empty!
[ 3257.771034] UBI error: get_peb_for_wl: WL pool is empty!
[ 3258.023000] UBI: scrubbed PEB 863 (LEB 4:22), data moved to PEB 1454
[ 3261.421750] UBI: scrubbed PEB 1195 (LEB 4:25), data moved to PEB 1005
[ 3261.666261] UBI: scrubbed PEB 1189 (LEB 2:28), data moved to PEB 979
[ 3263.986744] UBI error: get_peb_for_wl: WL pool is empty!
[ 3264.370750] UBI: scrubbed PEB 555 (LEB 4:55), data moved to PEB 648
[ 3296.946212] UBI error: get_peb_for_wl: WL pool is empty!
[ 3297.616817] UBI: scrubbed PEB 38 (LEB 2:43), data moved to PEB 1467
[ 3298.754258] UBI error: get_peb_for_wl: WL pool is empty!
[ 3304.111213] UBI error: get_peb_for_wl: WL pool is empty!
[ 3308.200639] UBI: scrubbed PEB 1190 (LEB 4:22), data moved to PEB 981
[ 3308.330233] UBI error: get_peb_for_wl: WL pool is empty!
[ 3328.782661] UBI error: get_peb_for_wl: WL pool is empty!
[ 3329.004125] UBI: scrubbed PEB 306 (LEB 2:54), data moved to PEB 265
[ 3348.018142] UBI: scrubbed PEB 2 (LEB 4:23), data moved to PEB 1049
[ 3348.331609] UBI error: get_peb_for_wl: WL pool is empty!
[ 3350.723301] UBI error: get_peb_for_wl: WL pool is empty!
[ 3351.206788] UBI: scrubbed PEB 867 (LEB 4:22), data moved to PEB 1453
[ 3352.002646] UBI error: get_peb_for_wl: WL pool is empty!
[ 3352.339500] UBI: scrubbed PEB 876 (LEB 4:13), data moved to PEB 1778
[ 3361.246224] UBI: scrubbed PEB 1188 (LEB 2:46), data moved to PEB 959
[ 3361.815902] UBI error: get_peb_for_wl: WL pool is empty!
[ 3366.045576] UBI error: get_peb_for_wl: WL pool is empty!
[ 3401.088184] UBI error: get_peb_for_wl: WL pool is empty!
[ 3401.437188] UBI: scrubbed PEB 864 (LEB 4:22), data moved to PEB 912
[ 3406.130784] UBI error: get_peb_for_wl: WL pool is empty!
[ 3441.749699] UBI error: get_peb_for_wl: WL pool is empty!
[ 3442.284397] UBI: scrubbed PEB 10 (LEB 2:23), data moved to PEB 1926
[ 3445.213692] UBI error: get_peb_for_wl: WL pool is empty!
[ 3445.335194] UBI: scrubbed PEB 865 (LEB 4:19), data moved to PEB 1925
[ 3450.061740] UBI error: get_peb_for_wl: WL pool is empty!
[ 3450.470342] UBI: scrubbed PEB 1133 (LEB 2:54), data moved to PEB 674
[ 3454.297963] UBI error: get_peb_for_wl: WL pool is empty!
[ 3488.359145] UBI error: get_peb_for_wl: WL pool is empty!
[ 3488.487293] UBI: scrubbed PEB 13 (LEB 4:46), data moved to PEB 1926
[ 3490.768415] UBI error: get_peb_for_wl: WL pool is empty!
[ 3495.913087] UBI: scrubbed PEB 1395 (LEB 4:49), data moved to PEB 652
[ 3531.710394] UBI: scrubbed PEB 34 (LEB 2:36), data moved to PEB 857
[ 3534.116906] UBI error: get_peb_for_wl: WL pool is empty!
[ 3536.436828] UBI error: get_peb_for_wl: WL pool is empty!
[ 3541.144011] UBI: scrubbed PEB 1209 (LEB 2:25), data moved to PEB 651
[ 3583.359978] UBI error: get_peb_for_wl: WL pool is empty!
[ 3583.606508] UBI: scrubbed PEB 241 (LEB 4:21), data moved to PEB 992
[ 3585.766637] UBI error: get_peb_for_wl: WL pool is empty!
[ 3585.891228] UBI: scrubbed PEB 1207 (LEB 2:56), data moved to PEB 625
[ 3621.394395] UBI error: get_peb_for_wl: WL pool is empty!
[ 3621.700961] UBI: scrubbed PEB 38 (LEB 4:23), data moved to PEB 1926
[ 3622.091682] UBI error: get_peb_for_wl: WL pool is empty!
[ 3622.413061] UBI: scrubbed PEB 27 (LEB 4:10), data moved to PEB 1711
[ 3624.433903] UBI error: get_peb_for_wl: WL pool is empty!
[ 3625.004360] UBI: scrubbed PEB 3 (LEB 4:29), data moved to PEB 1467
[ 3628.582549] UBI error: get_peb_for_wl: WL pool is empty!
[ 3630.884872] UBI error: get_peb_for_wl: WL pool is empty!
[ 3631.226581] UBI: scrubbed PEB 1207 (LEB 2:12), data moved to PEB 642
[ 3671.425101] UBI error: get_peb_for_wl: WL pool is empty!
[ 3671.535484] UBI: scrubbed PEB 240 (LEB 4:21), data moved to PEB 1926
[ 3674.609732] UBI error: get_peb_for_wl: WL pool is empty!
[ 3677.259297] UBI error: get_peb_for_wl: WL pool is empty!
[ 3693.265639] UBI error: get_peb_for_wl: WL pool is empty!
[ 3716.106487] UBI error: get_peb_for_wl: WL pool is empty!
[ 3716.384899] UBI: scrubbed PEB 236 (LEB 4:29), data moved to PEB 1012
[ 3717.235591] UBI: scrubbed PEB 121 (LEB 4:10), data moved to PEB 973
[ 3718.527381] UBI: scrubbed PEB 1579 (LEB 4:19), data moved to PEB 648
[ 3720.548481] UBI: scrubbed PEB 797 (LEB 4:13), data moved to PEB 334
[ 3720.678878] UBI error: get_peb_for_wl: WL pool is empty!
[ 3758.789232] UBI error: get_peb_for_wl: WL pool is empty!
[ 3759.977114] UBI: scrubbed PEB 867 (LEB 4:19), data moved to PEB 973
[ 3759.999051] UBI error: get_peb_for_wl: WL pool is empty!
[ 3760.223871] UBI: scrubbed PEB 238 (LEB 2:43), data moved to PEB 1039
[ 3761.414981] UBI: scrubbed PEB 234 (LEB 4:10), data moved to PEB 650
[ 3761.417674] UBI error: get_peb_for_wl: WL pool is empty!
[ 3762.430600] UBI: scrubbed PEB 1932 (LEB 2:17), data moved to PEB 666
[ 3765.059363] UBI error: get_peb_for_wl: WL pool is empty!
[ 3799.407809] UBI: scrubbed PEB 33 (LEB 4:25), data moved to PEB 622
[ 3801.259161] UBI: scrubbed PEB 13 (LEB 4:11), data moved to PEB 1456
[ 3801.301235] UBI error: get_peb_for_wl: WL pool is empty!
[ 3801.889182] UBI: scrubbed PEB 8 (LEB 2:57), data moved to PEB 1798
[ 3802.124754] UBI: scrubbed PEB 2 (LEB 2:44), data moved to PEB 1711
[ 3804.260101] UBI error: get_peb_for_wl: WL pool is empty!
[ 3813.190139] UBI: scrubbed PEB 1395 (LEB 2:23), data moved to PEB 644
[ 3815.504314] UBI error: get_peb_for_wl: WL pool is empty!
[ 3849.541610] UBI: scrubbed PEB 27 (LEB 4:23), data moved to PEB 1921
[ 3851.562397] UBI error: get_peb_for_wl: WL pool is empty!
[ 3851.679355] UBI: scrubbed PEB 5 (LEB 2:52), data moved to PEB 1711
[ 3854.748851] UBI error: get_peb_for_wl: WL pool is empty!
[ 3856.314510] UBI error: get_peb_for_wl: WL pool is empty!
[ 3856.863271] UBI: scrubbed PEB 1548 (LEB 2:57), data moved to PEB 644
[ 3895.048841] UBI error: get_peb_for_wl: WL pool is empty!
[ 3895.400407] UBI: scrubbed PEB 3 (LEB 2:51), data moved to PEB 670
[ 3896.978798] UBI error: get_peb_for_wl: WL pool is empty!
[ 3898.650450] UBI error: get_peb_for_wl: WL pool is empty!
[ 3898.754720] UBI: scrubbed PEB 239 (LEB 4:13), data moved to PEB 1917
[ 3922.055690] UBI error: get_peb_for_wl: WL pool is empty!
[ 3944.892061] UBI error: get_peb_for_wl: WL pool is empty!
[ 3951.030595] UBI error: get_peb_for_wl: WL pool is empty!
[ 3983.243550] UBI error: get_peb_for_wl: WL pool is empty!
[ 3983.730162] UBI: scrubbed PEB 861 (LEB 2:45), data moved to PEB 1798
[ 3984.829053] UBI: scrubbed PEB 236 (LEB 2:33), data moved to PEB 644
[ 3998.842808] UBI: scrubbed PEB 513 (LEB 2:51), data moved to PEB 286
[ 3999.136413] UBI: scrubbed PEB 517 (LEB 4:11), data moved to PEB 12
[ 4022.859708] UBI error: get_peb_for_wl: WL pool is empty!
[ 4028.172279] UBI error: get_peb_for_wl: WL pool is empty!
[ 4028.645516] UBI: scrubbed PEB 237 (LEB 2:48), data moved to PEB 1798
[ 4031.884300] UBI error: get_peb_for_wl: WL pool is empty!
[ 4035.128101] UBI error: get_peb_for_wl: WL pool is empty!
[ 4110.158791] UBI error: get_peb_for_wl: WL pool is empty!
[ 4146.294400] UBI: scrubbed PEB 60 (LEB 4:44), data moved to PEB 661
[ 4147.648658] UBI error: get_peb_for_wl: WL pool is empty!
[ 4149.457704] UBI: scrubbed PEB 158 (LEB 4:11), data moved to PEB 1038
[ 4189.002471] UBI error: get_peb_for_wl: WL pool is empty!
[ 4189.305407] UBI: scrubbed PEB 30 (LEB 4:10), data moved to PEB 1456
[ 4190.435918] UBI: scrubbed PEB 10 (LEB 4:21), data moved to PEB 648
[ 4274.368722] UBI: scrubbed PEB 518 (LEB 4:29), data moved to PEB 410
[ 4275.679155] UBI error: get_peb_for_wl: WL pool is empty!
[ 4323.828151] UBI error: get_peb_for_wl: WL pool is empty!
[ 4342.684249] UBI error: get_peb_for_wl: WL pool is empty!
[ 4355.636263] UBI error: get_peb_for_wl: WL pool is empty!
[ 4356.118602] UBI error: get_peb_for_wl: WL pool is empty!
[ 4356.659373] UBI: scrubbed PEB 8 (LEB 4:55), data moved to PEB 1317
[ 4369.381698] UBI error: get_peb_for_wl: WL pool is empty!
[ 4378.908349] UBI error: get_peb_for_wl: WL pool is empty!
[ 4381.308754] UBI error: get_peb_for_wl: WL pool is empty!
[ 4383.342003] UBI: scrubbed PEB 3 (LEB 4:22), data moved to PEB 40
[ 4388.285793] UBI error: get_peb_for_wl: WL pool is empty!
[ 4407.516936] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4413.909496] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4414.112199] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4414.308961] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4414.507716] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4414.606793] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4414.811759] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4415.013178] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4415.242645] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4415.594145] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4415.744040] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4416.000767] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4416.402072] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4416.705505] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4417.186341] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4417.544133] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4417.769550] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4418.637420] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4418.684298] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4419.042088] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4419.401790] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4419.768537] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4421.839218] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4422.194049] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4422.458851] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4422.679981] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4422.983158] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4423.105967] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4423.226706] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4423.514118] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4423.648586] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4423.820411] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.010306] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.195718] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.394994] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.567974] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.669157] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.752216] UBI error: get_peb_for_wl: WL pool is empty!
[ 4424.752289] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.753666] UBI error: get_peb_for_wl: WL pool is empty!
[ 4424.753708] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.758002] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4424.784938] UBI error: get_peb_for_wl: WL pool is empty!
[ 4424.785009] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4425.068522] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4425.894464] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4426.252189] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4426.769527] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4427.132794] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4427.611097] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4427.970473] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4428.251522] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4428.780309] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4429.136434] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4429.401325] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4429.699199] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4430.232353] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4430.413879] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4430.638959] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4430.870159] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.074319] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.277710] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.488604] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.742211] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.915334] UBI error: get_peb_for_wl: WL pool is empty!
[ 4431.915423] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.940874] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.958715] UBI error: get_peb_for_wl: WL pool is empty!
[ 4431.958788] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.958871] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.965118] UBI error: get_peb_for_wl: WL pool is empty!
[ 4431.965180] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4431.966443] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.058742] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.147926] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.163859] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.261475] UBI error: get_peb_for_wl: WL pool is empty!
[ 4432.261552] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.267382] UBI error: get_peb_for_wl: WL pool is empty!
[ 4432.267454] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.267540] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.275362] UBI error: get_peb_for_wl: WL pool is empty!
[ 4432.275424] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.279752] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.364669] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.374139] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.472956] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.480919] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.480948] UBI error: ubi_wl_get_peb: User WL pool is empty!
[ 4432.480972] UBIFS error (pid 15156): ubifs_leb_map: mapping LEB 29 failed, error -28
[ 4432.480991] UBIFS warning (pid 15156): ubifs_ro_mode: switched to read-only mode, error -28
[ 4432.481060] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c00ff329>] (ubifs_leb_map+0x7d/0xb4)
[ 4432.481097] [<c00ff329>] (ubifs_leb_map+0x7d/0xb4) from [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214)
[ 4432.481130] [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214) from [<c00f7605>] (make_reservation+0x12d/0x274)
[ 4432.481163] [<c00f7605>] (make_reservation+0x12d/0x274) from [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4)
[ 4432.481194] [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4) from [<c00f99bd>] (do_writepage+0x73/0x12e)
[ 4432.481235] [<c00f99bd>] (do_writepage+0x73/0x12e) from [<c0077117>] (__writepage+0xb/0x26)
[ 4432.481270] [<c0077117>] (__writepage+0xb/0x26) from [<c0077413>] (write_cache_pages+0x151/0x1e8)
[ 4432.481303] [<c0077413>] (write_cache_pages+0x151/0x1e8) from [<c00774cb>] (generic_writepages+0x21/0x36)
[ 4432.481335] [<c00774cb>] (generic_writepages+0x21/0x36) from [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42)
[ 4432.481366] [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42) from [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a)
[ 4432.481397] [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a) from [<c00f9be1>] (ubifs_fsync+0x29/0x6c)
[ 4432.481430] [<c00f9be1>] (ubifs_fsync+0x29/0x6c) from [<c00abb23>] (vfs_fsync_range+0x1b/0x24)
[ 4432.481462] [<c00abb23>] (vfs_fsync_range+0x1b/0x24) from [<c00abb95>] (generic_write_sync+0x4d/0x54)
[ 4432.481492] [<c00abb95>] (generic_write_sync+0x4d/0x54) from [<c00734d5>] (generic_file_aio_write+0x71/0x8a)
[ 4432.481524] [<c00734d5>] (generic_file_aio_write+0x71/0x8a) from [<c00f91ab>] (ubifs_aio_write+0xff/0x10c)
[ 4432.481567] [<c00f91ab>] (ubifs_aio_write+0xff/0x10c) from [<c0093ef5>] (do_sync_write+0x61/0x8c)
[ 4432.481602] [<c0093ef5>] (do_sync_write+0x61/0x8c) from [<c0094397>] (vfs_write+0x5f/0x100)
[ 4432.481635] [<c0094397>] (vfs_write+0x5f/0x100) from [<c00945a3>] (sys_write+0x27/0x44)
[ 4432.481674] [<c00945a3>] (sys_write+0x27/0x44) from [<c000c681>] (ret_fast_syscall+0x1/0x46)
[ 4432.481708] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c00ff32d>] (ubifs_leb_map+0x81/0xb4)
[ 4432.481739] [<c00ff32d>] (ubifs_leb_map+0x81/0xb4) from [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214)
[ 4432.481770] [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214) from [<c00f7605>] (make_reservation+0x12d/0x274)
[ 4432.481801] [<c00f7605>] (make_reservation+0x12d/0x274) from [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4)
[ 4432.481832] [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4) from [<c00f99bd>] (do_writepage+0x73/0x12e)
[ 4432.481863] [<c00f99bd>] (do_writepage+0x73/0x12e) from [<c0077117>] (__writepage+0xb/0x26)
[ 4432.481894] [<c0077117>] (__writepage+0xb/0x26) from [<c0077413>] (write_cache_pages+0x151/0x1e8)
[ 4432.481927] [<c0077413>] (write_cache_pages+0x151/0x1e8) from [<c00774cb>] (generic_writepages+0x21/0x36)
[ 4432.481958] [<c00774cb>] (generic_writepages+0x21/0x36) from [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42)
[ 4432.481988] [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42) from [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a)
[ 4432.482019] [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a) from [<c00f9be1>] (ubifs_fsync+0x29/0x6c)
[ 4432.482049] [<c00f9be1>] (ubifs_fsync+0x29/0x6c) from [<c00abb23>] (vfs_fsync_range+0x1b/0x24)
[ 4432.482079] [<c00abb23>] (vfs_fsync_range+0x1b/0x24) from [<c00abb95>] (generic_write_sync+0x4d/0x54)
[ 4432.482109] [<c00abb95>] (generic_write_sync+0x4d/0x54) from [<c00734d5>] (generic_file_aio_write+0x71/0x8a)
[ 4432.482139] [<c00734d5>] (generic_file_aio_write+0x71/0x8a) from [<c00f91ab>] (ubifs_aio_write+0xff/0x10c)
[ 4432.482171] [<c00f91ab>] (ubifs_aio_write+0xff/0x10c) from [<c0093ef5>] (do_sync_write+0x61/0x8c)
[ 4432.482204] [<c0093ef5>] (do_sync_write+0x61/0x8c) from [<c0094397>] (vfs_write+0x5f/0x100)
[ 4432.482236] [<c0094397>] (vfs_write+0x5f/0x100) from [<c00945a3>] (sys_write+0x27/0x44)
[ 4432.482268] [<c00945a3>] (sys_write+0x27/0x44) from [<c000c681>] (ret_fast_syscall+0x1/0x46)
[ 4432.482306] UBIFS error (pid 15156): do_commit: commit failed, error -30
[ 4432.482331] UBIFS error (pid 15156): do_writepage: cannot write page 231 of inode 71, error -30
[ 4432.558187] UBI error: get_peb_for_wl: WL pool is empty!
[ 4432.558266] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.559693] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.564610] UBI error: get_peb_for_wl: WL pool is empty!
[ 4432.564720] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.568733] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.588338] UBI error: refill_wl_pool: didn't get all pebs I wanted!
[ 4432.588369] UBI error: ubi_wl_get_peb: User WL pool is empty!
[ 4432.588392] UBIFS error (pid 15134): ubifs_leb_map: mapping LEB 57 failed, error -28
[ 4432.588412] UBIFS warning (pid 15134): ubifs_ro_mode: switched to read-only mode, error -28
[ 4432.588480] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c00ff329>] (ubifs_leb_map+0x7d/0xb4)
[ 4432.588517] [<c00ff329>] (ubifs_leb_map+0x7d/0xb4) from [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214)
[ 4432.588550] [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214) from [<c00f7605>] (make_reservation+0x12d/0x274)
[ 4432.588582] [<c00f7605>] (make_reservation+0x12d/0x274) from [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4)
[ 4432.588614] [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4) from [<c00f99bd>] (do_writepage+0x73/0x12e)
[ 4432.588654] [<c00f99bd>] (do_writepage+0x73/0x12e) from [<c0077117>] (__writepage+0xb/0x26)
[ 4432.588688] [<c0077117>] (__writepage+0xb/0x26) from [<c0077413>] (write_cache_pages+0x151/0x1e8)
[ 4432.588721] [<c0077413>] (write_cache_pages+0x151/0x1e8) from [<c00774cb>] (generic_writepages+0x21/0x36)
[ 4432.588753] [<c00774cb>] (generic_writepages+0x21/0x36) from [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42)
[ 4432.588786] [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42) from [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a)
[ 4432.588817] [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a) from [<c00f9be1>] (ubifs_fsync+0x29/0x6c)
[ 4432.588850] [<c00f9be1>] (ubifs_fsync+0x29/0x6c) from [<c00abb23>] (vfs_fsync_range+0x1b/0x24)
[ 4432.588881] [<c00abb23>] (vfs_fsync_range+0x1b/0x24) from [<c00abb95>] (generic_write_sync+0x4d/0x54)
[ 4432.588912] [<c00abb95>] (generic_write_sync+0x4d/0x54) from [<c00734d5>] (generic_file_aio_write+0x71/0x8a)
[ 4432.588943] [<c00734d5>] (generic_file_aio_write+0x71/0x8a) from [<c00f91ab>] (ubifs_aio_write+0xff/0x10c)
[ 4432.588987] [<c00f91ab>] (ubifs_aio_write+0xff/0x10c) from [<c0093ef5>] (do_sync_write+0x61/0x8c)
[ 4432.589021] [<c0093ef5>] (do_sync_write+0x61/0x8c) from [<c0094397>] (vfs_write+0x5f/0x100)
[ 4432.589054] [<c0094397>] (vfs_write+0x5f/0x100) from [<c00945a3>] (sys_write+0x27/0x44)
[ 4432.589094] [<c00945a3>] (sys_write+0x27/0x44) from [<c000c681>] (ret_fast_syscall+0x1/0x46)
[ 4432.589128] [<c00104b1>] (unwind_backtrace+0x1/0x8c) from [<c00ff32d>] (ubifs_leb_map+0x81/0xb4)
[ 4432.589159] [<c00ff32d>] (ubifs_leb_map+0x81/0xb4) from [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214)
[ 4432.589189] [<c01051e7>] (ubifs_add_bud_to_log+0x1bf/0x214) from [<c00f7605>] (make_reservation+0x12d/0x274)
[ 4432.589220] [<c00f7605>] (make_reservation+0x12d/0x274) from [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4)
[ 4432.589251] [<c00f7d6d>] (ubifs_jnl_write_data+0xf5/0x1a4) from [<c00f99bd>] (do_writepage+0x73/0x12e)
[ 4432.589284] [<c00f99bd>] (do_writepage+0x73/0x12e) from [<c0077117>] (__writepage+0xb/0x26)
[ 4432.589315] [<c0077117>] (__writepage+0xb/0x26) from [<c0077413>] (write_cache_pages+0x151/0x1e8)
[ 4432.589348] [<c0077413>] (write_cache_pages+0x151/0x1e8) from [<c00774cb>] (generic_writepages+0x21/0x36)
[ 4432.589379] [<c00774cb>] (generic_writepages+0x21/0x36) from [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42)
[ 4432.589410] [<c0072fc3>] (__filemap_fdatawrite_range+0x3b/0x42) from [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a)
[ 4432.589441] [<c0073049>] (filemap_write_and_wait_range+0x21/0x4a) from [<c00f9be1>] (ubifs_fsync+0x29/0x6c)
[ 4432.589471] [<c00f9be1>] (ubifs_fsync+0x29/0x6c) from [<c00abb23>] (vfs_fsync_range+0x1b/0x24)
[ 4432.589501] [<c00abb23>] (vfs_fsync_range+0x1b/0x24) from [<c00abb95>] (generic_write_sync+0x4d/0x54)
[ 4432.589530] [<c00abb95>] (generic_write_sync+0x4d/0x54) from [<c00734d5>] (generic_file_aio_write+0x71/0x8a)
[ 4432.589561] [<c00734d5>] (generic_file_aio_write+0x71/0x8a) from [<c00f91ab>] (ubifs_aio_write+0xff/0x10c)
[ 4432.589593] [<c00f91ab>] (ubifs_aio_write+0xff/0x10c) from [<c0093ef5>] (do_sync_write+0x61/0x8c)
[ 4432.589627] [<c0093ef5>] (do_sync_write+0x61/0x8c) from [<c0094397>] (vfs_write+0x5f/0x100)
[ 4432.589659] [<c0094397>] (vfs_write+0x5f/0x100) from [<c00945a3>] (sys_write+0x27/0x44)
[ 4432.589690] [<c00945a3>] (sys_write+0x27/0x44) from [<c000c681>] (ret_fast_syscall+0x1/0x46)
[ 4432.589769] UBIFS error (pid 1259): make_reservation: cannot reserve 67 bytes in jhead 2, error -30
[ 4432.589795] UBIFS error (pid 1259): do_writepage: cannot write page 0 of inode 82, error -30
[ 4432.599975] UBIFS error (pid 15134): do_commit: commit failed, error -30
[ 4432.600013] UBIFS error (pid 15134): do_writepage: cannot write page 405 of inode 81, error -30
[ 4433.066881] UBIFS error (pid 323): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 4433.066917] UBIFS error (pid 323): do_writepage: cannot write page 482 of inode 71, error -30
[ 4438.066923] UBIFS error (pid 323): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 4438.067004] UBIFS error (pid 323): ubifs_write_inode: can't write inode 73, error -30
[ 4447.442854] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4447.442891] UBIFS error (pid 1193): do_writepage: cannot write page 505 of inode 81, error -30
[ 4452.442956] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4452.442994] UBIFS error (pid 1193): do_writepage: cannot write page 506 of inode 81, error -30
[ 4457.443149] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4457.443186] UBIFS error (pid 1193): do_writepage: cannot write page 507 of inode 81, error -30
[ 4462.443211] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4462.443248] UBIFS error (pid 1193): do_writepage: cannot write page 508 of inode 81, error -30
[ 4463.067691] UBIFS error (pid 323): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 4463.067727] UBIFS error (pid 323): do_writepage: cannot write page 483 of inode 71, error -30
[ 4467.443433] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4467.443470] UBIFS error (pid 1193): do_writepage: cannot write page 509 of inode 81, error -30
[ 4472.443597] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4472.443635] UBIFS error (pid 1193): do_writepage: cannot write page 510 of inode 81, error -30
[ 4473.067876] UBIFS error (pid 323): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 4473.067911] UBIFS error (pid 323): ubifs_write_inode: can't write inode 73, error -30
[ 4477.443674] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4477.443712] UBIFS error (pid 1193): do_writepage: cannot write page 511 of inode 81, error -30
[ 4482.443788] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4482.443826] UBIFS error (pid 1193): do_writepage: cannot write page 406 of inode 81, error -30
[ 4487.443921] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4487.443958] UBIFS error (pid 1193): do_writepage: cannot write page 407 of inode 81, error -30
[ 4492.443981] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4492.444018] UBIFS error (pid 1193): do_writepage: cannot write page 408 of inode 81, error -30
[ 4493.068604] UBIFS error (pid 323): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 4493.068640] UBIFS error (pid 323): do_writepage: cannot write page 484 of inode 71, error -30
[ 4497.444212] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4497.444249] UBIFS error (pid 1193): do_writepage: cannot write page 409 of inode 81, error -30
[ 4502.444272] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4502.444308] UBIFS error (pid 1193): do_writepage: cannot write page 410 of inode 81, error -30
[ 4507.444432] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4507.444470] UBIFS error (pid 1193): do_writepage: cannot write page 411 of inode 81, error -30
[ 4508.068867] UBIFS error (pid 323): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 4508.068903] UBIFS error (pid 323): ubifs_write_inode: can't write inode 73, error -30
[ 4512.444605] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4512.444642] UBIFS error (pid 1193): do_writepage: cannot write page 412 of inode 81, error -30
[ 4517.444735] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4517.444774] UBIFS error (pid 1193): do_writepage: cannot write page 413 of inode 81, error -30
[ 4522.444927] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4522.444965] UBIFS error (pid 1193): do_writepage: cannot write page 414 of inode 81, error -30
[ 4523.069388] UBIFS error (pid 323): make_reservation: cannot reserve 76 bytes in jhead 2, error -30
[ 4523.069423] UBIFS error (pid 323): do_writepage: cannot write page 485 of inode 71, error -30
[ 4527.445013] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4527.445052] UBIFS error (pid 1193): do_writepage: cannot write page 415 of inode 81, error -30
[ 4532.445136] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4532.445173] UBIFS error (pid 1193): do_writepage: cannot write page 416 of inode 81, error -30
[ 4537.445320] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4537.445358] UBIFS error (pid 1193): do_writepage: cannot write page 417 of inode 81, error -30
[ 4542.445417] UBIFS error (pid 1193): make_reservation: cannot reserve 4144 bytes in jhead 2, error -30
[ 4542.445456] UBIFS error (pid 1193): do_writepage: cannot write page 418 of inode 81, error -30
[ 4543.069848] UBIFS error (pid 323): make_reservation: cannot reserve 160 bytes in jhead 1, error -30
[ 4543.069883] UBIFS error (pid 323): ubifs_write_inode: can't write inode 73, error -30
[ 4547.445623] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4547.445659] UBIFS error (pid 1193): do_writepage: cannot write page 
419 of inode 81, error -30
[ 4552.445692] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4552.445730] UBIFS error (pid 1193): do_writepage: cannot write page 
420 of inode 81, error -30
[ 4553.070260] UBIFS error (pid 323): make_reservation: cannot reserve 
76 bytes in jhead 2, error -30
[ 4553.070296] UBIFS error (pid 323): do_writepage: cannot write page 
486 of inode 71, error -30
[ 4557.445890] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4557.445928] UBIFS error (pid 1193): do_writepage: cannot write page 
421 of inode 81, error -30
[ 4562.446059] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4562.446095] UBIFS error (pid 1193): do_writepage: cannot write page 
422 of inode 81, error -30
[ 4567.446190] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4567.446229] UBIFS error (pid 1193): do_writepage: cannot write page 
423 of inode 81, error -30
[ 4572.446339] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4572.446378] UBIFS error (pid 1193): do_writepage: cannot write page 
424 of inode 81, error -30
[ 4577.446513] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4577.446552] UBIFS error (pid 1193): do_writepage: cannot write page 
425 of inode 81, error -30
[ 4578.070870] UBIFS error (pid 323): make_reservation: cannot reserve 
160 bytes in jhead 1, error -30
[ 4578.070942] UBIFS error (pid 323): ubifs_write_inode: can't write 
inode 73, error -30
[ 4582.446581] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4582.446619] UBIFS error (pid 1193): do_writepage: cannot write page 
426 of inode 81, error -30
[ 4583.071122] UBIFS error (pid 323): make_reservation: cannot reserve 
76 bytes in jhead 2, error -30
[ 4583.071157] UBIFS error (pid 323): do_writepage: cannot write page 
487 of inode 71, error -30
[ 4587.446762] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4587.446801] UBIFS error (pid 1193): do_writepage: cannot write page 
427 of inode 81, error -30
[ 4592.446864] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4592.446901] UBIFS error (pid 1193): do_writepage: cannot write page 
428 of inode 81, error -30
[ 4597.447057] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4597.447095] UBIFS error (pid 1193): do_writepage: cannot write page 
429 of inode 81, error -30
[ 4602.447200] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4602.447237] UBIFS error (pid 1193): do_writepage: cannot write page 
430 of inode 81, error -30
[ 4607.447375] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4607.447412] UBIFS error (pid 1193): do_writepage: cannot write page 
431 of inode 81, error -30
[ 4612.447511] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4612.447548] UBIFS error (pid 1193): do_writepage: cannot write page 
432 of inode 81, error -30
[ 4613.071898] UBIFS error (pid 323): make_reservation: cannot reserve 
160 bytes in jhead 1, error -30
[ 4613.071933] UBIFS error (pid 323): ubifs_write_inode: can't write 
inode 73, error -30
[ 4613.072083] UBIFS error (pid 323): make_reservation: cannot reserve 
76 bytes in jhead 2, error -30
[ 4613.072108] UBIFS error (pid 323): do_writepage: cannot write page 
488 of inode 71, error -30
[ 4613.072260] UBIFS error (pid 323): make_reservation: cannot reserve 
76 bytes in jhead 2, error -30
[ 4613.072283] UBIFS error (pid 323): do_writepage: cannot write page 
489 of inode 71, error -30
[ 4617.447582] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4617.447617] UBIFS error (pid 1193): do_writepage: cannot write page 
433 of inode 81, error -30
[ 4618.072122] UBIFS error (pid 323): make_reservation: cannot reserve 
76 bytes in jhead 2, error -30
[ 4618.072157] UBIFS error (pid 323): do_writepage: cannot write page 
490 of inode 71, error -30
[ 4622.447741] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4622.447779] UBIFS error (pid 1193): do_writepage: cannot write page 
434 of inode 81, error -30
[ 4627.447963] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4627.448000] UBIFS error (pid 1193): do_writepage: cannot write page 
435 of inode 81, error -30
[ 4632.448098] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4632.448135] UBIFS error (pid 1193): do_writepage: cannot write page 
436 of inode 81, error -30
[ 4637.448172] UBIFS error (pid 1193): make_reservation: cannot reserve 
4144 bytes in jhead 2, error -30
[ 4637.448210] UBIFS error (pid 1193): do_writepage: cannot write page 
437 of inode 81, error -30
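For reference, the recurring "error -30" in the log above is -EROFS ("Read-only
file system"): once UBIFS hits an internal error it switches the volume
read-only, so every subsequent reservation and page write fails the same way.
The errno mapping can be confirmed with a one-liner (assuming python3 is on
the box):

```shell
# errno 30 on Linux is EROFS -- the value UBIFS returns (negated) once
# the volume has been switched to read-only mode after a fatal error.
python3 -c 'import errno, os; print(errno.errorcode[30], "->", os.strerror(30))'
```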

> Can you also please share your test script with me?
> So far none of my boards run into that issue. :(

Below are the three scripts run in the background:
Script1:
while [ true ]
do
        dd if=/dev/urandom of=/var/db/test bs=2M count=1 2> /dev/null
done

Script2:
#!/bin/sh
while [ true ]
do
        dd if=/dev/zero of=/var/log/test.log bs=2M count=1 2> /dev/null
done

Script3:
#!/bin/sh
while [ true ]
do
        echo "hskjfwehjfiojwiojo" > /var/db/test.db
done
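For anyone trying to reproduce this locally, the three infinite loops above can
be folded into one bounded script. This is only a sketch: TESTDIR and ITER are
stand-ins I introduced for the real UBIFS mount points (/var/db, /var/log) and
for the multi-day run length.

```shell
#!/bin/sh
# Bounded variant of the three stress loops above.
# TESTDIR stands in for the sync-mounted UBIFS volumes; ITER for the run length.
TESTDIR="${TESTDIR:-$(mktemp -d)}"
ITER="${ITER:-5}"

i=0
while [ "$i" -lt "$ITER" ]; do
        # Script1 and Script2 equivalents, run concurrently as in the test
        dd if=/dev/urandom of="$TESTDIR/test" bs=2M count=1 2> /dev/null &
        dd if=/dev/zero of="$TESTDIR/test.log" bs=2M count=1 2> /dev/null &
        # Script3 equivalent: small synchronous rewrite of the same file
        echo "hskjfwehjfiojwiojo" > "$TESTDIR/test.db"
        wait
        i=$((i + 1))
done
echo "wrote $ITER iterations into $TESTDIR"
```

Pointing TESTDIR at a sync-mounted UBIFS volume and raising ITER should give
roughly the same write pattern as the original three loops.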

Details regarding the filesystem:
# mount
rootfs on / type rootfs (rw)
ubi0:rootfs on / type ubifs (rw,relatime)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
none on /dev/pts type devpts (rw,relatime,mode=600)
tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
tmpfs on /var/run type tmpfs (rw,relatime,mode=777)
ubi0:logging on /var/log type ubifs (rw,sync,relatime)
ubi0:database on /var/db type ubifs (rw,sync,relatime)
tmpfs on /var/spool/cron type tmpfs (rw,relatime,mode=755)
tmpfs on /var/sftp type tmpfs (rw,relatime,mode=755)

> Thanks,
> //richard


Thread overview: 19+ messages
2014-06-26 20:28 UBIFS Panic Akshay Bhat
2014-06-27  2:36 ` hujianyang
2014-06-30 13:01   ` Akshay Bhat
2014-06-30 14:48     ` Richard Weinberger
2014-06-30 17:23       ` Akshay Bhat
2014-06-30 17:34         ` Richard Weinberger
2014-07-01  1:09         ` hujianyang
2014-07-01  7:48           ` Richard Weinberger
2014-07-01 14:18             ` Akshay Bhat
2014-07-01 14:32               ` Richard Weinberger
2014-07-01 14:46                 ` Akshay Bhat
2014-07-01 14:56                   ` Richard Weinberger
2014-07-10 21:38                     ` Akshay Bhat
2014-07-10 21:42                       ` Richard Weinberger
2014-07-11 20:45                       ` Richard Weinberger
2014-07-16 17:31                         ` Akshay Bhat
2014-07-16 21:00                           ` Richard Weinberger
2014-07-22 18:39                             ` Akshay Bhat
2014-07-01  0:58     ` hujianyang
