* Re: Kernel 4.14: SQUASHFS error: unable to read xattr id index table
       [not found] <CAOuPNLjgpkBh9dnfNTdDcfk5HiL=HjjiB9o_=fjrm+0vP7Re2Q@mail.gmail.com>
@ 2021-05-14 11:41   ` Pintu Agarwal
  0 siblings, 0 replies; 25+ messages in thread
From: Pintu Agarwal @ 2021-05-14 11:41 UTC (permalink / raw)
  To: phillip, linux-fsdevel, open list, sean, linux-mtd

Hi,

This is regarding a squashfs mount failure that I am getting on my
device at boot time.
I wanted to know whether someone else has come across this issue,
whether it has already been fixed, or whether this is a different
issue altogether.

Here are more details:
Kernel: 4.14.170; Qualcomm chipset (32-bit ARM)
Platform: busybox
Storage: 512 MB NAND
Filesystem: ubifs + squashfs
ubi0: 5 volumes (rootfs, usrfs, others)
Kernel command line: ro rootwait console=ttyMSM0,115200,n8
rootfstype=squashfs root=/dev/mtdblock34 ubi.mtd=30,0,30 ....

Background:
We are using the ubifs filesystem together with squashfs for the
rootfs (mounted read-only).
First we flashed the "usrfs" (data) volume (ubi0_1) and it worked fine
(the device booted successfully).

Next we flashed the "rootfs" volume (ubi0_0). The volume flashing is
successful, but when we reboot the system afterwards we get the errors
below.

Logs:
[....]
[    4.589340] vreg_conn_pa: dis▒[    4.602779] squashfs: SQUASHFS
error: unable to read xattr id index table
[...]
[    4.964083] No filesystem could mount root, tried:
[    4.964087]  squashfs
[    4.966255]
[    4.973443] Kernel panic - not syncing: VFS: Unable to mount root
fs on unknown-block(31,34)

-----------
[    4.246861] ubi0: attaching mtd30
[    4.453241] ubi0: scanning is finished
[    4.460655] ubi0: attached mtd30 (name "system", size 216 MiB)
[    4.460704] ubi0: PEB size: 262144 bytes (256 KiB), LEB size: 253952 bytes
[    4.465562] ubi0: min./max. I/O unit sizes: 4096/4096, sub-page size 4096
[    4.472483] ubi0: VID header offset: 4096 (aligned 4096), data offset: 8192
[    4.479295] ubi0: good PEBs: 864, bad PEBs: 0, corrupted PEBs: 0
[    4.486067] ubi0: user volume: 5, internal volumes: 1, max. volumes
count: 128
[    4.492311] ubi0: max/mean erase counter: 4/0, WL threshold: 4096,
image sequence number: 1
[    4.499333] ubi0: available PEBs: 0, total reserved PEBs: 864, PEBs
reserved for bad PEB handling: 60

So we just wanted to know whether this issue is related to squashfs or
whether there is some issue with our volume flashing.
Note: we are using the fastboot mechanism for UBI volume flashing.

Observation:
Recently I have seen some squashfs changes related to similar (xattr)
issues, so I wanted to understand whether these changes are relevant
to our issue or not.

Age           Commit message                                         Author
2021-03-30    squashfs: fix xattr id and id lookup sanity checks     Phillip Lougher
2021-03-30    squashfs: fix inode lookup sanity checks               Sean Nyekjaer
2021-02-23    squashfs: add more sanity checks in xattr id lookup    Phillip Lougher
2021-02-23    squashfs: add more sanity checks in inode lookup       Phillip Lougher
2021-02-23    squashfs: add more sanity checks in id lookup          Phillip Lougher

Please let us know your opinion about this issue; it will help us
decide whether the issue is related to squashfs or not.


Thanks,
Pintu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [RESEND]: Kernel 4.14: SQUASHFS error: unable to read xattr id index table
  2021-05-14 11:41   ` Pintu Agarwal
@ 2021-05-14 12:37   ` Pintu Agarwal
  2021-05-14 21:50       ` Phillip Lougher
  -1 siblings, 1 reply; 25+ messages in thread
From: Pintu Agarwal @ 2021-05-14 12:37 UTC (permalink / raw)
  To: phillip, open list, sean, linux-mtd, linux-fsdevel

Hi,

This is regarding a squashfs mount failure that I am getting on my
device at boot time.
I wanted to know whether someone else has come across this issue,
whether it has already been fixed, or whether this is a different
issue altogether.

Here are more details:
Kernel: 4.14.170; Qualcomm chipset (32-bit ARM)
Platform: busybox
Storage: 512 MB NAND
Filesystem: ubifs + squashfs
ubi0: 5 volumes (rootfs, usrfs, others)
Kernel command line: ro rootwait console=ttyMSM0,115200,n8
rootfstype=squashfs root=/dev/mtdblock34 ubi.mtd=30,0,30 ....

Background:
We are using the ubifs filesystem together with squashfs for the
rootfs (mounted read-only).
First we flashed the "usrfs" (data) volume (ubi0_1) and it worked fine
(the device booted successfully).

Next we flashed the "rootfs" volume (ubi0_0). The volume flashing is
successful, but when we reboot the system afterwards we get the errors
below.

Logs:
[....]
[    4.589340] vreg_conn_pa: dis▒[    4.602779] squashfs: SQUASHFS
error: unable to read xattr id index table
[...]
[    4.964083] No filesystem could mount root, tried:
[    4.964087]  squashfs
[    4.966255]
[    4.973443] Kernel panic - not syncing: VFS: Unable to mount root
fs on unknown-block(31,34)

-----------
[    4.246861] ubi0: attaching mtd30
[    4.453241] ubi0: scanning is finished
[    4.460655] ubi0: attached mtd30 (name "system", size 216 MiB)
[    4.460704] ubi0: PEB size: 262144 bytes (256 KiB), LEB size: 253952 bytes
[    4.465562] ubi0: min./max. I/O unit sizes: 4096/4096, sub-page size 4096
[    4.472483] ubi0: VID header offset: 4096 (aligned 4096), data offset: 8192
[    4.479295] ubi0: good PEBs: 864, bad PEBs: 0, corrupted PEBs: 0
[    4.486067] ubi0: user volume: 5, internal volumes: 1, max. volumes
count: 128
[    4.492311] ubi0: max/mean erase counter: 4/0, WL threshold: 4096,
image sequence number: 1
[    4.499333] ubi0: available PEBs: 0, total reserved PEBs: 864, PEBs
reserved for bad PEB handling: 60

So we just wanted to know whether this issue is related to squashfs or
whether there is some issue with our volume flashing.
Note: we are using the fastboot mechanism for UBI volume flashing.

Observation:
Recently I have seen some squashfs changes related to similar (xattr)
issues, so I wanted to understand whether these changes are relevant
to our issue or not.

Age           Commit message                                         Author
2021-03-30    squashfs: fix xattr id and id lookup sanity checks     Phillip Lougher
2021-03-30    squashfs: fix inode lookup sanity checks               Sean Nyekjaer
2021-02-23    squashfs: add more sanity checks in xattr id lookup    Phillip Lougher
2021-02-23    squashfs: add more sanity checks in inode lookup       Phillip Lougher
2021-02-23    squashfs: add more sanity checks in id lookup          Phillip Lougher

Please let us know your opinion about this issue; it will help us
decide whether the issue is related to squashfs or not.


Thanks,
Pintu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: SQUASHFS error: unable to read xattr id index table
  2021-05-14 12:37   ` [RESEND]: " Pintu Agarwal
@ 2021-05-14 21:50       ` Phillip Lougher
  0 siblings, 0 replies; 25+ messages in thread
From: Phillip Lougher @ 2021-05-14 21:50 UTC (permalink / raw)
  To: Pintu Agarwal, open list, sean, linux-mtd, linux-fsdevel


> On 14/05/2021 13:37 Pintu Agarwal <pintu.ping@gmail.com> wrote:
> 
>  
> Hi,
> 
> This is regarding the squashfs mount failure that I am getting on my
> device during boot time.
> I just wanted to know if someone else has come across this issue, or
> this issue is already fixed, or this is altogether a different issue?
> 
> Here are more details:
> Kernel: 4.14.170 ; Qualcomm chipset (arm32 bit)
> Platform: busybox
> Storage: NAND 512MB
> Filesystem: ubifs + squashfs
> ubi0 : with 5 volumes (rootfs, usrfs, others)
> Kernel command line: ro rootwait console=ttyMSM0,115200,n8
> rootfstype=squashfs root=/dev/mtdblock34 ubi.mtd=30,0,30 ....
> 
> Background:
> We are using ubifs filesystem with squashfs for rootfs (as ready only).
> First we tried to flash "usrfs" (data) volume (ubi0_1) and it worked
> fine (with device booting successfully).
> 
> Next we are trying to flash "rootfs" volume (ubi0_0) now. The volume
> flashing is successful but after that when we reboot the system we are
> getting below errors.
> 
> Logs:
> [....]
> [    4.589340] vreg_conn_pa: dis▒[    4.602779] squashfs: SQUASHFS
> error: unable to read xattr id index table
> [...]
> [    4.964083] No filesystem could mount root, tried:
> [    4.964087]  squashfs
> [    4.966255]
> [    4.973443] Kernel panic - not syncing: VFS: Unable to mount root
> fs on unknown-block(31,34)
> 
> -----------
> [    4.246861] ubi0: attaching mtd30
> [    4.453241] ubi0: scanning is finished
> [    4.460655] ubi0: attached mtd30 (name "system", size 216 MiB)
> [    4.460704] ubi0: PEB size: 262144 bytes (256 KiB), LEB size: 253952 bytes
> [    4.465562] ubi0: min./max. I/O unit sizes: 4096/4096, sub-page size 4096
> [    4.472483] ubi0: VID header offset: 4096 (aligned 4096), data offset: 8192
> [    4.479295] ubi0: good PEBs: 864, bad PEBs: 0, corrupted PEBs: 0
> [    4.486067] ubi0: user volume: 5, internal volumes: 1, max. volumes
> count: 128
> [    4.492311] ubi0: max/mean erase counter: 4/0, WL threshold: 4096,
> image sequence number: 1
> [    4.499333] ubi0: available PEBs: 0, total reserved PEBs: 864, PEBs
> reserved for bad PEB handling: 60
> 
> So, we just wanted to know if this issue is related to squashfs or if
> there is some issue with our volume flashing.
> Note: We are using fastboot mechanism to support UBI volume flashing.
> 
> Observation:
> Recently I have seen some squashfs changes related to similar issues
> (xattr) so I wanted to understand if these changes are relevant to our
> issue or not ?
> 
> Age           Commit message(Expand)                                 Author
> 2021-03-30    squashfs: fix xattr id and id lookup sanity checks
> Phillip Lougher
> 2021-03-30    squashfs: fix inode lookup sanity checks
> Sean Nyekjaer
> 2021-02-23    squashfs: add more sanity checks in xattr id lookup
> Phillip Lougher
> 2021-02-23    squashfs: add more sanity checks in inode lookup
> Phillip Lougher
> 2021-02-23    squashfs: add more sanity checks in id lookup
> Phillip Lougher
> 
> Please let us know your opinion about this issue...
> It will help us to decide whether the issue is related to squashfs  or not.
> 
> 
> Thanks,
> Pintu

Your kernel (4.14.170) was released on 5 Feb 2020, and so it won't
contain any of the above commits. The xattr id code in 4.14.170 was
last updated in May 2011, and so it is much more likely the problem is
elsewhere.
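
For example, one way to see which commits touched that code in a given
release is to ask a kernel git tree directly (a sketch, assuming a
clone with the stable tags fetched):

% git log --oneline v4.14.170 -- fs/squashfs/xattr_id.c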

The xattr id index table is written to the end of the Squashfs filesystem,
and it is the first table read on mounting.

As such this is the error you will receive if the Squashfs filesystem
has been truncated in some way. This is by far the most likely reason
for the error.

So you need to check whether the Squashfs filesystem image is truncated
or corrupted in some way. This could obviously have happened before
writing to the flash, during writing, or afterwards. It could also be
being truncated at read time. The cause could be faulty hardware or
software at any point in the I/O path, at any point in the process.

So you need to double-check everything at each of the above stages.

1. Check the Squashfs filesystem for correctness before writing it to
the flash. You can run Unsquashfs on the image and see if it reports
any errors.

2. You need to check the filesystem for integrity after writing it to
the flash. Compute a checksum, and compare it with the original
checksum.
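
For instance, both checks on the host could look roughly like this (a
minimal sketch; the image name is a placeholder):

% unsquashfs -s your-image.squashfs   # print the superblock information
% unsquashfs your-image.squashfs      # full extraction; reports errors if the image is damaged
% md5sum your-image.squashfs          # record the original checksum for step 2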

In that way you can pinpoint the cause of the truncation/corruption.
But, this is unlikely to be a Squashfs issue, and more likely
truncation/corruption caused by something else.

Phillip

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: SQUASHFS error: unable to read xattr id index table
  2021-05-14 21:50       ` Phillip Lougher
@ 2021-05-17 11:34         ` Pintu Agarwal
  -1 siblings, 0 replies; 25+ messages in thread
From: Pintu Agarwal @ 2021-05-17 11:34 UTC (permalink / raw)
  To: phillip; +Cc: open list, sean, linux-mtd, linux-fsdevel

On Sat, 15 May 2021 at 03:21, Phillip Lougher <phillip@squashfs.org.uk> wrote:
>
> Your kernel (4.14.170) was released on 5 Feb 2020, and so it won't
> contain any of the above commits. The xattr -id code in 4.14.170,
> was last updated in May 2011, and so it is much more likely the
> problem is elsewhere.
>
Okay, then this seems to be a UBI volume flashing issue. I will also
try a non-squashfs image (plain ubifs); see the results at the end.

> The xattr id index table is written to the end of the Squashfs filesystem,
> and it is the first table read on mounting.
>
Okay, this gives me a clue that there is some corruption while writing
the leftover blocks at the end.

> 1. Check the Squashfs filesystem for correctness before writing it to
> the flash. You can run Unsquashfs on the image and see if it reports
> any errors.
>
Can you give me some pointers on how to use unsquashfs? I could not
find an unsquashfs command on my device.
Do we need to do it on the device or on my Ubuntu PC? Are there some
commands/utilities available on Ubuntu?

> 2. You need to check the filesystem for integrity after writing it to
> the flash. Compute a checksum, and compare it with the original
> checksum.
>
Can you also guide me with an example of how to do this?

BTW, I also tried flashing the "rootfs" volume with a "ubifs" image
(non-squashfs). Here are the results:
a) With the ubifs image the device also does not boot after flashing the volume.
b) The "rootfs" volume can be mounted, but it later gives some other
errors during read_node.

These are the boot up errors logs:
{{{
[ 4.600001] vreg_conn_pa: dis▒[ 4.712458] UBIFS (ubi0:0): UBIFS:
mounted UBI device 0, volume 0, name "rootfs", R/O mode
[ 4.712520] UBIFS (ubi0:0): LEB size: 253952 bytes (248 KiB),
min./max. I/O unit sizes: 4096 bytes/4096 bytes
[ 4.719823] UBIFS (ubi0:0): FS size: 113008640 bytes (107 MiB, 445
LEBs), journal size 9404416 bytes (8 MiB, 38 LEBs)
[ 4.729867] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
[ 4.740400] UBIFS (ubi0:0): media format: w4/r0 (latest is w5/r0),
UUID xxxxxxxxx-xxxxxxxxxx, small LPT model
[ 4.748587] VFS: Mounted root (ubifs filesystem) readonly on device 0:16.
[ 4.759033] devtmpfs: mounted
[ 4.766803] Freeing unused kernel memory: 2048K
[ 4.805035] UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type
(255 but expected 9)
[ 4.805097] UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at
LEB 336:250560, LEB mapping status 1
[ 4.812401] Not a node, first 24 bytes:
[ 4.812413] 00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ff ff ff ff ff ff ff ff ........................
}}}

It seems like there is some corruption in the first 24 bytes?


Thanks,
Pintu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: SQUASHFS error: unable to read xattr id index table
  2021-05-17 11:34         ` Pintu Agarwal
@ 2021-05-20  4:30           ` Phillip Lougher
  -1 siblings, 0 replies; 25+ messages in thread
From: Phillip Lougher @ 2021-05-20  4:30 UTC (permalink / raw)
  To: Pintu Agarwal; +Cc: open list, sean, linux-mtd, linux-fsdevel


> On 17/05/2021 12:34 Pintu Agarwal <pintu.ping@gmail.com> wrote:
> 
>  
> On Sat, 15 May 2021 at 03:21, Phillip Lougher <phillip@squashfs.org.uk> wrote:
> >
> > Your kernel (4.14.170) was released on 5 Feb 2020, and so it won't
> > contain any of the above commits. The xattr -id code in 4.14.170,
> > was last updated in May 2011, and so it is much more likely the
> > problem is elsewhere.
> >
> Okay this seems to be UBI volume flashing issue then. I will also try
> with non-squashfs image (just ubifs).
> See the result in the end.
> 
> > The xattr id index table is written to the end of the Squashfs filesystem,
> > and it is the first table read on mounting.
> >
> Okay this gives me a clue that there are some corruptions while
> writing the leftover blocks in the end.
> 
> > 1. Check the Squashfs filesystem for correctness before writing it to
> > the flash. You can run Unsquashfs on the image and see if it reports
> > any errors.
> >
> Can you give me some pointers on how to use unsquashfs ? I could not
> find any unsquashfs command on my device.
> Do we need to do it on the device or my Ubuntu PC ? Are there some
> commands/utility available on ubuntu ?
> 

You should run Unsquashfs on the host Ubuntu PC to verify
the integrity of the Squashfs image before transferring and
flashing.

Unsquashfs is in the squashfs-tools package on Ubuntu. To install it,
run as root:

% apt-get install squashfs-tools

Then run it on your Squashfs image

% unsquashfs <your image>

If the image is uncorrupted, it will unpack the image into
"squashfs-root".  If it is corrupted it will give error
messages.
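
If you only want a quick integrity pass without fully unpacking the
image, unsquashfs can also just read the metadata (a sketch; exact
option names can vary between squashfs-tools versions):

% unsquashfs -s <your image>     # display superblock information only
% unsquashfs -ll <your image>    # list the contents without extracting them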

 
> > 2. You need to check the filesystem for integrity after writing it to
> > the flash. Compute a checksum, and compare it with the original
> > checksum.
> >
> Can you also guide me with an example, how to do this as well ?

I have not used the MTD subsystem for more than 13 years, and so
this is best answered on linux-mtd. There may be some specific
UBI/MTD tools available now to do integrity checking.

But failing that, and presuming you have character device access
to the flashed partition, you can "dd" the image out of the flash
into a file, and then run a checksum program against it.

You appear to be running busybox, which supports both "dd" and the
"md5sum" checksum program.

So do this

% dd if=<your character device> of=img bs=1 count=<image size>

Where <image size> is the size of the Squashfs image reported
by "ls -l" or "stat".  You need to get the exact byte count
right, otherwise the resultant checksum won't be right.

Then run md5sum on the extracted "img" file.

% md5sum img

This will produce a checksum.

You can then compare that with the result of "md5sum" on your
original Squashfs image before flashing (produced on the host
or the target).

If the checksums differ then it is corrupted.
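
As a side note, bs=1 can be very slow on large images. If <image size>
happens to be an exact multiple of the 4096-byte page size, an
equivalent but much faster read is possible (a sketch; fall back to
bs=1 otherwise):

% dd if=<your character device> of=img bs=4096 count=$((<image size> / 4096))
% md5sum img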

> 
> BTW, I also tried "rootfs" volume flashing using "ubifs" image (non
> squashfs). Here are the results.
> a) With ubifs image also, the device is not booting after flashing the volume.
> b) But I can see that the "rootfs" volume could be mounted, but later
> gives some other errors during read_node.
> 
> These are the boot up errors logs:
> {{{
> [ 4.600001] vreg_conn_pa: dis▒[ 4.712458] UBIFS (ubi0:0): UBIFS:
> mounted UBI device 0, volume 0, name "rootfs", R/O mode
> [ 4.712520] UBIFS (ubi0:0): LEB size: 253952 bytes (248 KiB),
> min./max. I/O unit sizes: 4096 bytes/4096 bytes
> [ 4.719823] UBIFS (ubi0:0): FS size: 113008640 bytes (107 MiB, 445
> LEBs), journal size 9404416 bytes (8 MiB, 38 LEBs)
> [ 4.729867] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
> [ 4.740400] UBIFS (ubi0:0): media format: w4/r0 (latest is w5/r0),
> UUID xxxxxxxxx-xxxxxxxxxx, small LPT model
> [ 4.748587] VFS: Mounted root (ubifs filesystem) readonly on device 0:16.
> [ 4.759033] devtmpfs: mounted
> [ 4.766803] Freeing unused kernel memory: 2048K
> [ 4.805035] UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type
> (255 but expected 9)
> [ 4.805097] UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at
> LEB 336:250560, LEB mapping status 1
> [ 4.812401] Not a node, first 24 bytes:
> [ 4.812413] 00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> ff ff ff ff ff ff ff ff ........................
> }}}
> 
> Seems like there is some corruption in the first 24 bytes ??
> 

This implies there is corruption being introduced at the MTD level or
below.

Phillip

> 
> Thanks,
> Pintu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
  2021-05-20  4:30           ` Phillip Lougher
@ 2021-05-23 16:44             ` Pintu Agarwal
  -1 siblings, 0 replies; 25+ messages in thread
From: Pintu Agarwal @ 2021-05-23 16:44 UTC (permalink / raw)
  To: Phillip Lougher; +Cc: open list, sean, linux-mtd, linux-fsdevel

On Thu, 20 May 2021 at 10:00, Phillip Lougher <phillip@squashfs.org.uk> wrote:
>

> Then run it on your Squashfs image
>
> % unsquashfs <your image>
>
> If the image is uncorrupted, it will unpack the image into
> "squashfs-root".  If it is corrupted it will give error
> messages.
>
I have tried this, and with unsquashfs I am able to successfully
extract the image into the "squashfs-root" folder.

> I have not used the MTD subsystem for more than 13 years, and so
> this is best answered on linux-mtd.

Yes, I have already included the linux-mtd list here; maybe the MTD
folks can share their opinion as well.
That is also the reason I changed the subject.

> You appear to be running busybox, and this has both support for
> "dd" and the "md5sum" checksum program.
>
> So do this
>
> % dd if=<your character device> of=img bs=1 count=<image size>
>
> Where <image size> is the size of the Squashfs image reported
> by "ls -l" or "stat".  You need to get the exact byte count
> right, otherwise the resultant checksum won't be right.
>
> Then run md5sum on the extracted "img" file.
>
> % md5sum img
>
> This will produce a checksum.
>
> You can then compare that with the result of "md5sum" on your
> original Squashfs image before flashing (produced on the host
> or the target).
>
> If the checksums differ then it is corrupted.
>
I have also tried that, and the checksums match exactly.
$ md5sum system.squash
d301016207cc5782d1634259a5c597f9  ./system.squash

On the device:
/data/pintu # dd if=/dev/ubi0_0 of=squash_rootfs.img bs=1K count=48476
48476+0 records in
48476+0 records out
49639424 bytes (47.3MB) copied, 26.406276 seconds, 1.8MB/s
[12001.375255] dd (2392) used greatest stack depth: 4208 bytes left

/data/pintu # md5sum squash_rootfs.img
d301016207cc5782d1634259a5c597f9  squash_rootfs.img

So it seems there is no problem with either the original image (per
unsquashfs) or the checksum.

Then what else could be the suspect?
If you have any further inputs, please share your thoughts.

This is the kernel command line we are using:
[    0.000000] Kernel command line: ro rootwait
console=ttyMSM0,115200,n8 androidboot.hardware=qcom
msm_rtb.filter=0x237 androidboot.console=ttyMSM0
lpm_levels.sleep_disabled=1 firmware_class.path=/lib/firmware/updates
service_locator.enable=1 net.ifnames=0 rootfstype=squashfs
root=/dev/ubiblock0_0 ubi.mtd=30 ubi.block=0,0

These are a few more points to note:
a) With squashfs we get the error below:
[    4.603156] squashfs: SQUASHFS error: unable to read xattr id index table
[...]
[    4.980519] Kernel panic - not syncing: VFS: Unable to mount root
fs on unknown-block(254,0)

b) With ubifs (without squashfs) we get the error below:
[    4.712458] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0,
name "rootfs", R/O mode
[...]
UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type (255 but expected 9)
UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at LEB
336:250560, LEB mapping status 1
Not a node, first 24 bytes:
00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ff ff ff ff

c) While flashing the "usrfs" volume (ubi0_1) there is no issue and the
device boots successfully.

d) This issue happens only after flashing the rootfs volume (ubi0_0)
and rebooting the device.

e) We are using UEFI and the fastboot mechanism to flash the volumes.

f) Next I wanted to check the read-only UBI volume flashing mechanism
from within the kernel itself.
Is there a way to try flashing a read-only "rootfs" (squashfs type) UBI
volume from the Linux command prompt?
Or what are the other ways to verify UBI volume flashing in Linux?
(See the sketch after this list.)

g) I want to root-cause whether there is a problem in our UBI flashing
logic, whether something is missing on the Linux/kernel side (squashfs
or ubifs), or whether it is the way we configure the system.
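
For illustration, one way to verify a flashed squashfs volume from a
shell on a running device is to attach a read-only ubiblock device to
it and try mounting that (a sketch using mtd-utils; the volume node and
mount point are assumptions):

% ubiblock --create /dev/ubi0_0      # exposes the volume as /dev/ubiblock0_0 (read-only)
% mount -t squashfs -o ro /dev/ubiblock0_0 /mnt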

Thanks,
Pintu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
  2021-05-23 16:44             ` Pintu Agarwal
@ 2021-05-23 17:31               ` Sean Nyekjaer
  -1 siblings, 0 replies; 25+ messages in thread
From: Sean Nyekjaer @ 2021-05-23 17:31 UTC (permalink / raw)
  To: Pintu Agarwal, Phillip Lougher; +Cc: open list, linux-mtd, linux-fsdevel

On 23/05/2021 18.44, Pintu Agarwal wrote:
> On Thu, 20 May 2021 at 10:00, Phillip Lougher <phillip@squashfs.org.uk> wrote:
>>
> 
>> Then run it on your Squashfs image
>>
>> % unsquashfs <your image>
>>
>> If the image is uncorrupted, it will unpack the image into
>> "squashfs-root".  If it is corrupted it will give error
>> messages.
>>
> I have tried this and it seems with unsquashfs I am able to
> successfully extract it to "squashfs-root" folder.
> 
>> I have not used the MTD subsystem for more than 13 years, and so
>> this is best answered on linux-mtd.
> 
> Yes, I have already included the linux-mtd list here.
> Maybe MTD folks can share their opinion as well....
> That is the reason I have changed the subject as well.
> 
>> You appear to be running busybox, and this has both support for
>> "dd" and the "md5sum" checksum program.
>>
>> So do this
>>
>> % dd if=<your character device> of=img bs=1 count=<image size>
>>
>> Where <image size> is the size of the Squashfs image reported
>> by "ls -l" or "stat".  You need to get the exact byte count
>> right, otherwise the resultant checksum won't be right.
>>
>> Then run md5sum on the extracted "img" file.
>>
>> % md5sum img
>>
>> This will produce a checksum.
>>
>> You can then compare that with the result of "md5sum" on your
>> original Squashfs image before flashing (produced on the host
>> or the target).
>>
>> If the checksums differ then it is corrupted.
>>
> I have also tried that and it seems the checksum exactly matches.
> $ md5sum system.squash
> d301016207cc5782d1634259a5c597f9  ./system.squash
> 
> On the device:
> /data/pintu # dd if=/dev/ubi0_0 of=squash_rootfs.img bs=1K count=48476
> 48476+0 records in
> 48476+0 records out
> 49639424 bytes (47.3MB) copied, 26.406276 seconds, 1.8MB/s
> [12001.375255] dd (2392) used greatest stack depth: 4208 bytes left
> 
> /data/pintu # md5sum squash_rootfs.img
> d301016207cc5782d1634259a5c597f9  squash_rootfs.img
> 
> So, it seems there is no problem with either the original image
> (unsquashfs) as well as the checksum.
> 
> Then what else could be the suspect/issue ?
> If you have any further inputs please share your thoughts.
> 
> This is the kernel command line we are using:
> [    0.000000] Kernel command line: ro rootwait
> console=ttyMSM0,115200,n8 androidboot.hardware=qcom
> msm_rtb.filter=0x237 androidboot.console=ttyMSM0
> lpm_levels.sleep_disabled=1 firmware_class.path=/lib/firmware/updates
> service_locator.enable=1 net.ifnames=0 rootfstype=squashfs
> root=/dev/ubiblock0_0 ubi.mtd=30 ubi.block=0,0
> 
> These are few more points to be noted:
> a) With squashfs we are getting below error:
> [    4.603156] squashfs: SQUASHFS error: unable to read xattr id index table
> [...]
> [    4.980519] Kernel panic - not syncing: VFS: Unable to mount root
> fs on unknown-block(254,0)
> 
> b) With ubifs (without squashfs) we are getting below error:
> [    4.712458] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0,
> name "rootfs", R/O mode
> [...]
> UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type (255 but expected 9)
> UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at LEB
> 336:250560, LEB mapping status 1
> Not a node, first 24 bytes:
> 00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> ff ff ff ff
> 
> c) While flashing "usrfs" volume (ubi0_1) there is no issue and device
> boots successfully.
> 
> d) This issue is happening only after flashing rootfs volume (ubi0_0)
> and rebooting the device.
> 
> e) We are using "uefi" and fastboot mechanism to flash the volumes.
Are you writing the squashfs into the ubi block device with uefi/fastboot?
> 
> f) Next I wanted to check the read-only UBI volume flashing mechanism
> within the Kernel itself.
> Is there a way to try a read-only "rootfs" (squashfs type) ubi volume
> flashing mechanism from the Linux command prompt ?
> Or, what are the other ways to verify UBI volume flashing in Linux ?
> 
> g) I wanted to root-cause, if there is any problem in our UBI flashing
> logic, or there's something missing on the Linux/Kernel side (squashfs
> or ubifs) or the way we configure the system.
> 
> Thanks,
> Pintu
> 

Have you ever had it working? Or is this a new project?
If you had it working before, I would start bisecting...
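
For illustration, a minimal kernel bisect could look like this (a
sketch, assuming a known-good tag or commit is available):

% git bisect start
% git bisect bad                      # the currently failing state
% git bisect good <last-known-good>
# then build, flash, boot-test, and mark each step good or bad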

/Sean

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
  2021-05-23 17:31               ` Sean Nyekjaer
@ 2021-05-24  6:12                 ` Pintu Agarwal
  -1 siblings, 0 replies; 25+ messages in thread
From: Pintu Agarwal @ 2021-05-24  6:12 UTC (permalink / raw)
  To: Sean Nyekjaer; +Cc: Phillip Lougher, open list, linux-mtd, linux-fsdevel

On Sun, 23 May 2021 at 23:01, Sean Nyekjaer <sean@geanix.com> wrote:
>

> > I have also tried that and it seems the checksum exactly matches.
> > $ md5sum system.squash
> > d301016207cc5782d1634259a5c597f9  ./system.squash
> >
> > On the device:
> > /data/pintu # dd if=/dev/ubi0_0 of=squash_rootfs.img bs=1K count=48476
> > 48476+0 records in
> > 48476+0 records out
> > 49639424 bytes (47.3MB) copied, 26.406276 seconds, 1.8MB/s
> > [12001.375255] dd (2392) used greatest stack depth: 4208 bytes left
> >
> > /data/pintu # md5sum squash_rootfs.img
> > d301016207cc5782d1634259a5c597f9  squash_rootfs.img
> >
> > So, it seems there is no problem with either the original image
> > (unsquashfs) as well as the checksum.
> >
> > Then what else could be the suspect/issue ?
> > If you have any further inputs please share your thoughts.
> >
> > This is the kernel command line we are using:
> > [    0.000000] Kernel command line: ro rootwait
> > console=ttyMSM0,115200,n8 androidboot.hardware=qcom
> > msm_rtb.filter=0x237 androidboot.console=ttyMSM0
> > lpm_levels.sleep_disabled=1 firmware_class.path=/lib/firmware/updates
> > service_locator.enable=1 net.ifnames=0 rootfstype=squashfs
> > root=/dev/ubiblock0_0 ubi.mtd=30 ubi.block=0,0
> >
> > These are few more points to be noted:
> > a) With squashfs we are getting below error:
> > [    4.603156] squashfs: SQUASHFS error: unable to read xattr id index table
> > [...]
> > [    4.980519] Kernel panic - not syncing: VFS: Unable to mount root
> > fs on unknown-block(254,0)
> >
> > b) With ubifs (without squashfs) we are getting below error:
> > [    4.712458] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0,
> > name "rootfs", R/O mode
> > [...]
> > UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type (255 but expected 9)
> > UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at LEB
> > 336:250560, LEB mapping status 1
> > Not a node, first 24 bytes:
> > 00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> > ff ff ff ff
> >
> > c) While flashing "usrfs" volume (ubi0_1) there is no issue and device
> > boots successfully.
> >
> > d) This issue is happening only after flashing rootfs volume (ubi0_0)
> > and rebooting the device.
> >
> > e) We are using "uefi" and fastboot mechanism to flash the volumes.
> Are you writing the squashfs into the ubi block device with uefi/fastboot?
> >
> > f) Next I wanted to check the read-only UBI volume flashing mechanism
> > within the Kernel itself.
> > Is there a way to try a read-only "rootfs" (squashfs type) ubi volume
> > flashing mechanism from the Linux command prompt ?
> > Or, what are the other ways to verify UBI volume flashing in Linux ?
> >
> > g) I wanted to root-cause, if there is any problem in our UBI flashing
> > logic, or there's something missing on the Linux/Kernel side (squashfs
> > or ubifs) or the way we configure the system.

>
> Have you had it to work? Or is this a new project?
> If you had it to work, i would start bisecting...
>

No, this is still experimental.
Currently we are only able to write to ubi volumes but after that
device is not booting (with rootfs volume update).
However, with "userdata" it is working fine.

I have a few more questions to clarify.

a) Is there a way in kernel to do the ubi volume update while the
device is running ?
    I tried "ubiupdatevol" but it does not seem to work.
    I guess it is only to update the empty volume ?
    Or, maybe I don't know how to use it to update the live "rootfs" volume

b) How to verify the volume checksum as soon as we finish writing the
content, since the device is not booting ?
     Is there a way to verify the rootfs checksum at the bootloader or
kernel level before mounting ?

c) We are configuring the ubi volumes in this way. Is it fine ?
[rootfs_volume]
mode=ubi
image=.<path>/system.squash
vol_id=0
vol_type=dynamic
vol_name=rootfs
vol_size=62980096  ==> 60.0625 MiB
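
(For what it is worth, that section uses the same keys as a ubinize config;
a minimal sketch of building a flashable UBI image from it, assuming the
256 KiB PEB / 4096-byte page geometry shown in the logs and an example
output name:)

# rootfs_vol.cfg holds the [rootfs_volume] section above, with image=
# pointing at system.squash
ubinize -o system_ubi.img -p 262144 -m 4096 rootfs_vol.cfg
# system_ubi.img can then be written to the raw mtd partition in one go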

Few more info:
----------------------
Our actual squashfs image size:
$ ls -l ./system.squash
-rw-r--r-- 1 pintu users 49639424 ../system.squash

after erase_volume: page-size: 4096, block-size-bytes: 262144,
vtbl-count: 2, used-blk: 38, leb-size: 253952, leb-blk-size: 62
Thus:
49639424 / 253952 = 195.46 blocks

This then rounds up to 196 blocks, which does not match exactly.
Is there any issue with this ?

If you have any suggestions to debug further please help us...


Thanks,
Pintu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
  2021-05-24  6:12                 ` Pintu Agarwal
@ 2021-05-24  7:07                   ` Phillip Lougher
  -1 siblings, 0 replies; 25+ messages in thread
From: Phillip Lougher @ 2021-05-24  7:07 UTC (permalink / raw)
  To: Pintu Agarwal, Sean Nyekjaer; +Cc: open list, linux-mtd, linux-fsdevel


> On 24/05/2021 07:12 Pintu Agarwal <pintu.ping@gmail.com> wrote:
> 
>  
> On Sun, 23 May 2021 at 23:01, Sean Nyekjaer <sean@geanix.com> wrote:
> >
> 
> > > I have also tried that and it seems the checksum exactly matches.
> > > $ md5sum system.squash
> > > d301016207cc5782d1634259a5c597f9  ./system.squash
> > >
> > > On the device:
> > > /data/pintu # dd if=/dev/ubi0_0 of=squash_rootfs.img bs=1K count=48476
> > > 48476+0 records in
> > > 48476+0 records out
> > > 49639424 bytes (47.3MB) copied, 26.406276 seconds, 1.8MB/s
> > > [12001.375255] dd (2392) used greatest stack depth: 4208 bytes left
> > >
> > > /data/pintu # md5sum squash_rootfs.img
> > > d301016207cc5782d1634259a5c597f9  squash_rootfs.img
> > >
> > > So, it seems there is no problem with either the original image
> > > (unsquashfs) as well as the checksum.
> > >
> > > Then what else could be the suspect/issue ?
> > > If you have any further inputs please share your thoughts.
> > >
> > > This is the kernel command line we are using:
> > > [    0.000000] Kernel command line: ro rootwait
> > > console=ttyMSM0,115200,n8 androidboot.hardware=qcom
> > > msm_rtb.filter=0x237 androidboot.console=ttyMSM0
> > > lpm_levels.sleep_disabled=1 firmware_class.path=/lib/firmware/updates
> > > service_locator.enable=1 net.ifnames=0 rootfstype=squashfs
> > > root=/dev/ubiblock0_0 ubi.mtd=30 ubi.block=0,0
> > >
> > > These are few more points to be noted:
> > > a) With squashfs we are getting below error:
> > > [    4.603156] squashfs: SQUASHFS error: unable to read xattr id index table
> > > [...]
> > > [    4.980519] Kernel panic - not syncing: VFS: Unable to mount root
> > > fs on unknown-block(254,0)
> > >
> > > b) With ubifs (without squashfs) we are getting below error:
> > > [    4.712458] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0,
> > > name "rootfs", R/O mode
> > > [...]
> > > UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type (255 but expected 9)
> > > UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at LEB
> > > 336:250560, LEB mapping status 1
> > > Not a node, first 24 bytes:
> > > 00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> > > ff ff ff ff
> > >
> > > c) While flashing "usrfs" volume (ubi0_1) there is no issue and device
> > > boots successfully.
> > >
> > > d) This issue is happening only after flashing rootfs volume (ubi0_0)
> > > and rebooting the device.
> > >
> > > e) We are using "uefi" and fastboot mechanism to flash the volumes.
> > Are you writing the squashfs into the ubi block device with uefi/fastboot?
> > >
> > > f) Next I wanted to check the read-only UBI volume flashing mechanism
> > > within the Kernel itself.
> > > Is there a way to try a read-only "rootfs" (squashfs type) ubi volume
> > > flashing mechanism from the Linux command prompt ?
> > > Or, what are the other ways to verify UBI volume flashing in Linux ?
> > >
> > > g) I wanted to root-cause, if there is any problem in our UBI flashing
> > > logic, or there's something missing on the Linux/Kernel side (squashfs
> > > or ubifs) or the way we configure the system.
> 
> >
> > Have you had it to work? Or is this a new project?
> > If you had it to work, i would start bisecting...
> >
> 
> No, this is still experimental.
> Currently we are only able to write to ubi volumes but after that
> device is not booting (with rootfs volume update).
> However, with "userdata" it is working fine.
> 
> I have few more questions to clarify.
> 
> a) Is there a way in kernel to do the ubi volume update while the
> device is running ?
>     I tried "ubiupdatevol" but it does not seem to work.
>     I guess it is only to update the empty volume ?
>     Or, maybe I don't know how to use it to update the live "rootfs" volume
> 
> b) How to verify the volume checksum as soon as we finish writing the
> content, since the device is not booting ?
>      Is there a way to verify the rootfs checksum at the bootloader or
> kernel level before mounting ?
> 
> c) We are configuring the ubi volumes in this way. Is it fine ?
> [rootfs_volume]
> mode=ubi
> image=.<path>/system.squash
> vol_id=0
> vol_type=dynamic
> vol_name=rootfs
> vol_size=62980096  ==> 60.0625 MiB
> 
> Few more info:
> ----------------------
> Our actual squashfs image size:
> $ ls -l ./system.squash
> rw-rr- 1 pintu users 49639424 ../system.squash
> 
> after earse_volume: page-size: 4096, block-size-bytes: 262144,
> vtbl-count: 2, used-blk: 38, leb-size: 253952, leb-blk-size: 62
> Thus:
> 49639424 / 253952 = 195.46 blocks
> 
> This then round-off to 196 blocks which does not match exactly.
> Is there any issue with this ?
> 
> If you have any suggestions to debug further please help us...
> 
> 
> Thanks,
> Pintu

Three perhaps obvious questions here:

1. As an experimental system, are you using a vanilla (unmodified)
   Linux kernel, or have you made modifications?  If so, how is it
   modified?

2. What is the difference between "rootfs" and "userdata"?
   Have you written exactly the same Squashfs image to "rootfs"
   and "userdata", and has it worked with "userdata" but not
   with "rootfs"?

   So far it is unclear whether "userdata" has worked because
   you've written different images/data to it.

   In other words tell us exactly what you're writing to "userdata"
   and what you're writing to "rootfs".  The difference or non-difference
   may be significant.

3. The rounding up to a whole 196 blocks should not be a problem.
   The problem is, obviously, if it is rounding down to 195 blocks,
   where the tail end of the Squashfs image will be lost.

   Remember this is exactly what the Squashfs error is saying, the image
   has been truncated.

   You could try adding a lot of padding to the end of the Squashfs image
   (Squashfs won't care), so it is more than the effective block size,
   and then writing that, to prevent any rounding down or truncation.
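
   For example, a minimal sketch of that padding step in shell (using the
   253952-byte LEB size and the image name quoted above):

   LEB=253952
   SIZE=$(stat -c%s system.squash)
   PAD=$(( (LEB - SIZE % LEB) % LEB ))
   # append zero padding so the image is a whole number of LEBs;
   # Squashfs ignores anything after the end of the filesystem
   dd if=/dev/zero bs=1 count="$PAD" >> system.squash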

Phillip

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
  2021-05-24  6:12                 ` Pintu Agarwal
@ 2021-05-25  5:37                   ` Sean Nyekjaer
  -1 siblings, 0 replies; 25+ messages in thread
From: Sean Nyekjaer @ 2021-05-25  5:37 UTC (permalink / raw)
  To: Pintu Agarwal; +Cc: Phillip Lougher, open list, linux-mtd, linux-fsdevel

On 24/05/2021 08.12, Pintu Agarwal wrote:
> On Sun, 23 May 2021 at 23:01, Sean Nyekjaer <sean@geanix.com> wrote:
>>
> 
>>> I have also tried that and it seems the checksum exactly matches.
>>> $ md5sum system.squash
>>> d301016207cc5782d1634259a5c597f9  ./system.squash
>>>
>>> On the device:
>>> /data/pintu # dd if=/dev/ubi0_0 of=squash_rootfs.img bs=1K count=48476
>>> 48476+0 records in
>>> 48476+0 records out
>>> 49639424 bytes (47.3MB) copied, 26.406276 seconds, 1.8MB/s
>>> [12001.375255] dd (2392) used greatest stack depth: 4208 bytes left
>>>
>>> /data/pintu # md5sum squash_rootfs.img
>>> d301016207cc5782d1634259a5c597f9  squash_rootfs.img
>>>
>>> So, it seems there is no problem with either the original image
>>> (unsquashfs) as well as the checksum.
>>>
>>> Then what else could be the suspect/issue ?
>>> If you have any further inputs please share your thoughts.
>>>
>>> This is the kernel command line we are using:
>>> [    0.000000] Kernel command line: ro rootwait
>>> console=ttyMSM0,115200,n8 androidboot.hardware=qcom
>>> msm_rtb.filter=0x237 androidboot.console=ttyMSM0
>>> lpm_levels.sleep_disabled=1 firmware_class.path=/lib/firmware/updates
>>> service_locator.enable=1 net.ifnames=0 rootfstype=squashfs
>>> root=/dev/ubiblock0_0 ubi.mtd=30 ubi.block=0,0
>>>
>>> These are few more points to be noted:
>>> a) With squashfs we are getting below error:
>>> [    4.603156] squashfs: SQUASHFS error: unable to read xattr id index table
>>> [...]
>>> [    4.980519] Kernel panic - not syncing: VFS: Unable to mount root
>>> fs on unknown-block(254,0)
>>>
>>> b) With ubifs (without squashfs) we are getting below error:
>>> [    4.712458] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0,
>>> name "rootfs", R/O mode
>>> [...]
>>> UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type (255 but expected 9)
>>> UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at LEB
>>> 336:250560, LEB mapping status 1
>>> Not a node, first 24 bytes:
>>> 00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>>> ff ff ff ff
>>>
>>> c) While flashing "usrfs" volume (ubi0_1) there is no issue and device
>>> boots successfully.
>>>
>>> d) This issue is happening only after flashing rootfs volume (ubi0_0)
>>> and rebooting the device.
>>>
>>> e) We are using "uefi" and fastboot mechanism to flash the volumes.
>> Are you writing the squashfs into the ubi block device with uefi/fastboot?
>>>
>>> f) Next I wanted to check the read-only UBI volume flashing mechanism
>>> within the Kernel itself.
>>> Is there a way to try a read-only "rootfs" (squashfs type) ubi volume
>>> flashing mechanism from the Linux command prompt ?
>>> Or, what are the other ways to verify UBI volume flashing in Linux ?
>>>
>>> g) I wanted to root-cause, if there is any problem in our UBI flashing
>>> logic, or there's something missing on the Linux/Kernel side (squashfs
>>> or ubifs) or the way we configure the system.
> 
>>
>> Have you had it to work? Or is this a new project?
>> If you had it to work, i would start bisecting...
>>
> 
> No, this is still experimental.
> Currently we are only able to write to ubi volumes but after that
> device is not booting (with rootfs volume update).
> However, with "userdata" it is working fine.
> 
> I have few more questions to clarify.
> 
> a) Is there a way in kernel to do the ubi volume update while the
> device is running ?
>     I tried "ubiupdatevol" but it does not seem to work.
>     I guess it is only to update the empty volume ?
>     Or, maybe I don't know how to use it to update the live "rootfs" volume

We are writing our rootfs with this command:
ubiupdatevol /dev/ubi0_4 rootfs.squashfs
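
A quick sanity check right after that (a sketch; device node and file
name as in the command above) is to read back exactly the image length
from the volume and compare checksums:

LEN=$(stat -c%s rootfs.squashfs)
head -c "$LEN" /dev/ubi0_4 | md5sum
md5sum rootfs.squashfs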

> 
> b) How to verify the volume checksum as soon as we finish writing the
> content, since the device is not booting ?
>      Is there a way to verify the rootfs checksum at the bootloader or
> kernel level before mounting ?
> 
> c) We are configuring the ubi volumes in this way. Is it fine ?
> [rootfs_volume]
> mode=ubi
> image=.<path>/system.squash
> vol_id=0
> vol_type=dynamic
> vol_name=rootfs
> vol_size=62980096  ==> 60.0625 MiB
> 
> Few more info:
> ----------------------
> Our actual squashfs image size:
> $ ls -l ./system.squash
> rw-rr- 1 pintu users 49639424 ../system.squash
> 
> after earse_volume: page-size: 4096, block-size-bytes: 262144,
> vtbl-count: 2, used-blk: 38, leb-size: 253952, leb-blk-size: 62
> Thus:
> 49639424 / 253952 = 195.46 blocks
> 
> This then round-off to 196 blocks which does not match exactly.
> Is there any issue with this ?
> 
> If you have any suggestions to debug further please help us...
> 
> 
> Thanks,
> Pintu
> 

Please understand the difference between UBI and UBIFS: UBI (Unsorted Block Images) and UBIFS (UBI File System).
I think you want to write the squashfs to the UBI volume (unsorted block image).

Can you try to boot with an initramfs, and then use ubiupdatevol to write the rootfs.squashfs?
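
Roughly, from the initramfs shell, something like this (a sketch assuming
mtd30 and the volume layout described earlier; adjust names and numbers
to your setup):

ubiattach /dev/ubi_ctrl -m 30           # attach the MTD partition as ubi0
ubiupdatevol /dev/ubi0_0 system.squash  # rewrite the rootfs volume atomically
ubiblock --create /dev/ubi0_0           # expose it as /dev/ubiblock0_0
mount -t squashfs -o ro /dev/ubiblock0_0 /mnt   # check it mounts before rebooting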

/Sean

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
  2021-05-24  7:07                   ` Phillip Lougher
@ 2021-05-25  9:22                     ` Pintu Agarwal
  -1 siblings, 0 replies; 25+ messages in thread
From: Pintu Agarwal @ 2021-05-25  9:22 UTC (permalink / raw)
  To: Phillip Lougher; +Cc: Sean Nyekjaer, open list, linux-mtd, linux-fsdevel

On Mon, 24 May 2021 at 12:37, Phillip Lougher <phillip@squashfs.org.uk> wrote:
>
> > No, this is still experimental.
> > Currently we are only able to write to ubi volumes but after that
> > device is not booting (with rootfs volume update).
> > However, with "userdata" it is working fine.
> >
> > I have few more questions to clarify.
> >
> > a) Is there a way in kernel to do the ubi volume update while the
> > device is running ?
> >     I tried "ubiupdatevol" but it does not seem to work.
> >     I guess it is only to update the empty volume ?
> >     Or, maybe I don't know how to use it to update the live "rootfs" volume
> >
> > b) How to verify the volume checksum as soon as we finish writing the
> > content, since the device is not booting ?
> >      Is there a way to verify the rootfs checksum at the bootloader or
> > kernel level before mounting ?
> >
> > c) We are configuring the ubi volumes in this way. Is it fine ?
> > [rootfs_volume]
> > mode=ubi
> > image=.<path>/system.squash
> > vol_id=0
> > vol_type=dynamic
> > vol_name=rootfs
> > vol_size=62980096  ==> 60.0625 MiB
> >
> > Few more info:
> > ----------------------
> > Our actual squashfs image size:
> > $ ls -l ./system.squash
> > rw-rr- 1 pintu users 49639424 ../system.squash
> >
> > after earse_volume: page-size: 4096, block-size-bytes: 262144,
> > vtbl-count: 2, used-blk: 38, leb-size: 253952, leb-blk-size: 62
> > Thus:
> > 49639424 / 253952 = 195.46 blocks
> >
> > This then round-off to 196 blocks which does not match exactly.
> > Is there any issue with this ?
> >
> > If you have any suggestions to debug further please help us...
> >
> >
> > Thanks,
> > Pintu
>
> Three perhaps obvious questions here:
>
> 1. As an experimental system, are you using a vanilla (unmodified)
>    Linux kernel, or have you made modifications.  If so, how is it
>    modified?
>
> 2. What is the difference between "rootfs" and "userdata"?
>    Have you written exactly the same Squashfs image to "rootfs"
>    and "userdata", and has it worked with "userdata" and not
>    worked with "rootfs".
>
>    So far it is unclear whether "userdata" has worked because
>    you've written different images/data to it.
>
>    In other words tell us exactly what you're writing to "userdata"
>    and what you're writing to "rootfs".  The difference or non-difference
>    may be significant.
>
> 3. The rounding up to a whole 196 blocks should not be a problem.
>    The problem is, obviously, if it is rounding down to 195 blocks,
>    where the tail end of the Squashfs image will be lost.
>
>    Remember this is exactly what the Squashfs error is saying, the image
>    has been truncated.
>
>    You could try adding a lot of padding to the end of the Squashfs image
>    (Squashfs won't care), so it is more than the effective block size,
>    and then writing that, to prevent any rounding down or truncation.
>

Just wanted to share the good news that the ubi volume flashing is
working now :)
First I created a small read-only volume (instead of rootfs), wrote to
it, and then compared the checksums.
Initially the checksums did not match, and when I compared the two
images I found around 8192 bytes of FF data at the end of each erase
block.
After the fix, the checksums match exactly.

/data/pintu # md5sum test-vol-orig.img
6a8a185ec65fcb212b6b5f72f0b0d206  test-vol-orig.img

/data/pintu # md5sum test-vol-after.img
6a8a185ec65fcb212b6b5f72f0b0d206  test-vol-after.img

Once this was working, I tried the rootfs volume, and this time the
device boots fine :)

The fix is related to the data-len and data-offset calculation in our
volume write code.
[...]
size += data_offset;                /* include the UBI data offset in the write size */
[...]
ubi_block_write(....)
buf_size -= (size - data_offset);   /* consume only the image payload just written */
offset += (size - data_offset);     /* advance the source offset by the same amount */
[...]
In the previous case, we were not adding and subtracting the data_offset.
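
In shell terms, the geometry the fix has to respect (numbers from the
attach log earlier in the thread) is roughly:

PEB=262144; DATA_OFF=8192          # UBI EC + VID headers occupy the first two pages
LEB=$((PEB - DATA_OFF))            # 253952 bytes of payload per erase block
IMG=49639424                       # size of system.squash
echo $(( (IMG + LEB - 1) / LEB ))  # 196 LEBs needed for the image
echo $(( IMG % LEB ))              # bytes actually used in the last LEB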

The Kernel command line we are using is this:
[    0.000000] Kernel command line: ro rootwait
console=ttyMSM0,115200,n8 [..skip..] rootfstype=squashfs
root=/dev/mtdblock34 ubi.mtd=30,0,30 [...skip..]

I hope these parameters are fine (no change here).

Thank you Phillip and Sean for your help.
Phillip, I think the checksum trick really helped me figure out
the root cause :)

Glad to work with you...

Thanks,
Pintu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
  2021-05-25  5:37                   ` Sean Nyekjaer
@ 2021-05-31  2:54                     ` Pintu Agarwal
  -1 siblings, 0 replies; 25+ messages in thread
From: Pintu Agarwal @ 2021-05-31  2:54 UTC (permalink / raw)
  To: Sean Nyekjaer; +Cc: Phillip Lougher, open list, linux-mtd, linux-fsdevel

On Tue, 25 May 2021 at 11:07, Sean Nyekjaer <sean@geanix.com> wrote:
> We are writing our rootfs with this command:
> ubiupdatevol /dev/ubi0_4 rootfs.squashfs
>
> Please understand the differences between the UBI and UBIFS. UBI(unsorted block image) and UBIFS(UBI File System).
> I think you want to write the squashfs to the UBI(unsorted block image).
>
> Can you try to boot with a initramfs, and then use ubiupdatevol to write the rootfs.squshfs.
>
Dear Sean, thank you so much for this suggestion.
I need just one final bit of help here.

For future experiments, I am trying to set up my qemu-arm
environment using ubifs/squashfs and the "nandsim" module.
I already have a working setup for qemu-arm with busybox/initramfs.
Now I want to prepare a ubifs/squashfs-based busybox rootfs which I
can use for booting the mainline kernel.
Is it possible ?
Are there already some pre-built ubifs images available which I can
use for my qemu-arm ?
Or, could you please guide me on how to do it ?

I think it is more convenient to do all experiments with "nandsim"
instead of corrupting the actual NAND hardware.
If you have any other suggestions please let me know.
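
The rough sequence I was imagining is something like this (module
parameters and sizes are illustrative only, not our actual NAND part):

# simulate a 256 MiB NAND with 2 KiB pages and 128 KiB erase blocks
modprobe nandsim first_id_byte=0x20 second_id_byte=0xaa third_id_byte=0x00 fourth_id_byte=0x15
modprobe ubi
ubiformat /dev/mtd0 -y                  # write fresh UBI headers to the simulated NAND
ubiattach /dev/ubi_ctrl -m 0
ubimkvol /dev/ubi0 -N rootfs -s 200MiB
ubiupdatevol /dev/ubi0_0 system.squash
ubiblock --create /dev/ubi0_0
mount -t squashfs -o ro /dev/ubiblock0_0 /mnt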


Thanks,
Pintu

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume
  2021-05-31  2:54                     ` Pintu Agarwal
@ 2021-06-01  6:52                       ` Sean Nyekjaer
  -1 siblings, 0 replies; 25+ messages in thread
From: Sean Nyekjaer @ 2021-06-01  6:52 UTC (permalink / raw)
  To: Pintu Agarwal; +Cc: Phillip Lougher, open list, linux-mtd, linux-fsdevel

On 31/05/2021 04.54, Pintu Agarwal wrote:
> On Tue, 25 May 2021 at 11:07, Sean Nyekjaer <sean@geanix.com> wrote:
>> We are writing our rootfs with this command:
>> ubiupdatevol /dev/ubi0_4 rootfs.squashfs
>>
>> Please understand the differences between the UBI and UBIFS. UBI(unsorted block image) and UBIFS(UBI File System).
>> I think you want to write the squashfs to the UBI(unsorted block image).
>>
>> Can you try to boot with a initramfs, and then use ubiupdatevol to write the rootfs.squshfs.
>>
> Dear Sean, thank you so much for this suggestion.
> Just a final help I need here.
> 
> For future experiment purposes, I am trying to setup my qemu-arm
> environment using ubifs/squashfs and "nandsim" module.
> I already have a working setup for qemu-arm with busybox/initramfs.
> Now I wanted to prepare ubifs/squashfs based busybox rootfs which I
> can use for booting the mainline kernel.
> Is it possible ?
> Are there already some pre-built ubifs images available which I can
> use for my qemu-arm ?
> Or, please guide me how to do it ?
> 
> I think it is more convenient to do all experiments with "nandsim"
> instead of corrupting the actual NAND hardware.
> If you have any other suggestions please let me know.
> 
> 
> Thanks,
> Pintu
> 
Hi,


I have not used qemu with nandsim :(
I would prefer testing on the actual hardware.

We have used Labgrid in the past for that.

/Sean

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2021-06-01  6:53 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAOuPNLjgpkBh9dnfNTdDcfk5HiL=HjjiB9o_=fjrm+0vP7Re2Q@mail.gmail.com>
2021-05-14 11:41 ` Kernel 4.14: SQUASHFS error: unable to read xattr id index table Pintu Agarwal
2021-05-14 12:37   ` [RESEND]: " Pintu Agarwal
2021-05-14 21:50     ` Phillip Lougher
2021-05-17 11:34       ` Pintu Agarwal
2021-05-20  4:30         ` Phillip Lougher
2021-05-23 16:44           ` [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after flashing rootfs volume Pintu Agarwal
2021-05-23 17:31             ` Sean Nyekjaer
2021-05-24  6:12               ` Pintu Agarwal
2021-05-24  7:07                 ` Phillip Lougher
2021-05-25  9:22                   ` Pintu Agarwal
2021-05-25  5:37                 ` Sean Nyekjaer
2021-05-31  2:54                   ` Pintu Agarwal
2021-06-01  6:52                     ` Sean Nyekjaer
