* Problem in EROFS: Not able to read the files after mount
@ 2020-01-20 6:55 Saumya Panda
2020-01-20 7:41 ` Gao Xiang via Linux-erofs
0 siblings, 1 reply; 9+ messages in thread
From: Saumya Panda @ 2020-01-20 6:55 UTC (permalink / raw)
To: linux-erofs
[-- Attachment #1: Type: text/plain, Size: 1640 bytes --]
Hi Experts,
I am testing EROFS on openSUSE (kernel: 5.4.7-1-default). I used
enwik8 and enwik9 as source files
(https://cs.fit.edu/~mmahoney/compression/textdata.html) for compression.
But after I mount the EROFS image, I am not able to read it ("Operation
not supported"). Even a simple "ls" does not work. However, if I create the
EROFS image without the compression flag, I can read the files after
mounting. There seems to be some problem with compression.
I would appreciate it if someone could help me figure out why this is happening.
Steps followed:
*Erofs image creation & mount: *
mkfs.erofs -zlz4hc enwik8.erofs.img enwik8/
mkfs.erofs 1.0
c_version: [ 1.0]
c_dbg_lvl: [ 0]
c_dry_run: [ 0]
mount enwik8.erofs.img /mnt/enwik8/ -t erofs -o loop
ls -l /mnt/enwik8/
ls: cannot access '/mnt/enwik8/enwik8': Operation not supported
total 0
-????????? ? ? ? ? ? enwik8
The problem is seen with both lz4 and lz4hc.
*Erofs image creation & mount without compression: *
mkfs.erofs enwik8_nocomp.erofs.img enwik8/
mkfs.erofs 1.0
c_version: [ 1.0]
c_dbg_lvl: [ 0]
c_dry_run: [ 0]
mount enwik8_nocomp.erofs.img /mnt/enwik8_nocomp/ -t erofs -o loop
ls -l /mnt/enwik8_nocomp/
total 97660
-rw-r--r-- 1 root root 100000000 Jan 20 01:27 enwik8
*Original enwik8 file:*
ls -l enwik8
total 97660
-rw-r--r-- 1 root root 100000000 Jan 20 01:14 enwik8
*Source code used for Lz4 and Erofs utils:*
https://github.com/hsiangkao/erofs-utils
https://github.com/lz4/lz4
--
Thanks,
Saumya Prakash Panda
[-- Attachment #2: Type: text/html, Size: 2475 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Problem in EROFS: Not able to read the files after mount
2020-01-20 6:55 Problem in EROFS: Not able to read the files after mount Saumya Panda
@ 2020-01-20 7:41 ` Gao Xiang via Linux-erofs
2020-01-22 3:57 ` Saumya Panda
0 siblings, 1 reply; 9+ messages in thread
From: Gao Xiang via Linux-erofs @ 2020-01-20 7:41 UTC (permalink / raw)
To: Saumya Panda; +Cc: linux-erofs
Hi Saumya,
On Mon, Jan 20, 2020 at 12:25:15PM +0530, Saumya Panda wrote:
> Hi Experts,
> I am testing EROFS in openSuse(Kernel: 5.4.7-1-default). I used the
> enwik8 and enwik9 as source file (
> https://cs.fit.edu/~mmahoney/compression/textdata.html) for compression.
> But after I mount the erofs image, I am not able to read it (it is saying
> operation not permitted). Simple "ls" command is not working. But if I
> create EROFS image without compression flag, then after mount I am able to
> read the files. Seems there is some problem during compression.
>
> I will appreciate if someone can help me out why this is happening.
Could you please check whether your openSUSE kernel has the following
configuration options enabled?
CONFIG_EROFS_FS_ZIP=y
CONFIG_EROFS_FS_CLUSTER_PAGE_LIMIT=1
By default, they should be enabled, but judging from the information
you provided, it seems they are not.
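One way to check this on a running system is to grep the distro's kernel config (a sketch; the helper name is ours, and the config path depends on the distro — openSUSE typically ships it as /boot/config-$(uname -r), or /proc/config.gz when CONFIG_IKCONFIG_PROC is enabled):

```shell
# check_erofs_zip CONFIG_FILE
# Succeed if the given kernel config enables EROFS compression support.
check_erofs_zip() {
    grep -q '^CONFIG_EROFS_FS_ZIP=y' "$1"
}

# Typical usage (config path depends on the distro):
# check_erofs_zip "/boot/config-$(uname -r)" && echo "EROFS zip enabled"
```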
Thanks,
Gao Xiang
> [...]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Problem in EROFS: Not able to read the files after mount
2020-01-20 7:41 ` Gao Xiang via Linux-erofs
@ 2020-01-22 3:57 ` Saumya Panda
2020-01-22 4:37 ` Gao Xiang via Linux-erofs
0 siblings, 1 reply; 9+ messages in thread
From: Saumya Panda @ 2020-01-22 3:57 UTC (permalink / raw)
To: Gao Xiang; +Cc: linux-erofs
[-- Attachment #1: Type: text/plain, Size: 2888 bytes --]
Hi Gao,
Thanks for the info. After I enabled those configuration options, I can
now read the files after mounting. But I am seeing that Squashfs gets a
better compression ratio than EROFS (the Squashfs images are roughly 60%
of the size of the EROFS images). Am I missing something? I used lz4hc
when making the EROFS images.
ls -l enwik*
-rw-r--r-- 1 saumya users 61280256 Jan 21 03:22 enwik8.erofs.img
-rw-r--r-- 1 saumya users 37355520 Jan 21 03:34 enwik8.sqsh
-rw-r--r-- 1 saumya users 558133248 Jan 21 03:25 enwik9.erofs.img
-rw-r--r-- 1 saumya users 331481088 Jan 21 03:35 enwik9.sqsh
On Mon, Jan 20, 2020 at 1:11 PM Gao Xiang <hsiangkao@aol.com> wrote:
> Hi Saumya,
>
> On Mon, Jan 20, 2020 at 12:25:15PM +0530, Saumya Panda wrote:
> > Hi Experts,
> > I am testing EROFS in openSuse(Kernel: 5.4.7-1-default). I used the
> > enwik8 and enwik9 as source file (
> > https://cs.fit.edu/~mmahoney/compression/textdata.html) for compression.
> > But after I mount the erofs image, I am not able to read it (it is saying
> > operation not permitted). Simple "ls" command is not working. But if I
> > create EROFS image without compression flag, then after mount I am able
> to
> > read the files. Seems there is some problem during compression.
> >
> > I will appreciate if someone can help me out why this is happening.
>
> Could you please check if your opensuse kernel has been enabled
> the following configuration?
>
> CONFIG_EROFS_FS_ZIP=y
> CONFIG_EROFS_FS_CLUSTER_PAGE_LIMIT=1
>
> By default, they should be enabled, but it seems not according to
> the following information you mentioned.
>
> Thanks,
> Gao Xiang
> > [...]
--
Thanks,
Saumya Prakash Panda
[-- Attachment #2: Type: text/html, Size: 4074 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Problem in EROFS: Not able to read the files after mount
2020-01-22 3:57 ` Saumya Panda
@ 2020-01-22 4:37 ` Gao Xiang via Linux-erofs
2020-01-29 4:13 ` Saumya Panda
0 siblings, 1 reply; 9+ messages in thread
From: Gao Xiang via Linux-erofs @ 2020-01-22 4:37 UTC (permalink / raw)
To: Saumya Panda; +Cc: linux-erofs
On Wed, Jan 22, 2020 at 09:27:45AM +0530, Saumya Panda wrote:
> Hi Gao,
> Thanks for the info. After I enabled the said configuration, I am now
> able to read the files after mount. But I am seeing Squashfs has better
> compression ratio compared to Erofs (more than 60% than that of Erofs). Am
> I missing something? I used lz4hc while making the Erofs image.
>
> ls -l enwik*
> -rw-r--r-- 1 saumya users 61280256 Jan 21 03:22 enwik8.erofs.img
> -rw-r--r-- 1 saumya users 37355520 Jan 21 03:34 enwik8.sqsh
> -rw-r--r-- 1 saumya users 558133248 Jan 21 03:25 enwik9.erofs.img
> -rw-r--r-- 1 saumya users 331481088 Jan 21 03:35 enwik9.sqsh
Yes, it's working as expected. Currently EROFS compresses with a 4KiB
fixed-sized output granularity, as mentioned in many available materials;
that is the use case for our smartphones. You should compare against
Squashfs built with a similar block-size configuration, and there is some
third-party data from other folks as well [1].
In the future, we will support other compression algorithms and larger
compressed cluster sizes (> 4KiB).
[1] In Chinese:
https://blog.csdn.net/scnutiger/article/details/102507596
Thanks,
Gao Xiang
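For a like-for-like comparison along the lines Gao suggests, Squashfs can be built with a 4KiB block size over the same tree (a sketch using squashfs-tools options; exact flag spellings may vary by version):

```shell
# Build both images from the same source tree with ~4KiB granularity:
mksquashfs enwik8/ enwik8-4k.sqsh -comp lz4 -Xhc -b 4096
mkfs.erofs -zlz4hc enwik8.erofs.img enwik8/
```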
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Problem in EROFS: Not able to read the files after mount
2020-01-22 4:37 ` Gao Xiang via Linux-erofs
@ 2020-01-29 4:13 ` Saumya Panda
2020-01-29 4:50 ` Gao Xiang via Linux-erofs
2020-01-29 4:59 ` Gao Xiang via Linux-erofs
0 siblings, 2 replies; 9+ messages in thread
From: Saumya Panda @ 2020-01-29 4:13 UTC (permalink / raw)
To: Gao Xiang; +Cc: linux-erofs
[-- Attachment #1: Type: text/plain, Size: 7215 bytes --]
Hi Gao,
How did you get the read-amplification number? I ran FIO on enwik9 (both
EROFS and Squashfs) and got the output below. Is there any way to
calculate read amplification from these logs?
Here the filename (/mnt/enwik9_erofs/enwik9, /mnt/enwik9_sqsh/enwik9)
points to a file on the mounted read-only filesystem (Squashfs or EROFS).
But if I pass a directory as the parameter instead of a filename, I get
an error (see the logs at the end).
*FIO on Erofs:*
localhost:~> fio --name=randread --ioengine=libaio --iodepth=16
--rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240
--group_reporting --filename=/mnt/enwik9_erofs/enwik9
randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.17-90-gd9b7
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=381MiB/s][r=97.6k IOPS][eta 00m:00s]
randread: (groupid=0, jobs=4): err= 0: pid=34282: Mon Jan 27 01:04:55 2020
read: IOPS=36.7k, BW=144MiB/s (150MB/s)(2048MiB/14271msec)
slat (nsec): min=1305, max=135688k, avg=106650.48, stdev=493480.73
clat (nsec): min=1970, max=136593k, avg=1629459.90, stdev=2639786.83
lat (usec): min=3, max=136625, avg=1736.29, stdev=2772.32
clat percentiles (usec):
| 1.00th=[ 48], 5.00th=[ 69], 10.00th=[ 251], 20.00th=[
437],
| 30.00th=[ 570], 40.00th=[ 701], 50.00th=[ 848], 60.00th=[
1029],
| 70.00th=[ 1336], 80.00th=[ 2147], 90.00th=[ 4015], 95.00th=[
5932],
| 99.00th=[ 11600], 99.50th=[ 13304], 99.90th=[ 17171], 99.95th=[
20579],
| 99.99th=[135267]
bw ( KiB/s): min=16510, max=295435, per=76.91%, avg=113025.79,
stdev=23830.42, samples=112
iops : min= 4126, max=73857, avg=28254.82, stdev=5957.62,
samples=112
lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=1.37%
lat (usec) : 100=5.45%, 250=3.15%, 500=14.74%, 750=18.99%, 1000=14.99%
lat (msec) : 2=20.14%, 4=11.09%, 10=8.42%, 20=1.62%, 50=0.04%
lat (msec) : 250=0.01%
cpu : usr=1.87%, sys=8.28%, ctx=144023, majf=1, minf=114
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%,
>=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%,
>=64=0.0%
issued rwts: total=524288,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=144MiB/s (150MB/s), 144MiB/s-144MiB/s (150MB/s-150MB/s),
io=2048MiB (2147MB), run=14271-14271msec
Disk stats (read/write):
loop0: ios=137357/0, merge=0/0, ticks=23020/0, in_queue=460, util=97.70%
*FIO on SquashFs:*
localhost:~/Downloads/erofs-utils> fio --name=randread --ioengine=libaio
--iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4
--runtime=240 --group_reporting --filename=/mnt/enwik9_sqsh/enwik9
randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.17-90-gd9b7
Starting 4 processes
Jobs: 4 (f=4): [r(4)][66.7%][r=1175MiB/s][r=301k IOPS][eta 00m:05s]
randread: (groupid=0, jobs=4): err= 0: pid=34389: Mon Jan 27 01:07:56 2020
read: IOPS=55.4k, BW=216MiB/s (227MB/s)(2048MiB/9467msec)
slat (nsec): min=1194, max=61065k, avg=67581.76, stdev=754174.73
clat (usec): min=2, max=222014, avg=1075.25, stdev=5969.94
lat (usec): min=3, max=235437, avg=1143.13, stdev=6341.32
clat percentiles (usec):
| 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[
41],
| 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[
44],
| 70.00th=[ 45], 80.00th=[ 48], 90.00th=[ 63], 95.00th=[
3163],
| 99.00th=[ 28443], 99.50th=[ 41157], 99.90th=[ 78119], 99.95th=[
89654],
| 99.99th=[125305]
bw ( KiB/s): min= 1985, max=991826, per=63.49%, avg=140649.83,
stdev=78204.76, samples=72
iops : min= 495, max=247955, avg=35161.00, stdev=19551.19,
samples=72
lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=84.82%, 100=8.18%
lat (usec) : 250=0.37%, 500=0.09%, 750=0.24%, 1000=0.54%
lat (msec) : 2=0.43%, 4=0.46%, 10=1.29%, 20=1.93%, 50=1.30%
lat (msec) : 100=0.33%, 250=0.02%
cpu : usr=1.76%, sys=16.29%, ctx=14519, majf=0, minf=104
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%,
>=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%,
>=64=0.0%
issued rwts: total=524288,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=216MiB/s (227MB/s), 216MiB/s-216MiB/s (227MB/s-227MB/s),
io=2048MiB (2147MB), run=9467-9467msec
Disk stats (read/write):
loop1: ios=177240/0, merge=0/0, ticks=199386/0, in_queue=75984,
util=73.95%
Fio Test on SquashFs dir:
localhost:~/Downloads/erofs-utils> fio --name=randread --ioengine=libaio
--iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4
--runtime=240 --group_reporting --directory=/mnt/enwik9_sqsh/
randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.17-90-gd9b7
Starting 4 processes
randread: Laying out IO file (1 file / 512MiB)
fio: pid=0, err=30/file:filesetup.c:150, func=unlink, error=Read-only file
system
randread: Laying out IO file (1 file / 512MiB)
fio: pid=0, err=30/file:filesetup.c:150, func=unlink, error=Read-only file
system
randread: Laying out IO file (1 file / 512MiB)
fio: pid=0, err=30/file:filesetup.c:150, func=unlink, error=Read-only file
system
randread: Laying out IO file (1 file / 512MiB)
fio: pid=0, err=30/file:filesetup.c:150, func=unlink, error=Read-only file
system
Run status group 0 (all jobs):
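For reference, the layout failure above happens because fio tries to create (and unlink) its test files on a read-only mount. One workaround, sketched here as a dry-run helper (the function name is ours; drop the echo to actually run fio), is to iterate over the files that already exist:

```shell
# seqread_jobs DIR: print a fio sequential-read invocation for every
# regular file under DIR, instead of letting fio lay out new files
# (which fails on read-only EROFS/Squashfs mounts).
seqread_jobs() {
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        echo fio --name=seqread --rw=read --bs=4k --filename="$f"
    done
}

# seqread_jobs /mnt/enwik9_sqsh
```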
On Wed, Jan 22, 2020 at 10:07 AM Gao Xiang <hsiangkao@aol.com> wrote:
> On Wed, Jan 22, 2020 at 09:27:45AM +0530, Saumya Panda wrote:
> > Hi Gao,
> > Thanks for the info. After I enabled the said configuration, I am now
> > able to read the files after mount. But I am seeing Squashfs has better
> > compression ratio compared to Erofs (more than 60% than that of Erofs).
> Am
> > I missing something? I used lz4hc while making the Erofs image.
> >
> > ls -l enwik*
> > -rw-r--r-- 1 saumya users 61280256 Jan 21 03:22 enwik8.erofs.img
> > -rw-r--r-- 1 saumya users 37355520 Jan 21 03:34 enwik8.sqsh
> > -rw-r--r-- 1 saumya users 558133248 Jan 21 03:25 enwik9.erofs.img
> > -rw-r--r-- 1 saumya users 331481088 Jan 21 03:35 enwik9.sqsh
>
> Yes, it's working as expect. Currently EROFS is compressed in 4k
> fixed-sized output compression granularity as mentioned in many
> available materials. That is the use case for our smartphones.
> You should compare with similar block configuration of squashfs.
> and there are some 3rd data by other folks as well [1].
>
> In the future, we will support other compression algorithms and
> larger compressed size (> 4k).
>
> [1] In chinese,
> https://blog.csdn.net/scnutiger/article/details/102507596
>
> Thanks,
> Gao Xiang
>
>
--
Thanks,
Saumya Prakash Panda
[-- Attachment #2: Type: text/html, Size: 34706 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Problem in EROFS: Not able to read the files after mount
2020-01-29 4:13 ` Saumya Panda
@ 2020-01-29 4:50 ` Gao Xiang via Linux-erofs
2020-01-29 4:59 ` Gao Xiang via Linux-erofs
1 sibling, 0 replies; 9+ messages in thread
From: Gao Xiang via Linux-erofs @ 2020-01-29 4:50 UTC (permalink / raw)
To: Saumya Panda; +Cc: linux-erofs
Hi Saumya,
On Wed, Jan 29, 2020 at 09:43:37AM +0530, Saumya Panda wrote:
> Hi Gao,
> How you got the read amplification? I ran FIO on enwik9 (both Erofs and
> SquashFs) and got the below output. Is there anyway to calculate the read
> amplification from the below logs.
No. FIO doesn't report such a number as far as I know; you'd have to
derive it from block-device statistics.
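One rough way to derive it from block-device statistics (a sketch; the helper name is ours, and it relies on the 512-byte sector unit the kernel uses for the "sectors read" field of /sys/block/<dev>/stat):

```shell
# read_amp SECTORS_READ APP_BYTES
# Approximate read amplification = device bytes actually read divided by
# bytes delivered to the application. SECTORS_READ should be the delta of
# field 3 ("sectors read", 512-byte units) of /sys/block/<dev>/stat taken
# before and after the benchmark run.
read_amp() {
    awk -v s="$1" -v b="$2" 'BEGIN { printf "%.2f\n", s * 512 / b }'
}

# Example against the loop device backing the mount:
# before=$(awk '{print $3}' /sys/block/loop0/stat)
# ...run fio...
# after=$(awk '{print $3}' /sys/block/loop0/stat)
# read_amp $((after - before)) $((512 * 1024 * 1024))   # fio read 512MiB
```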
BTW, I'd suggest you umount, drop caches, and remount the filesystem
before every run, in order to get rid of the filesystem's own internal
caches and the bdev buffer cache. Fully cached numbers are only
meaningful as a benchmarking artifact, not in real scenarios.
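That remount-and-drop-caches step can be scripted, for example (a minimal sketch; it needs root, the function name is ours, and the image/mount-point names are illustrative):

```shell
# remount_fresh IMG MNT
# Unmount, drop kernel caches, and remount before each benchmark run so
# page cache and bdev buffer cache from the previous run don't skew the
# results. EROFS shown; the same applies to Squashfs images.
remount_fresh() {
    umount "$2" 2>/dev/null || true
    sync
    echo 3 > /proc/sys/vm/drop_caches   # page cache + dentries + inodes
    mount -t erofs -o loop "$1" "$2"
}

# remount_fresh enwik9.erofs.img /mnt/enwik9_erofs
```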
https://github.com/erofs/erofs-openbenchmark/
Thanks,
Gao Xiang
> [...]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Problem in EROFS: Not able to read the files after mount
2020-01-29 4:13 ` Saumya Panda
2020-01-29 4:50 ` Gao Xiang via Linux-erofs
@ 2020-01-29 4:59 ` Gao Xiang via Linux-erofs
2020-03-20 8:00 ` Saumya Panda
1 sibling, 1 reply; 9+ messages in thread
From: Gao Xiang via Linux-erofs @ 2020-01-29 4:59 UTC (permalink / raw)
To: Saumya Panda; +Cc: linux-erofs
On Wed, Jan 29, 2020 at 09:43:37AM +0530, Saumya Panda wrote:
>
> localhost:~> fio --name=randread --ioengine=libaio --iodepth=16
> --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240
> --group_reporting --filename=/mnt/enwik9_erofs/enwik9
>
> randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
> 4096B-4096B, ioengine=libaio, iodepth=16
And I don't think such a configuration is useful for calculating read
amplification, since you eventually read 100% of the file with multiple
threads and no memory limitation (all compressed data will be cached, so
the total device read equals the compressed size).
I'm not sure what you want to get out of such a comparison between EROFS
and Squashfs. A larger block size acts much like bulk readahead. If you
benchmark uncompressed filesystems, you will notice they cannot reach
such high fully-cached random-read numbers.
Thanks,
Gao Xiang
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Problem in EROFS: Not able to read the files after mount
2020-01-29 4:59 ` Gao Xiang via Linux-erofs
@ 2020-03-20 8:00 ` Saumya Panda
2020-03-20 11:16 ` Gao Xiang via Linux-erofs
0 siblings, 1 reply; 9+ messages in thread
From: Saumya Panda @ 2020-03-20 8:00 UTC (permalink / raw)
To: Gao Xiang; +Cc: linux-erofs
[-- Attachment #1: Type: text/plain, Size: 2762 bytes --]
Hi Gao,
I am trying to evaluate EROFS on my device. Right now Squashfs is used
for the system files, so I am comparing EROFS with Squashfs. On my
device, with the environment below, I see EROFS is 3 times faster than
Squashfs 128k for sequential reads (I used enwik8 (100MB) as the test
file), while your test results show it close to Squashfs 128k. Why is
EROFS so fast for sequential reads? I also tested on a SUSE VM with low
memory (free memory 425MB), and EROFS is still pretty fast there.
Also, can you tell me how to run FIO on a directory instead of a file?
fio -filename=$i -rw=read -bs=4k -name=seqbench
Test on the embedded device (total memory 5.5 GB, free memory 1515 MB, no swap):
$: /fio/erofs_test]$ free -m
total used free shared buff/cache
available
Mem: 5384 2315 1515 1378 1553
1592
Swap: 0 0 0
                 Seq Read               Rand Read
squashFS 4k      51.8MB/s (1931msec)    45.7MB/s (2187msec)
SquashFS 128k    116MB/s  (861msec)     14MB/s   (877msec)
SquashFS 1M      124MB/s  (805msec)     119MB/s  (837msec)
Erofs 4k         658MB/s  (152msec)     103MB/s  (974msec)
Test on the SUSE VM (total memory 1.5 GB, free memory 425 MB, no swap):
localhost:/home/saumya/Documents/erofs_test # free -m
total used free shared buff/cache
available
Mem: 1436 817 425 5 192
444
Swap: 0 0 0
                 Seq Read               Rand Read
squashFS 4k      30.7MB/s (3216msec)    9333kB/s (10715msec)
SquashFS 128k    318MB/s  (314msec)     5946kB/s (16819msec)
Erofs 4k         469MB/s  (213msec)     11.9MB/s (8414msec)
On Wed, Jan 29, 2020 at 10:30 AM Gao Xiang <hsiangkao@aol.com> wrote:
> On Wed, Jan 29, 2020 at 09:43:37AM +0530, Saumya Panda wrote:
> >
> > localhost:~> fio --name=randread --ioengine=libaio --iodepth=16
> > --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240
> > --group_reporting --filename=/mnt/enwik9_erofs/enwik9
> >
> > randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
> > 4096B-4096B, ioengine=libaio, iodepth=16
>
> And I don't think such configuration is useful to calculate read
> ampfication
> since you read 100% finally, use multi-thread without memory limitation
> (all
> compressed data will be cached, so the total read is compressed size).
>
> I have no idea what you want to get via doing comparsion between EROFS and
> Squashfs. Larger block size much like readahead in bulk. If you benchmark
> uncompressed file systems, you will notice such filesystems cannot get such
> high 100% randread number.
>
> Thank,
> Gao Xiang
>
>
--
Thanks,
Saumya Prakash Panda
[-- Attachment #2: Type: text/html, Size: 30834 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Problem in EROFS: Not able to read the files after mount
2020-03-20 8:00 ` Saumya Panda
@ 2020-03-20 11:16 ` Gao Xiang via Linux-erofs
0 siblings, 0 replies; 9+ messages in thread
From: Gao Xiang via Linux-erofs @ 2020-03-20 11:16 UTC (permalink / raw)
To: Saumya Panda; +Cc: linux-erofs
Hi Saumya,
On Fri, Mar 20, 2020 at 01:30:39PM +0530, Saumya Panda wrote:
> Hi Gao,
> I am trying to evaluate Erofs on my device. Right now SquashFS is used
> for system files. Hence I am trying to compare Erofs with SquashFs. On my
> device with the below environment I am seeing Erofs is 3 times faster than
> SquashFS 128k (I used enwik8 (100MB) as testing file)) while doing Seq
> Read. Your test result shows it is near to SquasFs 128k. How Erofs is so
> fast for Seq Read? I also tested it on Suse VM with low memory(free
> memory 425MB) and I am seeing Erofs is pretty fast.
>
> Also Can you tell me how to run FIO on directory instead of files ?
> fio -filename=$i -rw=read -bs=4k -name=seqbench
Thanks for your detailed write-up.
Firstly, I cannot think of a way to run FIO directly on a directory.
And some of the numbers below still look strange to me.
OK, actually, I don't want to leave a lot of (maybe aggressive) public
comments comparing one filesystem with another, such as EROFS vs.
Squashfs (or ext4 vs. f2fs). But there are some existing materials that
did this before; if you have some extra time, you could read through the
following reference materials about EROFS (although some parts are outdated):
[1] https://static.sched.com/hosted_files/kccncosschn19chi/ce/EROFS%20file%20system_OSS2019_Final.pdf
[2] https://www.usenix.org/system/files/atc19-gao.pdf
The reason why I think in this way is that (Objectively, I think) people
have their own judgement / insistance on every stuffs. But okay, there are
some hints why EROFS behaves well in this email (compared with Squashfs, but
I really want to avoid such aggressive topics):
o EROFS has carefully designed critial paths, such as async decompression
path. that partly answers your question about sequential read behavior;
o EROFS has well-designed compression metadata (called EROFS compacted
index). Each logic compressed block only takes 2-byte metadata on average
(high information entropy, so no need to compress compacted indexes again)
and it supports random read without pervious meta dependence. In contrast,
the on-disk metadata of Squashfs doesn't support random read (and even
metadata itself could be compressed), which means you have to cached more
metadata in memory for random read, or you'll stand with its bad metadata
random access performance. some hint: see ondisk blocklist, index cache
and read_blocklist();
o EROFS firstly uses fixed-sized output compression in filesystem field.
By using fixed-sized output compression, EROFS can easily implement
in-place decompression (or at least in-place I/O), which means that it
doesn't allocate physical pages for most cases, therefore fewer memory
reclaim/compaction possibility and keeps useful file-backed page cache
as much as possible;
o EROFS has designed on-disk directory format, it supports directory
random access compared with current Squashfs;
In a word, I don't think the current on-disk Squashfs format is well designed
for the long term. In other words, EROFS is a completely different thing,
from its principles to the on-disk format and the runtime
implementation...)
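The fixed-sized output idea above can be sketched roughly like this (an
illustrative toy using zlib rather than the real LZ4-based EROFS compressor:
it greedily packs as much input as will compress into one block-sized
output, so the *compressed* size is fixed and the *consumed input* varies):

```python
import zlib

BLOCK = 4096  # fixed compressed (output) block size

def fixed_output_compress(data):
    """Toy fixed-sized-output compression: each emitted block's compressed
    size is <= BLOCK, while the consumed input length varies per block.
    The binary search assumes compressed size grows roughly monotonically
    with input length; a real implementation cannot rely on that."""
    blocks, pos = [], 0
    while pos < len(data):
        lo, hi, best = 1, len(data) - pos, 1
        while lo <= hi:  # find the largest input span that still fits
            mid = (lo + hi) // 2
            if len(zlib.compress(data[pos:pos + mid])) <= BLOCK:
                best, lo = mid, mid + 1
            else:
                hi = mid - 1
        blocks.append((zlib.compress(data[pos:pos + best]), best))
        pos += best
    return blocks

data = bytes(range(256)) * 200
blocks = fixed_output_compress(data)
assert all(len(c) <= BLOCK for c, _ in blocks)
assert b"".join(zlib.decompress(c) for c, _ in blocks) == data
```

Because every compressed extent fits one block, the I/O unit is always a
single block that can land directly in a page-cache page and be decompressed
in place; fixed-sized *input* schemes instead produce variable-sized
compressed extents on disk.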
By the way, the previous link
https://blog.csdn.net/scnutiger/article/details/102507596
was _not_ written by me. I just noticed it by chance; I think
it was written by some Chinese kernel developer from some other
Android vendor.
And FIO cannot benchmark all cases; a heavy memory workload is not
completely equivalent to low memory either.
However, here is my FIO test script to benchmark different fses:
https://github.com/erofs/erofs-openbenchmark/blob/master/fio-benchmark.sh
for reference. Personally, I think it's reasonable.
It makes more sense to use a well-designed dynamic model; Huawei internally
uses several well-designed light/heavy workloads to benchmark the whole system.
In addition, I noticed many complaints about Squashfs, e.g.:
https://forum.snapcraft.io/t/squashfs-is-a-terrible-storage-format/9466
I don't want to comment on the whole content itself. But for such runtime
workloads, I'd suggest using EROFS instead and seeing if it performs better
(compared with any configuration of squashfs+lz4).
There are many ongoing things to do, but I'm really busy recently. After
implementing LZMA and larger compress clusters, I think EROFS will be even
more useful, but they need to be carefully designed first in order to avoid
further complexity in the whole solution.
Sorry about my English, hope it's of some help..
Thanks,
Gao Xiang
>
> Test on Embedded Device:
>
> Total Memory 5.5 GB, Free Memory 1515 MB, No Swap
>
> $: /fio/erofs_test]$ free -m
>               total        used        free      shared  buff/cache   available
> Mem:           5384        2315        1515        1378        1553        1592
> Swap:             0           0           0
>
>                    Seq Read               Rand Read
> SquashFS 4k        51.8MB/s  (1931msec)   45.7MB/s  (2187msec)
> SquashFS 128k      116MB/s   (861msec)    14MB/s    (877msec)
> SquashFS 1M        124MB/s   (805msec)    119MB/s   (837msec)
> EROFS 4k           658MB/s   (152msec)    103MB/s   (974msec)
>
> Test on Suse VM:
>
> Total Memory 1.5 GB, Free Memory 425 MB, No Swap
>
> localhost:/home/saumya/Documents/erofs_test # free -m
>               total        used        free      shared  buff/cache   available
> Mem:           1436         817         425           5         192         444
> Swap:             0           0           0
>
>                    Seq Read               Rand Read
> SquashFS 4k        30.7MB/s  (3216msec)   9333kB/s  (10715msec)
> SquashFS 128k      318MB/s   (314msec)    5946kB/s  (16819msec)
> EROFS 4k           469MB/s   (213msec)    11.9MB/s  (8414msec)
>
> On Wed, Jan 29, 2020 at 10:30 AM Gao Xiang <hsiangkao@aol.com> wrote:
>
> > On Wed, Jan 29, 2020 at 09:43:37AM +0530, Saumya Panda wrote:
> > >
> > > localhost:~> fio --name=randread --ioengine=libaio --iodepth=16
> > > --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240
> > > --group_reporting --filename=/mnt/enwik9_erofs/enwik9
> > >
> > > randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
> > > 4096B-4096B, ioengine=libaio, iodepth=16
> >
> > And I don't think such a configuration is useful for calculating read
> > amplification, since you finally read 100% of the data using multiple
> > threads without a memory limitation (all compressed data will be cached,
> > so the total read equals the compressed size).
> >
> > I have no idea what you want to get by doing a comparison between EROFS and
> > Squashfs. A larger block size behaves much like readahead in bulk. If you
> > benchmark uncompressed filesystems, you will notice such filesystems cannot
> > get such high 100% randread numbers.
> >
> > Thanks,
> > Gao Xiang
> >
> >
>
> --
> Thanks,
> Saumya Prakash Panda
end of thread, other threads:[~2020-03-20 11:17 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-20 6:55 Problem in EROFS: Not able to read the files after mount Saumya Panda
2020-01-20 7:41 ` Gao Xiang via Linux-erofs
2020-01-22 3:57 ` Saumya Panda
2020-01-22 4:37 ` Gao Xiang via Linux-erofs
2020-01-29 4:13 ` Saumya Panda
2020-01-29 4:50 ` Gao Xiang via Linux-erofs
2020-01-29 4:59 ` Gao Xiang via Linux-erofs
2020-03-20 8:00 ` Saumya Panda
2020-03-20 11:16 ` Gao Xiang via Linux-erofs