linux-kernel.vger.kernel.org archive mirror
* Re: lz4hc compression in UBIFS?
       [not found]       ` <139841382436609@web24g.yandex.ru>
@ 2013-10-23  5:26         ` Brent Taylor
  2013-10-23  7:40           ` Konstantin Tokarev
  0 siblings, 1 reply; 7+ messages in thread
From: Brent Taylor @ 2013-10-23  5:26 UTC (permalink / raw)
  To: Konstantin Tokarev; +Cc: linux-mtd, Artem Bityutskiy, linux-kernel, akpm

Konstantin,
   I did my testing with data from /dev/urandom (which I now realize
wasn't the best choice of data source), but if I use /dev/zero (which
actually causes data compression to occur), the decompressor fails.  I
don't know the internal workings of the lz4hc compressor or the lz4
decompressor.  I couldn't find any examples of code in the kernel
actually using the compressor.  I've cc'ed the maintainers of
lz4hc_compress.c to see if they may have some more insight into the
issue.

-- Brent

On Tue, Oct 22, 2013 at 5:10 AM, Konstantin Tokarev <annulen@yandex.ru> wrote:
>
>
> 22.10.2013, 07:43, "Brent Taylor" <motobud@gmail.com>:
>> On Mon, Oct 21, 2013 at 10:59 AM, Konstantin Tokarev <annulen@yandex.ru> wrote:
>>
>>>  04.10.2013, 07:09, "Brent Taylor" <motobud@gmail.com>:
>>>>  Here is a patch based on linux-3.12-rc3.  I haven't performed any
>>>>  performance testing of UBIFS using lz4hc, but I can mount UBIFS volumes
>>>>  and haven't seen any problems yet.  The only thing I know that isn't
>>>>  correct about the patch is the description for the Kconfig entry for
>>>>  selecting lz4hc as a compression option.  I only copied the description
>>>>  from the lzo description.
>>>  Hi Brent,
>>>
>>>  I'm testing your patch on my SH4 device. When I create a new partition
>>>  with the lz4hc compressor, it works fine: I can copy a file onto it, and
>>>  the md5sums of the original and the copy match. However, after a reboot
>>>  I cannot read the file anymore:
>>>
>>>  UBIFS error (pid 1101): ubifs_decompress: cannot decompress 934 bytes, compressor lz4hc, error -22
>>>  UBIFS error (pid 1101): read_block: bad data node (block 1, inode 65)
>>>  UBIFS error (pid 1101): do_readpage: cannot read page 1 of inode 65, error -22
>>>
>>>  The same error appears if I use an lz4hc-compressed ubifs image to flash
>>>  the rootfs (using a patched mkfs.ubifs).
>>>
>>>  The decompression error occurs in the lz4_uncompress() function
>>>  (lib/lz4/lz4_decompress.c), on line 101:
>>>
>>>  /* Error: offset create reference outside destination buffer */
>>>  if (unlikely(ref < (BYTE *const) dest))
>>>      goto _output_error;
>>>
>>>  Brent: are you able to read data from an lz4hc volume on your device?
>>>  Anyone: any ideas what may be happening here?
>>>
>>>  --
>>>  Regards,
>>>  Konstantin
>>
>> Konstantin,
>>    I haven't seen anything like that on my at91sam9m10g45-ek
>> development board.  I haven't used a flash image from mkfs.ubifs yet.
>> Is it possible the file system was not unmounted cleanly before the
>> reboot and UBIFS went through a recovery procedure?  Maybe something
>> breaks with lz4hc when UBIFS does a recovery?  That's just a guess.
>
> Could you save the attached file on an lz4hc volume, unmount it and mount it again?
> I get the aforementioned error when doing `cat set11.cfg`.
>
> --
> Regards,
> Konstantin


* Re: lz4hc compression in UBIFS?
  2013-10-23  5:26         ` lz4hc compression in UBIFS? Brent Taylor
@ 2013-10-23  7:40           ` Konstantin Tokarev
  2013-10-23 12:49             ` Brent Taylor
  0 siblings, 1 reply; 7+ messages in thread
From: Konstantin Tokarev @ 2013-10-23  7:40 UTC (permalink / raw)
  To: Brent Taylor; +Cc: linux-mtd, Artem Bityutskiy, linux-kernel, akpm



23.10.2013, 09:26, "Brent Taylor" <motobud@gmail.com>:
> Konstantin,
>    I did my testing with data from /dev/urandom (which I now realize
> wasn't the best choice of data source), but if I use /dev/zero (which
> actually causes data compression to occur), the decompressor fails.  I
> don't know the internal workings of the lz4hc compressor or the lz4
> decompressor.  I couldn't find any examples of code in the kernel
> actually using the compressor.  I've cc'ed the maintainers of
> lz4hc_compress.c to see if they may have some more insight into the
> issue.

Does the decompressor fail for you with the same error messages?

Have you tried to copy my file to the volume? It looks like a minimal test case
for my board: if I remove any line, the decompressor works fine.

-- 
Regards,
Konstantin


* Re: lz4hc compression in UBIFS?
  2013-10-23  7:40           ` Konstantin Tokarev
@ 2013-10-23 12:49             ` Brent Taylor
  2013-10-23 13:39               ` Konstantin Tokarev
  0 siblings, 1 reply; 7+ messages in thread
From: Brent Taylor @ 2013-10-23 12:49 UTC (permalink / raw)
  To: Konstantin Tokarev; +Cc: linux-mtd, Artem Bityutskiy, linux-kernel, akpm

On Wed, Oct 23, 2013 at 2:40 AM, Konstantin Tokarev <annulen@yandex.ru> wrote:
>
>
> 23.10.2013, 09:26, "Brent Taylor" <motobud@gmail.com>:
>> Konstantin,
>>    I did my testing with data from /dev/urandom (which I now realize
>> wasn't the best choice of data source), but if I use /dev/zero (which
>> actually causes data compression to occur), the decompressor fails.  I
>> don't know the internal workings of the lz4hc compressor or the lz4
>> decompressor.  I couldn't find any examples of code in the kernel
>> actually using the compressor.  I've cc'ed the maintainers of
>> lz4hc_compress.c to see if they may have some more insight into the
>> issue.
>
> Does the decompressor fail for you with the same error messages?
>
> Have you tried to copy my file to the volume? It looks like a minimal test case
> for my board: if I remove any line, the decompressor works fine.
>
> --
> Regards,
> Konstantin

Yes, I get the same error. Here's a dump from UBIFS when I cat a file
filled with data from /dev/zero:

UBIFS error (pid 4288): ubifs_decompress: cannot decompress 12 bytes,
compressor lz4hc, error -22
UBIFS error (pid 4288): read_block: bad data node (block 0, inode 71)
        magic          0x6101831
        crc            0xff61a078
        node_type      1 (data node)
        group_type     0 (no node group)
        sqnum          2700
        len            60
        key            (71, data, 0)
        size           512
        compr_typ      3
        data size      12
        data:
        00000000: 1f 00 01 00 ff e8 50 00 00 00 00 00
UBIFS error (pid 4288): do_readpage: cannot read page 0 of inode 71, error -22
cat: /opt/data/zero.bin: Input/output error

Steps to reproduce (a concrete command sequence is sketched below):
1. Create a file with all zeros: dd if=/dev/zero bs=512 count=1 of=/opt/data/zero.bin
2. Unmount the ubifs volume and detach the ubi partition
3. Attach the ubi partition and mount ubifs again
4. cat /opt/data/zero.bin
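
For reference, the full command sequence looks roughly like this (the MTD
number, UBI device number, volume name and mount point below are specific to
my setup and only meant as an illustration):

  # attach and mount (ubi0:data on mtd4 is an example, adjust for your board)
  ubiattach -m 4 -d 0 /dev/ubi_ctrl
  mount -t ubifs ubi0:data /opt/data

  # write a trivially compressible file
  dd if=/dev/zero of=/opt/data/zero.bin bs=512 count=1

  # force the data back through the decompressor
  umount /opt/data
  ubidetach -d 0 /dev/ubi_ctrl
  ubiattach -m 4 -d 0 /dev/ubi_ctrl
  mount -t ubifs ubi0:data /opt/data
  cat /opt/data/zero.bin   # fails with Input/output error, -22 in the log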


* Re: lz4hc compression in UBIFS?
  2013-10-23 12:49             ` Brent Taylor
@ 2013-10-23 13:39               ` Konstantin Tokarev
  0 siblings, 0 replies; 7+ messages in thread
From: Konstantin Tokarev @ 2013-10-23 13:39 UTC (permalink / raw)
  To: Brent Taylor; +Cc: linux-mtd, Artem Bityutskiy, linux-kernel, akpm



23.10.2013, 16:49, "Brent Taylor" <motobud@gmail.com>:
> On Wed, Oct 23, 2013 at 2:40 AM, Konstantin Tokarev <annulen@yandex.ru> wrote:
>
>>  23.10.2013, 09:26, "Brent Taylor" <motobud@gmail.com>:
>>>  Konstantin,
>>>     I did my testing with data from /dev/urandom (which I now realize
>>>  wasn't the best choice of data source), but if I use /dev/zero (which
>>>  actually causes data compression to occur), the decompressor fails.  I
>>>  don't know the internal workings of the lz4hc compressor or the lz4
>>>  decompressor.  I couldn't find any examples of code in the kernel
>>>  actually using the compressor.  I've cc'ed the maintainers of
>>>  lz4hc_compress.c to see if they may have some more insight into the
>>>  issue.
>>  Does the decompressor fail for you with the same error messages?
>>
>>  Have you tried to copy my file to the volume? It looks like a minimal test case
>>  for my board: if I remove any line, the decompressor works fine.
>>
>>  --
>>  Regards,
>>  Konstantin
>
> Yes, I get the same error. Here's a dump from UBIFS when I cat a file
> filled with data from /dev/zero:
>
> UBIFS error (pid 4288): ubifs_decompress: cannot decompress 12 bytes,
> compressor lz4hc, error -22
> UBIFS error (pid 4288): read_block: bad data node (block 0, inode 71)
>         magic          0x6101831
>         crc            0xff61a078
>         node_type      1 (data node)
>         group_type     0 (no node group)
>         sqnum          2700
>         len            60
>         key            (71, data, 0)
>         size           512
>         compr_typ      3
>         data size      12
>         data:
>         00000000: 1f 00 01 00 ff e8 50 00 00 00 00 00
> UBIFS error (pid 4288): do_readpage: cannot read page 0 of inode 71, error -22
> cat: /opt/data/zero.bin: Input/output error
>
> Steps to reproduce are:
> 1. Create a file with all zeros: dd if=/dev/zero bs=512 count=1 of=/opt/data/zero.bin
> 2. Unmount the ubifs volume and detach the ubi partition
> 3. Attach the ubi partition and mount ubifs again
> 4. cat /opt/data/zero.bin

Reproduced here.

-- 
Regards,
Konstantin


* Re: lz4hc compression in UBIFS?
       [not found]     ` <237221382623942@web21m.yandex.ru>
@ 2013-10-24 15:15       ` Konstantin Tokarev
  2013-10-28 16:22         ` Konstantin Tokarev
  0 siblings, 1 reply; 7+ messages in thread
From: Konstantin Tokarev @ 2013-10-24 15:15 UTC (permalink / raw)
  To: Yann Collet, linux-mtd, Brent Taylor, Artem Bityutskiy,
	linux-kernel, akpm

[-- Attachment #1: Type: text/plain, Size: 1440 bytes --]



24.10.2013, 18:20, "Konstantin Tokarev" <annulen@yandex.ru>:
> 23.10.2013, 22:26, "Yann Collet" <yann.collet.73@gmail.com>:
>
>>  UBIFS error (pid 4288): ubifs_decompress: cannot decompress 12 bytes,
>>  (...)
>>          data size      12
>>          data:
>>          00000000: 1f 00 01 00 ff e8 50 00 00 00 00 00
>>
>>  The compressed format is correct.
>>  It describes a flow of 00, of size ~500.
>>
>>  So the problem seems more likely to be on the decompression side.
>>
>>  Are you sure you are providing "12" as the "input size" parameter, and that
>>  the "maximum output size" parameter is correctly provided (i.e. >= the
>>  original data size)?
>
> The decompression code in the kernel [1] is heavily modified. In particular,
> the lz4_uncompress function (used in this case) does not have an input size
> parameter at all, while it is present in lz4_uncompress_unknownoutputsize.
>
> [1] lib/lz4/lz4_decompress.c
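
For what it's worth, walking Brent's 12-byte node dump by hand against the LZ4
sequence layout (token byte, literals, little-endian offset, match length
extension) reproduces exactly 512 zero bytes, which matches Yann's reading that
the on-flash data is fine. A rough standalone sketch (userspace C, not kernel
code, only detailed enough for this one block):

  #include <stdio.h>
  #include <string.h>

  /* the "data:" bytes from the bad data node dump */
  static const unsigned char blk[12] = {
          0x1f, 0x00, 0x01, 0x00, 0xff, 0xe8,
          0x50, 0x00, 0x00, 0x00, 0x00, 0x00
  };

  int main(void)
  {
          unsigned char out[4096];
          size_t ip = 0, op = 0;

          while (ip < sizeof(blk)) {
                  unsigned char b, token = blk[ip++];
                  size_t lit = token >> 4, mlen = token & 0x0f, off;

                  if (lit == 15)                /* literal length extension */
                          do { b = blk[ip++]; lit += b; } while (b == 255);
                  memcpy(out + op, blk + ip, lit);
                  ip += lit;
                  op += lit;
                  if (ip >= sizeof(blk))        /* last sequence: literals only */
                          break;
                  off = blk[ip] | (blk[ip + 1] << 8);
                  ip += 2;
                  if (mlen == 15)               /* match length extension */
                          do { b = blk[ip++]; mlen += b; } while (b == 255);
                  mlen += 4;                    /* implicit minimum match */
                  while (mlen--) {              /* offset 1 means a run of zeros */
                          out[op] = out[op - off];
                          op++;
                  }
          }
          printf("decoded %zu bytes\n", op);    /* prints: decoded 512 bytes */
          return 0;
  }

So the compressed node really does describe 512 bytes of 0x00, and the failure
has to be in how the decompressor is called.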

The attached patch to the crypto API wrapper for lz4hc seems to fix the issue.
I can save and read files on an LZ4HC-compressed volume, and I'm running on an
LZ4HC-compressed rootfs flashed from a mkfs.ubifs-generated image (patch by
Elie De Brauwer). My software works correctly.
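
The patch simply swaps lz4_decompress() for lz4_decompress_unknownoutputsize()
in both wrappers. My reading of the two entry points in include/linux/lz4.h
(prototypes quoted from memory, so please double-check the exact types and
parameter names against the header):

  /*
   * Needs the caller to know the exact decompressed size up front:
   * actual_dest_len must be the original length, and *src_len is only
   * reported back on return.
   */
  int lz4_decompress(const unsigned char *src, size_t *src_len,
                     unsigned char *dest, size_t actual_dest_len);

  /*
   * Takes the real compressed length and treats *dest_len as an upper
   * bound, returning the actual decompressed size through it, which is
   * exactly the (slen, *dlen) pair the crypto_comp wrapper has in hand.
   */
  int lz4_decompress_unknownoutputsize(const unsigned char *src, size_t src_len,
                                       unsigned char *dest, size_t *dest_len);

Since the wrapper's *dlen is only an upper bound on the output (UBIFS appears
to pass the block size rather than the exact uncompressed length), the old call
asks the decoder for more output than the block actually encodes; it then reads
past the end of the short input, and that would explain the bounds check on
line 101 firing.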

Unfortunately, on my board an LZ4HC-compressed UBIFS volume performs noticeably
worse than LZO, while still being faster than zlib. I guess the reason is that
the CPU is no longer the bottleneck for my system; NAND speed is.

Thank you all for your help!

-- 
Regards,
Konstantin

[-- Attachment #2: crypto_lz4.patch --]
[-- Type: text/x-diff; name="crypto_lz4.patch", Size: 852 bytes --]

diff --git a/crypto/lz4.c b/crypto/lz4.c
index 4586dd1..91a0613 100644
--- a/crypto/lz4.c
+++ b/crypto/lz4.c
@@ -66,9 +66,8 @@ static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
 {
 	int err;
 	size_t tmp_len = *dlen;
-	size_t __slen = slen;
 
-	err = lz4_decompress(src, &__slen, dst, tmp_len);
+	err = lz4_decompress_unknownoutputsize(src, slen, dst, &tmp_len);
 	if (err < 0)
 		return -EINVAL;
 
diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c
index 151ba31..9987741 100644
--- a/crypto/lz4hc.c
+++ b/crypto/lz4hc.c
@@ -66,9 +66,8 @@ static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
 {
 	int err;
 	size_t tmp_len = *dlen;
-	size_t __slen = slen;
 
-	err = lz4_decompress(src, &__slen, dst, tmp_len);
+	err = lz4_decompress_unknownoutputsize(src, slen, dst, &tmp_len);
 	if (err < 0)
 		return -EINVAL;
 


* Re: lz4hc compression in UBIFS?
  2013-10-24 15:15       ` Konstantin Tokarev
@ 2013-10-28 16:22         ` Konstantin Tokarev
  2013-10-28 16:45           ` Florian Fainelli
  0 siblings, 1 reply; 7+ messages in thread
From: Konstantin Tokarev @ 2013-10-28 16:22 UTC (permalink / raw)
  To: Yann Collet, linux-mtd, Brent Taylor, Artem Bityutskiy,
	linux-kernel, akpm



24.10.2013, 19:15, "Konstantin Tokarev" <annulen@yandex.ru>:
> The attached patch to the crypto API wrapper for lz4hc seems to fix the issue.
> I can save and read files on an LZ4HC-compressed volume, and I'm running on an
> LZ4HC-compressed rootfs flashed from a mkfs.ubifs-generated image (patch by
> Elie De Brauwer). My software works correctly.
>
> Unfortunately, on my board an LZ4HC-compressed UBIFS volume performs noticeably
> worse than LZO, while still being faster than zlib. I guess the reason is that
> the CPU is no longer the bottleneck for my system; NAND speed is.
>
> Thank you all for your help!

Can anyone review the correctness of my patch? It looks like the LZ4
decompressor API is used the wrong way in the crypto API.

-- 
Regards,
Konstantin


* Re: lz4hc compression in UBIFS?
  2013-10-28 16:22         ` Konstantin Tokarev
@ 2013-10-28 16:45           ` Florian Fainelli
  0 siblings, 0 replies; 7+ messages in thread
From: Florian Fainelli @ 2013-10-28 16:45 UTC (permalink / raw)
  To: Konstantin Tokarev
  Cc: Yann Collet, linux-mtd, Brent Taylor, Artem Bityutskiy,
	linux-kernel, akpm

2013/10/28 Konstantin Tokarev <annulen@yandex.ru>:
>
>
> 24.10.2013, 19:15, "Konstantin Tokarev" <annulen@yandex.ru>:
>> The attached patch to the crypto API wrapper for lz4hc seems to fix the issue.
>> I can save and read files on an LZ4HC-compressed volume, and I'm running on an
>> LZ4HC-compressed rootfs flashed from a mkfs.ubifs-generated image (patch by
>> Elie De Brauwer). My software works correctly.
>>
>> Unfortunately, on my board an LZ4HC-compressed UBIFS volume performs noticeably
>> worse than LZO, while still being faster than zlib. I guess the reason is that
>> the CPU is no longer the bottleneck for my system; NAND speed is.
>>
>> Thank you all for your help!
>
> Can anyone review the correctness of my patch? It looks like the LZ4
> decompressor API is used the wrong way in the crypto API.

Can you make a formal submission of it? That would probably help with reviewing it.
--
Florian



Thread overview: 7+ messages
-- links below jump to the message on this page --
     [not found] <55541379679397@web20h.yandex.ru>
     [not found] ` <CAP+RiCAVuUEfyjg02+ZjeFXgUuaRW+fuMB490Ce2Hq_4qHBL=A@mail.gmail.com>
     [not found]   ` <183031382371160@web6m.yandex.ru>
     [not found]     ` <CAP+RiCDKgRqi8_Y4OMqyJyCUpKJW5BVE=hNwp3WzQ7PuOMWGMw@mail.gmail.com>
     [not found]       ` <139841382436609@web24g.yandex.ru>
2013-10-23  5:26         ` lz4hc compression in UBIFS? Brent Taylor
2013-10-23  7:40           ` Konstantin Tokarev
2013-10-23 12:49             ` Brent Taylor
2013-10-23 13:39               ` Konstantin Tokarev
     [not found]   ` <loom.20131023T201657-894@post.gmane.org>
     [not found]     ` <237221382623942@web21m.yandex.ru>
2013-10-24 15:15       ` Konstantin Tokarev
2013-10-28 16:22         ` Konstantin Tokarev
2013-10-28 16:45           ` Florian Fainelli
