* makedumpfile 1.5.4, 734G kdump tests
From: Cliff Wickman @ 2013-07-09 16:24 UTC
  To: kexec; +Cc: d.hatayama, kumagai-atsushi

I have run some tests with the latest makedumpfile and kexec and the
results (below) look very good to me.

This particular test machine has a megaraid root, which had been a problem
with previous kexec-tools (I did have to allocate a lot of low memory).

My 3.10 kernel does have Hatayama's vmcore mmap patches.

My only remaining concern for makedumpfile is the filtering of huge pages.
I believe that patch is in progress, but I don't see it in 1.5.4.

-Cliff


UV2000   memory: 734G
makedumpfile: makedumpfile-1.5.4
kexec:   git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
booted with   crashkernel=1G,high crashkernel=192M,low
non-cyclic mode
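
For reference, makedumpfile selects the compressor with -c (zlib), -l (lzo),
and -p (snappy; the binary must be built with snappy support). The table rows
correspond to invocations of roughly this form; the dump level and paths here
are illustrative guesses, not the exact commands used:

    makedumpfile --non-cyclic -d 31 /proc/vmcore /mnt/dump     # no compression
    makedumpfile --non-cyclic -p -d 31 /proc/vmcore /mnt/dump  # snappy (-c: zlib)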

write to       option            init&scan sec.   copy sec.  dump size
-------------  -----------------           ----   ---------  ---------
megaraid disk  no compression                19          91      11.7G
megaraid disk  zlib compression              20         209       1.4G
megaraid disk  snappy compression            20          46       2.4G
megaraid disk  snappy compression no mmap    30          72       2.4G
/dev/null      no compression                19          28          -
/dev/null      zlib compression              19         206          -
/dev/null      snappy compression            19          41          -

Notes and observations
- Snappy compression is a big win over zlib compression; it is over 4 times
  faster at the cost of relatively little extra disk space.
- The vmcore mmap kernel patches cut about 1/3 off both the page-scan time
  and the data-copy time.
  I hope those patches reach Linus' tree shortly, as you expect.
- Data copy time is dominated by compression time; writing compressed data
  to /dev/null takes almost the same time as writing to disk.
  I hope your efforts to multi-thread the crash kernel go forward.

-- 
Cliff Wickman
SGI
cpw@sgi.com
(651) 683-3824


* Re: makedumpfile 1.5.4, 734G kdump tests
From: HATAYAMA Daisuke @ 2013-07-10  9:07 UTC
  To: Maxim Uvarov; +Cc: kumagai-atsushi, kexec, Cliff Wickman

(2013/07/10 17:33), Maxim Uvarov wrote:
> does crash tool read snappy compressed files?
>

Yes, but you need to specify the libraries at build time.

http://people.redhat.com/anderson/crash.changelog.html

6.0.9    - Fix for building on host machines that have glibc-2.15.90 installed,
<cut>

          - Add support for reading compressed kdump dumpfiles that were
            compressed by the snappy compressor.  This feature is disabled by
            default.  To enable this feature, build the crash utility in the
            following manner:
            (1) Install the snappy libraries by using the host system's package
                manager or by directly downloading libraries from author's
                website.  The packages required are:
                  - snappy
                  - snappy-devel
                The author's website is: http://code.google.com/p/snappy
            (2) Create a CFLAGS.extra file and an LDFLAGS.extra file in top-level
                crash sources directory:
                  - enter -DSNAPPY in the CFLAGS.extra file
                  - enter -lsnappy in the LDFLAGS.extra file.
            (3) Build crash with "make" as always.

6.0.7    - Enhanced the "search" command to allow the searched-for value
<cut>
          - Add support for reading dumpfiles compressed by LZO using
            makedumpfile version 1.4.4 or later.  This feature is disabled by
            default.  To enable this feature, build the crash utility in the
            following manner:
            (1) Install the LZO libraries by using the host system's package
                manager or by directly downloading libraries from author's
                website.  The packages required are:
                  - lzo
                  - lzo-minilzo
                  - lzo-devel
                The author's website is: http://www.oberhumer.com/opensource/lzo
            (2) Create a CFLAGS.extra file and an LDFLAGS.extra file in top-level
                crash sources directory:
                  - enter -DLZO in the CFLAGS.extra file
                  - enter -llzo2 in the LDFLAGS.extra file.
            (3) Build crash with "make" as always.
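
Concretely, the snappy-enabled build described in the 6.0.9 entry above
amounts to something like this (an untested sketch; the package names are
the ones listed in the changelog):

    yum install snappy snappy-devel    # or your distro's equivalent
    cd crash                           # top-level crash source directory
    echo '-DSNAPPY' > CFLAGS.extra
    echo '-lsnappy' > LDFLAGS.extra
    make

The resulting binary then opens a snappy-compressed dump as usual, e.g.
"crash vmlinux vmcore".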

-- 
Thanks.
HATAYAMA, Daisuke



* Re: makedumpfile 1.5.4, 734G kdump tests
From: Cliff Wickman @ 2013-07-10 18:27 UTC
  To: Maxim Uvarov; +Cc: kumagai-atsushi, HATAYAMA Daisuke, kexec

On Wed, Jul 10, 2013 at 04:37:20PM +0400, Maxim Uvarov wrote:
> Thanks. I also have a question about statically linking snappy into
> makedumpfile.
> 
> Dynamic linking works OK, but a static link mixes the snappy C++ library
> with C code and fails with:
> undefined reference to `__gxx_personality_v0' and undefined reference to
> new[].
> 
> Is there a workaround for that?
> gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)

I think you'll have to install the libstdc++ devel rpm, such as
libstdc++-devel-4.4.7-3.el6.x86_64.rpm, to get libstdc++.a.

And probably add -lstdc++ to the Makefile:
   LIBS := -lsnappy -lstdc++ $(LIBS)
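
Putting that together, a static snappy build would look roughly like this
(a sketch, not verified; USESNAPPY is a makedumpfile Makefile switch, and
LINKTYPE=static should be too, but check your version's Makefile):

    yum install libstdc++-devel        # provides libstdc++.a
    # then, in makedumpfile's Makefile:
    #   LIBS := -lsnappy -lstdc++ $(LIBS)
    make USESNAPPY=on LINKTYPE=static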


> Maxim.
> 
> 2013/7/10 HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
> 
> > [..]
> 
> 
> -- 
> Best regards,
> Maxim Uvarov

-- 
Cliff Wickman
SGI
cpw@sgi.com
(651) 683-3824


* Re: makedumpfile 1.5.4, 734G kdump tests
From: Vivek Goyal @ 2013-07-11 13:06 UTC
  To: Cliff Wickman; +Cc: kumagai-atsushi, d.hatayama, kexec

On Tue, Jul 09, 2013 at 11:24:03AM -0500, Cliff Wickman wrote:

[..]
> UV2000   memory: 734G
> makedumpfile: makedumpfile-1.5.4
> kexec:   git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
> booted with   crashkernel=1G,high crashkernel=192M,low
> non-cyclic mode
> 
> write to       option            init&scan sec.   copy sec.  dump size
> -------------  -----------------           ----   ---------  ---------
> megaraid disk  no compression                19          91      11.7G
> megaraid disk  zlib compression              20         209       1.4G
> megaraid disk  snappy compression            20          46       2.4G
> megaraid disk  snappy compression no mmap    30          72       2.4G
> /dev/null      no compression                19          28          -
> /dev/null      zlib compression              19         206          -
> /dev/null      snappy compression            19          41          -
> 
> Notes and observations
> - Snappy compression is a big win over zlib compression; it is over 4 times
>   faster at the cost of relatively little extra disk space.

Thanks for the results, Cliff. If it is not too much trouble, can you
please also test lzo compression on the same configuration? I am curious
how much better snappy performs compared to lzo.

Thanks
Vivek


* Re: makedumpfile 1.5.4, 734G kdump tests
From: Cliff Wickman @ 2013-07-12 16:14 UTC
  To: Vivek Goyal; +Cc: kumagai-atsushi, d.hatayama, kexec

On Thu, Jul 11, 2013 at 09:06:47AM -0400, Vivek Goyal wrote:
> [..]
> 
> Thanks for the results, Cliff. If it is not too much trouble, can you
> please also test lzo compression on the same configuration? I am curious
> how much better snappy performs compared to lzo.
> 
> Thanks
> Vivek

Ok.  I repeated the tests and included LZO compression.

UV2000   memory: 734G
makedumpfile: makedumpfile-1.5.4     non-cyclic mode
kexec: git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
3.10 kernel with vmcore mmap patches
booted with   crashkernel=1G,high crashkernel=192M,low

write to       compression       init&scan sec.   copy sec.  dump size
-------------  -----------------           ----   ---------  ---------
megaraid disk  no compression                20          86      11.6G
megaraid disk  zlib compression              19         209       1.4G
megaraid disk  snappy compression            20          47       2.4G
megaraid disk  lzo compression               19          54       2.8G

/dev/null      no compression                19          28          -
/dev/null      zlib compression              20         206          -
/dev/null      snappy compression            19          42          -
/dev/null      lzo compression               20          47          -

Notes:
- Snappy compression is still the fastest (and compresses better than LZO),
  but LZO is close.
- Compression and I/O seem pretty well overlapped, so I am not sure that
  multithreading the crash kernel (to speed compression) will speed up the
  dump as much as I was hoping, unless perhaps the I/O device is an SSD.

-Cliff
-- 
Cliff Wickman
SGI
cpw@sgi.com
(651) 683-3824


* Re: makedumpfile 1.5.4, 734G kdump tests
From: Vivek Goyal @ 2013-07-12 16:42 UTC
  To: Cliff Wickman; +Cc: kumagai-atsushi, d.hatayama, kexec

On Fri, Jul 12, 2013 at 11:14:27AM -0500, Cliff Wickman wrote:
> On Thu, Jul 11, 2013 at 09:06:47AM -0400, Vivek Goyal wrote:
> > [..]
> 
> Ok.  I repeated the tests and included LZO compression.
> 
> UV2000   memory: 734G
> makedumpfile: makedumpfile-1.5.4     non-cyclic mode
> kexec: git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
> 3.10 kernel with vmcore mmap patches
> booted with   crashkernel=1G,high crashkernel=192M,low
> 
> write to       compression       init&scan sec.   copy sec.  dump size
> -------------  -----------------           ----   ---------  ---------
> megaraid disk  no compression                20          86      11.6G
> megaraid disk  zlib compression              19         209       1.4G
> megaraid disk  snappy compression            20          47       2.4G
> megaraid disk  lzo compression               19          54       2.8G
> 
> /dev/null      no compression                19          28          -
> /dev/null      zlib compression              20         206          -
> /dev/null      snappy compression            19          42          -
> /dev/null      lzo compression               20          47          -
> 
> Notes:
> - Snappy compression is still the fastest (and compresses better than LZO),
>   but LZO is close.
> - Compression and I/O seem pretty well overlapped, so I am not sure that
>   multithreading the crash kernel (to speed compression) will speed up the
>   dump as much as I was hoping, unless perhaps the I/O device is an SSD.

Thanks Cliff. So LZO is pretty close to snappy in this case.

Thanks
Vivek


* Re: makedumpfile 1.5.4, 734G kdump tests
From: HATAYAMA Daisuke @ 2013-07-16  9:22 UTC
  To: Vivek Goyal; +Cc: kexec, HATAYAMA Daisuke, kumagai-atsushi, Cliff Wickman


(2013/07/13 1:42), Vivek Goyal wrote:
> On Fri, Jul 12, 2013 at 11:14:27AM -0500, Cliff Wickman wrote:
>> [..]
>
> Thanks Cliff. So LZO is pretty close to snappy in this case.
>

These benchmarks don't take into account the ratio of randomized data in
the sample. In my benchmark, LZO was slower than snappy from 50% to 100%
randomization.

Attached is a graph of a benchmark that compares LZO and snappy across a
range of randomized-data ratios. The benchmark details are as follows (a
rough sketch of a comparable measurement appears after the list):

- block size is 4KiB
- sample data is 4MiB
  - so 1,024 blocks in total
- the x value is the percentage of randomized data
- the y value is compression performance, i.e. 4MiB / (the time consumed
  compressing the 4MiB sample data)
- the processor is a Xeon E7540
- data is randomized one byte at a time: each randomized byte is drawn
  from /dev/urandom, and the rest is filled with '\000'
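
Here is a rough sketch of a comparable measurement. It is not the program
used for the attached graph: the randomized bytes here are placed as a
contiguous run at the start of each block, and the timing is a single pass,
both simplifications of the setup described above. It assumes snappy-devel
and lzo-devel are installed; build with
"gcc bench.c -o bench -lsnappy -llzo2 -lrt".

    /* bench.c: compress 4MiB of sample data in 4KiB blocks and report
     * compression throughput for snappy and LZO.  The percentage of
     * randomized bytes per block is given as argv[1]. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <snappy-c.h>
    #include <lzo/lzo1x.h>

    #define BLOCK (4 * 1024)
    #define TOTAL (4 * 1024 * 1024)

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(int argc, char **argv)
    {
        int pct = (argc > 1) ? atoi(argv[1]) : 50;
        size_t want = (size_t)BLOCK * pct / 100;
        size_t off, cap, olen;
        lzo_uint llen;
        unsigned char wrk[LZO1X_1_MEM_COMPRESS];
        char *in = calloc(1, TOTAL);
        char *out, *lout;
        double t0;
        FILE *ur = fopen("/dev/urandom", "r");

        if (!in || !ur)
            return 1;
        /* fill the first pct% of each 4KiB block with random bytes;
         * the rest of the block stays '\000' */
        for (off = 0; off < TOTAL; off += BLOCK)
            if (fread(in + off, 1, want, ur) != want)
                return 1;
        fclose(ur);

        cap = snappy_max_compressed_length(BLOCK);
        out = malloc(cap);
        t0 = now();
        for (off = 0; off < TOTAL; off += BLOCK) {
            olen = cap;                 /* in: capacity, out: actual size */
            snappy_compress(in + off, BLOCK, out, &olen);
        }
        printf("snappy: %6.1f MiB/s\n", 4.0 / (now() - t0));

        if (lzo_init() != LZO_E_OK)
            return 1;
        lout = malloc(BLOCK + BLOCK / 16 + 64 + 3);   /* LZO worst case */
        t0 = now();
        for (off = 0; off < TOTAL; off += BLOCK)
            lzo1x_1_compress((unsigned char *)(in + off), BLOCK,
                             (unsigned char *)lout, &llen, wrk);
        printf("lzo:    %6.1f MiB/s\n", 4.0 / (now() - t0));
        return 0;
    }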

In this result, LZO stays at around 100 [MiB/sec] once more than 50 percent
of the data is randomized, while snappy retains better performance at the
higher randomization ratios.

At that worst case of 100 [MiB/sec], dumping 1TiB of system memory takes
about 3 hours.

While I don't think that is the typical case, it is problematic that a crash
dump can take additional hours depending on the contents of memory at crash
time. It should always complete in as predictable a time as possible.

-- 
Thanks.
HATAYAMA, Daisuke

[-- Attachment #2: xen_e7540-performance.png --]
[-- Type: image/png, Size: 12137 bytes --]


* Re: makedumpfile 1.5.4, 734G kdump tests
From: Vivek Goyal @ 2013-07-16 14:15 UTC
  To: HATAYAMA Daisuke; +Cc: kexec, kumagai-atsushi, Cliff Wickman

On Tue, Jul 16, 2013 at 06:22:17PM +0900, HATAYAMA Daisuke wrote:
> (2013/07/13 1:42), Vivek Goyal wrote:
> >[..]
> >Thanks Cliff. So LZO is pretty close to snappy in this case.
> >
> 
> These benchmarks don't take into account the ratio of randomized data in
> the sample. In my benchmark, LZO was slower than snappy from 50% to 100%
> randomization.
> 
> Attached is a graph of a benchmark that compares LZO and snappy across a
> range of randomized-data ratios. The benchmark details are as follows:
> 
> - block size is 4KiB
> - sample data is 4MiB
>   - so 1,024 blocks in total
> - the x value is the percentage of randomized data
> - the y value is compression performance, i.e. 4MiB / (the time consumed
>   compressing the 4MiB sample data)
> - the processor is a Xeon E7540
> - data is randomized one byte at a time: each randomized byte is drawn
>   from /dev/urandom, and the rest is filled with '\000'
> 
> In this result, LZO stays at around 100 [MiB/sec] once more than 50
> percent of the data is randomized, while snappy retains better
> performance at the higher randomization ratios.
> 
> At that worst case of 100 [MiB/sec], dumping 1TiB of system memory takes
> about 3 hours.
> 
> While I don't think that is the typical case, it is problematic that a
> crash dump can take additional hours depending on the contents of memory
> at crash time. It should always complete in as predictable a time as
> possible.

As per your performance graphs, both lzo and snappy vary in performance
based on the randomness of the data in the system. That means total dump
time will vary with the contents of memory at crash time (until and unless
there is a fast compression algorithm which is not much affected by the
randomness of the data). So being able to dump in constant time
irrespective of the randomness of data in memory is probably not the goal
here.

Instead, the goal is being able to dump faster in most scenarios. And your
graph does show that snappy performs much better at higher randomness
ratios.

So based on your graph, I agree that lzo is not a replacement for snappy,
and that snappy can be much faster depending on the randomness of the data.

Thanks
Vivek

> 
> -- 
> Thanks.
> HATAYAMA, Daisuke



