From: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
To: Sage Weil <sage@newdream.net>
Cc: ceph-users@lists.ceph.com, ceph-devel@vger.kernel.org
Subject: Re: [ceph-users] ceph zstd not for bluestor due to performance reasons
Date: Sun, 5 Nov 2017 09:07:10 +0100	[thread overview]
Message-ID: <5f013ee9-3587-f04c-0005-fa25201a4ecc@profihost.ag> (raw)
In-Reply-To: <alpine.DEB.2.11.1711042010330.23234@piezo.novalocal>

The compression unit test does not work for me:
 bin/unittest_compression
2017-11-05 08:05:36.734567 7f7e8322c180 -1 did not load config file,
using default settings.
2017-11-05 08:05:36.735744 7f7e8322c180 -1 Errors while parsing config file!
2017-11-05 08:05:36.735760 7f7e8322c180 -1 parse_file: cannot open
/etc/ceph/ceph.conf: (2) No such file or directory
2017-11-05 08:05:36.735761 7f7e8322c180 -1 parse_file: cannot open
~/.ceph/ceph.conf: (2) No such file or directory
2017-11-05 08:05:36.735762 7f7e8322c180 -1 parse_file: cannot open
ceph.conf: (2) No such file or directory
2017-11-05 08:05:36.737096 7f7e8322c180 -1 Errors while parsing config file!
2017-11-05 08:05:36.737109 7f7e8322c180 -1 parse_file: cannot open
/etc/ceph/ceph.conf: (2) No such file or directory
2017-11-05 08:05:36.737110 7f7e8322c180 -1 parse_file: cannot open
~/.ceph/ceph.conf: (2) No such file or directory
2017-11-05 08:05:36.737110 7f7e8322c180 -1 parse_file: cannot open
ceph.conf: (2) No such file or directory
[==========] Running 68 tests from 3 test cases.
[----------] Global test environment set-up.
[----------] 3 tests from ZlibCompressor
[ RUN      ] ZlibCompressor.zlib_isal_compatibility
[       OK ] ZlibCompressor.zlib_isal_compatibility (3 ms)
[ RUN      ] ZlibCompressor.isal_compress_zlib_decompress_random
[       OK ] ZlibCompressor.isal_compress_zlib_decompress_random (76 ms)
[ RUN      ] ZlibCompressor.isal_compress_zlib_decompress_walk
[       OK ] ZlibCompressor.isal_compress_zlib_decompress_walk (65 ms)
[----------] 3 tests from ZlibCompressor (144 ms total)

[----------] 1 test from CompressionPlugin
[ RUN      ] CompressionPlugin.all
/build/ceph/src/test/compressor/test_compression.cc:389: Failure
Value of: factory
  Actual: false
Expected: true
2017-11-05 08:05:36.884467 7f7e8322c180 -1 load failed dlopen():
"/usr/lib/ceph/compressor/libceph_invalid.so: cannot open shared object
file: No such file or directory" or "/usr/lib/ceph/libceph_invalid.so:
cannot open shared object file: No such file or directory"
2017-11-05 08:05:36.884528 7f7e8322c180 -1 load failed dlopen():
"/usr/lib/ceph/compressor/libceph_example.so: cannot open shared object
file: No such file or directory" or "/usr/lib/ceph/libceph_example.so:
cannot open shared object file: No such file or directory"
*** Caught signal (Segmentation fault) **

I've no idea why it tries to load libceph_invalid and libceph_example.

Stefan

On 04.11.2017 at 21:10, Sage Weil wrote:
> On Sat, 4 Nov 2017, Stefan Priebe - Profihost AG wrote:
>> Hi Sage,
>>
>> On 26.10.2017 at 13:58, Sage Weil wrote:
>>> On Thu, 26 Oct 2017, Stefan Priebe - Profihost AG wrote:
>>>> Hi Sage,
>>>>
>>>> On 25.10.2017 at 21:54, Sage Weil wrote:
>>>>> On Wed, 25 Oct 2017, Stefan Priebe - Profihost AG wrote:
>>>>>> Hello,
>>>>>>
>>>>>> the Luminous release notes state that zstd is not supported by
>>>>>> BlueStore due to performance reasons. I'm wondering about that, since
>>>>>> btrfs states that zstd is as fast as lz4 but compresses as well as zlib.
>>>>>>
>>>>>> Why, then, is zlib supported by BlueStore? And why do btrfs / Facebook
>>>>>> behave differently?
>>>>>>
>>>>>> "BlueStore supports inline compression using zlib, snappy, or LZ4. (Ceph
>>>>>> also supports zstd for RGW compression but zstd is not recommended for
>>>>>> BlueStore for performance reasons.)"
>>>>>
>>>>> zstd will work but in our testing the performance wasn't great for 
>>>>> bluestore in particular.  The problem was that for each compression run 
>>>>> there is a relatively high start-up cost initializing the zstd 
>>>>> context/state (IIRC a memset of a huge memory buffer) that dominated the 
>>>>> execution time... primarily because bluestore is generally compressing 
>>>>> pretty small chunks of data at a time, not big buffers or streams.
>>>>>
>>>>> Take a look at the unittest_compression timings for compressing 16KB buffers 
>>>>> (smaller than bluestore usually needs, but illustrative of the problem):
>>>>>
>>>>> [ RUN      ] Compressor/CompressorTest.compress_16384/0
>>>>> [plugin zlib (zlib/isal)]
>>>>> [       OK ] Compressor/CompressorTest.compress_16384/0 (294 ms)
>>>>> [ RUN      ] Compressor/CompressorTest.compress_16384/1
>>>>> [plugin zlib (zlib/noisal)]
>>>>> [       OK ] Compressor/CompressorTest.compress_16384/1 (1755 ms)
>>>>> [ RUN      ] Compressor/CompressorTest.compress_16384/2
>>>>> [plugin snappy (snappy)]
>>>>> [       OK ] Compressor/CompressorTest.compress_16384/2 (169 ms)
>>>>> [ RUN      ] Compressor/CompressorTest.compress_16384/3
>>>>> [plugin zstd (zstd)]
>>>>> [       OK ] Compressor/CompressorTest.compress_16384/3 (4528 ms)
>>>>>
>>>>> It's an order of magnitude slower than zlib or snappy, which probably 
>>>>> isn't acceptable--even if it is a bit smaller.
>>
>> I've fixed the zstd compression plugin to reset and reuse the stream
>> instead of initializing new objects each time.
>>
>> What's needed to run just the unittest_compression test?
> 
> make unittest_compression && bin/unittest_compression
> 
> should do it!
> 
> sage
> 

