* nvme+btrfs+compression sensibility and benchmark
@ 2018-04-18 15:10 Brendan Hide
  2018-04-18 15:14 ` Nikolay Borisov
  2018-04-18 16:38 ` Austin S. Hemmelgarn
  0 siblings, 2 replies; 8+ messages in thread
From: Brendan Hide @ 2018-04-18 15:10 UTC (permalink / raw)
  To: linux-btrfs

Hi, all

I'm looking for some advice re compression with NVME. Compression helps 
performance with a minor CPU hit - but is it still worth it with the far 
higher throughputs offered by newer PCI and NVME-type SSDs?

I've ordered a PCIe-to-M.2 adapter along with a 1TB 960 Evo drive for my 
home desktop. I previously used compression on an older SATA-based Intel 
520 SSD, where compression made sense.

However, the wisdom isn't so clear-cut if the SSD is potentially faster 
than the compression algorithm with my CPU (aging i7 3770).

Testing with a copy of the kernel source tarball in tmpfs, it seems my 
system can compress/decompress at about 670MB/s using zstd with 8 
threads. lzop isn't that far behind. But I'm not sure whether the 
benchmark I'm running matches how btrfs would use the compressor 
internally.
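
For clarity, the test I ran looks roughly like this (the exact paths
and flags are illustrative, not exact; -T8 gives zstd 8 worker threads
and -3 is its default level):

```shell
# Rough throughput test against a tarball sitting in tmpfs, so storage
# speed doesn't factor in.  Assumes the tarball is at /tmp/linux.tar.
time zstd -3 -T8 -c /tmp/linux.tar > /tmp/linux.tar.zst   # compress
time zstd -d -c /tmp/linux.tar.zst > /dev/null            # decompress
```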

Given these numbers I'm inclined to believe compression will make things 
slower - but can't be sure without knowing if I'm testing correctly.

What is the best practice with benchmarking and with NVME/PCI storage?


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: nvme+btrfs+compression sensibility and benchmark
  2018-04-18 15:10 nvme+btrfs+compression sensibility and benchmark Brendan Hide
@ 2018-04-18 15:14 ` Nikolay Borisov
  2018-04-18 18:28   ` David Sterba
  2018-04-18 16:38 ` Austin S. Hemmelgarn
  1 sibling, 1 reply; 8+ messages in thread
From: Nikolay Borisov @ 2018-04-18 15:14 UTC (permalink / raw)
  To: Brendan Hide, linux-btrfs



On 18.04.2018 18:10, Brendan Hide wrote:
> Hi, all
> 
> I'm looking for some advice re compression with NVME. Compression helps
> performance with a minor CPU hit - but is it still worth it with the far
> higher throughputs offered by newer PCI and NVME-type SSDs?
> 
> I've ordered a PCIe-to-M.2 adapter along with a 1TB 960 Evo drive for my
> home desktop. I previously used compression on an older SATA-based Intel
> 520 SSD, where compression made sense.
> 
> However, the wisdom isn't so clear-cut if the SSD is potentially faster
> than the compression algorithm with my CPU (aging i7 3770).
> 
> Testing using a copy of the kernel source tarball in tmpfs  it seems my
> system can compress/decompress at about 670MB/s using zstd with 8
> threads. lzop isn't that far behind. But I'm not sure if the benchmark
> I'm running is the same as how btrfs would be using it internally.
> 
> Given these numbers I'm inclined to believe compression will make things
> slower - but can't be sure without knowing if I'm testing correctly.
> 
> What is the best practice with benchmarking and with NVME/PCI storage?

btrfs doesn't support DAX so using it on NVME doesn't make much sense
performance wise.

> 
> -- 
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: nvme+btrfs+compression sensibility and benchmark
  2018-04-18 15:10 nvme+btrfs+compression sensibility and benchmark Brendan Hide
  2018-04-18 15:14 ` Nikolay Borisov
@ 2018-04-18 16:38 ` Austin S. Hemmelgarn
  2018-04-18 19:24   ` Chris Murphy
       [not found]   ` <CAJCQCtSBs9nJXi2CZuBsgegCoN0J5K1BDWGPqD5K9z_G6pOPsg@mail.gmail.com>
  1 sibling, 2 replies; 8+ messages in thread
From: Austin S. Hemmelgarn @ 2018-04-18 16:38 UTC (permalink / raw)
  To: Brendan Hide, linux-btrfs

On 2018-04-18 11:10, Brendan Hide wrote:
> Hi, all
> 
> I'm looking for some advice re compression with NVME. Compression helps 
> performance with a minor CPU hit - but is it still worth it with the far 
> higher throughputs offered by newer PCI and NVME-type SSDs?
> 
> I've ordered a PCIe-to-M.2 adapter along with a 1TB 960 Evo drive for my 
> home desktop. I previously used compression on an older SATA-based Intel 
> 520 SSD, where compression made sense.
> 
> However, the wisdom isn't so clear-cut if the SSD is potentially faster 
> than the compression algorithm with my CPU (aging i7 3770).
> 
> Testing using a copy of the kernel source tarball in tmpfs  it seems my 
> system can compress/decompress at about 670MB/s using zstd with 8 
> threads. lzop isn't that far behind. But I'm not sure if the benchmark 
> I'm running is the same as how btrfs would be using it internally.
BTRFS compresses data in 128k chunks, one chunk at a time (it doesn't 
do multi-threaded compression).  You can simulate this a bit better by 
splitting the file you're trying to compress into 128k chunks (calling 
`split -b 131072` on the file will do this quickly and easily), then 
passing all those chunks to the compression program _at the same time_ 
(this eliminates the overhead of re-invoking the compressor for each 
chunk), and running it with one thread.  For reference, the zstd 
compression in BTRFS uses level 3 by default (as does zlib compression 
IIRC), though I'm not sure about lzop (I think it uses the lowest 
compression setting).
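
A minimal sketch of that simulation (paths are just examples; gzip -3
stands in here for btrfs's zlib level 3, since gzip's deflate is
essentially the same algorithm as btrfs's zlib mode — substitute
`zstd -3` if you want to test zstd instead):

```shell
# Simulate btrfs-style compression: split the input into 128 KiB
# chunks, then compress them all in ONE single-threaded invocation so
# startup cost isn't paid per chunk.
mkdir -p /tmp/chunks
split -b 131072 /tmp/linux.tar /tmp/chunks/part.
time gzip -3 -k /tmp/chunks/part.*
```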

Note that this will still not be entirely accurate (there are 
significant differences in buffer handling in the in-kernel 
implementations because of memory management differences).

Another option is to see how long it takes to copy the test data into 
a zram device.  This will eliminate the storage overhead and use the 
same compression algorithms that BTRFS does (the only big difference 
is that zram compresses by page, so it will use 4k blocks instead of 
128k).  zram currently doesn't support zstd (though patches have been 
posted), but it uses lzo by default and supports deflate as well 
(which is essentially the same mathematically as the 'zlib' 
compression method in BTRFS).
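
A rough sketch of the zram approach (needs root; the device number,
size, and mount point are just examples):

```shell
# Set up a zram device and time a copy into it.  The compression
# algorithm must be chosen before disksize is set.
modprobe zram num_devices=1
echo lzo > /sys/block/zram0/comp_algorithm
echo 2G  > /sys/block/zram0/disksize
mkfs.ext4 -q /dev/zram0 && mount /dev/zram0 /mnt
time sh -c 'cp /tmp/linux.tar /mnt/ && sync'
umount /mnt && echo 1 > /sys/block/zram0/reset   # tear down
```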
> 
> Given these numbers I'm inclined to believe compression will make things 
> slower - but can't be sure without knowing if I'm testing correctly.
On NVMe, yes, it's probably not worth it for speed.  It may however help 
in other ways.  Compressed writes are smaller than normal writes.  This 
means that rewriting a file that is compressed by the filesystem will 
result in fewer rewritten blocks of storage, which can be useful when 
dealing with flash memory.  Less written data also means you leave a bit 
more free space for the wear-leveling algorithms to work with, which can 
improve performance on some devices.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: nvme+btrfs+compression sensibility and benchmark
  2018-04-18 15:14 ` Nikolay Borisov
@ 2018-04-18 18:28   ` David Sterba
  2018-04-18 18:32     ` Nikolay Borisov
  0 siblings, 1 reply; 8+ messages in thread
From: David Sterba @ 2018-04-18 18:28 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Brendan Hide, linux-btrfs

On Wed, Apr 18, 2018 at 06:14:07PM +0300, Nikolay Borisov wrote:
> 
> 
> On 18.04.2018 18:10, Brendan Hide wrote:
> > Hi, all
> > 
> > I'm looking for some advice re compression with NVME. Compression helps
> > performance with a minor CPU hit - but is it still worth it with the far
> > higher throughputs offered by newer PCI and NVME-type SSDs?
> > 
> > I've ordered a PCIe-to-M.2 adapter along with a 1TB 960 Evo drive for my
> > home desktop. I previously used compression on an older SATA-based Intel
> > 520 SSD, where compression made sense.
> > 
> > However, the wisdom isn't so clear-cut if the SSD is potentially faster
> > than the compression algorithm with my CPU (aging i7 3770).
> > 
> > Testing using a copy of the kernel source tarball in tmpfs  it seems my
> > system can compress/decompress at about 670MB/s using zstd with 8
> > threads. lzop isn't that far behind. But I'm not sure if the benchmark
> > I'm running is the same as how btrfs would be using it internally.
> > 
> > Given these numbers I'm inclined to believe compression will make things
> > slower - but can't be sure without knowing if I'm testing correctly.
> > 
> > What is the best practice with benchmarking and with NVME/PCI storage?
> 
> btrfs doesn't support DAX so using it on NVME doesn't make much sense
> performance wise.

Isn't NVMe just "the faster SSD"? Not the persistent memory thing.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: nvme+btrfs+compression sensibility and benchmark
  2018-04-18 18:28   ` David Sterba
@ 2018-04-18 18:32     ` Nikolay Borisov
  0 siblings, 0 replies; 8+ messages in thread
From: Nikolay Borisov @ 2018-04-18 18:32 UTC (permalink / raw)
  To: dsterba, Brendan Hide, linux-btrfs



On 18.04.2018 21:28, David Sterba wrote:
> On Wed, Apr 18, 2018 at 06:14:07PM +0300, Nikolay Borisov wrote:
>>
>>
>> On 18.04.2018 18:10, Brendan Hide wrote:
>>> Hi, all
>>>
>>> I'm looking for some advice re compression with NVME. Compression helps
>>> performance with a minor CPU hit - but is it still worth it with the far
>>> higher throughputs offered by newer PCI and NVME-type SSDs?
>>>
>>> I've ordered a PCIe-to-M.2 adapter along with a 1TB 960 Evo drive for my
>>> home desktop. I previously used compression on an older SATA-based Intel
>>> 520 SSD, where compression made sense.
>>>
>>> However, the wisdom isn't so clear-cut if the SSD is potentially faster
>>> than the compression algorithm with my CPU (aging i7 3770).
>>>
>>> Testing using a copy of the kernel source tarball in tmpfs  it seems my
>>> system can compress/decompress at about 670MB/s using zstd with 8
>>> threads. lzop isn't that far behind. But I'm not sure if the benchmark
>>> I'm running is the same as how btrfs would be using it internally.
>>>
>>> Given these numbers I'm inclined to believe compression will make things
>>> slower - but can't be sure without knowing if I'm testing correctly.
>>>
>>> What is the best practice with benchmarking and with NVME/PCI storage?
>>
>> btrfs doesn't support DAX so using it on NVME doesn't make much sense
>> performance wise.
> 
> Isn't NVMe just "the faster SSD"? Not the persistent memory thing.

Indeed, brain fart on my part. NVDIMM is the persistent memory thing.

> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: nvme+btrfs+compression sensibility and benchmark
  2018-04-18 16:38 ` Austin S. Hemmelgarn
@ 2018-04-18 19:24   ` Chris Murphy
  2018-04-19  7:12     ` Nikolay Borisov
       [not found]   ` <CAJCQCtSBs9nJXi2CZuBsgegCoN0J5K1BDWGPqD5K9z_G6pOPsg@mail.gmail.com>
  1 sibling, 1 reply; 8+ messages in thread
From: Chris Murphy @ 2018-04-18 19:24 UTC (permalink / raw)
  To: Btrfs BTRFS

On Wed, Apr 18, 2018 at 10:38 AM, Austin S. Hemmelgarn
<ahferroin7@gmail.com> wrote:

> For reference, the zstd compression in BTRFS uses level 3 by default (as
> does zlib compression IIRC), though I'm not sure about lzop (I think it
> uses the lowest compression setting).
>


The user space tool, zstd, does default to 3, according to its man page.

       -#     # compression level [1-19] (default: 3)


However, the kernel is claiming it's level 0, which doesn't exist in the
man page. So I have no idea what we're using. This is what I get with mount
option compress=zstd

[    4.097858] BTRFS info (device nvme0n1p9): use zstd compression, level 0



--
Chris Murphy

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: nvme+btrfs+compression sensibility and benchmark
       [not found]   ` <CAJCQCtSBs9nJXi2CZuBsgegCoN0J5K1BDWGPqD5K9z_G6pOPsg@mail.gmail.com>
@ 2018-04-18 19:56     ` Brendan Hide
  0 siblings, 0 replies; 8+ messages in thread
From: Brendan Hide @ 2018-04-18 19:56 UTC (permalink / raw)
  To: Chris Murphy, Austin S. Hemmelgarn; +Cc: Btrfs BTRFS

Thank you, all

Though the info is useful, there's not a clear consensus on what I 
should expect. For interest's sake, I'll post benchmarks from the device 
itself when it arrives.

I'm expecting at least that I'll be blown away :)

On 04/18/2018 09:23 PM, Chris Murphy wrote:
> 
> 
> On Wed, Apr 18, 2018 at 10:38 AM, Austin S. Hemmelgarn 
> <ahferroin7@gmail.com <mailto:ahferroin7@gmail.com>> wrote:
> 
>     For reference, the zstd compression in BTRFS uses level 3 by default
>     (as does zlib compression IIRC), though I'm not sure about lzop (I
>     think it uses the lowest compression setting).
> 
> 
> 
> The user space tool, zstd, does default to 3, according to its man page.
> 
>         -#     # compression level [1-19] (default: 3)
> 
> 
> However, the kernel is claiming it's level 0, which doesn't exist in the 
> man page. So I have no idea what we're using. This is what I get with 
> mount option compress=zstd
> 
> [    4.097858] BTRFS info (device nvme0n1p9): use zstd compression, level 0
> 
> 
> 
> 
> -- 
> Chris Murphy

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: nvme+btrfs+compression sensibility and benchmark
  2018-04-18 19:24   ` Chris Murphy
@ 2018-04-19  7:12     ` Nikolay Borisov
  0 siblings, 0 replies; 8+ messages in thread
From: Nikolay Borisov @ 2018-04-19  7:12 UTC (permalink / raw)
  To: Chris Murphy, Btrfs BTRFS



On 18.04.2018 22:24, Chris Murphy wrote:
> On Wed, Apr 18, 2018 at 10:38 AM, Austin S. Hemmelgarn <ahferroin7@gmail.com
>> wrote:
> 
>> For reference, the zstd compression in BTRFS uses level 3 by default (as
>> does zlib compression IIRC), though I'm not sure about lzop (I think it
>> uses the lowest compression setting).
>>
> 
> 
> The user space tool, zstd, does default to 3, according to its man page.
> 
>        -#     # compression level [1-19] (default: 3)
> 
> 
> However, the kernel is claiming it's level 0, which doesn't exist in the
> man page. So I have no idea what we're using. This is what I get with mount
> option compress=zstd
> 

Currently the in-kernel zstd compression doesn't really support
levels: compress_level is never set even when a level is passed in,
and zstd_set_level is unimplemented anyway. So the number printed
there doesn't make any difference.

> [    4.097858] BTRFS info (device nvme0n1p9): use zstd compression, level 0
> 
> 
> 
> --
> Chris Murphy
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2018-04-19  7:12 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-04-18 15:10 nvme+btrfs+compression sensibility and benchmark Brendan Hide
2018-04-18 15:14 ` Nikolay Borisov
2018-04-18 18:28   ` David Sterba
2018-04-18 18:32     ` Nikolay Borisov
2018-04-18 16:38 ` Austin S. Hemmelgarn
2018-04-18 19:24   ` Chris Murphy
2018-04-19  7:12     ` Nikolay Borisov
     [not found]   ` <CAJCQCtSBs9nJXi2CZuBsgegCoN0J5K1BDWGPqD5K9z_G6pOPsg@mail.gmail.com>
2018-04-18 19:56     ` Brendan Hide
