linux-lvm.redhat.com archive mirror
From: Gionatan Danti <g.danti@assyoma.it>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: [linux-lvm] Fast thin volume preallocation?
Date: Fri, 31 May 2019 15:13:41 +0200	[thread overview]
Message-ID: <93f53408-2f37-dec3-5c68-deef021f530c@assyoma.it> (raw)

Hi all,
while doing some tests on a 4-bay, entry-level NAS/SAN system, I 
discovered it is entirely based on LVM thin volumes.

When configuring what it calls "thick volumes", it creates a new thin 
logical volume and pre-allocates all space inside it.

What surprised me is the speed at which this allocation happens: a 2 TB 
volume was allocated (i.e. all data chunks were touched) in about 2 min. 
This immediately excludes any simple zeroing of the volume, which would 
require much more time.
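For scale, a quick back-of-envelope estimate (the 500 MB/s aggregate 
write bandwidth is my assumption, not a measured figure for this box):

```shell
# Assuming ~500 MB/s of aggregate sequential write bandwidth across the
# four bays, zeroing 2 TB (decimal) would take roughly:
echo "$((2 * 1000 * 1000 / 500)) seconds"   # about 67 minutes
```

Even under that optimistic assumption, plain zeroing is more than an 
order of magnitude slower than the 2 minutes observed.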

I tried the same on a regular CentOS box with lvmthin and LVM zeroing 
disabled and, indeed, allocating all blocks inside a thin volume took 
much longer. I tried both a very simple "dd if=/dev/zero 
of=/dev/test/thinvol bs=1M oflag=direct" and "blkdiscard -z 
/dev/test/thinvol".

Being curious, I found that the NAS uses a binary [1] that seems to 
issue a pattern of null writes at extremely high queue depth [2].

So, my questions:
- as far as you know, do commercial NAS systems use some patched/custom 
lvmthin version which enables fast volume preallocation, or early zero 
rejection?
- does standard lvmthin support something similar? If not, how do you 
see a zero coalesce/compression/trim/whatever feature?
- can I obtain something similar by simply touching each thin chunk 
once (maybe with a single 512 B write)?
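To illustrate the last question, here is a sketch of the per-chunk 
touch idea, run against a sparse file as a stand-in for the thin volume 
(the file name, volume size, and the 64 KiB chunk size are all my 
assumptions for the demo; on a real thin LV you would target the device 
node instead):

```shell
#!/bin/sh
# Touch each chunk once with a single 512 B write. On a thin LV this
# should force allocation of the whole chunk without writing all of it.
CHUNK_KB=64                      # assumed pool chunk size
SIZE_MB=4                        # tiny "volume" for the demo
truncate -s "${SIZE_MB}M" vol.img
chunks=$((SIZE_MB * 1024 / CHUNK_KB))
i=0
while [ "$i" -lt "$chunks" ]; do
    # One 512 B write at each chunk boundary; seek= counts 512 B blocks.
    dd if=/dev/zero of=vol.img bs=512 count=1 \
       seek=$((i * CHUNK_KB * 1024 / 512)) conv=notrunc status=none
    i=$((i + 1))
done
echo "touched $chunks chunks"
```

For a 2 TB volume with 64 KiB chunks this is still ~32 million writes, 
so whether it beats plain zeroing depends on how cheaply the pool 
handles the per-chunk metadata update versus the data write.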

Thanks.

[1] I am not naming it because I don't know whether revealing the 
binary, and thus the NAS vendor, is against this mailing list's policy.

[2] iostat -x 1 produces the following example output. Note how *no* 
writes reach the backing devices:

           extended device statistics
device     mgr/s mgw/s    r/s    w/s    kr/s    kw/s   size     queue   wait  svc_t  %b
sda            0     0  105.2    0.0   841.3     0.0    8.0       0.0    0.2    0.2   2
sdb            0     0   62.9    0.0   503.2     0.0    8.0       0.0    0.2    0.2   1
sdd            0     0  124.8    0.0   998.5     0.0    8.0       0.0    0.0    0.0   0
sdc            0     0  119.9    0.0   959.2     0.0    8.0       0.0    0.2    0.2   2
mtdblock0      0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
mtdblock1      0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
mtdblock2      0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
mtdblock3      0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
mtdblock4      0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
mtdblock5      0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
mtdblock6      0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
md9            0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
md13           0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
md256          0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
md322          0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
md1            0     0  412.8    0.0  3302.2     0.0    8.0       0.0    0.0    0.0   0
dm-1           0     0  412.8    0.0  3302.2     0.0    8.0       0.0    0.1    0.1   5
dm-2           0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
dm-3           0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
dm-4           0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
dm-5           0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
dm-6           0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
dm-7           0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0
dm-8           0     0    0.0    0.0     0.0     0.0    0.0       1.0    0.0    0.0  99
dm-0           0     0    0.0    0.0     0.0     0.0    0.0       1.0    0.0    0.0  99
dm-9           0     0    0.0    0.0     0.0     0.0    0.0  345966.5    0.0    0.0  99
dm-10          0     0    0.0    0.0     0.0     0.0    0.0       0.0    0.0    0.0   0

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8


Thread overview: 6+ messages
2019-05-31 13:13 Gionatan Danti [this message]
2019-06-03 13:23 ` [linux-lvm] Fast thin volume preallocation? Joe Thornber
2019-06-03 19:23   ` Gionatan Danti
2019-06-03 21:12     ` Ilia Zykov
2019-06-05 10:31       ` Zdenek Kabelac
2019-06-04  5:23     ` Ilia Zykov
