* dm-thin - issue about the maximum size of the metadata device
@ 2013-08-06  4:18 梁文彥
  2013-08-06 10:39 ` Zdenek Kabelac
  2014-09-09 15:52 ` Joe Thornber
  0 siblings, 2 replies; 4+ messages in thread
From: 梁文彥 @ 2013-08-06  4:18 UTC (permalink / raw)
  To: dm-devel


Hi folks,

I am currently doing some experiments with the dm-thin-provisioning target.
One of these experiments tries to create/find out the largest thin
volume on a pool.

As I know, each time we provision blocks from the pool, metadata is
consumed to record the mapping information.
By executing the lvdisplay command, we can observe the pool and metadata
usage, for example:
$ sudo lvdisplay | grep Allocated
  Allocated pool data    7.87%
  Allocated metadata    6.09%

And the following is extracted from thin-provisioning.txt under
Documentation/device-mapper in the source tree:
"As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller."

If the size of the metadata dev is fixed at 16G, and the block size of the
pool dev is set to 64K,
then we may infer that the largest thin volume size is 21.33TB.

(48 * $data_dev_size / 64K = 16G
 $data_dev_size = 16G * 64K / 48 = 21.33TB)
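
(A quick check of that arithmetic - just a sketch, assuming binary units
throughout, i.e. 16 GiB of metadata, 64 KiB blocks and TiB for the result:

$ echo "16 * 2^30 * 64 * 2^10 / 48" | bc
23456248059221

which is roughly 23.46 * 10^12 bytes, i.e. about 21.33 TiB.)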

If this inference is not correct, please kindly let me know why.

Then I did the experiment with the following steps (a rough sketch of the
corresponding commands is given below):
1. create a thin-pool with size 21.33T on my RAID0, i.e. the largest size
we inferred, with block size 64K and metadata size 16G
2. create a thin volume with virtual size 21.33T.
3. dd data (/dev/urandom) to the thin device
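
In lvm2 terms the steps look roughly like this (a sketch only - the VG
name "vg", the LV names and the exact option spellings are placeholders,
not the exact commands I ran):

$ lvcreate --type thin-pool -L 21.33T --chunksize 64K \
           --poolmetadatasize 16G -n pool vg
$ lvcreate -V 21.33T --thinpool pool -n thin0 vg
$ dd if=/dev/urandom of=/dev/vg/thin0 bs=1M oflag=direct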

Finally, I observed that
  Allocated pool data    100.00%
  Allocated metadata     71.89%

It seemed that the pool data had already run out, but the metadata had not.
Does this mean that 16G of metadata can record a thin dev with a size
bigger than 21.33T?

I did another experiment. This time, I enlarged the pool/thin dev to the
biggest size I could provide.

1. create a thin-pool on my RAID0 with size 36.20T, block size 64K and
metadata size 16G
2. create a thin volume with virtual size 36.20T.
3. dd data to the thin device

Finally, I observed that
  Allocated pool data    84.52%
  Allocated metadata    99.99%

And these messages were shown in dmesg:
device-mapper: space map metadata: out of metadata space
device-mapper: thin: pre_alloc_func: dm_thin_insert_block failed
device-mapper: space map metadata: out of metadata space
device-mapper: thin: commit failed, error = -28
device-mapper: thin: switching pool to read-only mode

In this experiment, we ran out of metadata, and from the "Allocated pool
data" field, we inferred that the maximum thin device size was about
30.59TB - was that correct?

Regards,
Burton


* Re: dm-thin - issue about the maximum size of the metadata device
  2013-08-06  4:18 dm-thin - issue about the maximum size of the metadata device 梁文彥
@ 2013-08-06 10:39 ` Zdenek Kabelac
  2014-09-09 15:52 ` Joe Thornber
  1 sibling, 0 replies; 4+ messages in thread
From: Zdenek Kabelac @ 2013-08-06 10:39 UTC (permalink / raw)
  To: device-mapper development; +Cc: 梁文彥

On 6.8.2013 06:18, 梁文彥 wrote:
> Hi folks,
>
> I am currently doing some experiments with the dm-thin-provisioning target.
> One of these experiments tries to create/find out the largest thin volume
> on a pool.

There is currently no lvm2 support for this operation yet.
It would need to parse thin_dump output and do an accounting of each block
(which is not a cheap operation).
It's also hard to say how to account for blocks shared between multiple
volumes. So there could be information about the total number of used
blocks, and about how many blocks would become free if a given thin volume
were removed.

The only info displayed now is the highest mapped block of a thin volume,
which gives you only a very 'light' indication of how much of that volume
is used.
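
For example (a sketch; the device name is a placeholder), for a 'thin'
target 'dmsetup status' reports the number of mapped sectors and the
highest mapped sector:

$ dmsetup status vg-thin0
<start> <length> thin <nr mapped sectors> <highest mapped sector>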


> As I know, each time we provision blocks from the pool, metadata is
> consumed to record the mapping information.
> By executing the lvdisplay command, we can observe the pool and metadata
> usage, for example:
> $ sudo lvdisplay | grep Allocated
>    Allocated pool data    7.87%
>    Allocated metadata    6.09%
>
> And the following is extracted from thin-provisioning.txt under
> Documentation/device-mapper in the source tree:
> "As a guide, we suggest you calculate the number of bytes to use in the
> metadata device as 48 * $data_dev_size / $data_block_size but round it up
> to 2MB if the answer is smaller."
>
> If the size of the metadata dev is fixed at 16G, and the block size of the
> pool dev is set to 64K,
> then we may infer that the largest thin volume size is 21.33TB.
>
> (48 * $data_dev_size / 64K = 16G
>   $data_dev_size = 16G * 64K / 48 = 21.33TB)
>
> If this inference is not correct, please kindly let me know why.

The formula there is just an approximation.
The newest device-mapper-persistent-data tools have an improved formula that
also takes the number of thin volumes inside the pool into account.

Anyway, the current kernel limit on metadata size is 16G - and that can be
used for addressing thin pools of very different sizes.
(So it's not a problem to use e.g. a 1EB thin pool - it's just that in that
case you should probably use a much bigger chunk size than 64K.)
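
For example (the same ~48 bytes-per-mapping approximation from the docs,
binary units assumed), with 1 MiB chunks the same 16 GiB of metadata can
map roughly:

$ echo "16 * 2^30 * 2^20 / 48" | bc
375299968947541

i.e. about 341 TiB, instead of ~21 TiB with 64 KiB chunks.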


> Then I did the experiment with the following steps:
> 1. create a thin-pool with size 21.33T on my RAID0, i.e. the largest size we
> inferred, with block size 64K and metadata size 16G
> 2. create a thin volume with virtual size 21.33T.
> 3. dd data (/dev/urandom) to the thin device
>
> Finally, I observed that
>    Allocated pool data    100.00%
>    Allocated metadata     71.89%
>
> It seemed that the pool data had already run out, but the metadata had not.
> Does this mean that 16G of metadata can record a thin dev with a size
> bigger than 21.33T?

Well, it always depends on what you want to do - if you use it for
snapshots, you will likely run out of space.

If you use it only for provisioning, you may use a much bigger chunk size
and significantly reduce metadata usage and improve performance.

> 1. create a thin-pool on my RAID0 with size 36.20T, and block size 64K,
> metadata size 16G

Using a block size/chunk size (in lvm2 terminology) of 64K for a 36T device
is just not a good idea.

> In this experiment, we ran out of metadata, and from the "Allocated pool
> data" field, we inferred that the maximum thin device size was about
> 30.59TB - was that correct?

The major rule here is to observe how full the metadata is - and if it is
getting too full, remove volumes.

As said above, you need to plan for the use-case - 64K is good for lots of
snapshots, while bigger chunk sizes (e.g. 1MB) are good for space
provisioning.
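
A sketch of watching that directly (the pool device name below is a
placeholder): for a thin-pool target, 'dmsetup status' reports, among other
fields, <used metadata blocks>/<total metadata blocks>,
<used data blocks>/<total data blocks>, and whether the pool is rw or ro:

$ dmsetup status vg-pool-tpool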

BTW - did you really write 36TB of data? How long did this test run?

Zdenek


* Re: dm-thin - issue about the maximum size of the metadata device
  2013-08-06  4:18 dm-thin - issue about the maximum size of the metadata device 梁文彥
  2013-08-06 10:39 ` Zdenek Kabelac
@ 2014-09-09 15:52 ` Joe Thornber
  2014-09-09 15:54   ` Joe Thornber
  1 sibling, 1 reply; 4+ messages in thread
From: Joe Thornber @ 2014-09-09 15:52 UTC (permalink / raw)
  To: device-mapper development

On Tue, Aug 06, 2013 at 12:18:22PM +0800, 梁文彥 wrote:
> In this experiment, we ran out of metadata, and from the "Allocated pool
> data" field, we inferred that the maximum thin device size was about
> 30.59TB - was that correct?

Sounds about right.  Use a bigger block size if you want to provision
more space in the pool.

- Joe


* Re: dm-thin - issue about the maximum size of the metadata device
  2014-09-09 15:52 ` Joe Thornber
@ 2014-09-09 15:54   ` Joe Thornber
  0 siblings, 0 replies; 4+ messages in thread
From: Joe Thornber @ 2014-09-09 15:54 UTC (permalink / raw)
  To: device-mapper development

Sorry for raising this thread from the dead.  My mail was sorted by
sender rather than received time.  Please ignore.

On Tue, Sep 09, 2014 at 04:52:40PM +0100, Joe Thornber wrote:
> On Tue, Aug 06, 2013 at 12:18:22PM +0800, 梁文彥 wrote:
> > In this experiment, we ran out of metadata, and from the "Allocated pool
> > data" field, we inferred that the maximum thin device size was about
> > 30.59TB - was that correct?
> 
> Sounds about right.  Use a bigger block size if you want to provision
> more space in the pool.
> 
> - Joe
> 

