linux-lvm.redhat.com archive mirror
* [linux-lvm] Thin metadata volume bigger than 16GB ?
@ 2018-06-22 18:10 Gionatan Danti
  2018-06-22 20:13 ` Zdenek Kabelac
  0 siblings, 1 reply; 3+ messages in thread
From: Gionatan Danti @ 2018-06-22 18:10 UTC (permalink / raw)
  To: linux-lvm

Hi list,
I wonder if a method exists to have a >16 GB thin metadata volume.

When using a 64 KB chunksize, a maximum of ~16 TB can be addressed in a 
single thin pool. The obvious solution is to increase the chunk size, as 
128 KB chunks are good for over 30 TB, and so on. However, increasing the 
chunksize is detrimental to efficiency/performance when heavily using 
snapshots.
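
The scaling above can be sketched with some back-of-envelope arithmetic. 
The constant 2**28 below is my own assumption, derived from the "~16 TB 
at 64 KB chunks" figure, not an official LVM number:

```python
# Rough sketch: under a fixed metadata limit, addressable pool size
# scales linearly with chunk size.
# ASSUMPTION: the 16 GB metadata cap corresponds to roughly 2**28 mapped
# chunks (16 TiB / 64 KiB = 2**44 / 2**16 = 2**28); exact values depend
# on metadata overhead.
MAX_CHUNKS = 2 ** 28

def max_pool_size_tib(chunk_size_kib: int) -> float:
    """Approximate maximum thin-pool size (TiB) for a given chunk size (KiB)."""
    return MAX_CHUNKS * chunk_size_kib * 1024 / 2 ** 40

print(max_pool_size_tib(64))   # ~16 TiB
print(max_pool_size_tib(128))  # ~32 TiB
```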

Another simple and very effective solution is to have multiple thin 
pools, i.e.: two pools with 16 GB metadata volumes and a 64 KB chunksize 
are good, again, for over 30 TB of thin pool space.

That said, the naive but straightforward solution would be to increase 
the maximum thin pool size. So I have some questions:

- is the 16 GB limit a hard one?
- are there practical considerations behind it (e.g. slow thin_check 
for very big metadata volumes)?
- if so, why can I not find any similar limit (in the docs) for cache 
metadata volumes?
- what is the right thing to do when a 16 GB metadata volume fills up, 
if it can not be expanded?

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8


* Re: [linux-lvm] Thin metadata volume bigger than 16GB ?
  2018-06-22 18:10 [linux-lvm] Thin metadata volume bigger than 16GB ? Gionatan Danti
@ 2018-06-22 20:13 ` Zdenek Kabelac
  2018-06-23 10:14   ` Gionatan Danti
  0 siblings, 1 reply; 3+ messages in thread
From: Zdenek Kabelac @ 2018-06-22 20:13 UTC (permalink / raw)
  To: LVM general discussion and development, Gionatan Danti

On 22.6.2018 at 20:10, Gionatan Danti wrote:
> Hi list,
> I wonder if a method exists to have a >16 GB thin metadata volume.
> 
> When using a 64 KB chunksize, a maximum of ~16 TB can be addressed in a single 
> thin pool. The obvious solution is to increase the chunk size, as 128 KB 
> chunks are good for over 30 TB, and so on. However, increasing the chunksize 
> is detrimental to efficiency/performance when heavily using snapshots.
> 
> Another simple and very effective solution is to have multiple thin pools, 
> i.e.: two pools with 16 GB metadata volumes and a 64 KB chunksize are good, 
> again, for over 30 TB of thin pool space.
> 
> That said, the naive but straightforward solution would be to increase the 
> maximum thin pool size. So I have some questions:
> 
> - is the 16 GB limit a hard one?

Addressing is internally limited to a lower number of bits.

> - are there practical considerations behind it (e.g. slow thin_check for 
> very big metadata volumes)?

Usage of memory resources, efficiency.

> - if so, why can I not find any similar limit (in the docs) for cache metadata 
> volumes?

ATM we do not recommend using a cache with more than 1,000,000 chunks, for 
efficiency reasons, although on bigger machines a larger number of chunks 
is still quite usable, especially now with cache metadata format 2.

> - what is the right thing to do when a 16 GB metadata volume fills up, if it 
> can not be expanded?

ATM, drop data you don't need (fstrim the filesystem).

So far there have not been many requests to support a bigger size, although 
there are plans to improve the thin-pool metadata format in a future version.

Regards

Zdenek


* Re: [linux-lvm] Thin metadata volume bigger than 16GB ?
  2018-06-22 20:13 ` Zdenek Kabelac
@ 2018-06-23 10:14   ` Gionatan Danti
  0 siblings, 0 replies; 3+ messages in thread
From: Gionatan Danti @ 2018-06-23 10:14 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: LVM general discussion and development

On 22-06-2018 at 22:13, Zdenek Kabelac wrote:
> 
> Addressing is internally limited to a lower number of bits.
> 
> Usage of memory resources, efficiency.
> 
> ATM we do not recommend using a cache with more than 1,000,000 chunks,
> for efficiency reasons, although on bigger machines a larger number of
> chunks is still quite usable, especially now with cache metadata
> format 2.

Does it mean that with a 64 KB cache chunk size I can efficiently cache 
only up to 64 KB * 1,000,000 = ~60 GB of volume?
So for, say, a 64 TB volume, do I need to use 64 MB cache chunks?
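
The arithmetic behind my question, assuming the ~1,000,000-chunk ceiling 
from your reply (the variable names below are mine, just for clarity):

```python
# Worked arithmetic for the cache-chunk question, using the ~1,000,000-chunk
# ceiling mentioned in the previous reply.
MAX_CACHE_CHUNKS = 1_000_000

chunk_64k = 64 * 1024                       # 64 KiB cache chunk, in bytes
cacheable = MAX_CACHE_CHUNKS * chunk_64k
print(cacheable / 10**9)                    # ~65.5 GB cacheable at 64 KiB chunks

volume_64t = 64 * 2**40                     # a 64 TiB volume, in bytes
needed_chunk = volume_64t / MAX_CACHE_CHUNKS
print(needed_chunk / 2**20)                 # ~67 MiB chunk to stay under the ceiling
```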

> ATM, drop data you don't need (fstrim the filesystem).
> 
> So far there have not been many requests to support a bigger size, although
> there are plans to improve the thin-pool metadata format in a future version.
> 
> Regards
> 
> Zdenek

Extremely informative answer, thanks a lot.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8


