* Sparse file info in filestore not propagated to other OSDs
@ 2017-04-06 10:15 Piotr Dałek
From: Piotr Dałek @ 2017-04-06 10:15 UTC (permalink / raw)
  To: ceph-devel

Hello,

We recently hit an interesting issue with RBD images and filestore on Jewel 
10.2.5. We have a pool of RBD images, most of them largely untouched (large 
areas of these images unused). After we added 3 new OSDs to the cluster, the 
objects backing these images grew substantially on the new OSDs: objects 
holding unused areas of the images stayed small on the original OSDs (~8K of 
space actually used, 4M allocated), but on the new OSDs they were full-sized 
(4M allocated *and* actually used). After investigating, we concluded that 
Ceph does not propagate sparse file information during cluster rebalance: the 
data contents are correct on all OSDs, but the files on the new OSDs are no 
longer sparse, hence the increased disk usage there.
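For reference, the sparse-extent layout that gets lost can be read back from
any file on Linux with lseek(SEEK_DATA)/lseek(SEEK_HOLE). A minimal sketch
(not Ceph code; `data_extents` is a hypothetical helper name, and
os.SEEK_DATA is Linux-specific):

```python
import os

def data_extents(path):
    """Return (offset, length) pairs for the allocated (data) regions
    of a file, found via lseek(SEEK_DATA)/lseek(SEEK_HOLE). On a
    filesystem without hole support the whole file reports as one
    data extent."""
    extents = []
    fd = os.open(path, os.O_RDONLY)
    try:
        end = os.fstat(fd).st_size
        pos = 0
        while pos < end:
            try:
                data = os.lseek(fd, pos, os.SEEK_DATA)
            except OSError:  # ENXIO: no data past pos
                break
            hole = os.lseek(fd, data, os.SEEK_HOLE)
            extents.append((data, hole - data))
            pos = hole
    finally:
        os.close(fd)
    return extents
```

Run against the object files above, this would show the ~8K of data extents
on the old OSDs versus one fully allocated 4M extent on the new one.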

Example on test cluster, before growing it by one OSD:

ls:

osd-01-cluster: -rw-r--r-- 1 root root 4194304 Apr  6 09:18 
/var/lib/ceph/osd-01-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0
osd-02-cluster: -rw-r--r-- 1 root root 4194304 Apr  6 09:18 
/var/lib/ceph/osd-02-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0
osd-03-cluster: -rw-r--r-- 1 root root 4194304 Apr  6 09:18 
/var/lib/ceph/osd-03-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0

du:

osd-01-cluster: 12 
/var/lib/ceph/osd-01-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0
osd-02-cluster: 12 
/var/lib/ceph/osd-02-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0
osd-03-cluster: 12 
/var/lib/ceph/osd-03-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0


mon-01-cluster:~ # rbd diff test
Offset   Length  Type
8388608  4194304 data
16777216 4096    data
33554432 4194304 data
37748736 2048    data

And after growing it:

ls:

clush> find /var/lib/ceph/osd-*/current/0.*head/ -type f -name '*data*' 
-exec ls -l {} \+
osd-02-cluster: -rw-r--r-- 1 root root 4194304 Apr  6 09:18 
/var/lib/ceph/osd-02-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0
osd-03-cluster: -rw-r--r-- 1 root root 4194304 Apr  6 09:18 
/var/lib/ceph/osd-03-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0
osd-04-cluster: -rw-r--r-- 1 root root 4194304 Apr  6 09:25 
/var/lib/ceph/osd-04-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0

du:

clush> find /var/lib/ceph/osd-*/current/0.*head/ -type f -name '*data*' 
-exec du -k {} \+
osd-02-cluster: 12 
/var/lib/ceph/osd-02-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0
osd-03-cluster: 12 
/var/lib/ceph/osd-03-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0
osd-04-cluster: 4100 
/var/lib/ceph/osd-04-cluster/current/0.27_head/rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0

Note that "rbd\udata.12a474b0dc51.0000000000000008__head_2DD64767__0" grew 
from 12 KB to 4100 KB when copied from the other OSDs to osd-04.

Is this expected behavior? Is there any way to make rebalancing propagate the 
sparse file info? Or should we think about writing a "fallocate -d"-like 
patch for writes in filestore?
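To illustrate the "fallocate -d"-like idea: detect all-zero blocks and avoid
allocating them. A minimal userspace sketch, copying a file while seeking
over zero blocks so the destination filesystem can leave holes (not filestore
code; `sparse_copy` is a hypothetical helper name, and the 4K block size
matches our XFS bsize):

```python
import os

BLOCK = 4096  # assumed filesystem block size

def sparse_copy(src, dst, block=BLOCK):
    """Copy src to dst, seeking over all-zero blocks instead of
    writing them, so the destination can leave holes. This is
    roughly the effect `fallocate -d` achieves in place."""
    with open(src, 'rb') as s, open(dst, 'wb') as d:
        while True:
            chunk = s.read(block)
            if not chunk:
                break
            if chunk == b'\0' * len(chunk):
                d.seek(len(chunk), os.SEEK_CUR)  # leave a hole
            else:
                d.write(chunk)
        d.truncate()  # extend the size if the file ends in a hole
```

An in-place filestore patch would presumably punch holes over zero ranges
(fallocate with FALLOC_FL_PUNCH_HOLE) rather than rewrite the object, but the
zero-detection step would be the same.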

(We're using kernel 3.13.0-45-generic, but the issue persists on 
4.4.0-31-generic; our XFS uses a 4K bsize.)

-- 
Piotr Dałek
piotr.dalek@corp.ovh.com
https://www.ovh.com/us/


Thread overview:
2017-04-06 10:15 Sparse file info in filestore not propagated to other OSDs Piotr Dałek
2017-04-06 13:25 ` Sage Weil
2017-04-06 13:30   ` Piotr Dałek
2017-04-06 13:55     ` Sage Weil
2017-04-06 14:24       ` Piotr Dałek
2017-04-06 14:27         ` Sage Weil
2017-04-06 15:50           ` Jason Dillaman
2017-04-06 17:52             ` Josh Durgin
2017-04-07  6:46           ` Piotr Dałek
2017-04-13 14:23   ` Piotr Dałek
     [not found]     ` <d4bde447-f179-aeca-bac5-636fa40ccba5-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org>
2017-06-14  6:30       ` Paweł Sadowski
2017-06-14 13:44         ` Sage Weil
     [not found]           ` <alpine.DEB.2.11.1706141340520.3646-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
2017-06-21  7:05             ` Piotr Dałek
2017-06-21 13:24               ` Sage Weil
2017-06-21 13:46                 ` Piotr Dałek
     [not found]                   ` <898546b4-b9b2-5413-27ab-74534cc77eed-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org>
2017-06-21 13:56                     ` Sage Weil
2017-06-26 11:59                 ` Piotr Dalek
2017-06-21 13:35               ` [ceph-users] " Jason Dillaman
     [not found]                 ` <CA+aFP1DJ3L3Pg0r4Pj3o7JoNTNnBRRs0u_nnb2JYz4nGxafUTA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-06-21 13:47                   ` Piotr Dałek
