* btrfs on LVM: Out of space
@ 2010-09-08 11:18 Marcel Lohmann
  2010-09-08 14:53 ` Zhu Yanhai
  0 siblings, 1 reply; 10+ messages in thread
From: Marcel Lohmann @ 2010-09-08 11:18 UTC (permalink / raw)
  To: linux-btrfs

Hi,

I'm new to btrfs and wanted to give it a try, but now I'm seeing some
strange behavior with a "full disk". My setup is currently as follows:
an LVM volume group on top of RAID1, containing a logical volume of
130 GB in size. So physically that is 2x130 GB because of RAID 1, but
logically 130 GB are usable.
On that logical volume I created a btrfs filesystem, and btrfs-show
displays 130 GB of space. Fine.
Then I started to fill that volume with thousands of files until it
was unexpectedly "full". But I am sure there is far less than 130 GB
of files! Now btrfs-show says I have used 130 GB of 130 GB, while
df -h shows 38 GB of free space. Since I know df -h has problems
determining the real free space, I ran "btrfs filesystem df /mountpoint"
to get the actual numbers, and it tells me:
Data: total=23.97GB, used=23.97GB
Metadata: total=53.01GB, used=33.98GB
System: total=12.00MB, used=16.00KB
So what does that mean? I have 130 GB of disk capacity and can only
use about a fifth of it for real data? That can't be right. Even if
there were a problem with RAID recognition (though the RAID should be
invisible to btrfs), I would still have 65 GB of "available" space,
yet btrfs already uses 77 GB.

What did I do wrong, and how can I fix it? The kernel is the
"official" 2.6.35-20-server that ships with the Ubuntu 10.10 beta. I
had the same problem with Ubuntu 10.04, but there the "btrfs" command
was missing and "df -h" was buggy, so I hoped the newer kernel would
solve the problem. It does not.

Marcel


* Re: btrfs on LVM: Out of space
  2010-09-08 11:18 btrfs on LVM: Out of space Marcel Lohmann
@ 2010-09-08 14:53 ` Zhu Yanhai
  2010-09-08 20:35   ` Marcel Lohmann
  0 siblings, 1 reply; 10+ messages in thread
From: Zhu Yanhai @ 2010-09-08 14:53 UTC (permalink / raw)
  To: Marcel Lohmann; +Cc: linux-btrfs

Hi,
Have you tried 'mkfs.btrfs -m single /dev/xxxx'?
Since you have a RAID1 underneath via LVM, you don't need to keep
Btrfs's default duplicated metadata profile.
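
A minimal sketch of the full sequence (the device path is a
placeholder; mkfs.btrfs wipes whatever is on the device, so back up
first):

  umount /mountpoint
  mkfs.btrfs -m single /dev/xxxx   # 'single' = one copy of metadata
  mount /dev/xxxx /mountpoint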

-zyh

2010/9/8 Marcel Lohmann <marcel.lohmann@googlemail.com>:
> Hi,
>
> I'm new to btrfs and wanted to give it a try, but now I'm seeing some
> strange behavior with a "full disk". My setup is currently as follows:
> an LVM volume group on top of RAID1, containing a logical volume of
> 130 GB in size. So physically that is 2x130 GB because of RAID 1, but
> logically 130 GB are usable.
> On that logical volume I created a btrfs filesystem, and btrfs-show
> displays 130 GB of space. Fine.
> Then I started to fill that volume with thousands of files until it
> was unexpectedly "full". But I am sure there is far less than 130 GB
> of files! Now btrfs-show says I have used 130 GB of 130 GB, while
> df -h shows 38 GB of free space. Since I know df -h has problems
> determining the real free space, I ran "btrfs filesystem df /mountpoint"
> to get the actual numbers, and it tells me:
> Data: total=23.97GB, used=23.97GB
> Metadata: total=53.01GB, used=33.98GB
> System: total=12.00MB, used=16.00KB
> So what does that mean? I have 130 GB of disk capacity and can only
> use about a fifth of it for real data? That can't be right. Even if
> there were a problem with RAID recognition (though the RAID should be
> invisible to btrfs), I would still have 65 GB of "available" space,
> yet btrfs already uses 77 GB.
>
> What did I do wrong, and how can I fix it? The kernel is the
> "official" 2.6.35-20-server that ships with the Ubuntu 10.10 beta. I
> had the same problem with Ubuntu 10.04, but there the "btrfs" command
> was missing and "df -h" was buggy, so I hoped the newer kernel would
> solve the problem. It does not.
>
> Marcel


* Re: btrfs on LVM: Out of space
  2010-09-08 14:53 ` Zhu Yanhai
@ 2010-09-08 20:35   ` Marcel Lohmann
  2010-09-09  2:15     ` Zhu Yanhai
  0 siblings, 1 reply; 10+ messages in thread
From: Marcel Lohmann @ 2010-09-08 20:35 UTC (permalink / raw)
  To: linux-btrfs

2010/9/8 Zhu Yanhai <zhu.yanhai@gmail.com>:
> Hi,
> Have you tried 'mkfs.btrfs -m single /dev/xxxx'?
> Since you have a RAID1 underneath via LVM, you don't need to keep
> Btrfs's default duplicated metadata profile.
>
> -zyh
>
> 2010/9/8 Marcel Lohmann <marcel.lohmann@googlemail.com>:
>> On that logical volume I created a btrfs filesystem, and btrfs-show
>> displays 130 GB of space. Fine.

>> to get the actual numbers, and it tells me:
>> Data: total=23.97GB, used=23.97GB
>> Metadata: total=53.01GB, used=33.98GB
>> System: total=12.00MB, used=16.00KB

No, I did not try this. I just created it with the defaults
"mkfs.btrfs /dev/mapper/somelogicalvolume". Isn't "-m single" the
default?
So is it right that btrfs "knows" it is running on a RAID and changes
its behavior? Why? Normally a filesystem does not care about the
underlying (hidden) disk array.
But if it is necessary to "drop" that duplicate metadata, how can I
arrange this after the fact? And once it is done, I would have reduced
the Metadata size, but will there really be more space for Data? Where
is the remaining space between 77 GB and 130 GB?

Maybe I described my setup the wrong way, sorry. I created an mdadm
software RAID1 with LVM on top of it. I did not use btrfs to span
two disks; the md RAID with LVM was there before and I couldn't
change it. This is why I have an md-RAID, not a btrfs-RAID.

Marcel


* Re: btrfs on LVM: Out of space
  2010-09-08 20:35   ` Marcel Lohmann
@ 2010-09-09  2:15     ` Zhu Yanhai
  2010-09-09  9:23       ` Marcel Lohmann
  2010-09-10 19:46       ` Marcel Lohmann
  0 siblings, 2 replies; 10+ messages in thread
From: Zhu Yanhai @ 2010-09-09  2:15 UTC (permalink / raw)
  To: Marcel Lohmann; +Cc: linux-btrfs

2010/9/9 Marcel Lohmann <marcel.lohmann@googlemail.com>:
> 2010/9/8 Zhu Yanhai <zhu.yanhai@gmail.com>:
>> Hi,
>> Have you tried 'mkfs.btrfs -m single /dev/xxxx'?
>> Since you have a RAID1 underneath via LVM, you don't need to keep
>> Btrfs's default duplicated metadata profile.
>>
>> -zyh
>>
>> 2010/9/8 Marcel Lohmann <marcel.lohmann@googlemail.com>:
>>> On that logical volume I created a btrfs filesystem, and btrfs-show
>>> displays 130 GB of space. Fine.
>
>>> to get the actual numbers, and it tells me:
>>> Data: total=23.97GB, used=23.97GB
>>> Metadata: total=53.01GB, used=33.98GB
>>> System: total=12.00MB, used=16.00KB
>
> No, I did not try this. I just created it with the defaults
> "mkfs.btrfs /dev/mapper/somelogicalvolume". Isn't "-m single" the
> default?

No, it's not the default. By default Btrfs writes two copies of the
Metadata to disk, with only one copy of Data -- something similar to
RAID1, but not the same.
Anyway, you don't need this, since you already have a standard RAID1
array underneath.
'-m single' makes Btrfs write exactly one copy of Metadata instead of
the default two.

> So is it right that btrfs "knows" it is running on a RAID and changes
> its behavior? Why? Normally a filesystem does not care about the
> underlying (hidden) disk array.

No, it doesn't know.

> But if it is necessary to "drop" that duplicate metadata, how can I
> arrange this after the fact? And once it is done, I would have reduced
> the Metadata size, but will there really be more space for Data? Where
> is the remaining space between 77 GB and 130 GB?

53 * 2 + 24 = 130. The size of Metadata reported by "btrfs filesystem df"
is 53GB, however it occupies 53 * 2 = 106GB on 'disk' physically.
So yes, there will be more space for Data.
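
Spelled out with the numbers reported above (Metadata and System chunks
are stored twice by the default DUP profile, Data is not):

  Data:     23.97 GB * 1 =  ~24 GB on disk
  Metadata: 53.01 GB * 2 = ~106 GB on disk
  System:   12.00 MB * 2 =  ~24 MB on disk
                           ----------------
                           ~130 GB allocated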

>
> Maybe I described my setup the wrong way, sorry. I created an mdadm
> software RAID1 with LVM on top of it. I did not use btrfs to span
> two disks; the md RAID with LVM was there before and I couldn't
> change it. This is why I have an md-RAID, not a btrfs-RAID.
>
> Marcel


* Re: btrfs on LVM: Out of space
  2010-09-09  2:15     ` Zhu Yanhai
@ 2010-09-09  9:23       ` Marcel Lohmann
  2010-09-09  9:37         ` Tamás Gulácsi
  2010-09-09  9:52         ` Zhu Yanhai
  2010-09-10 19:46       ` Marcel Lohmann
  1 sibling, 2 replies; 10+ messages in thread
From: Marcel Lohmann @ 2010-09-09  9:23 UTC (permalink / raw)
  To: linux-btrfs

2010/9/9 Zhu Yanhai <zhu.yanhai@gmail.com>:
> 2010/9/9 Marcel Lohmann <marcel.lohmann@googlemail.com>:
>> 2010/9/8 Zhu Yanhai <zhu.yanhai@gmail.com>:
>> But if it is necessary to "drop" that duplicate metadata, how can I
>> arrange this after the fact? And once it is done, I would have reduced
>> the Metadata size, but will there really be more space for Data? Where
>> is the remaining space between 77 GB and 130 GB?
>
> 53 * 2 + 24 = 130. The size of Metadata reported by "btrfs filesystem df"
> is 53GB, however it occupies 53 * 2 = 106GB on 'disk' physically.
> So yes, there will be more space for Data.
>

Perfect, great. This seems strange at first sight, because nothing
anywhere shows that the metadata has to be doubled -- or rather, that
it is currently doubled.
So now I know what to do: I will drop the filesystem and create it again.
There is still the problem that my metadata is twice the size of the
real data, but I guess that is due to the very large number of very
small files (current estimate: 1440*50000 files of around 200 to
2000 bytes each). On the other hand, only twice the data size is
reserved for metadata, and the amount actually used is just a bit
more than the real data.
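
As a rough cross-check of that estimate (a sketch, assuming the file
sizes fall between the two bounds):

  1440 * 50000        =  72,000,000 files
  72,000,000 * 200 B  ~  14 GB of payload
  72,000,000 * 2000 B ~ 144 GB of payload

so at these sizes the per-file metadata (inode items, directory
entries, checksums) can easily rival the payload itself.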

Many thanks for making that clear to me.

Marcel


* Re: btrfs on LVM: Out of space
  2010-09-09  9:23       ` Marcel Lohmann
@ 2010-09-09  9:37         ` Tamás Gulácsi
  2010-09-09  9:52         ` Zhu Yanhai
  1 sibling, 0 replies; 10+ messages in thread
From: Tamás Gulácsi @ 2010-09-09  9:37 UTC (permalink / raw)
  To: Marcel Lohmann; +Cc: linux-btrfs

You can try the "-l" option of mkfs.btrfs to have all the small files
packed into the metadata instead of into extents.
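
A sketch of how that might look (the device path is a placeholder; as
noted later in this thread, the kernel only accepts a leaf size equal
to the page size):

  # -l sets the btree leaf size; small files get inlined into the leaves
  mkfs.btrfs -m single -l 4096 /dev/xxxx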

GThomas

2010/9/9 Marcel Lohmann <marcel.lohmann@googlemail.com>:
> 2010/9/9 Zhu Yanhai <zhu.yanhai@gmail.com>:
>> 2010/9/9 Marcel Lohmann <marcel.lohmann@googlemail.com>:
>>> 2010/9/8 Zhu Yanhai <zhu.yanhai@gmail.com>:
>>> But if it is necessary to "drop" that duplicate metadata, how can I
>>> arrange this after the fact? And once it is done, I would have reduced
>>> the Metadata size, but will there really be more space for Data? Where
>>> is the remaining space between 77 GB and 130 GB?
>>
>> 53 * 2 + 24 = 130. The size of Metadata reported by "btrfs filesystem df"
>> is 53GB, however it occupies 53 * 2 = 106GB on 'disk' physically.
>> So yes, there will be more space for Data.
>>
>
> Perfect, great. This seems strange at first sight, because nothing
> anywhere shows that the metadata has to be doubled -- or rather, that
> it is currently doubled.
> So now I know what to do: I will drop the filesystem and create it again.
> There is still the problem that my metadata is twice the size of the
> real data, but I guess that is due to the very large number of very
> small files (current estimate: 1440*50000 files of around 200 to
> 2000 bytes each). On the other hand, only twice the data size is
> reserved for metadata, and the amount actually used is just a bit
> more than the real data.
>
> Many thanks for making that clear to me.
>
> Marcel


* Re: btrfs on LVM: Out of space
  2010-09-09  9:23       ` Marcel Lohmann
  2010-09-09  9:37         ` Tamás Gulácsi
@ 2010-09-09  9:52         ` Zhu Yanhai
  2010-09-09 10:54           ` Marcel Lohmann
  1 sibling, 1 reply; 10+ messages in thread
From: Zhu Yanhai @ 2010-09-09  9:52 UTC (permalink / raw)
  To: Marcel Lohmann; +Cc: linux-btrfs

2010/9/9 Marcel Lohmann <marcel.lohmann@googlemail.com>:
> 2010/9/9 Zhu Yanhai <zhu.yanhai@gmail.com>:
>> 2010/9/9 Marcel Lohmann <marcel.lohmann@googlemail.com>:
>>> 2010/9/8 Zhu Yanhai <zhu.yanhai@gmail.com>:
>>> But if it is necessary to "drop" that duplicate metadata, how can I
>>> arrange this after the fact? And once it is done, I would have reduced
>>> the Metadata size, but will there really be more space for Data? Where
>>> is the remaining space between 77 GB and 130 GB?
>>
>> 53 * 2 + 24 = 130. The size of Metadata reported by "btrfs filesystem df"
>> is 53GB, however it occupies 53 * 2 = 106GB on 'disk' physically.
>> So yes, there will be more space for Data.
>>
>
> Perfect, great. This seems strange at first sight, because nothing
> anywhere shows that the metadata has to be doubled -- or rather, that
> it is currently doubled.
> So now I know what to do: I will drop the filesystem and create it again.
> There is still the problem that my metadata is twice the size of the
> real data, but I guess that is due to the very large number of very
> small files (current estimate: 1440*50000 files of around 200 to
> 2000 bytes each). On the other hand, only twice the data size is

200B ~ 2000B is really too small for modern hard disks (some of them
already have 4KB sectors instead of 512B).
Besides, Btrfs currently doesn't play all that well with such small
files; you may want to read this thread:
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg05263.html
Fortunately Chris found this is caused by a plain old bug in Btrfs, and
he already had a patch for it, IIRC. Here it is:
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg05292.html
I don't know whether it is in the kernel you are using.
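
One way to check is to ask git which releases contain the fix (a
sketch; the commit id is a placeholder for the one referenced in the
patch mail):

  # in a clone of the mainline kernel tree
  git tag --contains <commit-sha>          # lists every release with the fix
  git log --oneline v2.6.35 -- fs/btrfs    # or scan the btrfs log of your version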

> reserved for metadata, and the amount actually used is just a bit
> more than the real data.
>
> Many thanks for making that clear to me.

You're welcome!

>
> Marcel

-zyh


* Re: btrfs on LVM: Out of space
  2010-09-09  9:52         ` Zhu Yanhai
@ 2010-09-09 10:54           ` Marcel Lohmann
  0 siblings, 0 replies; 10+ messages in thread
From: Marcel Lohmann @ 2010-09-09 10:54 UTC (permalink / raw)
  To: linux-btrfs

2010/9/9 Zhu Yanhai <zhu.yanhai@gmail.com>:
> 200B ~ 2000B is really too small for modern hard disks (some of them
> already have 4KB sectors instead of 512B).

I know. But I thought btrfs was the best filesystem for handling that,
apart from storage methods other than keeping small files on a
filesystem at all. This is why I wanted to give it a try; it's
currently just an experiment for me to evaluate best practices.

> Besides, Btrfs currently doesn't play all that well with such small files;

> had a patch for it, IIRC. Here it is:
> http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg05292.html
> I don't know whether it is in the kernel you are using.

Looking quickly at the patch and the git commits, I can confirm that it
is already in kernel 2.6.35 (which Ubuntu 10.10 uses). So there is a
good chance I will have a "repaired" filesystem when I recreate it.

Marcel


* Re: btrfs on LVM: Out of space
  2010-09-09  2:15     ` Zhu Yanhai
  2010-09-09  9:23       ` Marcel Lohmann
@ 2010-09-10 19:46       ` Marcel Lohmann
  2010-09-17 15:29         ` Johannes Hirte
  1 sibling, 1 reply; 10+ messages in thread
From: Marcel Lohmann @ 2010-09-10 19:46 UTC (permalink / raw)
  To: linux-btrfs

2010/9/9 Zhu Yanhai <zhu.yanhai@gmail.com>:
> 2010/9/9 Marcel Lohmann <marcel.lohmann@googlemail.com>:
>> 2010/9/8 Zhu Yanhai <zhu.yanhai@gmail.com>:
>>> Hi,
>>> Have you tried 'mkfs.btrfs -m single /dev/xxxx'?
>>> Since you have a RAID1 underneath via LVM, you don't need to keep
>>> Btrfs's default duplicated metadata profile.
>>>
>> No, I did not try this. I just created it with the defaults
>> "mkfs.btrfs /dev/mapper/somelogicalvolume". Isn't "-m single" the
>> default?
>
> No, it's not the default. By default Btrfs writes two copies of the
> Metadata to disk, with only one copy of Data -- something similar to
> RAID1, but not the same.
> Anyway, you don't need this, since you already have a standard RAID1
> array underneath.
> '-m single' makes Btrfs write exactly one copy of Metadata instead of
> the default two.
>

To set this thread to SOLVED:
I recreated the filesystem with "mkfs.btrfs -m single -d single
/dev/mapper/xxx" and mounted it with the "compress" option.
After copying the first 12,000,000 small files I have 5.13GB in Data
and 15.15GB in Metadata. That is far from good, but it is better than
before, and I can live with it: the estimated final number of files is
60,000,000, which should fit well on the volume.
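
For the record, the whole sequence was roughly (keeping the
/dev/mapper/xxx placeholder from above):

  mkfs.btrfs -m single -d single /dev/mapper/xxx   # one copy of metadata and data
  mount -o compress /dev/mapper/xxx /mountpoint    # transparent compression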

> 53 * 2 + 24 = 130. The size of Metadata reported by "btrfs filesystem df"
> is 53GB, however it occupies 53 * 2 = 106GB on 'disk' physically.
> So yes, there will be more space for Data.

There really IS more space for Data: 15.15 * 1 + 5.13 < 130. And yes,
this is also the size now reported by "df -h".

Trying to use "-l 2048" during mkfs was rejected as being invalid. But
who cares...?

Marcel


* Re: btrfs on LVM: Out of space
  2010-09-10 19:46       ` Marcel Lohmann
@ 2010-09-17 15:29         ` Johannes Hirte
  0 siblings, 0 replies; 10+ messages in thread
From: Johannes Hirte @ 2010-09-17 15:29 UTC (permalink / raw)
  To: Marcel Lohmann; +Cc: linux-btrfs

On Friday 10 September 2010 21:46:49 Marcel Lohmann wrote:
> Trying to use "-l 2048" during mkfs was rejected as being invalid. But
> who cares...?
> 
> Marcel

That's because btrfs currently supports only a leaf size equal to the
page size.
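
A quick way to see which value a given machine will accept (a sketch):

  getconf PAGE_SIZE   # typically 4096 on x86, so only '-l 4096' would pass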

regards,
  Johannes


Thread overview: 10 messages
2010-09-08 11:18 btrfs on LVM: Out of space Marcel Lohmann
2010-09-08 14:53 ` Zhu Yanhai
2010-09-08 20:35   ` Marcel Lohmann
2010-09-09  2:15     ` Zhu Yanhai
2010-09-09  9:23       ` Marcel Lohmann
2010-09-09  9:37         ` Tamás Gulácsi
2010-09-09  9:52         ` Zhu Yanhai
2010-09-09 10:54           ` Marcel Lohmann
2010-09-10 19:46       ` Marcel Lohmann
2010-09-17 15:29         ` Johannes Hirte
