linux-btrfs.vger.kernel.org archive mirror
* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
@ 2010-01-29 21:57 Thomas Kupper
  2010-01-29 22:13 ` 0bo0
  0 siblings, 1 reply; 14+ messages in thread
From: Thomas Kupper @ 2010-01-29 21:57 UTC (permalink / raw)
  To: 0.bugs.only.0; +Cc: linux-btrfs


> noticing from above
> 
> >>  ... size 931.51GB used 2.03GB ...
> 
> 'used' more than the 'size'?
> 
> more confused ...

For me, it looks as if 2.03GB is way smaller than 931.51GB (2 << 931), no? Everything seems to be fine here.

And regarding your original mail: it seems that df is still lying about the size of the btrfs fs, check http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg00758.html

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-29 21:57 RAID-10 arrays built with btrfs & md report 2x difference in available size? Thomas Kupper
@ 2010-01-29 22:13 ` 0bo0
  2010-01-29 22:38   ` RK
  0 siblings, 1 reply; 14+ messages in thread
From: 0bo0 @ 2010-01-29 22:13 UTC (permalink / raw)
  To: Thomas Kupper; +Cc: linux-btrfs

> For me, it looks as if 2.03GB is way smaller than 931.51GB (2 << 931), no? Everything seems to be fine here.

gagh!  i "saw" TB, not GB.  8-/

> And regarding your original mail: it seems that df is still lying about the size of the btrfs fs, check http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg00758.html

it is, and reading -> "df is lying.  The total bytes in the FS include
all 4 drives.  I need to fix up the math for the total available
space.", it looks like its under control.  thx!

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-29 22:13 ` 0bo0
@ 2010-01-29 22:38   ` RK
  2010-01-29 23:46     ` jim owens
  0 siblings, 1 reply; 14+ messages in thread
From: RK @ 2010-01-29 22:38 UTC (permalink / raw)
  To: linux-btrfs


> it is, and reading -> "df is lying.  The total bytes in the FS include all 4 drives.  I need to fix up the math for the total available
> space.", it looks like its under control.  thx!

I think so too -- I have six 1TB drives in btrfs RAID-10 and it shows
that I have 5.5TB of free space .. how can that be?

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sde1              66G  3.8G   59G   7% /
/dev/sda              5.5T   28K  5.5T   1% /mnt/btrfs



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-29 22:38   ` RK
@ 2010-01-29 23:46     ` jim owens
  2010-01-29 23:53       ` 0bo0
  0 siblings, 1 reply; 14+ messages in thread
From: jim owens @ 2010-01-29 23:46 UTC (permalink / raw)
  To: RK; +Cc: linux-btrfs

RK wrote:
> I think so too -- I have six 1TB drives in btrfs RAID-10 and it shows
> that I have 5.5TB of free space .. how can that be?
> 
> # df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sde1              66G  3.8G   59G   7% /
> /dev/sda              5.5T   28K  5.5T   1% /mnt/btrfs

As has been discussed multiple times on the list, btrfs reports
RAW storage so 6 x 1TB is 6 TB.  And the use rate will be double
for each block written (i.e. 2 blocks used) for raid10 (or raid1).

And yes, it is "not what you expect", but it is the only method
that can remain accurate under the mixed raid modes possible
on a per-file basis in btrfs.
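
To make the arithmetic concrete, here is a quick sanity check with
hypothetical round numbers (plain shell, illustration only):

 # six 1TB drives in btrfs raid10
 raw_gb=$((6 * 1000))        # raw capacity df reports: ~6000GB (~5.5TiB)
 usable_gb=$((raw_gb / 2))   # every block stored twice: ~3000GB for files
 echo "df shows ~${raw_gb}GB total; expect ~${usable_gb}GB of file data"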

jim

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-29 23:46     ` jim owens
@ 2010-01-29 23:53       ` 0bo0
  2010-01-30 13:24         ` Goffredo Baroncelli
  2010-01-30 15:36         ` jim owens
  0 siblings, 2 replies; 14+ messages in thread
From: 0bo0 @ 2010-01-29 23:53 UTC (permalink / raw)
  To: jim owens; +Cc: RK, linux-btrfs

On Fri, Jan 29, 2010 at 3:46 PM, jim owens <jowens@hp.com> wrote:
> but it is the only method
> that can remain accurate under the mixed raid modes possible
> on a per-file basis in btrfs.

can you clarify, then, the intention/goal behind cmason's

"df is lying.  The total bytes in the FS include all 4 drives.  I need to
fix up the math for the total available space."

Is the goal NOT to accurately represent the actual available space?
Seems rather odd that users are simply expected to know/accept that
"available space" in btrfs RAID-10 != "available space" in md RAID-10 ...

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-29 23:53       ` 0bo0
@ 2010-01-30 13:24         ` Goffredo Baroncelli
  2010-01-30 13:29           ` Goffredo Baroncelli
  2010-01-30 15:36         ` jim owens
  1 sibling, 1 reply; 14+ messages in thread
From: Goffredo Baroncelli @ 2010-01-30 13:24 UTC (permalink / raw)
  To: linux-btrfs

On Saturday 30 January 2010, 0bo0 wrote:

> Is the goal NOT to accurately represent the actual available space?
> Seems rather odd that users are simply expected to know/accept that
> "available space" in btrfs RAID-10 != "available space" in md RAID-10 ...

As reported several times on this ML, btrfs is able to store the data in
striping/raid1 mode on a per-file basis.

The space on the disks is grouped into chunks, and the raid mode is set on a
per-chunk basis [1]. So a file stored in one chunk may be written twice (to
one or two different disks), while a file stored in another chunk may be
written with a different policy (see the sketch after the list below).

In fact btrfs stores the data in "raid0" mode and the metadata in raid1 mode
even with only one disk, even though the words "raid1/0" are strictly
incorrect with only one disk.

So the key points are:
- it is incorrect to say that the btrfs filesystem is configured in raidX mode
- it is correct to say that the file xyz is stored in raidX mode
- it is quite simple to evaluate the space available; it is more complex to
evaluate, before a file is created, how much of the available space a file of
a certain size will consume
- unfortunately, no tools are available today to manage the raid mode of a file
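
A minimal sketch of how the profiles are chosen, at mkfs time (the device
names are placeholders; -m sets the metadata profile and -d the data profile,
as in the mkfs.btrfs invocation earlier in this thread):

 # metadata mirrored (raid1), data striped (raid0), across two example disks
 mkfs.btrfs -m raid1 -d raid0 /dev/sdx /dev/sdy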

BR
G.Baroncelli



-- 
gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) <kreijack inwind it>
Key fingerprint = 4769 7E51 5293 D36C 814E  C054 BF04 F161 3DC5 0512

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-30 13:24         ` Goffredo Baroncelli
@ 2010-01-30 13:29           ` Goffredo Baroncelli
  0 siblings, 0 replies; 14+ messages in thread
From: Goffredo Baroncelli @ 2010-01-30 13:29 UTC (permalink / raw)
  To: linux-btrfs

On Saturday 30 January 2010, Goffredo Baroncelli wrote:
> On Saturday 30 January 2010, 0bo0 wrote:
> 
> > Is the goal NOT to accurately represent the actual available space?
> > Seems rather odd that users are simply expected to know/accept that
> > "available space" in btrfs RAID-10 != "available space" in md RAID-10 ...
> 
> As reported several times on this ML, btrfs is able to store the data in
> striping/raid1 mode on a per-file basis.
> 
> The space on the disks is grouped into chunks, and the raid mode is set on a
> per-chunk basis [1]. So a file stored in one chunk may be written twice (to
> one or two different disks), while a file stored in another chunk may be
> written with a different policy.


Sorry, I forgot the reference:
[1] http://btrfs.wiki.kernel.org/index.php/Multiple_Device_Support

> 
> In fact btrfs stores the data in "raid0" mode and the metadata in raid1 mode
> even with only one disk, even though the words "raid1/0" are strictly
> incorrect with only one disk.
> 
> So the key points are:
> - it is incorrect to say that the btrfs filesystem is configured in raidX mode
> - it is correct to say that the file xyz is stored in raidX mode
> - it is quite simple to evaluate the space available; it is more complex to
> evaluate, before a file is created, how much of the available space a file
> of a certain size will consume
> - unfortunately, no tools are available today to manage the raid mode of a
> file
> 
> BR
> G.Baroncelli
> 
> -- 
> gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) <kreijack inwind it>
> Key fingerprint = 4769 7E51 5293 D36C 814E  C054 BF04 F161 3DC5 0512


-- 
gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) <kreijack@inwind.it>
Key fingerprint = 4769 7E51 5293 D36C 814E  C054 BF04 F161 3DC5 0512

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-29 23:53       ` 0bo0
  2010-01-30 13:24         ` Goffredo Baroncelli
@ 2010-01-30 15:36         ` jim owens
  2010-02-08  3:52           ` 0bo0
  2010-02-08  3:54           ` 0bo0
  1 sibling, 2 replies; 14+ messages in thread
From: jim owens @ 2010-01-30 15:36 UTC (permalink / raw)
  To: 0bo0; +Cc: RK, linux-btrfs

0bo0 wrote:
> On Fri, Jan 29, 2010 at 3:46 PM, jim owens <jowens@hp.com> wrote:
>> but it is the only method
>> that can remain accurate under the mixed raid modes possible
>> on a per-file basis in btrfs.
> 
> can you clarify, then, the intention/goal behind cmason's
> 
> "df is lying.  The total bytes in the FS include all 4 drives.  I need to
> fix up the math for the total available space."

Well I don't have the message where Chris said that, but I know he
did not mean that "df" will be changed to report like an md raid.

> Is the goal NOT to accurately represent the actual available space?

Yes, but in btrfs "accurate" means the RAW byte count, however...

> Seems rather odd that users are simply expected to know/accept that
> "available space" in btrfs RAID-10 != "available space" in md RAID-10 ...

Developers are aware that users want a method to get space values
that reflect the raid state(s) of their filesystem.

So Josef Bacik has sent patches to btrfs and btrfs-progs that
allow you to see raid-mode data and metadata adjusted values
with btrfs-ctrl -i instead of using "df".

These patches have not been merged yet so you will have to pull
them and apply yourself.

But there remains the fact that the command "df" is not accurate
and will never be accurate for many other filesystems.  It is just
that the user perception of error is much larger with some btrfs
raid modes.
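
For reference, df just relays what statfs(2) returns; assuming GNU coreutils
and a placeholder mount point, the same raw fields can be inspected directly:

 # %b/%f are total/free blocks as the fs reports them -- raw counts on btrfs
 stat -f -c 'blocks=%b free=%f bsize=%S' /mnt/btrfs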

And at the end of the day, you cannot say that md value == fs value
is a requirement.

jim

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-30 15:36         ` jim owens
@ 2010-02-08  3:52           ` 0bo0
  2010-02-08  3:54           ` 0bo0
  1 sibling, 0 replies; 14+ messages in thread
From: 0bo0 @ 2010-02-08  3:52 UTC (permalink / raw)
  To: jim owens; +Cc: RK, linux-btrfs

On Sat, Jan 30, 2010 at 7:36 AM, jim owens <jowens@hp.com> wrote:
> So Josef Bacik has sent patches to btrfs and btrfs-progs that
> allow you to see raid-mode data and metadata adjusted values
> with btrfs-ctrl -i instead of using "df".
>
> These patches have not been merged yet so you will have to pull
> them and apply yourself.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-30 15:36         ` jim owens
  2010-02-08  3:52           ` 0bo0
@ 2010-02-08  3:54           ` 0bo0
  2010-02-08 14:33             ` jim owens
  1 sibling, 1 reply; 14+ messages in thread
From: 0bo0 @ 2010-02-08  3:54 UTC (permalink / raw)
  To: jim owens; +Cc: RK, linux-btrfs

On Sat, Jan 30, 2010 at 7:36 AM, jim owens <jowens@hp.com> wrote:
> So Josef Bacik has sent patches to btrfs and btrfs-progs that
> allow you to see raid-mode data and metadata adjusted values
> with btrfs-ctrl -i instead of using "df".
>
> These patches have not been merged yet so you will have to pull
> them and apply yourself.

Where exactly can these be pulled from? Is there a separate git tree?
I just built from the btrfs & btrfs-progs heads, and still do not see
these add'l features.

Thanks.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-02-08  3:54           ` 0bo0
@ 2010-02-08 14:33             ` jim owens
  0 siblings, 0 replies; 14+ messages in thread
From: jim owens @ 2010-02-08 14:33 UTC (permalink / raw)
  To: 0bo0; +Cc: RK, linux-btrfs

0bo0 wrote:
> On Sat, Jan 30, 2010 at 7:36 AM, jim owens <jowens@hp.com> wrote:
>> So Josef Bacik has sent patches to btrfs and btrfs-progs that
>> allow you to see raid-mode data and metadata adjusted values
>> with btrfs-ctrl -i instead of using "df".
>>
>> These patches have not been merged yet so you will have to pull
>> them and apply yourself.
> 
> Where exactly can these be pulled from? Is there a separate git tree?
> I just built from the btrfs & btrfs-progs heads, and still do not see
> these add'l features.

Chris does not merge patches into the tree until they are
pushed to Linus. Sometimes he creates "experimental" branches
with code for testing but I don't think he has done that recently.

You can find proposed unmerged patches at:

http://patchwork.kernel.org/project/linux-btrfs/list/

jim

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-24 12:01 ` RK
@ 2010-01-24 17:18   ` 0bo0
  0 siblings, 0 replies; 14+ messages in thread
From: 0bo0 @ 2010-01-24 17:18 UTC (permalink / raw)
  To: RK; +Cc: linux-btrfs

noticing from above

>>  ... size 931.51GB used 2.03GB ...

'used' more than the 'size'?

more confused ...

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?
  2010-01-24  5:31 0bo0
@ 2010-01-24 12:01 ` RK
  2010-01-24 17:18   ` 0bo0
  0 siblings, 1 reply; 14+ messages in thread
From: RK @ 2010-01-24 12:01 UTC (permalink / raw)
  Cc: linux-btrfs

.. I have the same puzzlement.

0bo0 wrote:
> I created a btrfs RAID-10 array across 4 drives,
>
>  mkfs.btrfs -L TEST -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
>  btrfs-show
>  	Label: TEST  uuid: 2ac85206-2d88-47d7-a1e7-a93d80b199f8
>  	        Total devices 4 FS bytes used 28.00KB
>  	        devid    1 size 931.51GB used 2.03GB path /dev/sda
>  	        devid    2 size 931.51GB used 2.01GB path /dev/sdb
>  	        devid    4 size 931.51GB used 2.01GB path /dev/sdd
>  	        devid    3 size 931.51GB used 2.01GB path /dev/sdc
>
> @ mount,
>
>  mount /dev/sda /mnt
>  df -H | grep /dev/sda
> 	/dev/sda               4.1T    29k   4.1T   1% /mnt
>
> for RAID-10 across 4 drives, shouldn't the reported/available size be
> 1/2 x 4TB ~ 2TB?
>
> e.g., using mdadm to build a RAID-10 array across the same drives,
>
>  mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sd[abcd]1
>  pvcreate /dev/md0
>  pvs
>   PV         VG   Fmt  Attr PSize   PFree
>   /dev/md0        lvm2 --   1.82T 1.82T
>
> is the difference in available array space real, an artifact, or a
> misunderstanding on my part?
>
> thanks.


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RAID-10 arrays built with btrfs & md report 2x difference in available size?
@ 2010-01-24  5:31 0bo0
  2010-01-24 12:01 ` RK
  0 siblings, 1 reply; 14+ messages in thread
From: 0bo0 @ 2010-01-24  5:31 UTC (permalink / raw)
  To: linux-btrfs

I created a btrfs RAID-10 array across 4 drives,

 mkfs.btrfs -L TEST -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
 btrfs-show
 	Label: TEST  uuid: 2ac85206-2d88-47d7-a1e7-a93d80b199f8
 	        Total devices 4 FS bytes used 28.00KB
 	        devid    1 size 931.51GB used 2.03GB path /dev/sda
 	        devid    2 size 931.51GB used 2.01GB path /dev/sdb
 	        devid    4 size 931.51GB used 2.01GB path /dev/sdd
 	        devid    3 size 931.51GB used 2.01GB path /dev/sdc

@ mount,

 mount /dev/sda /mnt
 df -H | grep /dev/sda
	/dev/sda               4.1T    29k   4.1T   1% /mnt

for RAID-10 across 4 drives, shouldn't the reported/available size be
1/2 x 4TB ~ 2TB?

e.g., using mdadm to build a RAID-10 array across the same drives,

 mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sd[abcd]1
 pvcreate /dev/md0
 pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/md0        lvm2 --   1.82T 1.82T
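
For what it's worth, that 1.82T matches the by-hand arithmetic: raid10 keeps
half of the raw 4 x 931.51GiB. A quick check (bc, illustration only):

 echo "scale=4; 4 * 931.51 / 2 / 1024" | bc   # -> 1.8193, i.e. ~1.82TiB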

is the difference in available array space real, an artifact, or a
misunderstanding on my part?

thanks.

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2010-02-08 14:33 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-01-29 21:57 RAID-10 arrays built with btrfs & md report 2x difference in available size? Thomas Kupper
2010-01-29 22:13 ` 0bo0
2010-01-29 22:38   ` RK
2010-01-29 23:46     ` jim owens
2010-01-29 23:53       ` 0bo0
2010-01-30 13:24         ` Goffredo Baroncelli
2010-01-30 13:29           ` Goffredo Baroncelli
2010-01-30 15:36         ` jim owens
2010-02-08  3:52           ` 0bo0
2010-02-08  3:54           ` 0bo0
2010-02-08 14:33             ` jim owens
  -- strict thread matches above, loose matches on Subject: below --
2010-01-24  5:31 0bo0
2010-01-24 12:01 ` RK
2010-01-24 17:18   ` 0bo0
