* Using Btrfs on single drives
@ 2015-11-14 10:43 audio muze
  2015-11-14 11:09 ` Goffredo Baroncelli
  2015-11-15  3:27 ` audio muze
  0 siblings, 2 replies; 8+ messages in thread
From: audio muze @ 2015-11-14 10:43 UTC (permalink / raw)
  To: Btrfs BTRFS

I'm looking to make a "production copy" of my music and video library
for use in our media server.  It is not my intent to create any form
of RAID array, but rather to treat each drive independently where the
filesystem is concerned, and then to create a single view of the drives
using mhddfs.  As the data will remain relatively static, I may also
deploy SnapRAID in conjunction with mhddfs.
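
For illustration, the pooled view I have in mind would be something
like this (mount points are placeholders, and I'm assuming mhddfs's
comma-separated branch syntax):

  # each drive formatted and mounted independently, then pooled
  # into one read/write tree via the mhddfs FUSE filesystem
  mhddfs /mnt/disk1,/mnt/disk2,/mnt/disk3 /mnt/media -o allow_other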

I'm considering using Btrfs as the underlying filesystem on each of
the individual drives, principally to take advantage of metadata
redundancy.  Am I correct in surmising that I can turn checksumming
off, given it's of no utility where a Btrfs volume is comprised of a
single device only?
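
For reference, the per-drive setup I'm considering, as a sketch
(device name, label and mount point are placeholders):

  # one independent btrfs per drive; -m dup keeps two copies of the
  # metadata on the single device (also the mkfs.btrfs default for
  # single rotational drives)
  mkfs.btrfs -L media1 -m dup -d single /dev/sdX
  mount /dev/sdX /mnt/disk1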

* Re: Using Btrfs on single drives
  2015-11-14 10:43 Using Btrfs on single drives audio muze
@ 2015-11-14 11:09 ` Goffredo Baroncelli
  2015-11-14 16:35   ` Duncan
  2015-11-15  3:27 ` audio muze
  1 sibling, 1 reply; 8+ messages in thread
From: Goffredo Baroncelli @ 2015-11-14 11:09 UTC (permalink / raw)
  To: audio muze, Btrfs BTRFS

On 2015-11-14 11:43, audio muze wrote:
> I can turn checksumming
> off given it's of no utility where a Btrfs volume is comprised of a
> single device only?

Checksums are used to detect data corruption; in the case of a btrfs raid profile, they are *also* used to pick the good copy.
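
Even on a single device that detection is worth something.  A sketch,
with the mount point a placeholder:

  # scrub verifies every checksum; on a single device it can repair
  # dup metadata, but for single-profile data it can only report errors
  btrfs scrub start /mnt/btrfs
  btrfs device stats /mnt/btrfs    # corruption_errs shows what it found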

BR
G.Baroncelli

-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5

* Re: Using Btrfs on single drives
  2015-11-14 11:09 ` Goffredo Baroncelli
@ 2015-11-14 16:35   ` Duncan
  0 siblings, 0 replies; 8+ messages in thread
From: Duncan @ 2015-11-14 16:35 UTC (permalink / raw)
  To: linux-btrfs

Goffredo Baroncelli posted on Sat, 14 Nov 2015 12:09:21 +0100 as
excerpted:

> On 2015-11-14 11:43, audio muze wrote:
>> I can turn checksumming off given it's of no utility where a Btrfs
>> volume is comprised of a single device only?
> 
> The checksums are used to detect a data corruption; in case of a
> btrfs-raid, the checksums are used *also* to pick the good copy.

And yes, you can turn them off (for data, not metadata), using the 
nodatasum mount option.
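
A minimal sketch (device and mount point are placeholders); note that
nodatasum only affects files written while the option is active:

  # disable data checksumming for newly written files; metadata
  # remains checksummed (and duplicated, with dup metadata)
  mount -o nodatasum /dev/sdX /mnt/btrfs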

Tho personally, I prefer raid1, not just for the usual raid1
capabilities, but for the ability of scrub to repair corrupt data as
well, and thus would never turn off checksumming here (except possibly
in the context of nocow, for VM images, etc.).
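
For the nocow case, a sketch (the path is a placeholder; +C only takes
effect on new or empty files, so set it on the directory before
creating the images):

  # disable copy-on-write -- and with it checksumming -- for VM images;
  # new files in the directory inherit the attribute
  mkdir -p /mnt/btrfs/vmimages
  chattr +C /mnt/btrfs/vmimages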

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


* Re: Using Btrfs on single drives
  2015-11-14 10:43 Using Btrfs on single drives audio muze
  2015-11-14 11:09 ` Goffredo Baroncelli
@ 2015-11-15  3:27 ` audio muze
  2015-11-15  4:01   ` Duncan
  1 sibling, 1 reply; 8+ messages in thread
From: audio muze @ 2015-11-15  3:27 UTC (permalink / raw)
  To: Btrfs BTRFS; +Cc: audio muze

I've gone ahead and created a single drive Btrfs filesystem on a 3TB
drive and started copying content from a raid5 array to the Btrfs
volume.  Initially copy speeds were very good sustained at ~145MB/s
and I left it to run overnight.  This morning I ran btrfs fi usage
/mnt/btrfs and it reported around 700GB free.  I selected another
folder containing 204GB and started a copy operation, again from the
raid5 array to the Btrfs volume.  Copying is now materially slower and
slowing further...it started at ~105MB/s and after 141GB has slowed to
around 97MB/s.  Is this to be expected with Btrfs, or have I come
across a bug of some sort?

On Sat, Nov 14, 2015 at 12:43 PM, audio muze <audiomuze@gmail.com> wrote:
> I'm looking to make a "production copy" of my music and video library
> for use in our media server.  It is not my intent to create any form
> of RAID array, but rather to treat each drive independently where the
> filesystem is concerned, and then to create a single view of the drives
> using mhddfs.  As the data will remain relatively static, I may also
> deploy SnapRAID in conjunction with mhddfs.
>
> I'm considering using Btrfs as the underlying filesystem on each of
> the individual drives, principally to take advantage of metadata
> redundancy.  Am I correct in surmising that I can turn checksumming
> off, given it's of no utility where a Btrfs volume is comprised of a
> single device only?

* Re: Using Btrfs on single drives
  2015-11-15  3:27 ` audio muze
@ 2015-11-15  4:01   ` Duncan
  2015-11-15  6:30     ` Marc Joliet
  2015-11-25  7:20     ` Russell Coker
  0 siblings, 2 replies; 8+ messages in thread
From: Duncan @ 2015-11-15  4:01 UTC (permalink / raw)
  To: linux-btrfs

audio muze posted on Sun, 15 Nov 2015 05:27:00 +0200 as excerpted:

> I've gone ahead and created a single drive Btrfs filesystem on a 3TB
> drive and started copying content from a raid5 array to the Btrfs
> volume.  Initially copy speeds were very good sustained at ~145MB/s and
> I left it to run overnight.  This morning I ran btrfs fi usage
> /mnt/btrfs and it reported around 700GB free.  I selected another folder
> containing 204GB and started a copy operation, again from the raid5
> array to the Btrfs volume.  Copying is now materially slower and slowing
> further...it started at ~105MB/s and after 141GB has slowed to around
> 97MB/s.  Is this to be expected with Btrfs, or have I come across a bug
> of some sort?

That looks to /me/ like native drive limitations.

A modern hard drive spins at the same speed no matter where the
read/write head is located.  When it's reading/writing the first part
of the drive -- the outside of the platters -- far more linear track
distance passes under the heads in, say, a tenth of a second than when
the last part of the drive -- the inside -- is being filled, so
throughput is much higher at the start of the drive.

You report a 3 TB drive with initial/outside speeds of ~145 MB/s, then
after copying overnight, ~700 GB free in the morning, so presumably you
had written something over 2 TB to it.  You also report that the second
copy started at 105 MB/s and was down to 97 MB/s after another 141 GB,
so presumably ~550 GB free.  From 145 MB/s down to 97 MB/s is a slowdown
of roughly a third ((145-97)/145 is about 33%), measured against an
initial outside edge that was covering perhaps twice as much linear
drive distance per unit of time, so it doesn't sound at all unreasonable
to me.

What's the actual extended sequential write throughput rating on the 
drive?  What do the online reviews of the product say it does?  Have you 
used hdparm to test it?
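
As a quick sketch (device name is a placeholder; hdparm -t does
buffered sequential reads from the start of the device, i.e. the fast
outer edge):

  hdparm -t /dev/sdX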

It's kinda late for this test now, but before creating a big filesystem
out of the whole thing, you could have created a small partition at the
beginning of the drive and another at the end, then used hdparm to test
each and see the relative speed difference between them.  Going
further, you could have created small partitions at specific offsets
into the drive and done similar testing, to find the speed at say 1 TB
into the drive, 2 TB in, etc.  Of course after testing you could erase
those temporary partitions and make one big filesystem out of it, if
desired.
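
Something like this, as a sketch -- destructive, so only on an empty
drive, with device name and offsets as placeholders:

  # small partitions at the start, middle and end of a 3 TB drive
  parted -s /dev/sdX mklabel gpt
  parted -s /dev/sdX mkpart outer 1MiB 10GiB
  parted -s /dev/sdX mkpart middle 1500GiB 1510GiB
  parted -s /dev/sdX mkpart inner 2990GiB 2999GiB
  # compare sequential read throughput at each position
  hdparm -t /dev/sdX1
  hdparm -t /dev/sdX2
  hdparm -t /dev/sdX3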

Of course this is one of the big differences with SSDs: since nothing
is spinning and any part of the device is reached with just an address
change, speeds, in addition to being far faster, should normally be
uniform across the device.  But SSDs cost far more per GB or TB, and
tend to be vastly more expensive in the TB+ size range.  You can of
course combine several smaller ones using raid technologies to create a
larger logical device, but you'll still be paying a marked premium for
the SSD technology.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


* Re: Using Btrfs on single drives
  2015-11-15  4:01   ` Duncan
@ 2015-11-15  6:30     ` Marc Joliet
  2015-11-25  7:20     ` Russell Coker
  1 sibling, 0 replies; 8+ messages in thread
From: Marc Joliet @ 2015-11-15  6:30 UTC (permalink / raw)
  To: linux-btrfs

On Sunday 15 November 2015 04:01:57 Duncan wrote:
>audio muze posted on Sun, 15 Nov 2015 05:27:00 +0200 as excerpted:
>> I've gone ahead and created a single drive Btrfs filesystem on a 3TB
>> drive and started copying content from a raid5 array to the Btrfs
>> volume.  Initially copy speeds were very good sustained at ~145MB/s and
>> I left it to run overnight.  This morning I ran btrfs fi usage
>> /mnt/btrfs and it reported around 700GB free.  I selected another folder
>> containing 204GB and started a copy operation, again from the raid5
>> array to the Btrfs volume.  Copying is now materially slower and slowing
>> further...it started at ~105MB/s and after 141GB has slowed to around
>> 97MB/s.  Is this to be expected with Btrfs, or have I come across a bug
>> of some sort?
>
>That looks to /me/ like native drive limitations.
>
[Snip nice explanation]

I'll just add that I see this with my 3TB USB3 HDD, too, and also with
my internal HDDs.  Old drives (the oldest I had were about 10 years old)
showed the same behaviour, only scaled appropriately (the worst was
something like 40/60 MB/s min./max.).

You can also see this very nicely with scrub runs (I use dstat for this):  
they start out at the max., but gradually slow down as they progress.
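
As a sketch (mount point is a placeholder):

  # start a scrub, then watch per-disk throughput fall as it
  # progresses toward the slower inner tracks
  btrfs scrub start /mnt/btrfs
  dstat -d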

HTH
-- 
Marc Joliet
--
"People who think they know everything really annoy those of us who know we
don't" - Bjarne Stroustrup


* Re: Using Btrfs on single drives
  2015-11-15  4:01   ` Duncan
  2015-11-15  6:30     ` Marc Joliet
@ 2015-11-25  7:20     ` Russell Coker
  2015-11-26 16:27       ` Duncan
  1 sibling, 1 reply; 8+ messages in thread
From: Russell Coker @ 2015-11-25  7:20 UTC (permalink / raw)
  To: linux-btrfs

On Sun, 15 Nov 2015 03:01:57 PM Duncan wrote:
> That looks to me like native drive limitations.
> 
> A modern hard drive spins at the same speed no matter where the
> read/write head is located.  When it's reading/writing the first part
> of the drive -- the outside of the platters -- far more linear track
> distance passes under the heads in, say, a tenth of a second than when
> the last part of the drive -- the inside -- is being filled, so
> throughput is much higher at the start of the drive.

http://www.coker.com.au/bonnie++/zcav/results.html

The above page has the results of my ZCAV benchmark (part of the Bonnie++
suite), which shows this.  You can safely run ZCAV in read mode on a
device that's got a filesystem on it, so it's not too late to test these
things.
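
For example, a sketch (read-only by default; the device name is a
placeholder -- see zcav(8) for the options):

  # measure read throughput vs. position across the whole device,
  # then plot the resulting position/MB-per-second pairs
  zcav /dev/sdX > sdX.zcav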

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

* Re: Using Btrfs on single drives
  2015-11-25  7:20     ` Russell Coker
@ 2015-11-26 16:27       ` Duncan
  0 siblings, 0 replies; 8+ messages in thread
From: Duncan @ 2015-11-26 16:27 UTC (permalink / raw)
  To: linux-btrfs

Russell Coker posted on Wed, 25 Nov 2015 18:20:25 +1100 as excerpted:

> On Sun, 15 Nov 2015 03:01:57 PM Duncan wrote:
>> That looks to me like native drive limitations.
>> 
>> A modern hard drive spins at the same speed no matter where the
>> read/write head is located.  When it's reading/writing the first part
>> of the drive -- the outside of the platters -- far more linear track
>> distance passes under the heads in, say, a tenth of a second than when
>> the last part of the drive -- the inside -- is being filled, so
>> throughput is much higher at the start of the drive.
> 
> http://www.coker.com.au/bonnie++/zcav/results.html
> 
> The above page has the results of my ZCAV benchmark (part of the
> Bonnie++ suite), which shows this.  You can safely run ZCAV in read
> mode on a device that's got a filesystem on it, so it's not too late
> to test these things.

Thanks.  Those graphs are pretty clear.

Like you, I'd have thought there'd be far fewer zones (3-4) than it
turns out there are (8ish).

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


end of thread

Thread overview: 8+ messages
2015-11-14 10:43 Using Btrfs on single drives audio muze
2015-11-14 11:09 ` Goffredo Baroncelli
2015-11-14 16:35   ` Duncan
2015-11-15  3:27 ` audio muze
2015-11-15  4:01   ` Duncan
2015-11-15  6:30     ` Marc Joliet
2015-11-25  7:20     ` Russell Coker
2015-11-26 16:27       ` Duncan
