From: boli <btrfs@bueechi.net>
To: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Replacing drives with larger ones in a 4 drive raid1
Date: Sun, 12 Jun 2016 12:35:16 +0200	[thread overview]
Message-ID: <0AC846B3-C8B0-45FF-BCA9-F681811A23D7@bueechi.net> (raw)
In-Reply-To: <38F2FB04-0617-4DD3-9D48-ED3C4434003D@bueechi.net>

> It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
> 
> These 90 hours seem like a rather long time, given that a rebalance/convert from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub takes about 7 hours (4-disk-raid1).
> 
> OTOH the filesystem will be rather full with only 3 of 4 disks available, so I do expect it to take somewhat "longer than usual".
> 
> Would anyone venture a guess as to how long it might take?

It's done now, and took close to 99 hours to rebalance 8.1 TB of data from a 4x6TB raid1 (12 TB capacity) with 1 drive missing onto the remaining 3x6TB raid1 (9 TB capacity).
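
For the record, that works out to a fairly low effective throughput; a quick back-of-the-envelope check (shell arithmetic, numbers from above):

```shell
# 8.1 TB moved in 99 hours during "btrfs device delete missing /mnt"
bytes=8100000000000
secs=$((99 * 3600))
mb_per_s=$((bytes / secs / 1000000))
echo "effective rebalance throughput: ~${mb_per_s} MB/s"
```

So roughly 22 MB/s sustained, which is plausible given the filesystem was nearly full on the remaining 3 disks.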

Now I made sure quotas were off, then started a screen session to fill the new 8 TB disk with zeros, detached it, and checked iotop to get a rough estimate of how long it will take (I'm aware it will slow down over time).
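
For anyone curious, the zeroing time is easy to estimate from the iotop rate; a sketch assuming an average of 150 MB/s (an assumed figure, not measured here, since the real rate drops toward the inner tracks):

```shell
rate=150                      # assumed average MB/s; adjust to what iotop shows
mb_total=$((8 * 1000000))     # 8 TB expressed in MB
secs=$((mb_total / rate))
hours=$((secs / 3600))
echo "zero-filling 8 TB at ${rate} MB/s takes about ${hours} hours"
```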

After that I'll add this 8 TB disk to the btrfs raid1 (for yet another rebalance).
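
The add-then-balance step itself is short; a sketch of the commands involved (device name hypothetical; printed via cat rather than executed, so the snippet is harmless as-is):

```shell
# Command sequence for adding the new disk and rebalancing onto it.
plan=$(cat <<'EOF'
btrfs device add /dev/sdX /mnt
btrfs balance start /mnt        # full rebalance across all 4 devices
btrfs balance status /mnt       # check progress from another shell
EOF
)
printf '%s\n' "$plan"
```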

The next 3 disks will be replaced with "btrfs replace", so only one rebalance each is needed.
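
For completeness, a per-disk sketch of that replace-based swap (device names and devid are hypothetical; printed via cat rather than executed). Note that replace copies into only the first 6 TB of the new disk, so each new device has to be grown afterwards:

```shell
# Command sequence for swapping one 6 TB disk for an 8 TB one via replace.
plan=$(cat <<'EOF'
btrfs replace start /dev/sdOLD /dev/sdNEW /mnt
btrfs replace status /mnt
# After completion, grow the new device to its full 8 TB; look up the
# devid with "btrfs filesystem show /mnt" (the 2 here is a placeholder):
btrfs filesystem resize 2:max /mnt
EOF
)
printf '%s\n' "$plan"
```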

I assume each "btrfs replace" would do a full rebalance, and thus assign chunks according to the normal strategy of choosing the two drives with the most free space: in this case one chunk to the new drive, and a mirrored chunk to whichever of the 3 existing drives has the most free space.
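
A toy simulation of that most-free-space-first allocator illustrates the effect: with three nearly full 6 TB disks (~600 GB free each, given ~5.4 TB used per disk) and one empty 8 TB disk, the new disk always has the most free space, so it receives one copy of every newly allocated chunk (numbers are illustrative, 1 GB chunks assumed):

```shell
result=$(awk 'BEGIN {
  f[1] = 600; f[2] = 600; f[3] = 600; f[4] = 8000  # free GB; dev 4 = new 8 TB
  on_new = 0; total = 0
  while (1) {
    a = 0; b = 0
    for (d = 1; d <= 4; d++) if (f[d] > f[a]) a = d            # most free
    for (d = 1; d <= 4; d++) if (d != a && f[d] > f[b]) b = d  # second most
    if (b == 0 || f[b] < 1) break   # no room left for the mirror copy: stop
    f[a]--; f[b]--; total++
    if (a == 4 || b == 4) on_new++
  }
  printf "%d of %d chunks put a copy on the new disk", on_new, total
}')
echo "$result"
```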

What I'm wondering is this:
If the goal is to replace 4x 6 TB drives (raid1) with 4x 8 TB drives (still raid1), is there a way to remove one 6 TB drive at a time and recreate its exact contents from the other 3 drives onto a new 8 TB drive, without doing a full rebalance? That is: without writing any substantial amount of data onto the remaining 3 drives.

It seems to me that would be a lot more efficient, but it would go against the normal chunk assignment strategy.

Cheers, boli



Thread overview: 17+ messages
2016-06-08 18:55 Replacing drives with larger ones in a 4 drive raid1 boli
2016-06-09 15:20 ` Duncan
2016-06-09 17:30   ` bOli
2016-06-10 18:56   ` Jukka Larja
2016-06-11 13:13 ` boli
2016-06-12 10:35   ` boli [this message]
2016-06-12 15:24     ` Henk Slager
2016-06-12 17:03       ` boli
2016-06-12 19:03         ` Henk Slager
2016-06-13  3:54           ` Duncan
2016-06-13 12:24     ` Austin S. Hemmelgarn
2016-06-14 19:28       ` boli
2016-06-15  3:19         ` Duncan
2016-06-16  0:09           ` boli
2016-06-16 18:18             ` boli
2016-06-17  6:25               ` Duncan
2016-06-19 17:38 ` boli
