* btr fs unmountable after disk failure
@ 2012-04-13 10:55 Jan Engelhardt
  2012-04-13 10:58 ` Hugo Mills
  0 siblings, 1 reply; 5+ messages in thread
From: Jan Engelhardt @ 2012-04-13 10:55 UTC (permalink / raw)
  To: linux-btrfs


I originally created a RAID1(0) compound out of 4 drives. One of them 
[sdf] failed recently and was removed. The filesystem is no longer 
mountable with the 3 drives left.
On 3.3.1:

# btrfs dev scan
[ 1065.572938] device label srv devid 1 transid 11386 /dev/sdc
[ 1065.573044] device label srv devid 3 transid 11386 /dev/sde
[ 1066.089981] device label srv devid 2 transid 11386 /dev/sdd
# mount /dev/sdd /top.srv
[ 1070.201339] device label srv devid 2 transid 11386 /dev/sdd
[ 1070.201666] btrfs: disk space caching is enabled
[ 1070.203310] btrfs: failed to read the system array on sde
[ 1070.204458] btrfs: open_ctree failed
(Sparse error message, innit..)

* Re: btr fs unmountable after disk failure
  2012-04-13 10:55 btr fs unmountable after disk failure Jan Engelhardt
@ 2012-04-13 10:58 ` Hugo Mills
  2012-04-13 12:00   ` Chris Samuel
  2012-04-13 13:42   ` Jan Engelhardt
  0 siblings, 2 replies; 5+ messages in thread
From: Hugo Mills @ 2012-04-13 10:58 UTC (permalink / raw)
  To: Jan Engelhardt; +Cc: linux-btrfs

On Fri, Apr 13, 2012 at 12:55:43PM +0200, Jan Engelhardt wrote:
> 
> I originally created a RAID1(0) compound out of 4 drives. One of them 
> [sdf] failed recently and was removed. The filesystem is no longer 
> mountable with the 3 drives left.
> On 3.3.1:
> 
> # btrfs dev scan
> [ 1065.572938] device label srv devid 1 transid 11386 /dev/sdc
> [ 1065.573044] device label srv devid 3 transid 11386 /dev/sde
> [ 1066.089981] device label srv devid 2 transid 11386 /dev/sdd
> # mount /dev/sdd /top.srv
> [ 1070.201339] device label srv devid 2 transid 11386 /dev/sdd
> [ 1070.201666] btrfs: disk space caching is enabled
> [ 1070.203310] btrfs: failed to read the system array on sde
> [ 1070.204458] btrfs: open_ctree failed
> (Sparse error message, innit..)

   I think you need "-o degraded" in this case.
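   (Editor's note: a minimal sketch of the suggested degraded mount,
assuming the same devices and mountpoint as in the report above.)

```shell
# Mount the surviving members read-write despite the missing device.
# Any remaining member (/dev/sdc, /dev/sdd or /dev/sde) can be named;
# btrfs assembles the rest of the array from the earlier "btrfs dev scan".
mount -o degraded /dev/sdd /top.srv
```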

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
       --- Mixing mathematics and alcohol is dangerous.  Don't ---       
                            drink and derive.                            


* Re: btr fs unmountable after disk failure
  2012-04-13 10:58 ` Hugo Mills
@ 2012-04-13 12:00   ` Chris Samuel
  2012-04-13 13:42   ` Jan Engelhardt
  1 sibling, 0 replies; 5+ messages in thread
From: Chris Samuel @ 2012-04-13 12:00 UTC (permalink / raw)
  To: linux-btrfs

On Friday 13 April 2012 20:58:22 Hugo Mills wrote:

>    I think you need "-o degraded" in this case.

I've always wondered why btrfs doesn't fall back to this by default 
when it fails to find a device; it would seem the obvious thing to do 
(we don't have to tell mdadm when a disk has gone away, for instance).

cheers,
Chris
-- 
 Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP


* Re: btr fs unmountable after disk failure
  2012-04-13 10:58 ` Hugo Mills
  2012-04-13 12:00   ` Chris Samuel
@ 2012-04-13 13:42   ` Jan Engelhardt
  2012-04-13 20:32     ` Duncan
  1 sibling, 1 reply; 5+ messages in thread
From: Jan Engelhardt @ 2012-04-13 13:42 UTC (permalink / raw)
  To: Hugo Mills; +Cc: linux-btrfs


On Friday 2012-04-13 12:58, Hugo Mills wrote:
>On Fri, Apr 13, 2012 at 12:55:43PM +0200, Jan Engelhardt wrote:
>> 
>> I originally created a RAID1(0) compound out of 4 drives. One of them 
>> [sdf] failed recently and was removed. The filesystem is no longer 
>> mountable with the 3 drives left.
>> On 3.3.1:
>> 
>> # btrfs dev scan
>> [ 1065.572938] device label srv devid 1 transid 11386 /dev/sdc
>> [ 1065.573044] device label srv devid 3 transid 11386 /dev/sde
>> [ 1066.089981] device label srv devid 2 transid 11386 /dev/sdd
>> # mount /dev/sdd /top.srv
>> [ 1070.201339] device label srv devid 2 transid 11386 /dev/sdd
>> [ 1070.201666] btrfs: disk space caching is enabled
>> [ 1070.203310] btrfs: failed to read the system array on sde
>> [ 1070.204458] btrfs: open_ctree failed
>> (Sparse error message, innit..)
>
>   I think you need "-o degraded" in this case.

Yes indeed, -o degraded makes it go. Where is this documented? I know
I can't expect mount(8) to have it yet, but there is no
mount.btrfs(8) either.

After mounting, df shows

Filesystem               1K-blocks       Used Available Use% Mounted on
/dev/sdd                5860554336 2651644680  20368600 100% /top.srv

Adding the new disk now yields yet another kernel warning.

# btrfs dev add /dev/sdf /top.srv; df
[10852.064139] btrfs: free space inode generation (0) did not match
free space cache generation (11385)
Filesystem               1K-blocks       Used Available Use% Mounted on
/dev/sdd                7325692920 2651643152 2974681688  48% /top.srv
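(Editor's aside: the free-space-cache generation mismatch above is the
kernel noticing a stale space cache after the degraded mount. If the
warning persists, the cache can be rebuilt with a one-off clear_cache
mount; this is a hedged sketch assuming a kernel new enough to have the
space cache, reusing the device and mountpoint from this thread.)

```shell
# One-time mount that discards and rebuilds the free space cache;
# subsequent mounts can drop the clear_cache option.
mount -o degraded,clear_cache /dev/sdd /top.srv
```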

According to
# btrfs fi show
Label: 'srv'  uuid: 88300cd5-dbcb-4147-9ee4-c65a1c895e1d
        Total devices 5 FS bytes used 1.23TB
        devid    2 size 1.36TB used 692.88GB path /dev/sdd
        devid    5 size 1.36TB used 51.00GB path /dev/sdf
        devid    3 size 1.36TB used 692.88GB path /dev/sde
        devid    1 size 1.36TB used 692.90GB path /dev/sdc
        *** Some devices missing

devices are missing, but how would I remove the old devid 4
(the "some" that's "missing")? `btrfs fi del` does not take
device ids, unfortunately.

* Re: btr fs unmountable after disk failure
  2012-04-13 13:42   ` Jan Engelhardt
@ 2012-04-13 20:32     ` Duncan
  0 siblings, 0 replies; 5+ messages in thread
From: Duncan @ 2012-04-13 20:32 UTC (permalink / raw)
  To: linux-btrfs

Jan Engelhardt posted on Fri, 13 Apr 2012 15:42:14 +0200 as excerpted:

> On Friday 2012-04-13 12:58, Hugo Mills wrote:
>>On Fri, Apr 13, 2012 at 12:55:43PM +0200, Jan Engelhardt wrote:
>>> 
>>> I originally created a RAID1(0) compound out of 4 drives. One of them
>>> [sdf] failed recently and was removed. The filesystem is no longer
>>> mountable with the 3 drives left.

>>   I think you need "-o degraded" in this case.
> 
> Yes indeed, -o degraded makes it go. Where is such documented? I know I
> can't expect mount(8) to yet have it, but there is not a mount.btrfs(8)
> either.

To the extent that it /is/ documented, probably the wiki.

http://btrfs.ipv5.de/index.php?title=Main_Page

(Note that there's an old wiki at btrfs.wiki.kernel.org as well, 
but it has been read-only since the kernel.org break-in and is quite 
stale by now.)

Looking at the mount options listed there, "degraded" is the first on the 
list. =:^)

http://btrfs.ipv5.de/index.php?title=Mount_options


> # btrfs fi show
> Label: 'srv'  uuid: 88300cd5-dbcb-4147-9ee4-c65a1c895e1d
>         Total devices 5 FS bytes used 1.23TB
>         devid    2 size 1.36TB used 692.88GB path /dev/sdd
>         devid    5 size 1.36TB used 51.00GB path /dev/sdf
>         devid    3 size 1.36TB used 692.88GB path /dev/sde
>         devid    1 size 1.36TB used 692.90GB path /dev/sdc
>         *** Some devices missing
> 
> devices are missing, but how would I remove the old devid 4 (the "some"
> that's "missing")? `btrfs fi del` does not take ids, unfortunately.

You probably know my answer: wiki! =:^)

See the using btrfs with multiple devices page:

http://btrfs.ipv5.de/index.php?title=Using_Btrfs_with_Multiple_Devices

Look under replacing failed devices.

In particular: btrfs device del missing /mnt/pnt  (note: device, not 
filesystem; "missing" tells btrfs to remove the first missing device).
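(Editor's note: pulling the thread together, the whole replacement
sequence would look roughly like the sketch below, using the device
names and mountpoint from this thread; the final balance is an
assumption-level suggestion, not something the wiki mandates.)

```shell
# Replacing a failed btrfs RAID member, step by step:
mount -o degraded /dev/sdd /top.srv    # 1. mount without the dead disk
btrfs device add /dev/sdf /top.srv     # 2. add the replacement drive
btrfs device delete missing /top.srv   # 3. drop the first missing devid,
                                       #    re-replicating its chunks
btrfs filesystem balance /top.srv      # 4. optional: spread the data
                                       #    evenly across all members
```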

If you're asking this sort of question, there's likely a lot more 
useful info for you on the wiki as well, so I'd encourage you to spend 
some time browsing around.  Your btrfs filesystems, and consequently 
your stress levels in managing them, should thank you for it. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


