* Can't remove missing drive
@ 2010-10-30  7:37 William Uther
  2010-10-31  5:55 ` Brian Rogers
  2010-10-31 12:01 ` Chris Mason
  0 siblings, 2 replies; 5+ messages in thread
From: William Uther @ 2010-10-30  7:37 UTC (permalink / raw)
  To: linux-btrfs

Hi,
  I have a raid1 setup with a missing device.  I have added a new device and everything seems to be working fine, except I cannot remove the old, missing, device.  There is no error - but the 'some devices missing' tag doesn't go away.

root@willvo:~# btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
    Total devices 3 FS bytes used 578.39GB
    devid    1 size 931.51GB used 604.00GB path /dev/sdb1
    devid    2 size 931.51GB used 604.00GB path /dev/sdc1
    *** Some devices missing

Btrfs Btrfs v0.19
root@willvo:~# btrfs device delete missing /data
root@willvo:~# btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
    Total devices 3 FS bytes used 578.39GB
    devid    1 size 931.51GB used 604.00GB path /dev/sdb1
    devid    2 size 931.51GB used 604.00GB path /dev/sdc1
    *** Some devices missing

Btrfs Btrfs v0.19

There are a number of sub-volumes of /data that are mounted in other locations.  I'm using kernel 2.6.36 (the lucid backport of the natty kernel) and similar btrfs-tools (lucid backport of natty tools).  Interestingly, looking at the output of `df -h`, it appears that the 'missing' devices are no longer being counted in the filesystem size - there is just a phantom 'missing' tag in btrfs-show.

Is this actually a problem, or can I just keep running as is?  It seems to mount fine without -odegraded.

Any ideas how I can list the missing devices?  Any ideas on how I can remove the missing devices?

Be well,

Will        :-}



* Re: Can't remove missing drive
  2010-10-30  7:37 Can't remove missing drive William Uther
@ 2010-10-31  5:55 ` Brian Rogers
  2010-11-01  0:36   ` William Uther
  2010-10-31 12:01 ` Chris Mason
  1 sibling, 1 reply; 5+ messages in thread
From: Brian Rogers @ 2010-10-31  5:55 UTC (permalink / raw)
  To: William Uther; +Cc: linux-btrfs

On 10/30/2010 12:37 AM, William Uther wrote:
> Hi,
>    I have a raid1 setup with a missing device.  I have added a new device and everything seems to be working fine, except I cannot remove the old, missing, device.  There is no error - but the 'some devices missing' tag doesn't go away.
>
> root@willvo:~# btrfs filesystem show
> failed to read /dev/sr0
> Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
>      Total devices 3 FS bytes used 578.39GB
>      devid    1 size 931.51GB used 604.00GB path /dev/sdb1
>      devid    2 size 931.51GB used 604.00GB path /dev/sdc1
>      *** Some devices missing
>
> Btrfs Btrfs v0.19
> root@willvo:~# btrfs device delete missing /data
> root@willvo:~# btrfs filesystem show
> failed to read /dev/sr0
> Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
>      Total devices 3 FS bytes used 578.39GB
>      devid    1 size 931.51GB used 604.00GB path /dev/sdb1
>      devid    2 size 931.51GB used 604.00GB path /dev/sdc1
>      *** Some devices missing
>
> Btrfs Btrfs v0.19

The lack of a message on the delete operation indicates success. What
you see is the expected behavior: 'btrfs filesystem show' reads the
partitions directly, so it won't see any changes that haven't been
committed to disk yet. The 'some devices missing' message should go
away after running 'sync', rebooting, or unmounting the filesystem.
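
For example, forcing a commit and then re-checking would look something
like this (just a sketch; /data is the mount point from the original
report):

sync
btrfs filesystem show

# or commit just that one filesystem:
btrfs filesystem sync /data
btrfs filesystem show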



* Re: Can't remove missing drive
  2010-10-30  7:37 Can't remove missing drive William Uther
  2010-10-31  5:55 ` Brian Rogers
@ 2010-10-31 12:01 ` Chris Mason
  1 sibling, 0 replies; 5+ messages in thread
From: Chris Mason @ 2010-10-31 12:01 UTC (permalink / raw)
  To: William Uther; +Cc: linux-btrfs

On Sat, Oct 30, 2010 at 06:37:06PM +1100, William Uther wrote:
> Hi,
>   I have a raid1 setup with a missing device.  I have added a new device and everything seems to be working fine, except I cannot remove the old, missing, device.  There is no error - but the 'some devices missing' tag doesn't go away.
> 
> root@willvo:~# btrfs filesystem show
> failed to read /dev/sr0
> Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
>     Total devices 3 FS bytes used 578.39GB
>     devid    1 size 931.51GB used 604.00GB path /dev/sdb1
>     devid    2 size 931.51GB used 604.00GB path /dev/sdc1
>     *** Some devices missing
> 
> Btrfs Btrfs v0.19
> root@willvo:~# btrfs device delete missing /data
> root@willvo:~# btrfs filesystem show
> failed to read /dev/sr0
> Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
>     Total devices 3 FS bytes used 578.39GB
>     devid    1 size 931.51GB used 604.00GB path /dev/sdb1
>     devid    2 size 931.51GB used 604.00GB path /dev/sdc1
>     *** Some devices missing
> 
> Btrfs Btrfs v0.19
> 
> There are a number of sub-volumes of /data that are mounted in other locations.  I'm using kernel 2.6.36 (the lucid backport of the natty kernel) and similar btrfs-tools (lucid backport of natty tools).  Interestingly, looking at the output of `df -h`, it appears that the 'missing' devices are no longer being counted in the filesystem size - there is just a phantom 'missing' tag in btrfs-show.
> 
> Is this actually a problem, or can I just keep running as is?  It seems to mount fine without -odegraded.
> 
> Any ideas how I can list the missing devices?  Any ideas on how I can remove the missing devices?

What have you tried so far?

The general formula is:

mount -o degraded /dev/xxx /mnt   (where xxx is one drive still in the array)

btrfs-vol -r missing /mnt

I'd suggest pulling the master branch of the unstable tree first; it has
a fix for the btrfs-vol -r missing code.
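
Spelled out end to end, that sequence would look roughly like the
following (a sketch only; /dev/sdb1 is one surviving member from the
report above, and /dev/sdd1 is a hypothetical replacement device, not
something taken from this thread):

# mount the degraded array from one surviving member
mount -o degraded /dev/sdb1 /mnt

# add a replacement device, if one hasn't been added already
btrfs device add /dev/sdd1 /mnt

# drop the missing member (older tools) ...
btrfs-vol -r missing /mnt

# ... or the equivalent with the newer btrfs tool
btrfs device delete missing /mnt

# the 'Some devices missing' line should clear after a commit
sync
btrfs filesystem show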

-chris



* Re: Can't remove missing drive
  2010-10-31  5:55 ` Brian Rogers
@ 2010-11-01  0:36   ` William Uther
  2010-11-06  7:11     ` William Uther
  0 siblings, 1 reply; 5+ messages in thread
From: William Uther @ 2010-11-01  0:36 UTC (permalink / raw)
  To: linux-btrfs

Thanks to Chris and Brian for the help!

On 31/10/2010, at 11:01 PM, Chris Mason wrote:

> 
> On Sat, Oct 30, 2010 at 06:37:06PM +1100, William Uther wrote:
>> [snip - issues removing a missing drive - see below for new log]
>> 
>> Is this actually a problem, or can I just keep running as is?  It seems to mount fine without -odegraded.
>> 
>> Any ideas how I can list the missing devices?  Any ideas on how I can remove the missing devices?
> 
> What have you tried so far?

Well, to remove the missing drive I've tried `btrfs-vol -r missing /data` and the newer `btrfs` command.  I've previously tried with the filesystem mounted in degraded mode.  The wiki, <https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices>, suggests that you should add the new disk before removing the missing one.

I've also tried removing the old device with `btrfs device delete /dev/loop0 /data` - i.e. giving the missing device explicitly.  Also, the 'missing' device, /dev/loop0, is still there - just not connected to anything.  I thought that might be the issue, so I moved it out of the way and tried to remove 'missing' again.  No change.
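
For reference, a quick way to double-check what the detached device looks like from userspace (assuming /dev/loop0 really is the old member) would be something like:

losetup /dev/loop0   # shows the loop device's backing file, or an error if it isn't set up
blkid /dev/loop0     # reports TYPE="btrfs" if a stale superblock is still visible on it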

To list the missing devices I tried `btrfs filesystem show` - that shows 'some devices missing' but doesn't list them.  Interestingly, the new log below shows that when I run `btrfs device delete missing`, the kernel doesn't think there are any devices missing.

> The general formula is:
> 
> mount -o degraded /dev/xxx /mnt (where xxx is one drive still in the
> array)
> 
> btrfs-vol -r missing /mnt
> 
> I'd suggest pulling the master branch of the unstable tree first, it has
> a fix for the btrfs-vol -r missing code.

Ok.  Is this a kernel fix, a tools fix, or both?  I'll assume both.  I probably won't be able to get to that for a few days.

On 31/10/2010, at 4:55 PM, Brian Rogers wrote:

> The lack of a message on the delete operation indicates success. What you see is the expected behavior, since 'btrfs filesystem show' is reading the partitions directly. Therefore, it won't see any changes that haven't been committed to disk yet. The 'some devices missing' message should go away after running 'sync', or rebooting, or un-mounting the file system.

Thanks for the suggestion, but that doesn't seem to work.  I've tried rebooting multiple times.  The new log below might be more interesting - note that `btrfs device delete missing` claims that there is no missing device.

root@willvo:~# btrfs filesystem sync /data
FSSync '/data'
root@willvo:~# btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
	Total devices 3 FS bytes used 577.81GB
	devid    1 size 931.51GB used 604.00GB path /dev/sdb1
	devid    2 size 931.51GB used 604.00GB path /dev/sdc1
	*** Some devices missing

Btrfs Btrfs v0.19
root@willvo:~# btrfs device delete missing /data
root@willvo:~# tail -1 /var/log/syslog
Nov  1 11:20:39 willvo kernel: [175031.411348] btrfs: no missing devices found to remove
root@willvo:~# btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
	Total devices 3 FS bytes used 577.81GB
	devid    1 size 931.51GB used 604.00GB path /dev/sdb1
	devid    2 size 931.51GB used 604.00GB path /dev/sdc1
	*** Some devices missing

Btrfs Btrfs v0.19
root@willvo:~# btrfs filesystem sync /data
FSSync '/data'
root@willvo:~# btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
	Total devices 3 FS bytes used 577.81GB
	devid    1 size 931.51GB used 604.00GB path /dev/sdb1
	devid    2 size 931.51GB used 604.00GB path /dev/sdc1
	*** Some devices missing

Btrfs Btrfs v0.19

Cheers,

Will      :-}



* Re: Can't remove missing drive
  2010-11-01  0:36   ` William Uther
@ 2010-11-06  7:11     ` William Uther
  0 siblings, 0 replies; 5+ messages in thread
From: William Uther @ 2010-11-06  7:11 UTC (permalink / raw)
  To: linux-btrfs

Hi,

  I was trying to remove a 'missing' drive from a raid1 setup.  It was suggested on this list that I update to HEAD.  I updated my kernel to Ubuntu-lts-2.6.37-2.9, which appears to have the latest BTRFS code in it.  I then tried to remove my missing drive again:

root@willvo:~# btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
	Total devices 3 FS bytes used 594.71GB
	devid    1 size 931.51GB used 604.00GB path /dev/sdb1
	devid    2 size 931.51GB used 604.00GB path /dev/sdc1
	*** Some devices missing

Btrfs v0.19-36-gcbc979b-dirty
root@willvo:~# btrfs device delete missing /data
root@willvo:~# tail -1 /var/log/syslog
Nov  6 13:36:29 willvo kernel: [ 1227.711276] btrfs: no missing devices found to remove
root@willvo:~# btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
	Total devices 3 FS bytes used 594.71GB
	devid    1 size 931.51GB used 604.00GB path /dev/sdb1
	devid    2 size 931.51GB used 604.00GB path /dev/sdc1
	*** Some devices missing

Btrfs v0.19-36-gcbc979b-dirty

This is already strange, as 'btrfs device delete' cannot find the missing device that 'btrfs filesystem show' knows about.  But then things get even stranger...

root@willvo:~# btrfs filesystem df /data
Data, RAID0: total=1.18TB, used=596.74GB
System: total=4.00MB, used=96.00KB
Metadata, RAID0: total=2.00GB, used=993.35MB

Why is my filesystem suddenly showing RAID0?  Note that the used space displayed still seems to suggest RAID1.  Unfortunately I didn't notice the RAID0 label right away, and ran:

root@willvo:~# btrfs filesystem balance /data
root@willvo:~# btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: f929c413-01c8-443f-b4f2-86f36702f519
	Total devices 3 FS bytes used 594.71GB
	devid    1 size 931.51GB used 298.88GB path /dev/sdb1
	devid    2 size 931.51GB used 298.88GB path /dev/sdc1
	*** Some devices missing

Btrfs v0.19-36-gcbc979b-dirty
root@willvo:~# btrfs filesystem df /data
Data, RAID0: total=596.00GB, used=593.75GB
System: total=4.00MB, used=52.00KB
Metadata, RAID0: total=1.75GB, used=979.95MB

The balance seems to have believed the spurious RAID0 setting and converted my setup from RAID1 to RAID0 - albeit still with devices missing.

Is there any way to 'convert back' to RAID1?  My reading suggests that feature isn't implemented yet - although I managed to magically convert from RAID1 to RAID0, so who knows.
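
For what it's worth, much later kernels and btrfs-progs than the ones in this thread gained balance filters that can convert between profiles.  With those, converting back would look roughly like:

# requires balance-filter support (roughly kernel 3.3 and newer);
# not available in the 2.6.36/2.6.37-era tools used in this thread
btrfs balance start -dconvert=raid1 -mconvert=raid1 /data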

Cheers,

Will      :-}



end of thread, other threads:[~2010-11-06  7:11 UTC | newest]

Thread overview: 5+ messages
2010-10-30  7:37 Can't remove missing drive William Uther
2010-10-31  5:55 ` Brian Rogers
2010-11-01  0:36   ` William Uther
2010-11-06  7:11     ` William Uther
2010-10-31 12:01 ` Chris Mason
