* [PATCH] btrfs: close any open devices if btrfs_mount fails
@ 2013-04-04 15:35 Eric Sandeen
  2013-04-04 16:32 ` [PATCH V2] " Eric Sandeen
  0 siblings, 1 reply; 7+ messages in thread
From: Eric Sandeen @ 2013-04-04 15:35 UTC (permalink / raw)
  To: linux-btrfs, Jan Safranek

This:

   # mkfs.btrfs /dev/sdb{1,2} ; wipefs -a /dev/sdb1; mount /dev/sdb2 /mnt/test

would lead to a blkdev open/close mismatch when the mount fails, and
a permanently busy (opened O_EXCL) sdb2:

   # wipefs -a /dev/sdb2
   wipefs: error: /dev/sdb2: probing initialization failed: Device or resource busy

It's because btrfs_open_devices() may open some devices, and still
return failure.  So the error unwinding needs to close any open
devices in fs_devices before returning.

Reported-by: Jan Safranek <jsafrane@redhat.com>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
---

Note, __btrfs_open_devices is weird; it seems to return success or
failure based on the result of the last call to btrfs_get_bdev_and_sb().
But that's a different bug...

diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index f6b8859..60c67fa 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -1125,7 +1125,7 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags,
 
 	error = btrfs_open_devices(fs_devices, mode, fs_type);
 	if (error)
-		goto error_fs_info;
+		goto error_close_devices;
 
 	if (!(flags & MS_RDONLY) && fs_devices->rw_devices == 0) {
 		error = -EACCES;
@@ -1161,7 +1161,8 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags,
 	return root;
 
 error_close_devices:
-	btrfs_close_devices(fs_devices);
+	if (fs_devices->opened)
+		btrfs_close_devices(fs_devices);
 error_fs_info:
 	free_fs_info(fs_info);
 	return ERR_PTR(error);



* [PATCH V2] btrfs: close any open devices if btrfs_mount fails
  2013-04-04 15:35 [PATCH] btrfs: close any open devices if btrfs_mount fails Eric Sandeen
@ 2013-04-04 16:32 ` Eric Sandeen
  2013-04-04 18:46   ` Zach Brown
  2013-04-04 20:45   ` [PATCH V3] btrfs: ignore device open failures in __btrfs_open_devices Eric Sandeen
  0 siblings, 2 replies; 7+ messages in thread
From: Eric Sandeen @ 2013-04-04 16:32 UTC (permalink / raw)
  To: linux-btrfs, Jan Safranek

This:

   # mkfs.btrfs /dev/sdb{1,2} ; wipefs -a /dev/sdb1; mount /dev/sdb2 /mnt/test

would lead to a blkdev open/close mismatch when the mount fails, and
a permanently busy (opened O_EXCL) sdb2:

   # wipefs -a /dev/sdb2
   wipefs: error: /dev/sdb2: probing initialization failed: Device or resource busy

It's because btrfs_open_devices() may open some devices, and still
return failure.  So the error unwinding needs to close any open
devices in fs_devices before returning.

Reported-by: Jan Safranek <jsafrane@redhat.com>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
---

V2: use open_devices, not opened; that seems like the right test.
Whether a test is necessary at all I'm not certain, tbh.  Seems
prudently defensive.

Note, __btrfs_open_devices is weird; it seems to return success or
failure based on the result of the last call to btrfs_get_bdev_and_sb().
But that's a different bug...

diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index f6b8859..60c67fa 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -1125,7 +1125,7 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags,
 
 	error = btrfs_open_devices(fs_devices, mode, fs_type);
 	if (error)
-		goto error_fs_info;
+		goto error_close_devices;
 
 	if (!(flags & MS_RDONLY) && fs_devices->rw_devices == 0) {
 		error = -EACCES;
@@ -1161,7 +1161,8 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags,
 	return root;
 
 error_close_devices:
-	btrfs_close_devices(fs_devices);
+	if (fs_devices->open_devices)
+		btrfs_close_devices(fs_devices);
 error_fs_info:
 	free_fs_info(fs_info);
 	return ERR_PTR(error);





* Re: [PATCH V2] btrfs: close any open devices if btrfs_mount fails
  2013-04-04 16:32 ` [PATCH V2] " Eric Sandeen
@ 2013-04-04 18:46   ` Zach Brown
  2013-04-04 19:45     ` Eric Sandeen
  2013-04-04 20:45   ` [PATCH V3] btrfs: ignore device open failures in __btrfs_open_devices Eric Sandeen
  1 sibling, 1 reply; 7+ messages in thread
From: Zach Brown @ 2013-04-04 18:46 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-btrfs, Jan Safranek

> It's because btrfs_open_devices() may open some devices, and still
> return failure.  So the error unwinding needs to close any open
> devices in fs_devices before returning.

Yeah, looks like.

> Note, __btrfs_open_devices is weird; it seems to return success or
> failure based on the outcome of the result of the last call
> to btrfs_get_bdev_and_sb().  But that's a different bug...

I disagree that this is a different bug, I think it's the root cause of
this bug.

> @@ -1125,7 +1125,7 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags,
>  
>  	error = btrfs_open_devices(fs_devices, mode, fs_type);
>  	if (error)
> -		goto error_fs_info;
> +		goto error_close_devices;

Wouldn't open_seed_devices() also need a change like this?

I'd just rework __btrfs_open_devices to clean up after itself when it
returns an error.

>  error_close_devices:
> -	btrfs_close_devices(fs_devices);
> +	if (fs_devices->open_devices)
> +		btrfs_close_devices(fs_devices);

I guess that ->open_devices is supposed to be protected by the
uuid_mutex so it shouldn't be tested out here.  In any case, it wouldn't
be needed if btrfs_open_devices() cleaned up as it failed.

- z


* Re: [PATCH V2] btrfs: close any open devices if btrfs_mount fails
  2013-04-04 18:46   ` Zach Brown
@ 2013-04-04 19:45     ` Eric Sandeen
  2013-04-04 19:50       ` Zach Brown
  2013-04-04 20:05       ` Chris Mason
  0 siblings, 2 replies; 7+ messages in thread
From: Eric Sandeen @ 2013-04-04 19:45 UTC (permalink / raw)
  To: Zach Brown; +Cc: linux-btrfs, Jan Safranek

On 4/4/13 1:46 PM, Zach Brown wrote:
>> It's because btrfs_open_devices() may open some devices, and still
>> return failure.  So the error unwinding needs to close any open
>> devices in fs_devices before returning.
> 
> Yeah, looks like.
> 
>> Note, __btrfs_open_devices is weird; it seems to return success or
>> failure based on the outcome of the result of the last call
>> to btrfs_get_bdev_and_sb().  But that's a different bug...
> 
> I disagree that this is a different bug, I think it's the root cause of
> this bug.
> 
>> @@ -1125,7 +1125,7 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags,
>>  
>>  	error = btrfs_open_devices(fs_devices, mode, fs_type);
>>  	if (error)
>> -		goto error_fs_info;
>> +		goto error_close_devices;
> 
> Wouldn't open_seed_devices() also need a change like this?
> 
> I'd just rework __btrfs_open_devices to clean up after itself when it
> returns an error.
> 
>>  error_close_devices:
>> -	btrfs_close_devices(fs_devices);
>> +	if (fs_devices->open_devices)
>> +		btrfs_close_devices(fs_devices);
> 
> I guess that ->open_devices is supposed to be protected by the
> uuid_mutex so it shouldn't be tested out here.  In any case, it wouldn't
> be needed if btrfs_open_devices() cleaned up as it failed.

I guess I had a feeling that in something like a degraded mount scenario
you might live with failures.  But I guess that is initiated on the mount
commandline, i.e. "mount this subset; it's degraded" not "mount these devices,
and if some fail that's cool."

Right?

Thanks,
-Eric

> - z
> 



* Re: [PATCH V2] btrfs: close any open devices if btrfs_mount fails
  2013-04-04 19:45     ` Eric Sandeen
@ 2013-04-04 19:50       ` Zach Brown
  2013-04-04 20:05       ` Chris Mason
  1 sibling, 0 replies; 7+ messages in thread
From: Zach Brown @ 2013-04-04 19:50 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-btrfs, Jan Safranek

> I guess I had a feeling that in something like a degraded mount scenario
> you might live with failures.  But I guess that is initiated on the mount
> commandline, i.e. "mount this subset; it's degraded" not "mount these devices,
> and if some fail that's cool."
> 
> Right?

Maybe?  Who knows.

- z


* Re: [PATCH V2] btrfs: close any open devices if btrfs_mount fails
  2013-04-04 19:45     ` Eric Sandeen
  2013-04-04 19:50       ` Zach Brown
@ 2013-04-04 20:05       ` Chris Mason
  1 sibling, 0 replies; 7+ messages in thread
From: Chris Mason @ 2013-04-04 20:05 UTC (permalink / raw)
  To: Eric Sandeen, Zach Brown; +Cc: linux-btrfs, Jan Safranek

Quoting Eric Sandeen (2013-04-04 15:45:28)
> On 4/4/13 1:46 PM, Zach Brown wrote:
> >> It's because btrfs_open_devices() may open some devices, and still
> >> return failure.  So the error unwinding needs to close any open
> >> devices in fs_devices before returning.
> > 
> > Yeah, looks like.
> > 
> >> Note, __btrfs_open_devices is weird; it seems to return success or
> >> failure based on the outcome of the result of the last call
> >> to btrfs_get_bdev_and_sb().  But that's a different bug...
> > 
> > I disagree that this is a different bug, I think it's the root cause of
> > this bug.
> > 
> >> @@ -1125,7 +1125,7 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags,
> >>  
> >>      error = btrfs_open_devices(fs_devices, mode, fs_type);
> >>      if (error)
> >> -            goto error_fs_info;
> >> +            goto error_close_devices;
> > 
> > Wouldn't open_seed_devices() also need a change like this?
> > 
> > I'd just rework __btrfs_open_devices to clean up after itself when it
> > returns an error.
> > 
> >>  error_close_devices:
> >> -    btrfs_close_devices(fs_devices);
> >> +    if (fs_devices->open_devices)
> >> +            btrfs_close_devices(fs_devices);
> > 
> > I guess that ->open_devices is supposed to be protected by the
> > uuid_mutex so it shouldn't be tested out here.  In any case, it wouldn't
> > be needed if btrfs_open_devices() cleaned up as it failed.
> 
> I guess I had a feeling that in something like a degraded mount scenario
> you might live with failures.  But I guess that is initiated on the mount
> commandline, i.e. "mount this subset; it's degraded" not "mount these devices,
> and if some fail that's cool."

btrfs_open_devices just means: go off and open every bdev you can from
this uuid.  It should return success if we opened any of them at all.

__btrfs_open_devices() already ignores failures, and this is the
only place it explicitly sets ret.  That should only happen if there
are no devices to close:

        if (fs_devices->open_devices == 0) {
                ret = -EINVAL;
                goto out;
        }

Unless of course we happen to fail to open the last device in the list:

	ret = btrfs_get_bdev_and_sb(device->name->str, flags, holder, 1, &bdev, &bh);
	if (ret)
		continue;

This is two curlies and a ret = 0 away from correct.

-chris



* [PATCH V3] btrfs: ignore device open failures in __btrfs_open_devices
  2013-04-04 16:32 ` [PATCH V2] " Eric Sandeen
  2013-04-04 18:46   ` Zach Brown
@ 2013-04-04 20:45   ` Eric Sandeen
  1 sibling, 0 replies; 7+ messages in thread
From: Eric Sandeen @ 2013-04-04 20:45 UTC (permalink / raw)
  To: linux-btrfs, Jan Safranek

This:

   # mkfs.btrfs /dev/sdb{1,2} ; wipefs -a /dev/sdb1; mount /dev/sdb2 /mnt/test

would lead to a blkdev open/close mismatch when the mount fails, and
a permanently busy (opened O_EXCL) sdb2:

   # wipefs -a /dev/sdb2
   wipefs: error: /dev/sdb2: probing initialization failed: Device or resource busy

It's because btrfs_open_devices() may open some devices, fail on
the last one, and return that last failure stored in "ret".  The mount
then fails, but the caller does not clean up the devices that were
already opened.

Chris assures me that:

"btrfs_open_devices just means: go off and open every bdev you can from
this uuid.  It should return success if we opened any of them at all."

So change the logic to ignore any open failures; just skip processing
of that device.  Later on it's decided whether we have enough devices
to continue.

Reported-by: Jan Safranek <jsafrane@redhat.com>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
---

V3: just ignore failures from btrfs_get_bdev_and_sb

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 5989a92..8edcd33 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -717,9 +717,9 @@ static int __btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
 		if (!device->name)
 			continue;
 
-		ret = btrfs_get_bdev_and_sb(device->name->str, flags, holder, 1,
-					    &bdev, &bh);
-		if (ret)
+		/* Just open everything we can; ignore failures here */
+		if (btrfs_get_bdev_and_sb(device->name->str, flags, holder, 1,
+					    &bdev, &bh))
 			continue;
 
 		disk_super = (struct btrfs_super_block *)bh->b_data;




Thread overview: 7+ messages
2013-04-04 15:35 [PATCH] btrfs: close any open devices if btrfs_mount fails Eric Sandeen
2013-04-04 16:32 ` [PATCH V2] " Eric Sandeen
2013-04-04 18:46   ` Zach Brown
2013-04-04 19:45     ` Eric Sandeen
2013-04-04 19:50       ` Zach Brown
2013-04-04 20:05       ` Chris Mason
2013-04-04 20:45   ` [PATCH V3] btrfs: ignore device open failures in __btrfs_open_devices Eric Sandeen
