linux-btrfs.vger.kernel.org archive mirror
* btrfs: convert metadata from raid5 to raid1
@ 2020-02-17 13:43 Menion
  2020-02-17 13:49 ` Swâmi Petaramesh
  0 siblings, 1 reply; 13+ messages in thread
From: Menion @ 2020-02-17 13:43 UTC (permalink / raw)
  To: linux-btrfs

Hi all

Following another thread, it was explicitly advised to avoid using the
raid5 metadata scheme with raid5 data, due to a very high chance of
being unable to recover the array in case of a full disk loss.

I am running an array of 5x8TB HDDs with raid5 for both data and
metadata, on kernel 5.5 with btrfs-progs 5.4.1.

This array was created with the Ubuntu 18.10 stock kernel (4.14).

What is the correct procedure to convert the metadata from raid5 to a
proper raid scheme (raid1 or similar)?

I think it would be wise to state this somewhere in the wiki as well.

Bye

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 13:43 btrfs: convert metadata from raid5 to raid1 Menion
@ 2020-02-17 13:49 ` Swâmi Petaramesh
  2020-02-17 13:50   ` Menion
  0 siblings, 1 reply; 13+ messages in thread
From: Swâmi Petaramesh @ 2020-02-17 13:49 UTC (permalink / raw)
  To: Menion, linux-btrfs

Hi,

On 2020-02-17 14:43, Menion wrote:
> What is the correct procedure to convert metadata from raid5 to proper
> raid scheme (raid1 or)?

# btrfs balance start -mconvert=raid1 /array/mount/point should do the trick
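Spelled out a little more, as a sketch (the mount point is a placeholder, and the df calls are only there to confirm the profiles before and after):

```shell
# Sketch only; replace /array/mount/point with the real mount point.
# Before: "Metadata, RAID5" should appear in the output.
btrfs filesystem df /array/mount/point

# Convert metadata chunks to raid1; data chunks are left as raid5.
btrfs balance start -mconvert=raid1 /array/mount/point

# After: metadata should now report RAID1.
btrfs filesystem df /array/mount/point
```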

ॐ

-- 

Swâmi Petaramesh <swami@petaramesh.org> PGP 9076E32E



* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 13:49 ` Swâmi Petaramesh
@ 2020-02-17 13:50   ` Menion
  2020-02-17 13:51     ` Menion
                       ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Menion @ 2020-02-17 13:50 UTC (permalink / raw)
  To: Swâmi Petaramesh; +Cc: linux-btrfs

Is it ok to run it on a mounted filesystem with concurrent read and
write operations?

On Mon, Feb 17, 2020 at 14:49, Swâmi Petaramesh
<swami@petaramesh.org> wrote:
>
> Hi,
>
> On 2020-02-17 14:43, Menion wrote:
> > What is the correct procedure to convert metadata from raid5 to proper
> > raid scheme (raid1 or)?
>
> # btrfs balance start -mconvert=raid1 /array/mount/point should do the trick
>
> ॐ
>
> --
>
> Swâmi Petaramesh <swami@petaramesh.org> PGP 9076E32E
>


* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 13:50   ` Menion
@ 2020-02-17 13:51     ` Menion
  2020-02-17 13:55       ` Hugo Mills
  2020-02-17 13:54     ` Hugo Mills
  2020-02-17 13:55     ` Swâmi Petaramesh
  2 siblings, 1 reply; 13+ messages in thread
From: Menion @ 2020-02-17 13:51 UTC (permalink / raw)
  To: Swâmi Petaramesh; +Cc: linux-btrfs

Also, since the number of HDDs is 5, how is this "raid1" scheme deployed?

On Mon, Feb 17, 2020 at 14:50, Menion <menion@gmail.com> wrote:
>
> Is it ok to run it on a mounted filesystem with concurrent read and
> write operations?
>
> On Mon, Feb 17, 2020 at 14:49, Swâmi Petaramesh
> <swami@petaramesh.org> wrote:
> >
> > Hi,
> >
> > On 2020-02-17 14:43, Menion wrote:
> > > What is the correct procedure to convert metadata from raid5 to proper
> > > raid scheme (raid1 or)?
> >
> > # btrfs balance start -mconvert=raid1 /array/mount/point should do the trick
> >
> > ॐ
> >
> > --
> >
> > Swâmi Petaramesh <swami@petaramesh.org> PGP 9076E32E
> >


* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 13:50   ` Menion
  2020-02-17 13:51     ` Menion
@ 2020-02-17 13:54     ` Hugo Mills
  2020-02-17 13:55     ` Swâmi Petaramesh
  2 siblings, 0 replies; 13+ messages in thread
From: Hugo Mills @ 2020-02-17 13:54 UTC (permalink / raw)
  To: Menion; +Cc: Swâmi Petaramesh, linux-btrfs

On Mon, Feb 17, 2020 at 02:50:35PM +0100, Menion wrote:
> Is it ok to run it on a mounted filesystem with concurrent read and
> write operations?

   Yes, absolutely fine.

   Hugo.

> On Mon, Feb 17, 2020 at 14:49, Swâmi Petaramesh
> <swami@petaramesh.org> wrote:
> >
> > Hi,
> >
> > On 2020-02-17 14:43, Menion wrote:
> > > What is the correct procedure to convert metadata from raid5 to proper
> > > raid scheme (raid1 or)?
> >
> > # btrfs balance start -mconvert=raid1 /array/mount/point should do the trick
> >
> > ॐ
> >

-- 
Hugo Mills             | Turning, pages turning in the widening bath,
hugo@... carfax.org.uk | The spine cannot bear the humidity.
http://carfax.org.uk/  | Books fall apart; the binding cannot hold.
PGP: E2AB1DE4          | Page 129 is loosed upon the world.               Zarf


* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 13:51     ` Menion
@ 2020-02-17 13:55       ` Hugo Mills
  0 siblings, 0 replies; 13+ messages in thread
From: Hugo Mills @ 2020-02-17 13:55 UTC (permalink / raw)
  To: Menion; +Cc: Swâmi Petaramesh, linux-btrfs

On Mon, Feb 17, 2020 at 02:51:12PM +0100, Menion wrote:
> Also, since the number of HDD is 5, how this "raid1" scheme is deployed?

   Two copies of any one piece of data, each copy on a separate
device.
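As a toy illustration of why this works on an odd number of disks (an assumed model for illustration only, not btrfs's actual allocator code): each new chunk gets two copies, placed on the two devices with the most unallocated space at that moment, so the copies end up spread evenly across all five disks:

```shell
#!/usr/bin/env bash
# Toy model of btrfs raid1 chunk placement (illustration, not btrfs code):
# every chunk is written twice, to the two devices with the most
# unallocated space at allocation time.
free=(100 100 100 100 100)    # hypothetical unallocated GiB on 5 disks

for _ in $(seq 1 10); do      # allocate ten 1GiB raid1 chunks
  a=-1; b=-1                  # indices of the two emptiest devices
  for i in "${!free[@]}"; do
    if [ "$a" -lt 0 ] || [ "${free[i]}" -gt "${free[a]}" ]; then
      b=$a; a=$i
    elif [ "$b" -lt 0 ] || [ "${free[i]}" -gt "${free[b]}" ]; then
      b=$i
    fi
  done
  free[a]=$((free[a] - 1))    # first copy
  free[b]=$((free[b] - 1))    # second copy, always a different device
done

echo "${free[@]}"             # prints: 96 96 96 96 96 (usage stays even)
```

With equal-sized disks the greedy choice keeps allocation balanced, so no single disk fills up first.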

   Hugo.

> On Mon, Feb 17, 2020 at 14:50, Menion <menion@gmail.com> wrote:
> >
> > Is it ok to run it on a mounted filesystem with concurrent read and
> > write operations?
> >
> > On Mon, Feb 17, 2020 at 14:49, Swâmi Petaramesh
> > <swami@petaramesh.org> wrote:
> > >
> > > Hi,
> > >
> > > On 2020-02-17 14:43, Menion wrote:
> > > > What is the correct procedure to convert metadata from raid5 to proper
> > > > raid scheme (raid1 or)?
> > >
> > > # btrfs balance start -mconvert=raid1 /array/mount/point should do the trick
> > >
> > > ॐ
> > >

-- 
Hugo Mills             | Turning, pages turning in the widening bath,
hugo@... carfax.org.uk | The spine cannot bear the humidity.
http://carfax.org.uk/  | Books fall apart; the binding cannot hold.
PGP: E2AB1DE4          | Page 129 is loosed upon the world.               Zarf


* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 13:50   ` Menion
  2020-02-17 13:51     ` Menion
  2020-02-17 13:54     ` Hugo Mills
@ 2020-02-17 13:55     ` Swâmi Petaramesh
  2020-02-17 14:12       ` Menion
  2 siblings, 1 reply; 13+ messages in thread
From: Swâmi Petaramesh @ 2020-02-17 13:55 UTC (permalink / raw)
  To: Menion; +Cc: linux-btrfs

On 2020-02-17 14:50, Menion wrote:
> Is it ok to run it on a mounted filesystem with concurrent read and
> write operations?

Yes. Please check man btrfs-balance.

All such BTRFS operations are to be run on live, mounted filesystems.

Performance will suffer and it may take a long time, though.

> Also, since the number of HDD is 5, how this "raid1" scheme is deployed?

BTRFS will store 2 copies of every metadata block on 2 different 
disks, and will choose the placement by itself.

ॐ

-- 
Swâmi Petaramesh <swami@petaramesh.org> PGP 9076E32E



* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 13:55     ` Swâmi Petaramesh
@ 2020-02-17 14:12       ` Menion
  2020-02-17 14:17         ` Hugo Mills
  2020-02-18  8:34         ` Menion
  0 siblings, 2 replies; 13+ messages in thread
From: Menion @ 2020-02-17 14:12 UTC (permalink / raw)
  To: Swâmi Petaramesh; +Cc: linux-btrfs

OK, thanks.
I have launched it (in a tmux session); after 5 minutes the command
has not returned yet, but dmesg and "btrfs balance status
/array/mount/point" both report it as in progress (0%).
Is that normal?

On Mon, Feb 17, 2020 at 14:55, Swâmi Petaramesh
<swami@petaramesh.org> wrote:
>
> On 2020-02-17 14:50, Menion wrote:
> > Is it ok to run it on a mounted filesystem with concurrent read and
> > write operations?
>
> Yes. Please check man btrfs-balance.
>
> All such BTRFS operations are to be run on live, mounted filesystems.
>
> Performance will suffer and it might be long though.
>
> > Also, since the number of HDD is 5, how this "raid1" scheme is deployed?
>
> BTRFS will manage storing 2 copies of every metadata block on 2
> different disks, and will choose how by itself.
>
> ॐ
>
> --
> Swâmi Petaramesh <swami@petaramesh.org> PGP 9076E32E
>


* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 14:12       ` Menion
@ 2020-02-17 14:17         ` Hugo Mills
  2020-02-17 18:05           ` Graham Cobb
  2020-02-18  8:34         ` Menion
  1 sibling, 1 reply; 13+ messages in thread
From: Hugo Mills @ 2020-02-17 14:17 UTC (permalink / raw)
  To: Menion; +Cc: Swâmi Petaramesh, linux-btrfs

On Mon, Feb 17, 2020 at 03:12:35PM +0100, Menion wrote:
> ok thanks
> I have launched it (in a tmux session), after 5 minutes the command
> did not return yet, but dmesg and  btrfs balance status
> /array/mount/point report it in progress (0%).
> Is it normal?

   Yes, it's got to rewrite all of your metadata. This can take a
while (especially if you have lots of snapshots or reflinks -- such as
from running a deduper). You should be able to see progress happening
fairly regularly in dmesg. This is typically one chunk every minute or
so, although some chunks can take much *much* longer.
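To turn that rough rate into an expectation (a back-of-envelope sketch; the numbers below are invented, so substitute the metadata total that btrfs fi df actually reports):

```shell
# Back-of-envelope estimate only; the inputs are hypothetical examples.
meta_gib=24        # total metadata from 'btrfs fi df', in GiB (made up)
chunk_gib=1        # metadata chunks are typically up to 1GiB
min_per_chunk=1    # the rough "one chunk per minute" figure above
est_minutes=$(( (meta_gib / chunk_gib) * min_per_chunk ))
echo "expect roughly ${est_minutes}+ minutes"
```

Treat the result as a lower bound, since some chunks take far longer than a minute.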

   Hugo.

> On Mon, Feb 17, 2020 at 14:55, Swâmi Petaramesh
> <swami@petaramesh.org> wrote:
> >
> > On 2020-02-17 14:50, Menion wrote:
> > > Is it ok to run it on a mounted filesystem with concurrent read and
> > > write operations?
> >
> > Yes. Please check man btrfs-balance.
> >
> > All such BTRFS operations are to be run on live, mounted filesystems.
> >
> > Performance will suffer and it might be long though.
> >
> > > Also, since the number of HDD is 5, how this "raid1" scheme is deployed?
> >
> > BTRFS will manage storing 2 copies of every metadata block on 2
> > different disks, and will choose how by itself.
> >
> > ॐ
> >

-- 
Hugo Mills             | You've read the project plan. Forget that. We're
hugo@... carfax.org.uk | going to Do Stuff and Have Fun doing it.
http://carfax.org.uk/  |
PGP: E2AB1DE4          |                                           Jeremy Frey


* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 14:17         ` Hugo Mills
@ 2020-02-17 18:05           ` Graham Cobb
  0 siblings, 0 replies; 13+ messages in thread
From: Graham Cobb @ 2020-02-17 18:05 UTC (permalink / raw)
  To: Menion, linux-btrfs

On 17/02/2020 14:17, Hugo Mills wrote:
> On Mon, Feb 17, 2020 at 03:12:35PM +0100, Menion wrote:
>> ok thanks
>> I have launched it (in a tmux session), after 5 minutes the command
>> did not return yet, but dmesg and  btrfs balance status
>> /array/mount/point report it in progress (0%).
>> Is it normal?
> 
>    Yes, it's got to rewrite all of your metadata. This can take a
> while (especially if you have lots of snapshots or reflinks -- such as
> from running a deduper). You should be able to see progress happening
> fairly regularly in dmesg. This is typically one chunk every minute or
> so, although some chunks can take much *much* longer.

Also, you can watch what is happening by using "btrfs filesystem usage
/your/mount/point". You should see "Metadata,RAID5:" going down and
"Metadata,RAID1:" going up. I often leave:

watch -n 10 btrfs fi usage /mount/point

running in a window while doing these sorts of things so I can see how
things are going at a glance.




* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-17 14:12       ` Menion
  2020-02-17 14:17         ` Hugo Mills
@ 2020-02-18  8:34         ` Menion
  2020-02-18  8:41           ` Nikolay Borisov
  1 sibling, 1 reply; 13+ messages in thread
From: Menion @ 2020-02-18  8:34 UTC (permalink / raw)
  To: Swâmi Petaramesh; +Cc: linux-btrfs

Hello again

The task completed. I see three occurrences of this event:

[518366.156963] INFO: task btrfs-cleaner:1034 blocked for more than 120 seconds.
[518366.156989]       Not tainted 5.5.3-050503-generic #202002110832
[518366.157024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[518366.157044] btrfs-cleaner   D    0  1034      2 0x80004000
[518366.157054] Call Trace:
[518366.157082]  __schedule+0x2d8/0x760
[518366.157094]  schedule+0x55/0xc0
[518366.157105]  schedule_preempt_disabled+0xe/0x10
[518366.157113]  __mutex_lock.isra.0+0x182/0x4f0
[518366.157125]  __mutex_lock_slowpath+0x13/0x20
[518366.157132]  mutex_lock+0x2e/0x40
[518366.157261]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
[518366.157322]  ? __wake_up+0x13/0x20
[518366.157424]  cleaner_kthread+0x124/0x130 [btrfs]
[518366.157437]  kthread+0x104/0x140
[518366.157531]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
[518366.157565]  ? kthread_park+0x90/0x90
[518366.157575]  ret_from_fork+0x35/0x40

and

[518486.984177] INFO: task btrfs-cleaner:1034 blocked for more than 241 seconds.
[518486.984204]       Not tainted 5.5.3-050503-generic #202002110832
[518486.984216] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[518486.984233] btrfs-cleaner   D    0  1034      2 0x80004000
[518486.984243] Call Trace:
[518486.984271]  __schedule+0x2d8/0x760
[518486.984284]  schedule+0x55/0xc0
[518486.984295]  schedule_preempt_disabled+0xe/0x10
[518486.984305]  __mutex_lock.isra.0+0x182/0x4f0
[518486.984319]  __mutex_lock_slowpath+0x13/0x20
[518486.984326]  mutex_lock+0x2e/0x40
[518486.984451]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
[518486.984464]  ? __wake_up+0x13/0x20
[518486.984562]  cleaner_kthread+0x124/0x130 [btrfs]
[518486.984573]  kthread+0x104/0x140
[518486.984666]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
[518486.984675]  ? kthread_park+0x90/0x90
[518486.984686]  ret_from_fork+0x35/0x40

and

[518728.646379] INFO: task btrfs-cleaner:1034 blocked for more than 120 seconds.
[518728.646413]       Not tainted 5.5.3-050503-generic #202002110832
[518728.646428] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[518728.646447] btrfs-cleaner   D    0  1034      2 0x80004000
[518728.646460] Call Trace:
[518728.646494]  __schedule+0x2d8/0x760
[518728.646508]  schedule+0x55/0xc0
[518728.646522]  schedule_preempt_disabled+0xe/0x10
[518728.646534]  __mutex_lock.isra.0+0x182/0x4f0
[518728.646550]  __mutex_lock_slowpath+0x13/0x20
[518728.646559]  mutex_lock+0x2e/0x40
[518728.646719]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
[518728.646735]  ? __wake_up+0x13/0x20
[518728.646859]  cleaner_kthread+0x124/0x130 [btrfs]
[518728.646875]  kthread+0x104/0x140
[518728.647019]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
[518728.647031]  ? kthread_park+0x90/0x90
[518728.647045]  ret_from_fork+0x35/0x40

Is this normal?
Thanks, bye

On Mon, Feb 17, 2020 at 15:12, Menion <menion@gmail.com> wrote:
>
> ok thanks
> I have launched it (in a tmux session), after 5 minutes the command
> did not return yet, but dmesg and  btrfs balance status
> /array/mount/point report it in progress (0%).
> Is it normal?
>
> On Mon, Feb 17, 2020 at 14:55, Swâmi Petaramesh
> <swami@petaramesh.org> wrote:
> >
> > On 2020-02-17 14:50, Menion wrote:
> > > Is it ok to run it on a mounted filesystem with concurrent read and
> > > write operations?
> >
> > Yes. Please check man btrfs-balance.
> >
> > All such BTRFS operations are to be run on live, mounted filesystems.
> >
> > Performance will suffer and it might be long though.
> >
> > > Also, since the number of HDD is 5, how this "raid1" scheme is deployed?
> >
> > BTRFS will manage storing 2 copies of every metadata block on 2
> > different disks, and will choose how by itself.
> >
> > ॐ
> >
> > --
> > Swâmi Petaramesh <swami@petaramesh.org> PGP 9076E32E
> >


* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-18  8:34         ` Menion
@ 2020-02-18  8:41           ` Nikolay Borisov
  2020-02-18  8:43             ` Menion
  0 siblings, 1 reply; 13+ messages in thread
From: Nikolay Borisov @ 2020-02-18  8:41 UTC (permalink / raw)
  To: Menion, Swâmi Petaramesh; +Cc: linux-btrfs



On 18.02.20 at 10:34, Menion wrote:
> Hello again
> 
> Task completed, I see in three occurrence of this event:
> 
> [518366.156963] INFO: task btrfs-cleaner:1034 blocked for more than 120 seconds.
> [518366.156989]       Not tainted 5.5.3-050503-generic #202002110832
> [518366.157024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [518366.157044] btrfs-cleaner   D    0  1034      2 0x80004000
> [518366.157054] Call Trace:
> [518366.157082]  __schedule+0x2d8/0x760
> [518366.157094]  schedule+0x55/0xc0
> [518366.157105]  schedule_preempt_disabled+0xe/0x10
> [518366.157113]  __mutex_lock.isra.0+0x182/0x4f0
> [518366.157125]  __mutex_lock_slowpath+0x13/0x20
> [518366.157132]  mutex_lock+0x2e/0x40
> [518366.157261]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
> [518366.157322]  ? __wake_up+0x13/0x20
> [518366.157424]  cleaner_kthread+0x124/0x130 [btrfs]
> [518366.157437]  kthread+0x104/0x140
> [518366.157531]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
> [518366.157565]  ? kthread_park+0x90/0x90
> [518366.157575]  ret_from_fork+0x35/0x40
> 
> and
> 
> [518486.984177] INFO: task btrfs-cleaner:1034 blocked for more than 241 seconds.
> [518486.984204]       Not tainted 5.5.3-050503-generic #202002110832
> [518486.984216] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [518486.984233] btrfs-cleaner   D    0  1034      2 0x80004000
> [518486.984243] Call Trace:
> [518486.984271]  __schedule+0x2d8/0x760
> [518486.984284]  schedule+0x55/0xc0
> [518486.984295]  schedule_preempt_disabled+0xe/0x10
> [518486.984305]  __mutex_lock.isra.0+0x182/0x4f0
> [518486.984319]  __mutex_lock_slowpath+0x13/0x20
> [518486.984326]  mutex_lock+0x2e/0x40
> [518486.984451]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
> [518486.984464]  ? __wake_up+0x13/0x20
> [518486.984562]  cleaner_kthread+0x124/0x130 [btrfs]
> [518486.984573]  kthread+0x104/0x140
> [518486.984666]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
> [518486.984675]  ? kthread_park+0x90/0x90
> [518486.984686]  ret_from_fork+0x35/0x40
> 
> and
> 
> [518728.646379] INFO: task btrfs-cleaner:1034 blocked for more than 120 seconds.
> [518728.646413]       Not tainted 5.5.3-050503-generic #202002110832
> [518728.646428] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [518728.646447] btrfs-cleaner   D    0  1034      2 0x80004000
> [518728.646460] Call Trace:
> [518728.646494]  __schedule+0x2d8/0x760
> [518728.646508]  schedule+0x55/0xc0
> [518728.646522]  schedule_preempt_disabled+0xe/0x10
> [518728.646534]  __mutex_lock.isra.0+0x182/0x4f0
> [518728.646550]  __mutex_lock_slowpath+0x13/0x20
> [518728.646559]  mutex_lock+0x2e/0x40
> [518728.646719]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
> [518728.646735]  ? __wake_up+0x13/0x20
> [518728.646859]  cleaner_kthread+0x124/0x130 [btrfs]
> [518728.646875]  kthread+0x104/0x140
> [518728.647019]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
> [518728.647031]  ? kthread_park+0x90/0x90
> [518728.647045]  ret_from_fork+0x35/0x40
> 
> Is it a kind of normal?
> Thanks, bye


Please provide the output of "echo w > /proc/sysrq-trigger".

I suspect there were 3 occasions when there was lock contention on
delete_unused_bgs_mutex due to the balance. Unless it persists, it's fine.


* Re: btrfs: convert metadata from raid5 to raid1
  2020-02-18  8:41           ` Nikolay Borisov
@ 2020-02-18  8:43             ` Menion
  0 siblings, 0 replies; 13+ messages in thread
From: Menion @ 2020-02-18  8:43 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Swâmi Petaramesh, linux-btrfs

[578688.840150] sysrq: Show Blocked State
[578688.840184]   task                        PC stack   pid father

but, as I said, the balance completed yesterday.

On Tue, Feb 18, 2020 at 09:41, Nikolay Borisov
<nborisov@suse.com> wrote:
>
>
>
> On 18.02.20 at 10:34, Menion wrote:
> > Hello again
> >
> > Task completed, I see in three occurrence of this event:
> >
> > [518366.156963] INFO: task btrfs-cleaner:1034 blocked for more than 120 seconds.
> > [518366.156989]       Not tainted 5.5.3-050503-generic #202002110832
> > [518366.157024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> > disables this message.
> > [518366.157044] btrfs-cleaner   D    0  1034      2 0x80004000
> > [518366.157054] Call Trace:
> > [518366.157082]  __schedule+0x2d8/0x760
> > [518366.157094]  schedule+0x55/0xc0
> > [518366.157105]  schedule_preempt_disabled+0xe/0x10
> > [518366.157113]  __mutex_lock.isra.0+0x182/0x4f0
> > [518366.157125]  __mutex_lock_slowpath+0x13/0x20
> > [518366.157132]  mutex_lock+0x2e/0x40
> > [518366.157261]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
> > [518366.157322]  ? __wake_up+0x13/0x20
> > [518366.157424]  cleaner_kthread+0x124/0x130 [btrfs]
> > [518366.157437]  kthread+0x104/0x140
> > [518366.157531]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
> > [518366.157565]  ? kthread_park+0x90/0x90
> > [518366.157575]  ret_from_fork+0x35/0x40
> >
> > and
> >
> > [518486.984177] INFO: task btrfs-cleaner:1034 blocked for more than 241 seconds.
> > [518486.984204]       Not tainted 5.5.3-050503-generic #202002110832
> > [518486.984216] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> > disables this message.
> > [518486.984233] btrfs-cleaner   D    0  1034      2 0x80004000
> > [518486.984243] Call Trace:
> > [518486.984271]  __schedule+0x2d8/0x760
> > [518486.984284]  schedule+0x55/0xc0
> > [518486.984295]  schedule_preempt_disabled+0xe/0x10
> > [518486.984305]  __mutex_lock.isra.0+0x182/0x4f0
> > [518486.984319]  __mutex_lock_slowpath+0x13/0x20
> > [518486.984326]  mutex_lock+0x2e/0x40
> > [518486.984451]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
> > [518486.984464]  ? __wake_up+0x13/0x20
> > [518486.984562]  cleaner_kthread+0x124/0x130 [btrfs]
> > [518486.984573]  kthread+0x104/0x140
> > [518486.984666]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
> > [518486.984675]  ? kthread_park+0x90/0x90
> > [518486.984686]  ret_from_fork+0x35/0x40
> >
> > and
> >
> > [518728.646379] INFO: task btrfs-cleaner:1034 blocked for more than 120 seconds.
> > [518728.646413]       Not tainted 5.5.3-050503-generic #202002110832
> > [518728.646428] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> > disables this message.
> > [518728.646447] btrfs-cleaner   D    0  1034      2 0x80004000
> > [518728.646460] Call Trace:
> > [518728.646494]  __schedule+0x2d8/0x760
> > [518728.646508]  schedule+0x55/0xc0
> > [518728.646522]  schedule_preempt_disabled+0xe/0x10
> > [518728.646534]  __mutex_lock.isra.0+0x182/0x4f0
> > [518728.646550]  __mutex_lock_slowpath+0x13/0x20
> > [518728.646559]  mutex_lock+0x2e/0x40
> > [518728.646719]  btrfs_delete_unused_bgs+0xc0/0x560 [btrfs]
> > [518728.646735]  ? __wake_up+0x13/0x20
> > [518728.646859]  cleaner_kthread+0x124/0x130 [btrfs]
> > [518728.646875]  kthread+0x104/0x140
> > [518728.647019]  ? kzalloc.constprop.0+0x40/0x40 [btrfs]
> > [518728.647031]  ? kthread_park+0x90/0x90
> > [518728.647045]  ret_from_fork+0x35/0x40
> >
> > Is it a kind of normal?
> > Thanks, bye
>
>
> provide the output of echo w > /proc/sysrq-trigger
>
> I suspect there were 3 times that there was lock contention on
> delete_unused_bgs_mutex due to balance. Unless it persists it's fine.


end of thread, other threads:[~2020-02-18  8:43 UTC | newest]

Thread overview: 13+ messages
2020-02-17 13:43 btrfs: convert metadata from raid5 to raid1 Menion
2020-02-17 13:49 ` Swâmi Petaramesh
2020-02-17 13:50   ` Menion
2020-02-17 13:51     ` Menion
2020-02-17 13:55       ` Hugo Mills
2020-02-17 13:54     ` Hugo Mills
2020-02-17 13:55     ` Swâmi Petaramesh
2020-02-17 14:12       ` Menion
2020-02-17 14:17         ` Hugo Mills
2020-02-17 18:05           ` Graham Cobb
2020-02-18  8:34         ` Menion
2020-02-18  8:41           ` Nikolay Borisov
2020-02-18  8:43             ` Menion
