* send/receive locking
@ 2014-03-08 21:53           ` Hugo Mills
  2014-03-08 21:55             ` Josef Bacik
  2014-03-14  2:19             ` Marc MERLIN
  0 siblings, 2 replies; 19+ messages in thread
From: Hugo Mills @ 2014-03-08 21:53 UTC (permalink / raw)
  To: Btrfs mailing list

[-- Attachment #1: Type: text/plain, Size: 1056 bytes --]

   Is there anything that can be done about the issues of btrfs send
blocking? I've been writing a backup script (slowly), and several
times I've managed to hit a situation where large chunks of the
machine grind to a complete halt in D state because the backup script
has jammed up.

   Now, I'm aware that you can't send and receive to the same
filesystem at the same time, and that's a restriction I can live with.
However, having things that aren't related to the backup process
suddenly stop working because the backup script is trying to log its
progress to the same FS it's backing up is... umm... somewhat vexing,
to say the least.
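   A sketch of a workaround for the logging half of the problem,
assuming the script controls its own log path: write progress to a
filesystem that is NOT the one being sent (e.g. a tmpfs), so a stalled
btrfs transaction cannot wedge the logging step. All paths and names
below are illustrative, not from this thread:

```shell
# Keep the backup script's progress log off the filesystem being sent.
# LOGDIR would be a tmpfs path such as /run/backup-logs in practice;
# here it defaults to a temp dir so the sketch is self-contained.
LOGDIR="${LOGDIR:-$(mktemp -d)}"
mkdir -p "$LOGDIR"
LOG="$LOGDIR/backup-$(date +%Y%m%d).log"

log() { echo "$(date -Is) $*" >> "$LOG"; }

log "send started"
# btrfs send /srv/.snapshots/today | ssh backuphost btrfs receive /backup
log "send finished"
```

This doesn't fix the underlying locking, but it keeps the script's own
bookkeeping from joining the pile-up in D state.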

   Is this a truly fundamental property of send/receive, or is there
likely to be a simple(ish) solution?

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Well, you don't get to be a kernel hacker simply by looking ---   
                    good in Speedos. -- Rusty Russell                    

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: send/receive locking
  2014-03-08 21:53           ` send/receive locking Hugo Mills
@ 2014-03-08 21:55             ` Josef Bacik
  2014-03-08 22:00               ` Hugo Mills
  2014-03-14  2:19             ` Marc MERLIN
  1 sibling, 1 reply; 19+ messages in thread
From: Josef Bacik @ 2014-03-08 21:55 UTC (permalink / raw)
  To: Hugo Mills; +Cc: Btrfs mailing list

Hey Hugo, will you try the danger branch on btrfs-next? Wang changed the locking a bit.  Thanks,

Josef

Hugo Mills <hugo@carfax.org.uk> wrote:


   Is there anything that can be done about the issues of btrfs send
blocking? I've been writing a backup script (slowly), and several
times I've managed to hit a situation where large chunks of the
machine grind to a complete halt in D state because the backup script
has jammed up.

   Now, I'm aware that you can't send and receive to the same
filesystem at the same time, and that's a restriction I can live with.
However, having things that aren't related to the backup process
suddenly stop working because the backup script is trying to log its
progress to the same FS it's backing up is... umm... somewhat vexing,
to say the least.

   Is this a truly fundamental property of send/receive, or is there
likely to be a simple(ish) solution?

   Hugo.

--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Well, you don't get to be a kernel hacker simply by looking ---
                    good in Speedos. -- Rusty Russell


* Re: send/receive locking
  2014-03-08 21:55             ` Josef Bacik
@ 2014-03-08 22:00               ` Hugo Mills
  2014-03-08 22:02                 ` Josef Bacik
  0 siblings, 1 reply; 19+ messages in thread
From: Hugo Mills @ 2014-03-08 22:00 UTC (permalink / raw)
  To: Josef Bacik; +Cc: Btrfs mailing list

[-- Attachment #1: Type: text/plain, Size: 1470 bytes --]

On Sat, Mar 08, 2014 at 09:55:50PM +0000, Josef Bacik wrote:
> Hey Hugo will you try the danger branch on btrfs-next, Wang changed the locking a bit.  Thanks,

   Sure. I'll build a kernel tonight and report tomorrow. I'm not sure
how repeatable the problem is, though. I'll see if I can quantify
that, too.

   Hugo.

> Josef
> 
> Hugo Mills <hugo@carfax.org.uk> wrote:
> 
> 
>    Is there anything that can be done about the issues of btrfs send
> blocking? I've been writing a backup script (slowly), and several
> times I've managed to hit a situation where large chunks of the
> machine grind to a complete halt in D state because the backup script
> has jammed up.
> 
>    Now, I'm aware that you can't send and receive to the same
> filesystem at the same time, and that's a restriction I can live with.
> However, having things that aren't related to the backup process
> suddenly stop working because the backup script is trying to log its
> progress to the same FS it's backing up is... umm... somewhat vexing,
> to say the least.
> 
>    Is this a truly fundamental property of send/receive, or is there
> likely to be a simple(ish) solution?
> 
>    Hugo.
> 

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Well, you don't get to be a kernel hacker simply by looking ---   
                    good in Speedos. -- Rusty Russell                    

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]


* Re: send/receive locking
  2014-03-08 22:00               ` Hugo Mills
@ 2014-03-08 22:02                 ` Josef Bacik
  2014-03-08 22:16                   ` Hugo Mills
  0 siblings, 1 reply; 19+ messages in thread
From: Josef Bacik @ 2014-03-08 22:02 UTC (permalink / raw)
  To: Hugo Mills; +Cc: Btrfs mailing list

Don't do it on a fs you care about: CoW is broken in that branch, so it will corrupt your fs.  So actually, just find Wang's patch that sets skip locking for send and apply that.  Thanks,

Josef

Hugo Mills <hugo@carfax.org.uk> wrote:


On Sat, Mar 08, 2014 at 09:55:50PM +0000, Josef Bacik wrote:
> Hey Hugo will you try the danger branch on btrfs-next, Wang changed the locking a bit.  Thanks,

   Sure. I'll build a kernel tonight and report tomorrow. I'm not sure
how repeatable the problem is, though. I'll see if I can quantify
that, too.

   Hugo.

> Josef
>
> Hugo Mills <hugo@carfax.org.uk> wrote:
>
>
>    Is there anything that can be done about the issues of btrfs send
> blocking? I've been writing a backup script (slowly), and several
> times I've managed to hit a situation where large chunks of the
> machine grind to a complete halt in D state because the backup script
> has jammed up.
>
>    Now, I'm aware that you can't send and receive to the same
> filesystem at the same time, and that's a restriction I can live with.
> However, having things that aren't related to the backup process
> suddenly stop working because the backup script is trying to log its
> progress to the same FS it's backing up is... umm... somewhat vexing,
> to say the least.
>
>    Is this a truly fundamental property of send/receive, or is there
> likely to be a simple(ish) solution?
>
>    Hugo.
>

--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Well, you don't get to be a kernel hacker simply by looking ---
                    good in Speedos. -- Rusty Russell


* Re: send/receive locking
  2014-03-08 22:02                 ` Josef Bacik
@ 2014-03-08 22:16                   ` Hugo Mills
  2014-03-09 16:43                     ` Hugo Mills
  0 siblings, 1 reply; 19+ messages in thread
From: Hugo Mills @ 2014-03-08 22:16 UTC (permalink / raw)
  To: Josef Bacik; +Cc: Btrfs mailing list

[-- Attachment #1: Type: text/plain, Size: 2019 bytes --]

On Sat, Mar 08, 2014 at 10:02:39PM +0000, Josef Bacik wrote:
> Don't do it on a fs you care about, cow is broken with that branch so it will corrupt your fs.  So actually just find Wang's patch that sets skip locking for send and do that.  Thanks,

   Aah, that's good to know. It's not an FS I care about massively,
but it'd be a pain to recreate all the setup. I think I'll go
cherry-picking tomorrow morning. :)
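   For reference, the cherry-pick mechanics look like this. The branch
name and commit subject below mirror the thread, but the repo, file,
and identity are made up so the example is a self-contained toy rather
than the real btrfs-next tree:

```shell
# Toy repo demonstrating: find a commit by its subject, then cherry-pick
# it onto another branch (what one would do with Wang's patch for real).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
GIT="git -c user.email=you@example.com -c user.name=you"
$GIT commit -q --allow-empty -m "base"
git checkout -q -b btrfs-next
echo "skip locking" > send.c
git add send.c
$GIT commit -q -m "Btrfs: skip locking when searching commit root"
# Locate the commit by subject, as one would on the real btrfs-next:
sha=$(git log -n1 --format=%H --grep="skip locking when searching commit root")
git checkout -q main
$GIT cherry-pick "$sha" >/dev/null
```

On the real tree the `--grep` search against btrfs-next is the useful
bit; the rest is just scaffolding for the demonstration.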

   Hugo.

> Josef
> 
> Hugo Mills <hugo@carfax.org.uk> wrote:
> 
> 
> On Sat, Mar 08, 2014 at 09:55:50PM +0000, Josef Bacik wrote:
> > Hey Hugo will you try the danger branch on btrfs-next, Wang changed the locking a bit.  Thanks,
> 
>    Sure. I'll build a kernel tonight and report tomorrow. I'm not sure
> how repeatable the problem is, though. I'll see if I can quantify
> that, too.
> 
>    Hugo.
> 
> > Josef
> >
> > Hugo Mills <hugo@carfax.org.uk> wrote:
> >
> >
> >    Is there anything that can be done about the issues of btrfs send
> > blocking? I've been writing a backup script (slowly), and several
> > times I've managed to hit a situation where large chunks of the
> > machine grind to a complete halt in D state because the backup script
> > has jammed up.
> >
> >    Now, I'm aware that you can't send and receive to the same
> > filesystem at the same time, and that's a restriction I can live with.
> > However, having things that aren't related to the backup process
> > suddenly stop working because the backup script is trying to log its
> > progress to the same FS it's backing up is... umm... somewhat vexing,
> > to say the least.
> >
> >    Is this a truly fundamental property of send/receive, or is there
> > likely to be a simple(ish) solution?
> >
> >    Hugo.
> >
> 

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Comic Sans goes into a bar,  and the barman says, "We don't ---   
                         serve your type here."                          

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]


* Re: send/receive locking
  2014-03-08 22:16                   ` Hugo Mills
@ 2014-03-09 16:43                     ` Hugo Mills
  2014-03-10 22:28                       ` Hugo Mills
  0 siblings, 1 reply; 19+ messages in thread
From: Hugo Mills @ 2014-03-09 16:43 UTC (permalink / raw)
  To: Josef Bacik, Btrfs mailing list

[-- Attachment #1: Type: text/plain, Size: 77750 bytes --]

On Sat, Mar 08, 2014 at 10:16:52PM +0000, Hugo Mills wrote:
> On Sat, Mar 08, 2014 at 10:02:39PM +0000, Josef Bacik wrote:
> > Don't do it on a fs you care about, cow is broken with that branch so it will corrupt your fs.  So actually just find Wang's patch that sets skip locking for send and do that.  Thanks,
> 
>    Aah, that's good to know. It's not an FS I care about massively
> much, but it'd be a pain to recreate all the setup. I think I'll go
> cherry picking tomorrow morning. :)

   OK, I'm running with btrfs-next, plus a cherry picked

    Btrfs: skip locking when searching commit root

from Wang on top of that. It's just locked up again in exactly the
same way. Twice. The first time I forgot to capture a SysRq-w before I
killed the script. The second time, I got the SysRq-w, which is
appended below. I've also got a couple of "task blocked for more than
120 seconds" traces in the syslog from the earlier run, if those are
helpful.

   Hugo.
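   (For anyone reproducing this: a dump like the one below comes from
the magic SysRq interface. These are the standard Linux control files,
not anything specific to this setup, and the sketch is guarded so it is
a no-op without root:)

```shell
# Capture a SysRq-w blocked-task dump (root required).
if [ -w /proc/sysrq-trigger ]; then
    echo 1 > /proc/sys/kernel/sysrq    # enable all SysRq functions
    echo w > /proc/sysrq-trigger       # dump tasks stuck in D state
    dmesg | tail -n 100                # the dump lands in the ring buffer
else
    echo "not root: skipping SysRq trigger" >&2
fi
```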

Mar  9 16:34:58 s_src@amelia kernel: SysRq : Show Blocked State
Mar  9 16:34:58 s_src@amelia kernel:   task                        PC stack   pid father
Mar  9 16:34:58 s_src@amelia kernel: btrfs-transacti D ffff88003dc92640     0   224      2 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003ac46d00 0000000000000046 0000000000012640 ffff88003ac46d00
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003ad75fd8 ffff88003c093d50 ffff88003b19d000 ffff880008b11f00
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003a4dc6e8 ffff88003a4dc690 0000000000000000 0000000000000000
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81158404>] ? btrfs_commit_transaction+0x306/0x808
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81154c82>] ? transaction_kthread+0xd5/0x185
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81154bad>] ? btrfs_cleanup_transaction+0x3e3/0x3e3
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104cede>] ? kthread+0x9e/0xa6
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81050000>] ? blocking_notifier_chain_cond_register+0x13/0x40
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6ebc>] ? ret_from_fork+0x7c/0xb0
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
Mar  9 16:34:58 s_src@amelia kernel: syslog-ng       D ffff88003dc12640     0  1995   1994 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003b2fdf60 0000000000000086 0000000000012640 ffff88003b2fdf60
Mar  9 16:34:58 s_src@amelia kernel:  ffff880038d1dfd8 ffffffff81811430 ffff88003a4dc690 ffff88003b3441e8
Mar  9 16:34:58 s_src@amelia kernel:  0000000000000000 0000000000000000 ffff88003b344000 ffff88002969e460
Mar  9 16:34:53 s_src@amelia sudo: Libgcrypt warning: missing initialization - please fix the application
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81158bf2>] ? start_transaction+0x2ec/0x4c5
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8114e66a>] ? btrfs_lookup_xattr+0x68/0x97
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8115cf27>] ? btrfs_dirty_inode+0x25/0xa1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81160a4f>] ? btrfs_setattr+0x230/0x267
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810f39b8>] ? notify_change+0x20c/0x2ef
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810dec32>] ? chown_common.isra.16+0xc9/0x12e
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810e26a4>] ? __sb_start_write+0x8e/0xbb
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810df6ef>] ? SyS_fchown+0x47/0x6d
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
Mar  9 16:34:58 s_src@amelia kernel: mosquitto       D ffff88003dc92640     0  2093      1 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003a882210 0000000000000082 0000000000012640 ffff88003a882210
Mar  9 16:34:58 s_src@amelia kernel:  ffff8800380cbfd8 ffff88003c093d50 ffff88003a4dc690 ffff88003b3441e8
Mar  9 16:34:58 s_src@amelia kernel:  0000000000000000 0000000000000000 ffff88003b344000 ffff880008b111e0
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81158d1a>] ? start_transaction+0x414/0x4c5
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8113abef>] ? btrfs_release_path+0x38/0x53
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81162897>] ? btrfs_create+0x35/0x1c2
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810e986e>] ? vfs_create+0x46/0x6c
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ea425>] ? do_last.isra.58+0x544/0x95e
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810eaa5b>] ? path_openat+0x21c/0x488
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ebbdc>] ? do_filp_open+0x35/0x7a
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810f4555>] ? __alloc_fd+0x56/0xda
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810df880>] ? do_sys_open+0x65/0xe9
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
Mar  9 16:34:58 s_src@amelia kernel: krb5kdc         D ffff88003dc92640     0  2195      1 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003b4d8da0 0000000000000086 0000000000012640 ffff88003b4d8da0
Mar  9 16:34:58 s_src@amelia kernel:  ffff880038215fd8 ffff88003c093d50 ffff88003a4dc690 ffff88003b3441e8
Mar  9 16:34:58 s_src@amelia kernel:  0000000000000000 0000000000000000 ffff88003b344000 ffff880008b11320
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81158bf2>] ? start_transaction+0x2ec/0x4c5
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff811159fc>] ? locks_alloc_lock+0x4e/0x54
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8115cf27>] ? btrfs_dirty_inode+0x25/0xa1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81160a4f>] ? btrfs_setattr+0x230/0x267
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810f39b8>] ? notify_change+0x20c/0x2ef
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff811018fe>] ? utimes_common+0x119/0x174
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81101a3e>] ? do_utimes+0xe5/0x11c
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81101ad9>] ? SyS_utime+0x64/0x66
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
Mar  9 16:34:58 s_src@amelia kernel: postgres        D ffff88003dc92640     0  2250   2245 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003b2fb680 0000000000000082 0000000000012640 ffff88003b2fb680
Mar  9 16:34:58 s_src@amelia kernel:  ffff880034533fd8 ffff88003c093d50 ffff88003dc92640 ffff88003b2fb680
Mar  9 16:34:58 s_src@amelia kernel:  ffff880034533c40 0000000000000002 0000000000000000 ffffffff810acdc1
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810acdc1>] ? wait_on_page_read+0x32/0x32
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a46ba>] ? io_schedule+0x54/0x69
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810acdc6>] ? sleep_on_page+0x5/0x8
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a496c>] ? __wait_on_bit_lock+0x3c/0x7f
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ace6b>] ? __lock_page+0x64/0x66
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ec06>] ? autoremove_wake_function+0x2a/0x2a
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ad615>] ? lock_page+0x9/0x18
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ad668>] ? find_lock_page+0x29/0x49
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ad9c4>] ? find_or_create_page+0x28/0x85
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81165652>] ? prepare_pages.isra.18+0x7d/0x120
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81165f16>] ? __btrfs_buffered_write+0x1ef/0x43c
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8116651a>] ? btrfs_file_aio_write+0x3b7/0x47b
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810c573e>] ? tlb_flush_mmu+0x4e/0x64
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810dfb6d>] ? do_sync_write+0x56/0x76
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810e0157>] ? vfs_write+0x9f/0x102
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810e0868>] ? SyS_write+0x41/0x74
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
Mar  9 16:34:58 s_src@amelia kernel: postgres        D ffff88003dc92640     0  2251   2245 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003b2ff3d0 0000000000000086 0000000000012640 ffff88003b2ff3d0
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003455bfd8 ffff88003c093d50 ffff88003a4dc690 ffff88003b3441e8
Mar  9 16:34:58 s_src@amelia kernel:  0000000000000000 0000000000000000 ffff88003b344000 ffff880008b11aa0
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81158d1a>] ? start_transaction+0x414/0x4c5
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81162897>] ? btrfs_create+0x35/0x1c2
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810e986e>] ? vfs_create+0x46/0x6c
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ea425>] ? do_last.isra.58+0x544/0x95e
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810eaa5b>] ? path_openat+0x21c/0x488
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810e67b2>] ? pipe_read+0x31e/0x383
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ebbdc>] ? do_filp_open+0x35/0x7a
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810f4555>] ? __alloc_fd+0x56/0xda
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810df880>] ? do_sys_open+0x65/0xe9
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
Mar  9 16:34:58 s_src@amelia kernel: btrfs-endio-wri D ffff88003dc92640     0  5436      2 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff8800381906d0 0000000000000046 0000000000012640 ffff8800381906d0
Mar  9 16:34:58 s_src@amelia kernel:  ffff88002dd81fd8 ffff88003c093d50 ffff88003a4dc690 ffff88003b3441e8
Mar  9 16:34:58 s_src@amelia kernel:  0000000000000000 0000000000000000 ffff88003b344000 ffff880008b11dc0
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81158bf2>] ? start_transaction+0x2ec/0x4c5
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8115f0a1>] ? btrfs_finish_ordered_io+0x19b/0x3c2
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8103d700>] ? ftrace_raw_output_itimer_state+0x3d/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8117ba9d>] ? worker_loop+0x149/0x4a7
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a4338>] ? __schedule+0x352/0x4f0
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8117b954>] ? btrfs_queue_worker+0x269/0x269
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104cede>] ? kthread+0x9e/0xa6
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6ebc>] ? ret_from_fork+0x7c/0xb0
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
Mar  9 16:34:58 s_src@amelia kernel: carfax-backup   D ffff8800223cc420     0  5486   4964 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff8800223cc420 0000000000000082 0000000000012640 ffff8800223cc420
Mar  9 16:34:58 s_src@amelia kernel:  ffff880020dfffd8 ffff880031cdb680 ffff88003a6acbc8 ffff8800273d2c10
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003a6acbe4 000000000002e000 ffff880020dffc90 000000000002efff
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8116e069>] ? lock_extent_bits+0x108/0x180
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81164bc4>] ? lock_and_cleanup_extent_if_need+0x66/0x191
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81165f44>] ? __btrfs_buffered_write+0x21d/0x43c
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8116651a>] ? btrfs_file_aio_write+0x3b7/0x47b
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810eb952>] ? user_path_at_empty+0x60/0x87
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8108af43>] ? from_kgid_munged+0x9/0x14
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810dfb6d>] ? do_sync_write+0x56/0x76
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810e0157>] ? vfs_write+0x9f/0x102
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810e0868>] ? SyS_write+0x41/0x74
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
Mar  9 16:34:58 s_src@amelia kernel: kworker/u8:1    D ffff88003dc12640     0  6881      2 0x00000000
Mar  9 16:34:58 s_src@amelia kernel: Workqueue: writeback bdi_writeback_workfn (flush-btrfs-1)
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003b9cdf60 0000000000000046 0000000000012640 ffff88003b9cdf60
Mar  9 16:34:58 s_src@amelia kernel:  ffff880031909fd8 ffffffff81811430 ffff88003a4dc690 ffff88003b3441e8
Mar  9 16:34:58 s_src@amelia kernel:  0000000000000000 0000000000000000 ffff88003b344000 ffff88002969edc0
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81158bf2>] ? start_transaction+0x2ec/0x4c5
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8115d361>] ? cow_file_range_inline+0xe9/0x281
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810dbf07>] ? kmem_cache_free+0x32/0xb9
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8116d73e>] ? __set_extent_bit+0x3c3/0x3ff
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8115d5c7>] ? cow_file_range+0xce/0x37d
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8115dfe8>] ? run_delalloc_range+0x9a/0x2c7
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8116cfd3>] ? free_extent_state+0x12/0x21
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8116ffb3>] ? __extent_writepage+0x1cc/0x5f4
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81055320>] ? check_preempt_curr+0x27/0x62
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105536a>] ? ttwu_do_wakeup+0xf/0xb0
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810ad5bf>] ? find_get_pages_tag+0xe3/0x11c
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8117059f>] ? extent_write_cache_pages.isra.24.constprop.44+0x1c4/0x25a
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105b724>] ? update_group_power+0xb9/0x1f1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff811ea9c3>] ? cpumask_next_and+0x1a/0x36
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105b94c>] ? find_busiest_group+0xf0/0x4d2
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff811708c8>] ? extent_writepages+0x49/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8115bcdf>] ? btrfs_submit_direct+0x3f5/0x3f5
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810fd953>] ? __writeback_single_inode+0x4b/0x1c2
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810fe1b9>] ? writeback_sb_inodes+0x1d3/0x314
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810fe365>] ? __writeback_inodes_wb+0x6b/0xa1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810fe4ac>] ? wb_writeback+0x111/0x273
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810f191e>] ? get_nr_inodes_unused+0x24/0x4b
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff810fea5b>] ? bdi_writeback_workfn+0x16a/0x316
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81048729>] ? process_one_work+0x179/0x28c
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81048be7>] ? worker_thread+0x139/0x1de
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81048aae>] ? rescuer_thread+0x24f/0x24f
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104cede>] ? kthread+0x9e/0xa6
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81050000>] ? blocking_notifier_chain_cond_register+0x13/0x40
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6ebc>] ? ret_from_fork+0x7c/0xb0
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
Mar  9 16:34:58 s_src@amelia kernel: sudo            D ffff88003dc12640     0  6899   5354 0x00000000
Mar  9 16:34:58 s_src@amelia kernel:  ffff88003b9cd1c0 0000000000000086 0000000000012640 ffff88003b9cd1c0
Mar  9 16:34:58 s_src@amelia kernel:  ffff88001d7e3fd8 ffffffff81811430 ffff88003a4dc690 ffff88003b3441e8
Mar  9 16:34:58 s_src@amelia kernel:  0000000000000000 0000000000000000 ffff88003b344000 ffff88002969ebe0
Mar  9 16:34:58 s_src@amelia kernel: Call Trace:
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff81158d1a>] ? start_transaction+0x414/0x4c5
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff811652f7>] ? btrfs_sync_file+0x16a/0x250
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff811015b8>] ? do_fsync+0x2b/0x50
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff811017cd>] ? SyS_fdatasync+0xa/0xd
Mar  9 16:34:58 s_src@amelia kernel:  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
Mar  9 16:34:58 s_src@amelia kernel: Sched Debug Version: v0.11, 3.13.0-00189-g56a5aaf-dirty #6
Mar  9 16:34:58 s_src@amelia kernel: ktime                                   : 5633983.834345
Mar  9 16:34:58 s_src@amelia kernel: sched_clk                               : 5634341.491684
Mar  9 16:34:58 s_src@amelia kernel: cpu_clk                                 : 5634341.491817
Mar  9 16:34:58 s_src@amelia kernel: jiffies                                 : 4295500694
Mar  9 16:34:58 s_src@amelia kernel: sched_clock_stable                      : 1
Mar  9 16:34:58 s_src@amelia kernel: 
Mar  9 16:34:58 s_src@amelia kernel: sysctl_sched
Mar  9 16:34:58 s_src@amelia kernel:   .sysctl_sched_latency                    : 12.000000
Mar  9 16:34:58 s_src@amelia kernel:   .sysctl_sched_min_granularity            : 1.500000
Mar  9 16:34:58 s_src@amelia kernel:   .sysctl_sched_wakeup_granularity         : 2.000000
Mar  9 16:34:58 s_src@amelia kernel:   .sysctl_sched_child_runs_first           : 0
Mar  9 16:34:58 s_src@amelia kernel:   .sysctl_sched_features                   : 11899
Mar  9 16:34:58 s_src@amelia kernel:   .sysctl_sched_tunable_scaling            : 1 (logaritmic)
Mar  9 16:34:58 s_src@amelia kernel: 
Mar  9 16:34:58 s_src@amelia kernel: cpu#0, 1297.893 MHz
Mar  9 16:34:58 s_src@amelia kernel:   .nr_running                    : 2
Mar  9 16:34:58 s_src@amelia kernel:   .load                          : 2048
Mar  9 16:34:58 s_src@amelia kernel:   .nr_switches                   : 4289270
Mar  9 16:34:58 s_src@amelia kernel:   .nr_load_updates               : 307219
Mar  9 16:34:58 s_src@amelia kernel:   .nr_uninterruptible            : 12512
Mar  9 16:34:58 s_src@amelia kernel:   .next_balance                  : 4295.500693
Mar  9 16:34:58 s_src@amelia kernel:   .curr->pid                     : 6909
Mar  9 16:34:58 s_src@amelia kernel:   .clock                         : 5634339.618360
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[0]                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[1]                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[2]                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[3]                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[4]                   : 0
Mar  9 16:34:58 s_src@amelia kernel: cfs_rq[0]:/autogroup-55
Mar  9 16:34:58 s_src@amelia kernel:   .exec_clock                    : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .MIN_vruntime                  : 0.000001
Mar  9 16:34:58 s_src@amelia kernel:   .min_vruntime                  : 715.021351
Mar  9 16:34:58 s_src@amelia kernel:   .max_vruntime                  : 0.000001
Mar  9 16:34:58 s_src@amelia kernel:   .spread                        : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .spread0                       : -379893.663525
Mar  9 16:34:58 s_src@amelia kernel:   .nr_spread_over                : 0
Mar  9 16:34:58 s_src@amelia kernel:   .nr_running                    : 1
Mar  9 16:34:58 s_src@amelia kernel:   .load                          : 1024
Mar  9 16:34:58 s_src@amelia kernel:   .runnable_load_avg             : 0
Mar  9 16:34:58 s_src@amelia kernel:   .blocked_load_avg              : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_contrib               : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_runnable_contrib           : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_avg                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg->runnable_avg              : 0
Mar  9 16:34:58 s_src@amelia kernel:   .se->exec_start                : 5634339.618360
Mar  9 16:34:58 s_src@amelia kernel:   .se->vruntime                  : 380604.574667
Mar  9 16:34:58 s_src@amelia kernel:   .se->sum_exec_runtime          : 448.443805
Mar  9 16:34:58 s_src@amelia kernel:   .se->load.weight               : 1024
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.runnable_avg_sum      : 0
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.runnable_avg_period   : 46818
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.load_avg_contrib      : 0
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.decay_count           : 0
Mar  9 16:34:58 s_src@amelia kernel: 
Mar  9 16:34:58 s_src@amelia kernel: cfs_rq[0]:/
Mar  9 16:34:58 s_src@amelia kernel:   .exec_clock                    : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .MIN_vruntime                  : 380603.194337
Mar  9 16:34:58 s_src@amelia kernel:   .min_vruntime                  : 380608.684876
Mar  9 16:34:58 s_src@amelia kernel:   .max_vruntime                  : 380603.194337
Mar  9 16:34:58 s_src@amelia kernel:   .spread                        : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .spread0                       : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .nr_spread_over                : 0
Mar  9 16:34:58 s_src@amelia kernel:   .nr_running                    : 2
Mar  9 16:34:58 s_src@amelia kernel:   .load                          : 2048
Mar  9 16:34:58 s_src@amelia kernel:   .runnable_load_avg             : 0
Mar  9 16:34:58 s_src@amelia kernel:   .blocked_load_avg              : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_contrib               : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_runnable_contrib           : 2
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_avg                   : 4
Mar  9 16:34:58 s_src@amelia kernel:   .tg->runnable_avg              : 6
Mar  9 16:34:58 s_src@amelia kernel:   .avg->runnable_avg_sum         : 97
Mar  9 16:34:58 s_src@amelia kernel:   .avg->runnable_avg_period      : 46783
Mar  9 16:34:58 s_src@amelia kernel: 
Mar  9 16:34:58 s_src@amelia kernel: rt_rq[0]:
Mar  9 16:34:58 s_src@amelia kernel:   .rt_nr_running                 : 0
Mar  9 16:34:58 s_src@amelia kernel:   .rt_throttled                  : 0
Mar  9 16:34:58 s_src@amelia kernel:   .rt_time                       : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .rt_runtime                    : 950.000000
Mar  9 16:34:58 s_src@amelia kernel: 
Mar  9 16:34:58 s_src@amelia kernel: runnable tasks:
Mar  9 16:34:58 s_src@amelia kernel:             task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
Mar  9 16:34:58 s_src@amelia kernel: ----------------------------------------------------------------------------------------------------------
Mar  9 16:34:58 s_src@amelia kernel:         kthreadd     2    380039.827036       280   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      ksoftirqd/0     3    380602.699747     63098   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:     kworker/0:0H     5       256.529404         3   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:           rcu_bh     8     20085.382686        12   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      migration/0     9         0.000000      1523     0               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:        kdevtmpfs    15     87065.789372       157   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:            netns    16       262.524108         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:        writeback    17       262.529217         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:       devfreq_wq    23       791.136003         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:       khungtaskd    27    380120.831469        48   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:        scsi_eh_0    48       903.719125        17   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:        scsi_eh_1    49       897.791182        18   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:        scsi_eh_2    50       897.785919        18   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:        scsi_eh_3    51       903.748406        20   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:           bioset    60       925.381221         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:   btrfs-worker-1   206    380122.513072       956   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-genwork-1   207    380171.326455       112   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    btrfs-fixup-1   210    380123.092078        78   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    btrfs-cache-1   219    380123.114290        81   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    btrfs-cleaner   223    373144.934019       186   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      edac-poller   489      2677.694949         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:          rpcbind  1690         4.318723       290   120               0               0               0.000000               0.000000               0.000000 /autogroup-5
Mar  9 16:34:58 s_src@amelia kernel:        rpc.statd  1718        13.242367         9   120               0               0               0.000000               0.000000               0.000000 /autogroup-6
Mar  9 16:34:58 s_src@amelia kernel:           rpciod  1723      6264.804884         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:         rpc.gssd  1739         0.089455         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-8
Mar  9 16:34:58 s_src@amelia kernel:        syslog-ng  1995       134.910323       443   120               0               0               0.000000               0.000000               0.000000 /autogroup-12
Mar  9 16:34:58 s_src@amelia kernel:            named  2076         3.006953         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
Mar  9 16:34:58 s_src@amelia kernel:            named  2080       209.940888      1234   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
Mar  9 16:34:58 s_src@amelia kernel:            named  2081       202.586909       641   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
Mar  9 16:34:58 s_src@amelia kernel:            named  2082       204.847556       960   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
Mar  9 16:34:58 s_src@amelia kernel:            inetd  2155         0.911649         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-17
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2310       126.886684         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2311       126.867561         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2312       132.886257         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2313       132.875815         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2314       132.875693         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2315       132.894180         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2317       132.949154         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2319       132.997783         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2321       133.049228         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2323       133.093223         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2325       133.143419         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2327       133.191952         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2329       133.234689         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2331       133.283717         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2333       133.335861         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2335       133.375538         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2342       134.337740         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2344       133.875717         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2346       133.764188         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2348       133.741640         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2350       133.728917         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2352       133.531329         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2353       133.535273         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2307       122.849120        73   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2330       128.779507         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2343       133.840685         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2345       133.723956         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2347       133.744236         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2349       133.736222         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2351       133.534242         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2354       133.529426         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2356       133.529394         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2357       133.529660         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2358       133.529361         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2359       133.530160         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2360       133.533665         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:             cron  2422        99.765441       101   120               0               0               0.000000               0.000000               0.000000 /autogroup-33
Mar  9 16:34:58 s_src@amelia kernel:          kadmind  2475         2.415995        99   120               0               0               0.000000               0.000000               0.000000 /autogroup-35
Mar  9 16:34:58 s_src@amelia kernel:     avahi-daemon  2478        46.816562       567   120               0               0               0.000000               0.000000               0.000000 /autogroup-34
Mar  9 16:34:58 s_src@amelia kernel:     avahi-daemon  2479         0.966223         4   120               0               0               0.000000               0.000000               0.000000 /autogroup-34
Mar  9 16:34:58 s_src@amelia kernel:             sshd  2485        27.546302        12   120               0               0               0.000000               0.000000               0.000000 /autogroup-36
Mar  9 16:34:58 s_src@amelia kernel:            nfsd4  2585      9747.469163         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:            lockd  2589      9753.498182         2   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             nfsd  2596    224565.435280         4   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             nfsd  2598    224565.416805         4   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             nfsd  2599    224565.415154         4   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             nfsd  2600    224565.423609         4   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             nfsd  2601    224565.417033         4   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    apt-cacher-ng  2660         0.694312         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-38
Mar  9 16:34:58 s_src@amelia kernel:       rpc.mountd  2704         0.548005         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-40
Mar  9 16:34:58 s_src@amelia kernel:            getty  3054         0.343556       126   120               0               0               0.000000               0.000000               0.000000 /autogroup-45
Mar  9 16:34:58 s_src@amelia kernel:            getty  3056         0.736470        63   120               0               0               0.000000               0.000000               0.000000 /autogroup-47
Mar  9 16:34:58 s_src@amelia kernel:             sshd  4854        35.700992        30   120               0               0               0.000000               0.000000               0.000000 /autogroup-52
Mar  9 16:34:58 s_src@amelia kernel:             sshd  4862        48.777787       343   120               0               0               0.000000               0.000000               0.000000 /autogroup-52
Mar  9 16:34:58 s_src@amelia kernel:             bash  4865       428.437126       156   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:               su  4955       471.824541         9   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:             bash  4964      9899.960530       181   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:             sshd  5006        32.152152        17   120               0               0               0.000000               0.000000               0.000000 /autogroup-54
Mar  9 16:34:58 s_src@amelia kernel:             bash  5017       446.458803       154   120               0               0               0.000000               0.000000               0.000000 /autogroup-55
Mar  9 16:34:58 s_src@amelia kernel:             sshd  5203        28.615789        45   120               0               0               0.000000               0.000000               0.000000 /autogroup-63
Mar  9 16:34:58 s_src@amelia kernel:             sshd  5211       165.183329      2673   120               0               0               0.000000               0.000000               0.000000 /autogroup-63
Mar  9 16:34:58 s_src@amelia kernel:             bash  5214      3151.392304       248   120               0               0               0.000000               0.000000               0.000000 /autogroup-64
Mar  9 16:34:58 s_src@amelia kernel:             sshd  5343        46.296097        29   120               0               0               0.000000               0.000000               0.000000 /autogroup-75
Mar  9 16:34:58 s_src@amelia kernel:             bash  5354       566.590635       199   120               0               0               0.000000               0.000000               0.000000 /autogroup-76
Mar  9 16:34:58 s_src@amelia kernel:         kdmflush  5502     87075.848380         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:           bioset  5503     87079.832428         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:       kcryptd_io  5504     87083.824598         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:          kcryptd  5505     87087.816501         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:           bioset  5506     87091.804742         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-delalloc-  5516    380133.530871        44   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    btrfs-fixup-1  5517    380133.531442        43   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    btrfs-endio-1  5518    380133.578505        40   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-endio-met  5519    380154.996930       980   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      btrfs-rmw-1  5520    380133.490896        45   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-endio-rai  5521    380133.534115        45   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-endio-met  5522    380133.531342        41   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-freespace  5524    380203.193512       287   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-readahead  5527    380133.513433        49   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:          sshpass  5535     10173.527391         5   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:              ssh  5555    121266.170252    519897   120               0               0               0.000000               0.000000               0.000000 /autogroup-84
Mar  9 16:34:58 s_src@amelia kernel:     kworker/u8:2  5610    379971.139695      4824   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      kworker/0:0  5617    379940.324607     26456   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-endio-wri  6672    380179.399614       424   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:     kworker/u9:2  6703    378377.605396     90095   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             mutt  6708      8962.875344      8326   120               0               0               0.000000               0.000000               0.000000 /autogroup-64
Mar  9 16:34:58 s_src@amelia kernel:     kworker/u8:1  6881    379964.644586       785   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      kworker/0:2  6884    379915.594053       314   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      kworker/0:1  6891    380603.194337      1025   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:     kworker/u8:0  6892    380602.795752       289   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             sudo  6899       658.296984        58   120               0               0               0.000000               0.000000               0.000000 /autogroup-76
Mar  9 16:34:58 s_src@amelia kernel:               su  6900       495.999647        19   120               0               0               0.000000               0.000000               0.000000 /autogroup-55
Mar  9 16:34:58 s_src@amelia kernel: R           bash  6909       715.021351        88   120               0               0               0.000000               0.000000               0.000000 /autogroup-55
Mar  9 16:34:58 s_src@amelia kernel: 
Mar  9 16:34:58 s_src@amelia kernel: cpu#1, 1297.893 MHz
Mar  9 16:34:58 s_src@amelia kernel:   .nr_running                    : 0
Mar  9 16:34:58 s_src@amelia kernel:   .load                          : 0
Mar  9 16:34:58 s_src@amelia kernel:   .nr_switches                   : 4529939
Mar  9 16:34:58 s_src@amelia kernel:   .nr_load_updates               : 346773
Mar  9 16:34:58 s_src@amelia kernel:   .nr_uninterruptible            : -12502
Mar  9 16:34:58 s_src@amelia kernel:   .next_balance                  : 4295.500666
Mar  9 16:34:58 s_src@amelia kernel:   .curr->pid                     : 0
Mar  9 16:34:58 s_src@amelia kernel:   .clock                         : 5634339.558146
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[0]                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[1]                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[2]                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[3]                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .cpu_load[4]                   : 0
Mar  9 16:34:58 s_src@amelia kernel: 
Mar  9 16:34:58 s_src@amelia kernel: cfs_rq[1]:/autogroup-54
Mar  9 16:34:58 s_src@amelia kernel:   .exec_clock                    : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .MIN_vruntime                  : 0.000001
Mar  9 16:34:58 s_src@amelia kernel:   .min_vruntime                  : 101.697478
Mar  9 16:34:58 s_src@amelia kernel:   .max_vruntime                  : 0.000001
Mar  9 16:34:58 s_src@amelia kernel:   .spread                        : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .spread0                       : -380506.987398
Mar  9 16:34:58 s_src@amelia kernel:   .nr_spread_over                : 0
Mar  9 16:34:58 s_src@amelia kernel:   .nr_running                    : 0
Mar  9 16:34:58 s_src@amelia kernel:   .load                          : 0
Mar  9 16:34:58 s_src@amelia kernel:   .runnable_load_avg             : 0
Mar  9 16:34:58 s_src@amelia kernel:   .blocked_load_avg              : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_contrib               : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_runnable_contrib           : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_avg                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg->runnable_avg              : 0
Mar  9 16:34:58 s_src@amelia kernel:   .se->exec_start                : 5634339.534386
Mar  9 16:34:58 s_src@amelia kernel:   .se->vruntime                  : 362795.231762
Mar  9 16:34:58 s_src@amelia kernel:   .se->sum_exec_runtime          : 97.286387
Mar  9 16:34:58 s_src@amelia kernel:   .se->load.weight               : 2
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.runnable_avg_sum      : 131
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.runnable_avg_period   : 47551
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.load_avg_contrib      : 0
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.decay_count           : 5373325
Mar  9 16:34:58 s_src@amelia kernel: 
Mar  9 16:34:58 s_src@amelia kernel: cfs_rq[1]:/autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:   .exec_clock                    : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .MIN_vruntime                  : 0.000001
Mar  9 16:34:58 s_src@amelia kernel:   .min_vruntime                  : 2330.309909
Mar  9 16:34:58 s_src@amelia kernel:   .max_vruntime                  : 0.000001
Mar  9 16:34:58 s_src@amelia kernel:   .spread                        : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .spread0                       : -378278.374967
Mar  9 16:34:58 s_src@amelia kernel:   .nr_spread_over                : 0
Mar  9 16:34:58 s_src@amelia kernel:   .nr_running                    : 0
Mar  9 16:34:58 s_src@amelia kernel:   .load                          : 0
Mar  9 16:34:58 s_src@amelia kernel:   .runnable_load_avg             : 0
Mar  9 16:34:58 s_src@amelia kernel:   .blocked_load_avg              : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_contrib               : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_runnable_contrib           : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_avg                   : 0
Mar  9 16:34:58 s_src@amelia kernel:   .tg->runnable_avg              : 0
Mar  9 16:34:58 s_src@amelia kernel:   .se->exec_start                : 5634302.208649
Mar  9 16:34:58 s_src@amelia kernel:   .se->vruntime                  : 362800.837614
Mar  9 16:34:58 s_src@amelia kernel:   .se->sum_exec_runtime          : 3073.640955
Mar  9 16:34:58 s_src@amelia kernel:   .se->load.weight               : 2
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.runnable_avg_sum      : 35
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.runnable_avg_period   : 47983
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.load_avg_contrib      : 0
Mar  9 16:34:58 s_src@amelia kernel:   .se->avg.decay_count           : 5373290
Mar  9 16:34:58 s_src@amelia kernel: cfs_rq[1]:/
Mar  9 16:34:58 s_src@amelia kernel:   .exec_clock                    : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .MIN_vruntime                  : 0.000001
Mar  9 16:34:58 s_src@amelia kernel:   .min_vruntime                  : 362800.837614
Mar  9 16:34:58 s_src@amelia kernel:   .max_vruntime                  : 0.000001
Mar  9 16:34:58 s_src@amelia kernel:   .spread                        : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .spread0                       : -17807.847262
Mar  9 16:34:58 s_src@amelia kernel:   .nr_spread_over                : 0
Mar  9 16:34:58 s_src@amelia kernel:   .nr_running                    : 0
Mar  9 16:34:58 s_src@amelia kernel:   .load                          : 0
Mar  9 16:34:58 s_src@amelia kernel:   .runnable_load_avg             : 0
Mar  9 16:34:58 s_src@amelia kernel:   .blocked_load_avg              : 4
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_contrib               : 4
Mar  9 16:34:58 s_src@amelia kernel:   .tg_runnable_contrib           : 4
Mar  9 16:34:58 s_src@amelia kernel:   .tg_load_avg                   : 4
Mar  9 16:34:58 s_src@amelia kernel:   .tg->runnable_avg              : 6
Mar  9 16:34:58 s_src@amelia kernel:   .avg->runnable_avg_sum         : 188
Mar  9 16:34:58 s_src@amelia kernel:   .avg->runnable_avg_period      : 47826
Mar  9 16:34:58 s_src@amelia kernel: rt_rq[1]:
Mar  9 16:34:58 s_src@amelia kernel:   .rt_nr_running                 : 0
Mar  9 16:34:58 s_src@amelia kernel:   .rt_throttled                  : 0
Mar  9 16:34:58 s_src@amelia kernel:   .rt_time                       : 0.000000
Mar  9 16:34:58 s_src@amelia kernel:   .rt_runtime                    : 950.000000
Mar  9 16:34:58 s_src@amelia kernel: runnable tasks:
Mar  9 16:34:58 s_src@amelia kernel:             task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
Mar  9 16:34:58 s_src@amelia kernel: ----------------------------------------------------------------------------------------------------------
Mar  9 16:34:58 s_src@amelia kernel:             init     1        62.272117      1520   120               0               0               0.000000               0.000000               0.000000 /autogroup-2
Mar  9 16:34:58 s_src@amelia kernel:        rcu_sched     7    362794.704572     91712   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      migration/1    10         0.000000      1589     0               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      ksoftirqd/1    11    362779.231884     88447   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:     kworker/1:0H    13         3.434556         4   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:          khelper    14         4.969285         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      kintegrityd    18        15.480282         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:           bioset    19        21.491961         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:          kblockd    21        27.504069         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:            khubd    22      1199.047301       113   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:          kswapd0    28    362779.127381     64282   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             ksmd    29       174.485143         2   125               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    fsnotify_mark    30    362612.106727        19   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:           crypto    31       180.939545         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:         pencrypt    38       211.314784         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:         pdecrypt    39       217.333457         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:          deferwq    61       306.808896         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:   btrfs-submit-1   208    362594.711381      1578   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-delalloc-   209    362594.788595        82   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-endio-met   212    362779.523523      4203   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      btrfs-rmw-1   213    362594.811227        80   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-endio-rai   214    362594.789344        84   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-endio-met   215    362594.789509        83   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-freespace   217    362611.556244       833   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-delayed-m   218    362601.601288       103   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-readahead   220    362594.814374        86   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-flush_del   221    362593.974556       298   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-qgroup-re   222    362594.808427        79   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-transacti   224    358950.123194      7410   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:            udevd   378       426.587762       352   120               0               0               0.000000               0.000000               0.000000 /autogroup-4
Mar  9 16:34:58 s_src@amelia kernel:        scsi_eh_4   493      2045.920555         2   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      usb-storage   494    362424.935557    247809   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  kvm-irqfd-clean   512      2275.124722         3   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:           nfsiod  1727      6433.121424         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:       rpc.idmapd  1735         0.422149         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-7
Mar  9 16:34:58 s_src@amelia kernel:        syslog-ng  1994         6.062742         3   120               0               0               0.000000               0.000000               0.000000 /autogroup-11
Mar  9 16:34:58 s_src@amelia kernel:            acpid  2029         0.117387         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-13
Mar  9 16:34:58 s_src@amelia kernel:              atd  2068         0.553231         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-14
Mar  9 16:34:58 s_src@amelia kernel:            named  2079       105.247952      1084   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
Mar  9 16:34:58 s_src@amelia kernel:        mosquitto  2093      1659.113640     53988   120               0               0               0.000000               0.000000               0.000000 /autogroup-16
Mar  9 16:34:58 s_src@amelia kernel:             ntpd  2176       314.210072      6028   120               0               0               0.000000               0.000000               0.000000 /autogroup-18
Mar  9 16:34:58 s_src@amelia kernel:          krb5kdc  2195        23.551388       147   120               0               0               0.000000               0.000000               0.000000 /autogroup-19
Mar  9 16:34:58 s_src@amelia kernel:      dbus-daemon  2213         0.858032        12   120               0               0               0.000000               0.000000               0.000000 /autogroup-20
Mar  9 16:34:58 s_src@amelia kernel:         postgres  2245       321.054358       262   120               0               0               0.000000               0.000000               0.000000 /autogroup-21
Mar  9 16:34:58 s_src@amelia kernel:         postgres  2247         1.428740        19   120               0               0               0.000000               0.000000               0.000000 /autogroup-23
Mar  9 16:34:58 s_src@amelia kernel:         postgres  2248        65.439702      1677   120               0               0               0.000000               0.000000               0.000000 /autogroup-24
Mar  9 16:34:58 s_src@amelia kernel:         postgres  2249        29.865218      1164   120               0               0               0.000000               0.000000               0.000000 /autogroup-27
Mar  9 16:34:58 s_src@amelia kernel:         postgres  2250       667.756863      8450   120               0               0               0.000000               0.000000               0.000000 /autogroup-26
Mar  9 16:34:58 s_src@amelia kernel:         postgres  2251       147.322932       366   120               0               0               0.000000               0.000000               0.000000 /autogroup-25
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2303      2323.946696      5588   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2306       110.314012        70   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2337       123.167267         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2339       117.397234         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2355      2330.309909     55777   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2316       118.725957         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2318       117.447615         6   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2320       118.677570         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2322       118.586077         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2324       118.595195         3   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2326       118.488200         3   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2328       114.598501         3   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2332       118.723989         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2334       114.711802         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2336       114.651316         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2338       118.719173         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2340       118.749580         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2341       118.796725         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:          apache2  2361      2330.226622     55778   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
Mar  9 16:34:58 s_src@amelia kernel:  nfsd4_callbacks  2586      9812.495450         2   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             nfsd  2597    220167.067292         4   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             nfsd  2602    220167.021504         4   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:             nfsd  2603    220167.017600         4   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      rpc.svcgssd  2640        12.397936         7   120               0               0               0.000000               0.000000               0.000000 /autogroup-37
Mar  9 16:34:58 s_src@amelia kernel:            exim4  2936        58.616125         8   120               0               0               0.000000               0.000000               0.000000 /autogroup-41
Mar  9 16:34:58 s_src@amelia kernel:            getty  3051         0.826585       124   120               0               0               0.000000               0.000000               0.000000 /autogroup-42
Mar  9 16:34:58 s_src@amelia kernel:            getty  3052         0.818461       129   120               0               0               0.000000               0.000000               0.000000 /autogroup-43
Mar  9 16:34:58 s_src@amelia kernel:            getty  3053         2.062473       127   120               0               0               0.000000               0.000000               0.000000 /autogroup-44
Mar  9 16:34:58 s_src@amelia kernel:            getty  3055         1.900641       128   120               0               0               0.000000               0.000000               0.000000 /autogroup-46
Mar  9 16:34:58 s_src@amelia kernel:             sudo  4947       398.020265        51   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:             sshd  5014       101.697478       754   120               0               0               0.000000               0.000000               0.000000 /autogroup-54
Mar  9 16:34:58 s_src@amelia kernel:             sshd  5351        72.987492       279   120               0               0               0.000000               0.000000               0.000000 /autogroup-75
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-endio-wri  5436    360147.035637       214   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    carfax-backup  5486     66091.156131    280654   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-genwork-1  5514    362601.017319        77   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:   btrfs-submit-1  5515    362601.622224     33414   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    btrfs-cache-1  5526    362595.942504        41   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-flush_del  5528    362597.490485        55   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-qgroup-re  5529    362595.912995        47   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    btrfs-cleaner  5533    362779.106140       509   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-transacti  5534    362779.136015      4490   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:              ssh  5536    131713.074728    440801   120               0               0               0.000000               0.000000               0.000000 /autogroup-82
Mar  9 16:34:58 s_src@amelia kernel:          python3  5549     14344.674116        13   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:            btrfs  5550     66093.931176    984146   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:            btrfs  5551     66090.437387    131126   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:          sshpass  5554     14369.713511        14   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:          python3  5557     14567.702584        19   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:            btrfs  5558     66097.008203    225880   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
Mar  9 16:34:58 s_src@amelia kernel:   btrfs-worker-4  5566    362601.637843      5613   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:  btrfs-delayed-m  5575    362599.895563      1006   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      kworker/1:1  6662    362794.860295     16139   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:      kworker/1:2  6879    361944.549776       635   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:     kworker/u9:1  6887    362790.097631     19032   100               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel:    btrfs-endio-2  6894    362779.225229       105   120               0               0               0.000000               0.000000               0.000000 /
Mar  9 16:34:58 s_src@amelia kernel: 


-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- You can get more with a kind word and a two-by-four than you ---   
                       can with just a kind word.                        

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]


* [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots
@ 2014-03-10 10:39   ` Wang Shilong
  2014-03-10 12:12     ` Shilong Wang
  0 siblings, 1 reply; 19+ messages in thread
From: Wang Shilong @ 2014-03-10 10:39 UTC (permalink / raw)
  To: linux-btrfs

Rebuilding the extent tree is not yet supported when a broken filesystem
contains any *FULL BACKREF* references, so disable this option when
snapshots are detected.

Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
---
 cmds-check.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/cmds-check.c b/cmds-check.c
index d1cafe1..ddee897 100644
--- a/cmds-check.c
+++ b/cmds-check.c
@@ -6143,6 +6143,56 @@ static int reset_block_groups(struct btrfs_fs_info *fs_info)
 	return 0;
 }
 
+static int is_snapshot_exist(struct btrfs_fs_info *fs_info)
+{
+	struct btrfs_root *root = fs_info->tree_root;
+	struct btrfs_path *path;
+	struct extent_buffer *leaf;
+	struct btrfs_key key;
+	int ret;
+	int found = 0;
+
+	path = btrfs_alloc_path();
+	if (!path)
+		return -ENOMEM;
+
+	key.objectid = BTRFS_FIRST_FREE_OBJECTID;
+	key.type = BTRFS_ROOT_ITEM_KEY;
+	key.offset = 0;
+
+	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+	if (ret < 0)
+		goto out;
+
+	while (1) {
+		if (path->slots[0] >= btrfs_header_nritems(path->nodes[0])) {
+			ret = btrfs_next_leaf(root, path);
+			if (ret)
+				goto out;
+		}
+		leaf = path->nodes[0];
+		btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
+
+		if (key.type != BTRFS_ROOT_ITEM_KEY ||
+		    key.objectid < BTRFS_FIRST_FREE_OBJECTID) {
+			path->slots[0]++;
+			continue;
+		}
+		if (key.offset > 0) {
+			found = 1;
+			break;
+		}
+		path->slots[0]++;
+	}
+out:
+	btrfs_free_path(path);
+	if (found)
+		return 1;
+	else if (ret >= 0)
+		return 0;
+	return ret;
+}
+
 static int reset_balance(struct btrfs_trans_handle *trans,
 			 struct btrfs_fs_info *fs_info)
 {
@@ -6537,6 +6587,18 @@ int cmd_check(int argc, char **argv)
 		ret = -EIO;
 		goto close_out;
 	}
+	if (init_extent_tree) {
+		ret = is_snapshot_exist(info);
+		if (ret < 0) {
+			fprintf(stderr, "ERROR: fail to check if there are snapshots in btrfs filesystem\n");
+			ret = 1;
+			goto close_out;
+		} else if (ret) {
+			fprintf(stderr, "Snapshots detected, unable to rebuild extent tree for such case.\n");
+			ret = 1;
+			goto close_out;
+		}
+	}
 
 	if (init_extent_tree || init_csum_tree) {
 		struct btrfs_trans_handle *trans;
-- 
1.9.0



* Re: [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots
  2014-03-10 10:39   ` [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots Wang Shilong
@ 2014-03-10 12:12     ` Shilong Wang
  2014-03-10 15:50       ` Josef Bacik
  0 siblings, 1 reply; 19+ messages in thread
From: Shilong Wang @ 2014-03-10 12:12 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, Wang Shilong

Hi Josef,

I haven't come up with a better way to rebuild an extent tree that
contains extents carrying the 'FULL BACKREF' flag.

Since an extent's ref count can be 1 or more when it has the *FULL
BACKREF* flag, we can no longer determine an extent's flags by
searching only the fs/file trees.

For now, I simply disable this option if snapshots exist. Please
correct me if I'm missing something here, or let me know if you have
a better idea for solving this problem. ~_~


Thanks,
Wang
2014-03-10 18:39 GMT+08:00 Wang Shilong <wangsl.fnst@cn.fujitsu.com>:
> Rebuilding the extent tree is not yet supported when a broken
> filesystem contains any *FULL BACKREF* references, so disable this
> option when snapshots are detected.
>
> Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
> ---
>  cmds-check.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
>
> diff --git a/cmds-check.c b/cmds-check.c
> index d1cafe1..ddee897 100644
> --- a/cmds-check.c
> +++ b/cmds-check.c
> @@ -6143,6 +6143,56 @@ static int reset_block_groups(struct btrfs_fs_info *fs_info)
>         return 0;
>  }
>
> +static int is_snapshot_exist(struct btrfs_fs_info *fs_info)
> +{
> +       struct btrfs_root *root = fs_info->tree_root;
> +       struct btrfs_path *path;
> +       struct extent_buffer *leaf;
> +       struct btrfs_key key;
> +       int ret;
> +       int found = 0;
> +
> +       path = btrfs_alloc_path();
> +       if (!path)
> +               return -ENOMEM;
> +
> +       key.objectid = BTRFS_FIRST_FREE_OBJECTID;
> +       key.type = BTRFS_ROOT_ITEM_KEY;
> +       key.offset = 0;
> +
> +       ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
> +       if (ret < 0)
> +               goto out;
> +
> +       while (1) {
> +               if (path->slots[0] >= btrfs_header_nritems(path->nodes[0])) {
> +                       ret = btrfs_next_leaf(root, path);
> +                       if (ret)
> +                               goto out;
> +               }
> +               leaf = path->nodes[0];
> +               btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
> +
> +               if (key.type != BTRFS_ROOT_ITEM_KEY ||
> +                   key.objectid < BTRFS_FIRST_FREE_OBJECTID) {
> +                       path->slots[0]++;
> +                       continue;
> +               }
> +               if (key.offset > 0) {
> +                       found = 1;
> +                       break;
> +               }
> +               path->slots[0]++;
> +       }
> +out:
> +       btrfs_free_path(path);
> +       if (found)
> +               return 1;
> +       else if (ret >= 0)
> +               return 0;
> +       return ret;
> +}
> +
>  static int reset_balance(struct btrfs_trans_handle *trans,
>                          struct btrfs_fs_info *fs_info)
>  {
> @@ -6537,6 +6587,18 @@ int cmd_check(int argc, char **argv)
>                 ret = -EIO;
>                 goto close_out;
>         }
> +       if (init_extent_tree) {
> +               ret = is_snapshot_exist(info);
> +               if (ret < 0) {
> +                       fprintf(stderr, "ERROR: fail to check if there are snapshots in btrfs filesystem\n");
> +                       ret = 1;
> +                       goto close_out;
> +               } else if (ret) {
> +                       fprintf(stderr, "Snapshots detected, unable to rebuild extent tree for such case.\n");
> +                       ret = 1;
> +                       goto close_out;
> +               }
> +       }
>
>         if (init_extent_tree || init_csum_tree) {
>                 struct btrfs_trans_handle *trans;
> --
> 1.9.0
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots
  2014-03-10 12:12     ` Shilong Wang
@ 2014-03-10 15:50       ` Josef Bacik
  2014-03-11  1:23         ` Wang Shilong
  0 siblings, 1 reply; 19+ messages in thread
From: Josef Bacik @ 2014-03-10 15:50 UTC (permalink / raw)
  To: Shilong Wang; +Cc: linux-btrfs, Wang Shilong

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 03/10/2014 08:12 AM, Shilong Wang wrote:
> Hi Josef,
> 
> I haven't come up with a better way to rebuild an extent tree
> that contains extents carrying the 'FULL BACKREF' flag.
> 
> Since an extent's ref count can be 1 or more when it has the
> *FULL BACKREF* flag, we can no longer determine an extent's
> flags by searching only the fs/file trees.
> 
> For now, I simply disable this option if snapshots exist.
> Please correct me if I'm missing something here, or let me know
> if you have a better idea for solving this problem. ~_~
> 
> 

I thought the fsck stuff rebuilds full backref refs properly, does it
not?  If it doesn't we need to fix that, however I'm fine with
disabling the option if snapshots exist for the time being.  Thanks,

Josef
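
As a toy illustration of the refcount point above (plain Python, not
btrfs code or its actual data structures):

```python
from collections import Counter

# Toy model, not btrfs code: each "tree" is just the list of extent
# byte numbers it references. With a single fs tree, an extent's
# global refcount matches what a walk of that one tree finds; once a
# snapshot shares extents, per-extent refcounts rise above 1 and a
# walk of any single fs/file tree no longer tells the whole story.
def refcounts(trees):
    total = Counter()
    for tree in trees:
        total.update(tree)
    return total

fs_tree = [100, 200, 300]           # extents referenced by the fs tree
total_before = refcounts([fs_tree])

snapshot = list(fs_tree)            # a snapshot shares every extent
total_after = refcounts([fs_tree, snapshot])

print(total_before[100], total_after[100])  # 1 2
```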

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: send/receive locking
  2014-03-09 16:43                     ` Hugo Mills
@ 2014-03-10 22:28                       ` Hugo Mills
  0 siblings, 0 replies; 19+ messages in thread
From: Hugo Mills @ 2014-03-10 22:28 UTC (permalink / raw)
  To: Josef Bacik, Btrfs mailing list

[-- Attachment #1: Type: text/plain, Size: 77039 bytes --]

On Sun, Mar 09, 2014 at 04:43:21PM +0000, Hugo Mills wrote:
> On Sat, Mar 08, 2014 at 10:16:52PM +0000, Hugo Mills wrote:
> > On Sat, Mar 08, 2014 at 10:02:39PM +0000, Josef Bacik wrote:
> > > Don't do it on a fs you care about; cow is broken with that branch, so it will corrupt your fs.  So actually just find Wang's patch that sets skip locking for send and do that.  Thanks,
> > 
> >    Aah, that's good to know. It's not an FS I care about all that
> > much, but it'd be a pain to recreate all the setup. I think I'll go
> > cherry-picking tomorrow morning. :)
> 
>    OK, I'm running with btrfs-next, plus a cherry-picked
> 
>     Btrfs: skip locking when searching commit root

   As requested on IRC, here's a second SysRq-w trace for
cross-reference. This is from running the same kernel as the previous
trace.

   I now have a new kernel and debug symbols built, and will do
another run with that one overnight.

   Hugo.
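
   For anyone wading through dumps like the one below, here's a small
hypothetical helper (plain Python, names my own) that pulls the names
of tasks stuck in D state out of a SysRq-w "Show Blocked State" dump:

```python
import re

# Task header lines in a SysRq-w dump look like:
#   [111687.082577] syslog-ng       D ffff88003dc92640     0  1995 ...
# i.e. timestamp, task name, state letter, then scheduler fields.
TASK_RE = re.compile(r"^\[\s*[\d.]+\]\s+(\S+)\s+D\s+")

def blocked_tasks(dump: str) -> list[str]:
    """Return the names of tasks reported in D (uninterruptible) state."""
    return [m.group(1)
            for line in dump.splitlines()
            if (m := TASK_RE.match(line))]

# A few lines lifted from the trace below, for demonstration:
sample = """\
[111687.082441] btrfs-transacti D ffff88003dc92640     0   224      2 0x00000000
[111687.082477] Call Trace:
[111687.082577] syslog-ng       D ffff88003dc92640     0  1995   1994 0x00000000
"""
print(blocked_tasks(sample))  # ['btrfs-transacti', 'syslog-ng']
```

Run over the whole trace it would list everything from
btrfs-transacti down to the two cron processes.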

[14613.213188] BTRFS: device label flambeaux devid 1 transid 9852 /dev/mapper/flambeaux
[14613.217943] BTRFS info (device dm-0): disk space caching is enabled
[15029.515638] BTRFS: device label flambeaux devid 1 transid 9855 /dev/mapper/flambeaux
[15029.520703] BTRFS info (device dm-0): disk space caching is enabled
[15149.171735] BTRFS: device label flambeaux devid 1 transid 9858 /dev/mapper/flambeaux
[15149.176503] BTRFS info (device dm-0): disk space caching is enabled
[15265.202886] BTRFS: device label flambeaux devid 1 transid 9861 /dev/mapper/flambeaux
[15265.207321] BTRFS info (device dm-0): disk space caching is enabled
[15420.329775] bio: create slab <bio-2> at 2
[15420.715749] bio: create slab <bio-2> at 2
[15420.900726] BTRFS: device label flambeaux devid 1 transid 9862 /dev/dm-0
[15420.918482] BTRFS: device label flambeaux devid 1 transid 9862 /dev/mapper/flambeaux
[15420.922463] BTRFS info (device dm-0): disk space caching is enabled
[15420.943656] BTRFS: device label flambeaux devid 1 transid 9862 /dev/dm-0
[36421.678552] bio: create slab <bio-2> at 2
[36422.403670] bio: create slab <bio-2> at 2
[36422.594050] BTRFS: device label flambeaux devid 1 transid 9865 /dev/dm-0
[36422.608127] BTRFS: device label flambeaux devid 1 transid 9865 /dev/dm-0
[36422.614405] BTRFS: device label flambeaux devid 1 transid 9865 /dev/mapper/flambeaux
[36422.617686] BTRFS info (device dm-0): disk space caching is enabled
[104819.255050] bio: create slab <bio-2> at 2
[104819.706538] bio: create slab <bio-2> at 2
[104819.890243] BTRFS: device label flambeaux devid 1 transid 9868 /dev/dm-0
[104819.922908] BTRFS: device label flambeaux devid 1 transid 9868 /dev/mapper/flambeaux
[104819.926393] BTRFS info (device dm-0): disk space caching is enabled
[104819.947248] BTRFS: device label flambeaux devid 1 transid 9868 /dev/dm-0
[104822.093454] BTRFS: device label amelia devid 4 transid 240759 /dev/sdd2
[104852.026978] bio: create slab <bio-2> at 2
[104852.481069] bio: create slab <bio-2> at 2
[104852.667219] BTRFS: device label flambeaux devid 1 transid 9869 /dev/dm-0
[104852.684925] BTRFS: device label flambeaux devid 1 transid 9869 /dev/dm-0
[104852.687485] BTRFS: device label flambeaux devid 1 transid 9869 /dev/mapper/flambeaux
[104852.691499] BTRFS info (device dm-0): disk space caching is enabled
[104854.786045] BTRFS: device label amelia devid 4 transid 240760 /dev/sdd2
[111687.082328] SysRq : Show Blocked State
[111687.082427]   task                        PC stack   pid father
[111687.082441] btrfs-transacti D ffff88003dc92640     0   224      2 0x00000000
[111687.082453]  ffff88003ac46d00 0000000000000046 0000000000012640 ffff88003ac46d00
[111687.082461]  ffff88003ad75fd8 ffff88003c093d50 ffff88003b19d000 ffff88003a7ee000
[111687.082469]  ffff88003a4dc508 ffff88003a4dc4b0 0000000000000000 0000000000000000
[111687.082477] Call Trace:
[111687.082496]  [<ffffffff81158404>] ? btrfs_commit_transaction+0x306/0x808
[111687.082507]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.082516]  [<ffffffff81154c82>] ? transaction_kthread+0xd5/0x185
[111687.082524]  [<ffffffff81154bad>] ? btrfs_cleanup_transaction+0x3e3/0x3e3
[111687.082534]  [<ffffffff8104cede>] ? kthread+0x9e/0xa6
[111687.082543]  [<ffffffff81050000>] ? blocking_notifier_chain_cond_register+0x13/0x40
[111687.082551]  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
[111687.082561]  [<ffffffff813a6ebc>] ? ret_from_fork+0x7c/0xb0
[111687.082569]  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
[111687.082577] syslog-ng       D ffff88003dc92640     0  1995   1994 0x00000000
[111687.082587]  ffff88003b2fdf60 0000000000000086 0000000000012640 ffff88003b2fdf60
[111687.082594]  ffff880038d1dfd8 ffff88003c093d50 ffff88003a4dc4b0 ffff88003b3441e8
[111687.082601]  0000000000000000 0000000000000000 ffff88003b344000 ffff88003a7ff320
[111687.082609] Call Trace:
[111687.082618]  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
[111687.082626]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.082634]  [<ffffffff81158bf2>] ? start_transaction+0x2ec/0x4c5
[111687.082643]  [<ffffffff8114e66a>] ? btrfs_lookup_xattr+0x68/0x97
[111687.082653]  [<ffffffff8115cf27>] ? btrfs_dirty_inode+0x25/0xa1
[111687.082661]  [<ffffffff81160a4f>] ? btrfs_setattr+0x230/0x267
[111687.082670]  [<ffffffff810f39b8>] ? notify_change+0x20c/0x2ef
[111687.082679]  [<ffffffff810dec32>] ? chown_common.isra.16+0xc9/0x12e
[111687.082687]  [<ffffffff810e26a4>] ? __sb_start_write+0x8e/0xbb
[111687.082697]  [<ffffffff810df6ef>] ? SyS_fchown+0x47/0x6d
[111687.082705]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.082712] mosquitto       D ffff88003dc92640     0  2093      1 0x00000000
[111687.082720]  ffff88003a882210 0000000000000082 0000000000012640 ffff88003a882210
[111687.082727]  ffff8800380cbfd8 ffff88003c093d50 ffff88003a4dc4b0 ffff88003b3441e8
[111687.082733]  0000000000000000 0000000000000000 ffff88003b344000 ffff88003a7ff3c0
[111687.082740] Call Trace:
[111687.082749]  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
[111687.082756]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.082764]  [<ffffffff81158d1a>] ? start_transaction+0x414/0x4c5
[111687.082773]  [<ffffffff8113abef>] ? btrfs_release_path+0x38/0x53
[111687.082782]  [<ffffffff81162897>] ? btrfs_create+0x35/0x1c2
[111687.082790]  [<ffffffff810e986e>] ? vfs_create+0x46/0x6c
[111687.082798]  [<ffffffff810ea425>] ? do_last.isra.58+0x544/0x95e
[111687.082805]  [<ffffffff810eaa5b>] ? path_openat+0x21c/0x488
[111687.082813]  [<ffffffff810ebbdc>] ? do_filp_open+0x35/0x7a
[111687.082822]  [<ffffffff810f4555>] ? __alloc_fd+0x56/0xda
[111687.082830]  [<ffffffff810df880>] ? do_sys_open+0x65/0xe9
[111687.082839]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.082844] ntpd            D ffff88003dc92640     0  2176      1 0x00000004
[111687.082852]  ffff88003b4da210 0000000000000082 0000000000012640 ffff88003b4da210
[111687.082858]  ffff88003814bfd8 ffff88003c093d50 ffff88003a4dc4b0 ffff88003b3441e8
[111687.082865]  0000000000000000 0000000000000000 ffff88003b344000 ffff88003a7fff00
[111687.082872] Call Trace:
[111687.082880]  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
[111687.082888]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.082895]  [<ffffffff81158d1a>] ? start_transaction+0x414/0x4c5
[111687.082904]  [<ffffffff8113abef>] ? btrfs_release_path+0x38/0x53
[111687.082912]  [<ffffffff81162897>] ? btrfs_create+0x35/0x1c2
[111687.082920]  [<ffffffff810e986e>] ? vfs_create+0x46/0x6c
[111687.082927]  [<ffffffff810ea425>] ? do_last.isra.58+0x544/0x95e
[111687.082935]  [<ffffffff810eaa5b>] ? path_openat+0x21c/0x488
[111687.082943]  [<ffffffff81041bfd>] ? __set_current_blocked+0x2c/0x41
[111687.082950]  [<ffffffff810ebbdc>] ? do_filp_open+0x35/0x7a
[111687.082960]  [<ffffffff810ca6d0>] ? vma_rb_erase+0x15d/0x18b
[111687.082967]  [<ffffffff810f4555>] ? __alloc_fd+0x56/0xda
[111687.082975]  [<ffffffff810df880>] ? do_sys_open+0x65/0xe9
[111687.082984]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.082991] postgres        D ffff88003dc12640     0  2250   2245 0x00000000
[111687.082998]  ffff88003b2fb680 0000000000000082 0000000000012640 ffff88003b2fb680
[111687.083005]  ffff880034533fd8 ffffffff81811430 ffff88003a4d64e8 ffff880034533dc8
[111687.083012]  ffff88003a4d64ec ffff88003b2fb680 ffff88003a4d64f0 00000000ffffffff
[111687.083018] Call Trace:
[111687.083027]  [<ffffffff813a4747>] ? schedule_preempt_disabled+0x5/0x6
[111687.083035]  [<ffffffff813a56aa>] ? __mutex_lock_slowpath+0x146/0x1a4
[111687.083043]  [<ffffffff813a5716>] ? mutex_lock+0xe/0x1d
[111687.083050]  [<ffffffff811661dc>] ? btrfs_file_aio_write+0x79/0x47b
[111687.083058]  [<ffffffff810c573e>] ? tlb_flush_mmu+0x4e/0x64
[111687.083066]  [<ffffffff810ca3ce>] ? unmap_region+0xb5/0xc4
[111687.083074]  [<ffffffff810dfb6d>] ? do_sync_write+0x56/0x76
[111687.083083]  [<ffffffff810e0157>] ? vfs_write+0x9f/0x102
[111687.083091]  [<ffffffff810e0868>] ? SyS_write+0x41/0x74
[111687.083100]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.083105] postgres        D ffff88003dc92640     0  2251   2245 0x00000000
[111687.083112]  ffff88003b2ff3d0 0000000000000086 0000000000012640 ffff88003b2ff3d0
[111687.083119]  ffff88003455bfd8 ffff88003c093d50 ffff88003a4dc4b0 ffff88003b3441e8
[111687.083125]  0000000000000000 0000000000000000 ffff88003b344000 ffff88003a7ff500
[111687.083132] Call Trace:
[111687.083140]  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
[111687.083148]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.083155]  [<ffffffff81158d1a>] ? start_transaction+0x414/0x4c5
[111687.083164]  [<ffffffff8113abef>] ? btrfs_release_path+0x38/0x53
[111687.083173]  [<ffffffff81162897>] ? btrfs_create+0x35/0x1c2
[111687.083180]  [<ffffffff810e986e>] ? vfs_create+0x46/0x6c
[111687.083187]  [<ffffffff810ea425>] ? do_last.isra.58+0x544/0x95e
[111687.083195]  [<ffffffff810eaa5b>] ? path_openat+0x21c/0x488
[111687.083203]  [<ffffffff810e67b2>] ? pipe_read+0x31e/0x383
[111687.083211]  [<ffffffff810ebbdc>] ? do_filp_open+0x35/0x7a
[111687.083219]  [<ffffffff810f4555>] ? __alloc_fd+0x56/0xda
[111687.083227]  [<ffffffff810df880>] ? do_sys_open+0x65/0xe9
[111687.083235]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.083257] btrfs-endio-wri D ffff88003dc92640     0 23813      2 0x00000000
[111687.083265]  ffff88003bb1d1c0 0000000000000046 0000000000012640 ffff88003bb1d1c0
[111687.083271]  ffff88001a9e3fd8 ffff88003c093d50 ffff88003a4dc4b0 ffff88003b3441e8
[111687.083278]  0000000000000000 0000000000000000 ffff88003b344000 ffff88003a7ffc80
[111687.083285] Call Trace:
[111687.083293]  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
[111687.083301]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.083309]  [<ffffffff81158bf2>] ? start_transaction+0x2ec/0x4c5
[111687.083318]  [<ffffffff8115f0a1>] ? btrfs_finish_ordered_io+0x19b/0x3c2
[111687.083327]  [<ffffffff8103d700>] ? ftrace_raw_output_itimer_state+0x3d/0x60
[111687.083336]  [<ffffffff8117ba9d>] ? worker_loop+0x149/0x4a7
[111687.083343]  [<ffffffff813a4338>] ? __schedule+0x352/0x4f0
[111687.083351]  [<ffffffff8117b954>] ? btrfs_queue_worker+0x269/0x269
[111687.083359]  [<ffffffff8104cede>] ? kthread+0x9e/0xa6
[111687.083368]  [<ffffffff81050000>] ? blocking_notifier_chain_cond_register+0x13/0x40
[111687.083376]  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
[111687.083384]  [<ffffffff813a6ebc>] ? ret_from_fork+0x7c/0xb0
[111687.083392]  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
[111687.083399] carfax-backup   D ffff880031cdafb0     0  6718   6365 0x00000000
[111687.083406]  ffff880031cdafb0 0000000000000086 0000000000012640 ffff880031cdafb0
[111687.083413]  ffff88002e363fd8 ffff88003c1d9b40 ffff88003a6acbc8 ffff88002bf82f70
[111687.083420]  ffff88003a6acbe4 0000000000041000 ffff88002e363c90 0000000000042fff
[111687.083426] Call Trace:
[111687.083436]  [<ffffffff8116e069>] ? lock_extent_bits+0x108/0x180
[111687.083444]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.083452]  [<ffffffff81164bc4>] ? lock_and_cleanup_extent_if_need+0x66/0x191
[111687.083460]  [<ffffffff81165f44>] ? __btrfs_buffered_write+0x21d/0x43c
[111687.083468]  [<ffffffff8116651a>] ? btrfs_file_aio_write+0x3b7/0x47b
[111687.083476]  [<ffffffff810eb952>] ? user_path_at_empty+0x60/0x87
[111687.083485]  [<ffffffff8108af43>] ? from_kgid_munged+0x9/0x14
[111687.083494]  [<ffffffff810dfb6d>] ? do_sync_write+0x56/0x76
[111687.083502]  [<ffffffff810e0157>] ? vfs_write+0x9f/0x102
[111687.083511]  [<ffffffff810e0868>] ? SyS_write+0x41/0x74
[111687.083519]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.083531] kworker/u8:2    D ffff88003dc92640     0  6983      2 0x00000000
[111687.083544] Workqueue: writeback bdi_writeback_workfn (flush-btrfs-1)
[111687.083550]  ffff880000469b40 0000000000000046 0000000000012640 ffff880000469b40
[111687.083556]  ffff880024087fd8 ffff88003c093d50 ffff88003a4dc4b0 ffff88003b3441e8
[111687.083563]  0000000000000000 0000000000000000 ffff88003b344000 ffff88003a7ff820
[111687.083570] Call Trace:
[111687.083579]  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
[111687.083586]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.083594]  [<ffffffff81158bf2>] ? start_transaction+0x2ec/0x4c5
[111687.083603]  [<ffffffff8115d361>] ? cow_file_range_inline+0xe9/0x281
[111687.083611]  [<ffffffff810dbf07>] ? kmem_cache_free+0x32/0xb9
[111687.083620]  [<ffffffff8116d73e>] ? __set_extent_bit+0x3c3/0x3ff
[111687.083629]  [<ffffffff8115d5c7>] ? cow_file_range+0xce/0x37d
[111687.083639]  [<ffffffff81105e60>] ? __bio_add_page.part.14+0x130/0x1b5
[111687.083648]  [<ffffffff8115dfe8>] ? run_delalloc_range+0x9a/0x2c7
[111687.083656]  [<ffffffff8116cfd3>] ? free_extent_state+0x12/0x21
[111687.083666]  [<ffffffff8116ffb3>] ? __extent_writepage+0x1cc/0x5f4
[111687.083674]  [<ffffffff81055320>] ? check_preempt_curr+0x27/0x62
[111687.083681]  [<ffffffff8105536a>] ? ttwu_do_wakeup+0xf/0xb0
[111687.083691]  [<ffffffff810ad5bf>] ? find_get_pages_tag+0xe3/0x11c
[111687.083701]  [<ffffffff8117059f>] ? extent_write_cache_pages.isra.24.constprop.44+0x1c4/0x25a
[111687.083711]  [<ffffffff8116c4d0>] ? submit_one_bio+0xac/0xbf
[111687.083720]  [<ffffffff81001564>] ? __switch_to+0x13f/0x3bc
[111687.083729]  [<ffffffff811708c8>] ? extent_writepages+0x49/0x60
[111687.083736]  [<ffffffff813a4338>] ? __schedule+0x352/0x4f0
[111687.083744]  [<ffffffff8115bcdf>] ? btrfs_submit_direct+0x3f5/0x3f5
[111687.083753]  [<ffffffff810fd953>] ? __writeback_single_inode+0x4b/0x1c2
[111687.083762]  [<ffffffff810fe1b9>] ? writeback_sb_inodes+0x1d3/0x314
[111687.083771]  [<ffffffff810fe365>] ? __writeback_inodes_wb+0x6b/0xa1
[111687.083779]  [<ffffffff810fe4ac>] ? wb_writeback+0x111/0x273
[111687.083787]  [<ffffffff810b413c>] ? bdi_dirty_limit+0x27/0x84
[111687.083796]  [<ffffffff810fea9c>] ? bdi_writeback_workfn+0x1ab/0x316
[111687.083804]  [<ffffffff81048729>] ? process_one_work+0x179/0x28c
[111687.083812]  [<ffffffff81048be7>] ? worker_thread+0x139/0x1de
[111687.083819]  [<ffffffff81048aae>] ? rescuer_thread+0x24f/0x24f
[111687.083826]  [<ffffffff8104cede>] ? kthread+0x9e/0xa6
[111687.083835]  [<ffffffff81050000>] ? blocking_notifier_chain_cond_register+0x13/0x40
[111687.083843]  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
[111687.083851]  [<ffffffff813a6ebc>] ? ret_from_fork+0x7c/0xb0
[111687.083860]  [<ffffffff8104ce40>] ? __kthread_parkme+0x55/0x55
[111687.083865] postgres        D ffff88003dc92640     0  8011   2245 0x00000000
[111687.083872]  ffff88003b2f8000 0000000000000082 0000000000012640 ffff88003b2f8000
[111687.083879]  ffff880002399fd8 ffff88003c093d50 ffff88003dc92640 ffff88003b2f8000
[111687.083886]  ffff880002399c40 0000000000000002 0000000000000000 ffffffff810acdc1
[111687.083893] Call Trace:
[111687.083901]  [<ffffffff810acdc1>] ? wait_on_page_read+0x32/0x32
[111687.083908]  [<ffffffff813a46ba>] ? io_schedule+0x54/0x69
[111687.083916]  [<ffffffff810acdc6>] ? sleep_on_page+0x5/0x8
[111687.083923]  [<ffffffff813a496c>] ? __wait_on_bit_lock+0x3c/0x7f
[111687.083931]  [<ffffffff810ace6b>] ? __lock_page+0x64/0x66
[111687.083939]  [<ffffffff8105ec06>] ? autoremove_wake_function+0x2a/0x2a
[111687.083947]  [<ffffffff810ad615>] ? lock_page+0x9/0x18
[111687.083954]  [<ffffffff810ad668>] ? find_lock_page+0x29/0x49
[111687.083963]  [<ffffffff810ad9c4>] ? find_or_create_page+0x28/0x85
[111687.083970]  [<ffffffff81165652>] ? prepare_pages.isra.18+0x7d/0x120
[111687.083978]  [<ffffffff81165f16>] ? __btrfs_buffered_write+0x1ef/0x43c
[111687.083987]  [<ffffffff8116651a>] ? btrfs_file_aio_write+0x3b7/0x47b
[111687.083994]  [<ffffffff810c573e>] ? tlb_flush_mmu+0x4e/0x64
[111687.084003]  [<ffffffff8102a6e9>] ? __do_page_fault+0x2b9/0x335
[111687.084012]  [<ffffffff810dfb6d>] ? do_sync_write+0x56/0x76
[111687.084021]  [<ffffffff810e0157>] ? vfs_write+0x9f/0x102
[111687.084029]  [<ffffffff810e0868>] ? SyS_write+0x41/0x74
[111687.084037]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.084043] mutt_dotlock    D ffff88003dc12640     0  8013   8012 0x00000000
[111687.084050]  ffff8800223cb680 0000000000000086 0000000000012640 ffff8800223cb680
[111687.084057]  ffff88000493dfd8 ffffffff81811430 ffff88003a4dc4b0 ffff88003b3441e8
[111687.084064]  0000000000000000 0000000000000000 ffff88003b344000 ffff88003a79ef00
[111687.084070] Call Trace:
[111687.084079]  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
[111687.084087]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.084094]  [<ffffffff81158d1a>] ? start_transaction+0x414/0x4c5
[111687.084104]  [<ffffffff8113c653>] ? generic_bin_search.constprop.34+0xf1/0x129
[111687.084113]  [<ffffffff81162897>] ? btrfs_create+0x35/0x1c2
[111687.084120]  [<ffffffff810e986e>] ? vfs_create+0x46/0x6c
[111687.084127]  [<ffffffff810ea425>] ? do_last.isra.58+0x544/0x95e
[111687.084135]  [<ffffffff810eaa5b>] ? path_openat+0x21c/0x488
[111687.084144]  [<ffffffff810ef151>] ? __d_rehash+0x19/0x4c
[111687.084151]  [<ffffffff810ebbdc>] ? do_filp_open+0x35/0x7a
[111687.084159]  [<ffffffff810f4555>] ? __alloc_fd+0x56/0xda
[111687.084167]  [<ffffffff810df880>] ? do_sys_open+0x65/0xe9
[111687.084176]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.084181] cron            D ffff88003dc12640     0  8014   2422 0x00000000
[111687.084188]  ffff88003b2fed00 0000000000000086 0000000000012640 ffff88003b2fed00
[111687.084195]  ffff880018d23fd8 ffffffff81811430 ffff88003a4dc4b0 ffff88003b3441e8
[111687.084202]  0000000000000000 0000000000000000 ffff88003b344000 ffff88003a79eaa0
[111687.084208] Call Trace:
[111687.084217]  [<ffffffff81157a0c>] ? wait_current_trans.isra.19+0xad/0xd1
[111687.084224]  [<ffffffff8105ebdc>] ? finish_wait+0x60/0x60
[111687.084232]  [<ffffffff81158d1a>] ? start_transaction+0x414/0x4c5
[111687.084241]  [<ffffffff8113abef>] ? btrfs_release_path+0x38/0x53
[111687.084249]  [<ffffffff81162897>] ? btrfs_create+0x35/0x1c2
[111687.084257]  [<ffffffff810e986e>] ? vfs_create+0x46/0x6c
[111687.084264]  [<ffffffff810ea425>] ? do_last.isra.58+0x544/0x95e
[111687.084272]  [<ffffffff810eaa5b>] ? path_openat+0x21c/0x488
[111687.084279]  [<ffffffff810c7df5>] ? handle_mm_fault+0x1f9/0x6da
[111687.084287]  [<ffffffff810ebbdc>] ? do_filp_open+0x35/0x7a
[111687.084295]  [<ffffffff810f4555>] ? __alloc_fd+0x56/0xda
[111687.084303]  [<ffffffff810df880>] ? do_sys_open+0x65/0xe9
[111687.084312]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.084317] cron            D ffff88003dc12640     0  8071   2422 0x00000000
[111687.084324]  ffff88003ac44420 0000000000000082 0000000000012640 ffff88003ac44420
[111687.084331]  ffff880005a35fd8 ffffffff81811430 ffff88003bfbfbf8 ffff880005a35d28
[111687.084338]  ffff88003bfbfbfc ffff88003ac44420 ffff88003bfbfc00 00000000ffffffff
[111687.084345] Call Trace:
[111687.084353]  [<ffffffff813a4747>] ? schedule_preempt_disabled+0x5/0x6
[111687.084361]  [<ffffffff813a56aa>] ? __mutex_lock_slowpath+0x146/0x1a4
[111687.084368]  [<ffffffff813a5716>] ? mutex_lock+0xe/0x1d
[111687.084375]  [<ffffffff810ea03f>] ? do_last.isra.58+0x15e/0x95e
[111687.084383]  [<ffffffff810eaa5b>] ? path_openat+0x21c/0x488
[111687.084390]  [<ffffffff810c7df5>] ? handle_mm_fault+0x1f9/0x6da
[111687.084398]  [<ffffffff810ebbdc>] ? do_filp_open+0x35/0x7a
[111687.084407]  [<ffffffff810f4555>] ? __alloc_fd+0x56/0xda
[111687.084415]  [<ffffffff810df880>] ? do_sys_open+0x65/0xe9
[111687.084424]  [<ffffffff813a6f66>] ? system_call_fastpath+0x1a/0x1f
[111687.084431] Sched Debug Version: v0.11, 3.13.0-00189-g56a5aaf-dirty #6
[111687.084439] ktime                                   : 111682644.677766
[111687.084445] sched_clk                               : 111687084.428515
[111687.084450] cpu_clk                                 : 111687084.428648
[111687.084455] jiffies                                 : 4306105560
[111687.084459] sched_clock_stable                      : 1
[111687.084462] 
[111687.084466] sysctl_sched
[111687.084471]   .sysctl_sched_latency                    : 12.000000
[111687.084476]   .sysctl_sched_min_granularity            : 1.500000
[111687.084481]   .sysctl_sched_wakeup_granularity         : 2.000000
[111687.084485]   .sysctl_sched_child_runs_first           : 0
[111687.084489]   .sysctl_sched_features                   : 11899
[111687.084495]   .sysctl_sched_tunable_scaling            : 1 (logaritmic)
[111687.084498] 
[111687.084503] cpu#0, 1297.893 MHz
[111687.084507]   .nr_running                    : 2
[111687.084512]   .load                          : 2048
[111687.084516]   .nr_switches                   : 16915764
[111687.084520]   .nr_load_updates               : 2618312
[111687.084524]   .nr_uninterruptible            : 23576
[111687.084529]   .next_balance                  : 4306.105501
[111687.084533]   .curr->pid                     : 8029
[111687.084538]   .clock                         : 111687082.076429
[111687.084542]   .cpu_load[0]                   : 0
[111687.084546]   .cpu_load[1]                   : 0
[111687.084550]   .cpu_load[2]                   : 0
[111687.084554]   .cpu_load[3]                   : 0
[111687.084558]   .cpu_load[4]                   : 0
[111687.084564] 
[111687.084564] cfs_rq[0]:/autogroup-76
[111687.084571]   .exec_clock                    : 0.000000
[111687.084577]   .MIN_vruntime                  : 0.000001
[111687.084582]   .min_vruntime                  : 19306.798558
[111687.084587]   .max_vruntime                  : 0.000001
[111687.084591]   .spread                        : 0.000000
[111687.084597]   .spread0                       : -1406281.471418
[111687.084601]   .nr_spread_over                : 0
[111687.084606]   .nr_running                    : 1
[111687.084610]   .load                          : 1024
[111687.084614]   .runnable_load_avg             : 0
[111687.084618]   .blocked_load_avg              : 0
[111687.084622]   .tg_load_contrib               : 0
[111687.084626]   .tg_runnable_contrib           : 0
[111687.084630]   .tg_load_avg                   : 0
[111687.084634]   .tg->runnable_avg              : 0
[111687.084640]   .se->exec_start                : 111687082.076429
[111687.084645]   .se->vruntime                  : 1425584.231527
[111687.084650]   .se->sum_exec_runtime          : 19482.934096
[111687.084655]   .se->load.weight               : 1024
[111687.084659]   .se->avg.runnable_avg_sum      : 0
[111687.084663]   .se->avg.runnable_avg_period   : 47110
[111687.084667]   .se->avg.load_avg_contrib      : 0
[111687.084672]   .se->avg.decay_count           : 0
[111687.084677] 
[111687.084677] cfs_rq[0]:/autogroup-32
[111687.084684]   .exec_clock                    : 0.000000
[111687.084689]   .MIN_vruntime                  : 0.000001
[111687.084694]   .min_vruntime                  : 39788.898281
[111687.084698]   .max_vruntime                  : 0.000001
[111687.084703]   .spread                        : 0.000000
[111687.084708]   .spread0                       : -1385799.371695
[111687.084712]   .nr_spread_over                : 0
[111687.084716]   .nr_running                    : 0
[111687.084720]   .load                          : 0
[111687.084724]   .runnable_load_avg             : 0
[111687.084728]   .blocked_load_avg              : 0
[111687.084732]   .tg_load_contrib               : 0
[111687.084736]   .tg_runnable_contrib           : 0
[111687.084741]   .tg_load_avg                   : 0
[111687.084745]   .tg->runnable_avg              : 0
[111687.084750]   .se->exec_start                : 111687003.265398
[111687.084755]   .se->vruntime                  : 1425588.269976
[111687.084760]   .se->sum_exec_runtime          : 52487.091255
[111687.084764]   .se->load.weight               : 2
[111687.084768]   .se->avg.runnable_avg_sum      : 9
[111687.084773]   .se->avg.runnable_avg_period   : 47151
[111687.084777]   .se->avg.load_avg_contrib      : 0
[111687.084781]   .se->avg.decay_count           : 106513027
[111687.084786] 
[111687.084786] cfs_rq[0]:/
[111687.084793]   .exec_clock                    : 0.000000
[111687.084798]   .MIN_vruntime                  : 1425582.269976
[111687.084803]   .min_vruntime                  : 1425588.269976
[111687.084808]   .max_vruntime                  : 1425582.269976
[111687.084812]   .spread                        : 0.000000
[111687.084817]   .spread0                       : 0.000000
[111687.084821]   .nr_spread_over                : 0
[111687.084825]   .nr_running                    : 2
[111687.084829]   .load                          : 2048
[111687.084833]   .runnable_load_avg             : 0
[111687.084837]   .blocked_load_avg              : 0
[111687.084841]   .tg_load_contrib               : 0
[111687.084845]   .tg_runnable_contrib           : 2
[111687.084849]   .tg_load_avg                   : 0
[111687.084854]   .tg->runnable_avg              : 10
[111687.084858]   .avg->runnable_avg_sum         : 122
[111687.084862]   .avg->runnable_avg_period      : 47403
[111687.084867] 
[111687.084867] rt_rq[0]:
[111687.084872]   .rt_nr_running                 : 0
[111687.084876]   .rt_throttled                  : 0
[111687.084881]   .rt_time                       : 0.000000
[111687.084886]   .rt_runtime                    : 950.000000
[111687.084893] 
[111687.084893] runnable tasks:
[111687.084893]             task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
[111687.084893] ----------------------------------------------------------------------------------------------------------
[111687.084903]             init     1      1296.862754     22719   120               0               0               0.000000               0.000000               0.000000 /autogroup-2
[111687.084919]      ksoftirqd/0     3   1425571.863737    192653   120               0               0               0.000000               0.000000               0.000000 /
[111687.084933]     kworker/0:0H     5       256.529404         3   100               0               0               0.000000               0.000000               0.000000 /
[111687.084946]        rcu_sched     7   1425582.319138    439623   120               0               0               0.000000               0.000000               0.000000 /
[111687.084959]      migration/0     9         0.000000     13652     0               0               0               0.000000               0.000000               0.000000 /
[111687.084971]        kdevtmpfs    15   1119962.560586       173   120               0               0               0.000000               0.000000               0.000000 /
[111687.084984]            netns    16       262.524108         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.084996]        writeback    17       262.529217         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.085009]       devfreq_wq    23       791.136003         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.085021]       khungtaskd    27   1425221.795289       932   120               0               0               0.000000               0.000000               0.000000 /
[111687.085034]        scsi_eh_0    48       903.719125        17   120               0               0               0.000000               0.000000               0.000000 /
[111687.085047]        scsi_eh_1    49       897.791182        18   120               0               0               0.000000               0.000000               0.000000 /
[111687.085059]        scsi_eh_2    50       897.785919        18   120               0               0               0.000000               0.000000               0.000000 /
[111687.085071]        scsi_eh_3    51       903.748406        20   120               0               0               0.000000               0.000000               0.000000 /
[111687.085084]           bioset    60       925.381221         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.085096]  btrfs-genwork-1   207   1425211.098083      1332   120               0               0               0.000000               0.000000               0.000000 /
[111687.085109]    btrfs-fixup-1   210   1425223.161511       984   120               0               0               0.000000               0.000000               0.000000 /
[111687.085122]    btrfs-cache-1   219   1425223.183779       998   120               0               0               0.000000               0.000000               0.000000 /
[111687.085134]  btrfs-flush_del   221   1425229.231609      5201   120               0               0               0.000000               0.000000               0.000000 /
[111687.085147]    btrfs-cleaner   223   1416058.997293     10250   120               0               0               0.000000               0.000000               0.000000 /
[111687.085159]            udevd   378       772.214379       569   120               0               0               0.000000               0.000000               0.000000 /autogroup-4
[111687.085173]      edac-poller   489      2677.694949         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.085186]          rpcbind  1690       167.304490      3822   120               0               0               0.000000               0.000000               0.000000 /autogroup-5
[111687.085199]        rpc.statd  1718        13.242367         9   120               0               0               0.000000               0.000000               0.000000 /autogroup-6
[111687.085212]           rpciod  1723      6264.804884         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.085225]         rpc.gssd  1739         0.089455         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-8
[111687.085239]            named  2076         3.006953         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
[111687.085252]            named  2082     10122.498241     70731   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
[111687.085266]            inetd  2155         0.911649         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-17
[111687.085279]          krb5kdc  2195        94.667089      1990   120               0               0               0.000000               0.000000               0.000000 /autogroup-19
[111687.085293]         postgres  2245      6949.516127      4045   120               0               0               0.000000               0.000000               0.000000 /autogroup-21
[111687.085307]         postgres  2247        33.163002       373   120               0               0               0.000000               0.000000               0.000000 /autogroup-23
[111687.085320]         postgres  2248       899.817081     22452   120               0               0               0.000000               0.000000               0.000000 /autogroup-24
[111687.085334]         postgres  2250      1800.081068     20777   120               0               0               0.000000               0.000000               0.000000 /autogroup-26
[111687.085347]          apache2  2303     39782.711062    111709   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085361]          apache2  2310       126.886684         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085374]          apache2  2311       126.867561         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085387]          apache2  2312       132.886257         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085400]          apache2  2313       132.875815         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085413]          apache2  2314       132.875693         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085426]          apache2  2315       132.894180         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085439]          apache2  2317       132.949154         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085452]          apache2  2319       132.997783         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085465]          apache2  2321       133.049228         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085478]          apache2  2323       133.093223         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085491]          apache2  2325       133.143419         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085504]          apache2  2327       133.191952         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085517]          apache2  2329       133.234689         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085530]          apache2  2331       133.283717         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085543]          apache2  2333       133.335861         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085556]          apache2  2335       133.375538         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085569]          apache2  2342       134.337740         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085582]          apache2  2344       133.875717         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085595]          apache2  2346       133.764188         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085608]          apache2  2348       133.741640         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085621]          apache2  2350       133.728917         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085634]          apache2  2352       133.531329         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085647]          apache2  2353       133.535273         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085660]          apache2  2355     39788.898281   1114738   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085673]          apache2  2307       122.849120        73   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085687]          apache2  2330       128.779507         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085701]          apache2  2343       133.840685         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085714]          apache2  2345       133.723956         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085727]          apache2  2347       133.744236         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085740]          apache2  2349       133.736222         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085753]          apache2  2351       133.534242         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085766]          apache2  2354       133.529426         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085779]          apache2  2356       133.529394         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085792]          apache2  2357       133.529660         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085805]          apache2  2358       133.529361         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085818]          apache2  2359       133.530160         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085831]          apache2  2360       133.533665         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.085845]          kadmind  2475        62.052283      1873   120               0               0               0.000000               0.000000               0.000000 /autogroup-35
[111687.085858]     avahi-daemon  2479         0.966223         4   120               0               0               0.000000               0.000000               0.000000 /autogroup-34
[111687.085872]            nfsd4  2585      9747.469163         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.085885]            lockd  2589      9753.498182         2   120               0               0               0.000000               0.000000               0.000000 /
[111687.085898]             nfsd  2600   1425278.450673        34   120               0               0               0.000000               0.000000               0.000000 /
[111687.085911]    apt-cacher-ng 23734     26021.510330     33616   120               0               0               0.000000               0.000000               0.000000 /autogroup-38
[111687.085925]    apt-cacher-ng 23735     26025.775064      9738   120               0               0               0.000000               0.000000               0.000000 /autogroup-38
[111687.085938]       rpc.mountd  2704         0.548005         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-40
[111687.085952]            getty  3054         0.343556       126   120               0               0               0.000000               0.000000               0.000000 /autogroup-45
[111687.085965]            getty  3056         0.736470        63   120               0               0               0.000000               0.000000               0.000000 /autogroup-47
[111687.085979]             sshd  4854        35.700992        30   120               0               0               0.000000               0.000000               0.000000 /autogroup-52
[111687.085992]             sshd  4862        95.412813      4304   120               0               0               0.000000               0.000000               0.000000 /autogroup-52
[111687.086005]             bash  4865       428.437126       156   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
[111687.086019]               su  4955       471.824541         9   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
[111687.086032]             bash  4964    335591.057713      2348   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
[111687.086046]             sshd  5006        32.152152        17   120               0               0               0.000000               0.000000               0.000000 /autogroup-54
[111687.086059]             sshd  5014       128.417749      4878   120               0               0               0.000000               0.000000               0.000000 /autogroup-54
[111687.086073]             bash  5017       446.458803       154   120               0               0               0.000000               0.000000               0.000000 /autogroup-55
[111687.086086]             sshd  5203        28.615789        45   120               0               0               0.000000               0.000000               0.000000 /autogroup-63
[111687.086100]             bash  5214      3151.392304       248   120               0               0               0.000000               0.000000               0.000000 /autogroup-64
[111687.086114]             sshd  5343        46.296097        29   120               0               0               0.000000               0.000000               0.000000 /autogroup-75
[111687.086127]             bash  5354     19044.348904      1235   120               0               0               0.000000               0.000000               0.000000 /autogroup-76
[111687.086140]             mutt  6708     10930.252771     12816   120               0               0               0.000000               0.000000               0.000000 /autogroup-64
[111687.086154]               su  6900       495.999647        19   120               0               0               0.000000               0.000000               0.000000 /autogroup-55
[111687.086167]             bash  6909      1318.068343       260   120               0               0               0.000000               0.000000               0.000000 /autogroup-55
[111687.086180]             tail  8466      1355.501545      1398   120               0               0               0.000000               0.000000               0.000000 /autogroup-55
[111687.086194]  btrfs-delayed-m  9692   1425576.521728      1196   120               0               0               0.000000               0.000000               0.000000 /
[111687.086207]             sshd 30170        42.607579        38   120               0               0               0.000000               0.000000               0.000000 /autogroup-1403
[111687.086221]             sshd 30179        50.158711       561   120               0               0               0.000000               0.000000               0.000000 /autogroup-1403
[111687.086235]   btrfs-worker-3  6128   1425229.655283      1770   120               0               0               0.000000               0.000000               0.000000 /
[111687.086249]         kdmflush  6734   1119974.621329         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.086261]           bioset  6735   1119978.612779         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.086273]       kcryptd_io  6736   1119982.601457         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.086286]          kcryptd  6737   1119986.595644         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.086298]           bioset  6738   1119990.583874         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.086310]   btrfs-worker-1  6745   1425211.090613      6117   120               0               0               0.000000               0.000000               0.000000 /
[111687.086323]  btrfs-delalloc-  6748   1425202.251492       104   120               0               0               0.000000               0.000000               0.000000 /
[111687.086336]    btrfs-fixup-1  6749   1425202.250784       101   120               0               0               0.000000               0.000000               0.000000 /
[111687.086348]    btrfs-endio-1  6750   1425202.251809       100   120               0               0               0.000000               0.000000               0.000000 /
[111687.086361]  btrfs-freespace  6756   1425211.089326       376   120               0               0               0.000000               0.000000               0.000000 /
[111687.086374]  btrfs-delayed-m  6757   1425233.871251      2267   120               0               0               0.000000               0.000000               0.000000 /
[111687.086386]    btrfs-cache-1  6758   1425202.251344       103   120               0               0               0.000000               0.000000               0.000000 /
[111687.086399]  btrfs-readahead  6759   1425202.297607        83   120               0               0               0.000000               0.000000               0.000000 /
[111687.086411]  btrfs-flush_del  6760   1425221.029890       135   120               0               0               0.000000               0.000000               0.000000 /
[111687.086423]  btrfs-qgroup-re  6761   1425202.267649       104   120               0               0               0.000000               0.000000               0.000000 /
[111687.086436]          python3  6779      1996.172122        42   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.086451]              ssh  6785    118603.802447    504921   120               0               0               0.000000               0.000000               0.000000 /autogroup-1876
[111687.086465]          python3  6787      2218.316569        40   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.086479]     kworker/u9:0  6846   1425573.308146    334027   100               0               0               0.000000               0.000000               0.000000 /
[111687.086492]     kworker/u8:1  6896   1425581.961121      3810   120               0               0               0.000000               0.000000               0.000000 /
[111687.086504]     kworker/u9:1  6899   1420811.378253    172157   100               0               0               0.000000               0.000000               0.000000 /
[111687.086516]      kworker/0:0  6901   1425582.269976     26415   120               0               0               0.000000               0.000000               0.000000 /
[111687.086529]     kworker/u8:0  6999   1424795.813917      2797   120               0               0               0.000000               0.000000               0.000000 /
[111687.086542]      kworker/0:2  7008   1423001.844114       426   120               0               0               0.000000               0.000000               0.000000 /
[111687.086554]               sh  8012     10932.036968         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-64
[111687.086568]     mutt_dotlock  8013     10933.323866         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-64
[111687.086581]             cron  8014       896.991244         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-33
[111687.086594]               su  8018     19094.386205        20   120               0               0               0.000000               0.000000               0.000000 /autogroup-76
[111687.086608] R           bash  8029     19306.798558       144   120               0               0               0.000000               0.000000               0.000000 /autogroup-76
[111687.086620]             cron  8071       903.431321         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-33
[111687.086633] 
[111687.086638] cpu#1, 1297.893 MHz
[111687.086643]   .nr_running                    : 0
[111687.086647]   .load                          : 0
[111687.086651]   .nr_switches                   : 18360584
[111687.086656]   .nr_load_updates               : 2784053
[111687.086660]   .nr_uninterruptible            : -23563
[111687.086665]   .next_balance                  : 4306.105561
[111687.086669]   .curr->pid                     : 0
[111687.086674]   .clock                         : 111687082.018466
[111687.086678]   .cpu_load[0]                   : 0
[111687.086682]   .cpu_load[1]                   : 0
[111687.086686]   .cpu_load[2]                   : 0
[111687.086690]   .cpu_load[3]                   : 0
[111687.086694]   .cpu_load[4]                   : 0
[111687.086700] 
[111687.086700] cfs_rq[1]:/autogroup-75
[111687.086706]   .exec_clock                    : 0.000000
[111687.086711]   .MIN_vruntime                  : 0.000001
[111687.086716]   .min_vruntime                  : 315.671560
[111687.086721]   .max_vruntime                  : 0.000001
[111687.086725]   .spread                        : 0.000000
[111687.086730]   .spread0                       : -1425272.598416
[111687.086734]   .nr_spread_over                : 0
[111687.086738]   .nr_running                    : 0
[111687.086742]   .load                          : 0
[111687.086747]   .runnable_load_avg             : 0
[111687.086751]   .blocked_load_avg              : 0
[111687.086755]   .tg_load_contrib               : 0
[111687.086759]   .tg_runnable_contrib           : 0
[111687.086763]   .tg_load_avg                   : 0
[111687.086767]   .tg->runnable_avg              : 0
[111687.086773]   .se->exec_start                : 111687081.995141
[111687.086777]   .se->vruntime                  : 1470736.343594
[111687.086782]   .se->sum_exec_runtime          : 305.236937
[111687.086787]   .se->load.weight               : 2
[111687.086791]   .se->avg.runnable_avg_sum      : 125
[111687.086795]   .se->avg.runnable_avg_period   : 47935
[111687.086800]   .se->avg.load_avg_contrib      : 0
[111687.086804]   .se->avg.decay_count           : 106513102
[111687.086809] 
[111687.086809] cfs_rq[1]:/autogroup-1870
[111687.086816]   .exec_clock                    : 0.000000
[111687.086821]   .MIN_vruntime                  : 0.000001
[111687.086826]   .min_vruntime                  : 1582.891691
[111687.086830]   .max_vruntime                  : 0.000001
[111687.086835]   .spread                        : 0.000000
[111687.086840]   .spread0                       : -1424005.378285
[111687.086844]   .nr_spread_over                : 0
[111687.086848]   .nr_running                    : 0
[111687.086853]   .load                          : 0
[111687.086857]   .runnable_load_avg             : 0
[111687.086861]   .blocked_load_avg              : 0
[111687.086865]   .tg_load_contrib               : 0
[111687.086869]   .tg_runnable_contrib           : 4
[111687.086873]   .tg_load_avg                   : 0
[111687.086877]   .tg->runnable_avg              : 4
[111687.086883]   .se->exec_start                : 111687076.809681
[111687.086888]   .se->vruntime                  : 1470741.000918
[111687.086893]   .se->sum_exec_runtime          : 1578.185103
[111687.086897]   .se->load.weight               : 2
[111687.086901]   .se->avg.runnable_avg_sum      : 195
[111687.086906]   .se->avg.runnable_avg_period   : 47493
[111687.086910]   .se->avg.load_avg_contrib      : 0
[111687.086914]   .se->avg.decay_count           : 106513097
[111687.086919] 
[111687.086919] cfs_rq[1]:/autogroup-32
[111687.086926]   .exec_clock                    : 0.000000
[111687.086930]   .MIN_vruntime                  : 0.000001
[111687.086935]   .min_vruntime                  : 49491.793832
[111687.086940]   .max_vruntime                  : 0.000001
[111687.086944]   .spread                        : 0.000000
[111687.086949]   .spread0                       : -1376096.476144
[111687.086954]   .nr_spread_over                : 0
[111687.086958]   .nr_running                    : 0
[111687.086962]   .load                          : 0
[111687.086966]   .runnable_load_avg             : 0
[111687.086970]   .blocked_load_avg              : 0
[111687.086974]   .tg_load_contrib               : 0
[111687.086978]   .tg_runnable_contrib           : 0
[111687.086982]   .tg_load_avg                   : 0
[111687.086986]   .tg->runnable_avg              : 0
[111687.086991]   .se->exec_start                : 111687003.237296
[111687.086996]   .se->vruntime                  : 1470739.572224
[111687.087001]   .se->sum_exec_runtime          : 54762.123868
[111687.087005]   .se->load.weight               : 2
[111687.087010]   .se->avg.runnable_avg_sum      : 10
[111687.087014]   .se->avg.runnable_avg_period   : 47524
[111687.087018]   .se->avg.load_avg_contrib      : 0
[111687.087023]   .se->avg.decay_count           : 106513027
[111687.087027] 
[111687.087027] cfs_rq[1]:/
[111687.087033]   .exec_clock                    : 0.000000
[111687.087038]   .MIN_vruntime                  : 0.000001
[111687.087043]   .min_vruntime                  : 1470741.000918
[111687.087048]   .max_vruntime                  : 0.000001
[111687.087052]   .spread                        : 0.000000
[111687.087057]   .spread0                       : 45152.730942
[111687.087061]   .nr_spread_over                : 0
[111687.087065]   .nr_running                    : 0
[111687.087069]   .load                          : 0
[111687.087074]   .runnable_load_avg             : 0
[111687.087078]   .blocked_load_avg              : 0
[111687.087082]   .tg_load_contrib               : 0
[111687.087086]   .tg_runnable_contrib           : 8
[111687.087090]   .tg_load_avg                   : 0
[111687.087094]   .tg->runnable_avg              : 10
[111687.087098]   .avg->runnable_avg_sum         : 401
[111687.087103]   .avg->runnable_avg_period      : 47511
[111687.087107] 
[111687.087107] rt_rq[1]:
[111687.087112]   .rt_nr_running                 : 0
[111687.087116]   .rt_throttled                  : 0
[111687.087121]   .rt_time                       : 0.000000
[111687.087126]   .rt_runtime                    : 950.000000
[111687.087132] 
[111687.087132] runnable tasks:
[111687.087132]             task   PID         tree-key  switches  prio     exec-runtime         sum-exec        sum-sleep
[111687.087132] ----------------------------------------------------------------------------------------------------------
[111687.087143]         kthreadd     2   1463123.043387       924   120               0               0               0.000000               0.000000               0.000000 /
[111687.087157]           rcu_bh     8   1468759.098070        23   120               0               0               0.000000               0.000000               0.000000 /
[111687.087170]      migration/1    10         0.000000     13726     0               0               0               0.000000               0.000000               0.000000 /
[111687.087182]      ksoftirqd/1    11   1470588.170686    334191   120               0               0               0.000000               0.000000               0.000000 /
[111687.087194]     kworker/1:0H    13         3.434556         4   100               0               0               0.000000               0.000000               0.000000 /
[111687.087206]          khelper    14         4.969285         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087219]      kintegrityd    18        15.480282         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087231]           bioset    19        21.491961         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087243]          kblockd    21        27.504069         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087256]            khubd    22      1199.047301       113   120               0               0               0.000000               0.000000               0.000000 /
[111687.087269]          kswapd0    28   1467720.143251    227607   120               0               0               0.000000               0.000000               0.000000 /
[111687.087281]             ksmd    29       174.485143         2   125               0               0               0.000000               0.000000               0.000000 /
[111687.087293]    fsnotify_mark    30   1173026.623136        31   120               0               0               0.000000               0.000000               0.000000 /
[111687.087305]           crypto    31       180.939545         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087317]         pencrypt    38       211.314784         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087330]         pdecrypt    39       217.333457         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087342]          deferwq    61       306.808896         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087355]   btrfs-submit-1   208   1470526.737257     21369   120               0               0               0.000000               0.000000               0.000000 /
[111687.087367]  btrfs-delalloc-   209   1470520.960880      1002   120               0               0               0.000000               0.000000               0.000000 /
[111687.087380]  btrfs-endio-met   212   1470724.984741      9820   120               0               0               0.000000               0.000000               0.000000 /
[111687.087392]      btrfs-rmw-1   213   1470520.969024       980   120               0               0               0.000000               0.000000               0.000000 /
[111687.087405]  btrfs-endio-rai   214   1470521.014866      1009   120               0               0               0.000000               0.000000               0.000000 /
[111687.087417]  btrfs-endio-met   215   1470520.960882      1009   120               0               0               0.000000               0.000000               0.000000 /
[111687.087430]  btrfs-freespace   217   1470509.836385      9003   120               0               0               0.000000               0.000000               0.000000 /
[111687.087443]  btrfs-readahead   220   1470520.963549      1003   120               0               0               0.000000               0.000000               0.000000 /
[111687.087456]  btrfs-qgroup-re   222   1470520.966462      1004   120               0               0               0.000000               0.000000               0.000000 /
[111687.087469]  btrfs-transacti   224   1465040.022545     90564   120               0               0               0.000000               0.000000               0.000000 /
[111687.087482]        scsi_eh_4   493      2045.920555         2   120               0               0               0.000000               0.000000               0.000000 /
[111687.087495]      usb-storage   494   1468310.710565   1031798   120               0               0               0.000000               0.000000               0.000000 /
[111687.087508]  kvm-irqfd-clean   512      2275.124722         3   100               0               0               0.000000               0.000000               0.000000 /
[111687.087521]           nfsiod  1727      6433.121424         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087533]       rpc.idmapd  1735         0.422149         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-7
[111687.087547]        syslog-ng  1994         6.062742         3   120               0               0               0.000000               0.000000               0.000000 /autogroup-11
[111687.087561]        syslog-ng  1995       591.710073      5909   120               0               0               0.000000               0.000000               0.000000 /autogroup-12
[111687.087575]            acpid  2029         0.117387         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-13
[111687.087588]              atd  2068         1.152685        32   120               0               0               0.000000               0.000000               0.000000 /autogroup-14
[111687.087602]            named  2079      8012.064665     52647   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
[111687.087616]            named  2080      8012.319426     52640   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
[111687.087629]            named  2081      8005.605658     11854   120               0               0               0.000000               0.000000               0.000000 /autogroup-15
[111687.087643]        mosquitto  2093     20898.207506   1079108   120               0               0               0.000000               0.000000               0.000000 /autogroup-16
[111687.087656]             ntpd  2176      4479.373308    115940   120               0               0               0.000000               0.000000               0.000000 /autogroup-18
[111687.087670]      dbus-daemon  2213         0.858032        12   120               0               0               0.000000               0.000000               0.000000 /autogroup-20
[111687.087684]         postgres  2249       209.177551     22353   120               0               0               0.000000               0.000000               0.000000 /autogroup-27
[111687.087699]         postgres  2251      3223.357619      7869   120               0               0               0.000000               0.000000               0.000000 /autogroup-25
[111687.087712]          apache2  2306       110.314012        70   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087728]          apache2  2337       123.167267         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087741]          apache2  2339       117.397234         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087755]          apache2  2316       118.725957         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087769]          apache2  2318       117.447615         6   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087783]          apache2  2320       118.677570         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087796]          apache2  2322       118.586077         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087809]          apache2  2324       118.595195         3   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087822]          apache2  2326       118.488200         3   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087835]          apache2  2328       114.598501         3   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087849]          apache2  2332       118.723989         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087862]          apache2  2334       114.711802         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087876]          apache2  2336       114.651316         2   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087889]          apache2  2338       118.719173         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087902]          apache2  2340       118.749580         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087915]          apache2  2341       118.796725         1   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087929]          apache2  2361     49491.793832   1114708   120               0               0               0.000000               0.000000               0.000000 /autogroup-32
[111687.087943]             cron  2422      1100.383926      1929   120               0               0               0.000000               0.000000               0.000000 /autogroup-33
[111687.087956]     avahi-daemon  2478      1112.025290     11177   120               0               0               0.000000               0.000000               0.000000 /autogroup-34
[111687.087970]             sshd  2485         2.488331        29   120               0               0               0.000000               0.000000               0.000000 /autogroup-36
[111687.087984]  nfsd4_callbacks  2586      9812.495450         2   100               0               0               0.000000               0.000000               0.000000 /
[111687.087996]             nfsd  2596   1470548.108591        34   120               0               0               0.000000               0.000000               0.000000 /
[111687.088009]             nfsd  2597   1470548.108453        34   120               0               0               0.000000               0.000000               0.000000 /
[111687.088022]             nfsd  2598   1470548.108786        34   120               0               0               0.000000               0.000000               0.000000 /
[111687.088034]             nfsd  2599   1470548.108658        34   120               0               0               0.000000               0.000000               0.000000 /
[111687.088047]             nfsd  2601   1470548.128113        34   120               0               0               0.000000               0.000000               0.000000 /
[111687.088060]             nfsd  2602   1470548.110195        34   120               0               0               0.000000               0.000000               0.000000 /
[111687.088072]             nfsd  2603   1470548.108369        34   120               0               0               0.000000               0.000000               0.000000 /
[111687.088085]      rpc.svcgssd  2640        12.397936         7   120               0               0               0.000000               0.000000               0.000000 /autogroup-37
[111687.088099]    apt-cacher-ng  2660     16498.787015        16   120               0               0               0.000000               0.000000               0.000000 /autogroup-38
[111687.088113]            exim4  2936       868.310810       127   120               0               0               0.000000               0.000000               0.000000 /autogroup-41
[111687.088128]            getty  3051         0.826585       124   120               0               0               0.000000               0.000000               0.000000 /autogroup-42
[111687.088142]            getty  3052         0.818461       129   120               0               0               0.000000               0.000000               0.000000 /autogroup-43
[111687.088156]            getty  3053         2.062473       127   120               0               0               0.000000               0.000000               0.000000 /autogroup-44
[111687.088170]            getty  3055         1.900641       128   120               0               0               0.000000               0.000000               0.000000 /autogroup-46
[111687.088185]             sudo  4947       398.020265        51   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
[111687.088200]             sshd  5211       663.823815      5219   120               0               0               0.000000               0.000000               0.000000 /autogroup-63
[111687.088214]             sshd  5351       315.671560      2178   120               0               0               0.000000               0.000000               0.000000 /autogroup-75
[111687.088228]  btrfs-endio-wri 23813   1465101.293150      2185   120               0               0               0.000000               0.000000               0.000000 /
[111687.088241]             bash 30182       540.952181       103   120               0               0               0.000000               0.000000               0.000000 /autogroup-1404
[111687.088255]             tmux  6361    452234.691994         7   120               0               0               0.000000               0.000000               0.000000 /autogroup-53
[111687.088269]             tmux  6364      1582.891691     21089   120               0               0               0.000000               0.000000               0.000000 /autogroup-1870
[111687.088283]             bash  6365      1486.629505       383   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.088296]    carfax-backup  6718     52321.615966    280808   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.088311]  btrfs-genwork-1  6746   1470548.131011        93   120               0               0               0.000000               0.000000               0.000000 /
[111687.088323]   btrfs-submit-1  6747   1470509.890602     35473   120               0               0               0.000000               0.000000               0.000000 /
[111687.088337]  btrfs-endio-met  6751   1470506.315610       489   120               0               0               0.000000               0.000000               0.000000 /
[111687.088349]      btrfs-rmw-1  6752   1470501.885718        79   120               0               0               0.000000               0.000000               0.000000 /
[111687.088362]  btrfs-endio-rai  6753   1470501.932541       101   120               0               0               0.000000               0.000000               0.000000 /
[111687.088375]  btrfs-endio-met  6754   1470501.912432       103   120               0               0               0.000000               0.000000               0.000000 /
[111687.088389]    btrfs-cleaner  6762   1470729.918748       335   120               0               0               0.000000               0.000000               0.000000 /
[111687.088402]  btrfs-transacti  6763   1470729.950093      4662   120               0               0               0.000000               0.000000               0.000000 /
[111687.088415]          sshpass  6764      1729.050543         6   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.088429]              ssh  6765    122063.699027    465386   120               0               0               0.000000               0.000000               0.000000 /autogroup-1874
[111687.088443]            btrfs  6780     52324.281534    984848   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.088456]            btrfs  6781     52321.129711    132173   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.088470]          sshpass  6783      2218.346496         5   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.088484]            btrfs  6788     52327.745643    226230   120               0               0               0.000000               0.000000               0.000000 /autogroup-1871
[111687.088498]    btrfs-endio-2  6869   1470724.752666    583773   120               0               0               0.000000               0.000000               0.000000 /
[111687.088511]      kworker/1:1  6889   1470735.069746     30405   120               0               0               0.000000               0.000000               0.000000 /
[111687.088525]  btrfs-endio-wri  6910   1470728.870267       560   120               0               0               0.000000               0.000000               0.000000 /
[111687.088537]     kworker/u8:2  6983   1468168.563703       376   120               0               0               0.000000               0.000000               0.000000 /
[111687.088550]      kworker/1:2  7001   1468066.800362        64   120               0               0               0.000000               0.000000               0.000000 /
[111687.088563]         postgres  8011        95.710109      1002   120               0               0               0.000000               0.000000               0.000000 /autogroup-1937
[111687.088577] 


-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- My doctor tells me that I have a malformed public-duty gland, ---  
                and a natural deficiency in moral fibre.                 

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots
  2014-03-10 15:50       ` Josef Bacik
@ 2014-03-11  1:23         ` Wang Shilong
  2014-03-08 21:53           ` send/receive locking Hugo Mills
  2014-03-14 13:36           ` [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots Wang Shilong
  0 siblings, 2 replies; 19+ messages in thread
From: Wang Shilong @ 2014-03-11  1:23 UTC (permalink / raw)
  To: Josef Bacik; +Cc: Shilong Wang, linux-btrfs

On 03/10/2014 11:50 PM, Josef Bacik wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 03/10/2014 08:12 AM, Shilong Wang wrote:
>> Hi Josef,
>>
>> Since I haven't thought of any better idea for rebuilding an extent
>> tree that contains extents carrying the 'FULL BACKREF' flag:
>>
>> Considering that an extent's ref count can be 1 or more when the
>> extent has the *FULL BACKREF* flag, we can no longer determine an
>> extent's flags by searching only the fs/file trees.
>>
>> So for now, I just disable this option if snapshots exist;
>> please correct me if I'm missing something here, or if you have any
>> better idea to solve this problem. ~_~
>>
>>
> I thought the fsck stuff rebuilds full backref refs properly, does it
> not?  If it doesn't we need to fix that, however I'm fine with
> disabling the option if snapshots exist for the time being.  Thanks,
If there are no snapshots, --init-extent-tree works as expected.
I just haven't thought of a better way to rebuild the extent tree if we do
have snapshots, which means we may have extents with the *FULL BACKREF*
flag.

Thanks,
Wang
>
> Josef
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iQIcBAEBAgAGBQJTHd9NAAoJEANb+wAKly3BYCYP/0iTaaa7w0SnfXtgjoVyX+nT
> +e0Pa46zeKzpTujotCDb9E/2PBesCAvA4Psog3rkfsqJ2nXN9cERN4E6/JG4nAHh
> Hv4KPo+w+tCkC4U2wSoDivYrVk9G5SH25ewkgW6iheSYNIlm+PLbOQz9DzGjCFDp
> 51J9tG5E010siOyhlLCyGj8ZTj+gXuoQVWKCS8dOpCLMrbYYjMDXa562hqWaLoS/
> t3eSfP7Tnnpl43NiMZI4fWrzmlFa5lba5iJmG59FeyiseRH4Zrhee4St1L1xDL5A
> /6f3tJJT7DJjRRJFv0nJAOvOPyFaK8bMaYmOQJg6VrhcyPKM3BxBVEab3HrmQ7jt
> LCMWobpIcM7e5BugmbTGGsFymhv05SQgvYGzpzRVXdsSzqubuqTcXwloNU5RyyFF
> sXT9IiW9wAibHe7mDN7V6nfo1bVfHsjvSVi1rqz4/zFOWyh8oqxfEhxUJIWhfFsn
> j0WJevvqKnjBJujyyuQpL13tzh69qei0AHOEme3R46BSRMnyuacy/WOeyo4VXPcn
> 0GIeWbngAIWF/quhoQGkvofRMlPgftiDge8uz9pbm3IEKeiP9dQ/HvKsIHMKjnKW
> 3dEBvMV/CSUQNek4VjO1ALefTRZQvJVL8Wxdij4W+djJw/uVX7fOhuqdkqyfM3FY
> CKSB3HUSUtDCammsvgQA
> =OT98
> -----END PGP SIGNATURE-----
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks)
@ 2014-03-12 15:18 Marc MERLIN
  2014-03-14  1:48 ` Marc MERLIN
  2014-03-14 14:42 ` Josef Bacik
  0 siblings, 2 replies; 19+ messages in thread
From: Marc MERLIN @ 2014-03-12 15:18 UTC (permalink / raw)
  To: linux-btrfs

I have a file server with 4 CPU cores and 5 btrfs filesystems:
Label: btrfs_boot  uuid: e4c1daa8-9c39-4a59-b0a9-86297d397f3b
        Total devices 1 FS bytes used 48.92GiB
        devid    1 size 79.93GiB used 73.04GiB path /dev/mapper/cryptroot

Label: varlocalspace  uuid: 9f46dbe2-1344-44c3-b0fb-af2888c34f18
        Total devices 1 FS bytes used 1.10TiB
        devid    1 size 1.63TiB used 1.50TiB path /dev/mapper/cryptraid0

Label: btrfs_pool1  uuid: 6358304a-2234-4243-b02d-4944c9af47d7
        Total devices 1 FS bytes used 7.16TiB
        devid    1 size 14.55TiB used 7.50TiB path /dev/mapper/dshelf1

Label: btrfs_pool2  uuid: cb9df6d3-a528-4afc-9a45-4fed5ec358d6
        Total devices 1 FS bytes used 3.34TiB
        devid    1 size 7.28TiB used 3.42TiB path /dev/mapper/dshelf2

Label: bigbackup  uuid: 024ba4d0-dacb-438d-9f1b-eeb34083fe49
        Total devices 5 FS bytes used 6.02TiB
        devid    1 size 1.82TiB used 1.43TiB path /dev/dm-9
        devid    2 size 1.82TiB used 1.43TiB path /dev/dm-6
        devid    3 size 1.82TiB used 1.43TiB path /dev/dm-5
        devid    4 size 1.82TiB used 1.43TiB path /dev/dm-7
        devid    5 size 1.82TiB used 1.43TiB path /dev/dm-8


I have a very long-running btrfs send/receive from btrfs_pool1 to bigbackup
(long-running meaning that it has been slowly copying for over 5 days).

The problem is that this is blocking IO to btrfs_pool2, which is on
totally different drives.
By blocking IO I mean that IO to pool2 sort of works sometimes, and
hangs for very long stretches at other times.

It looks as if one rsync to btrfs_pool2, or one piece of IO, hangs on a
shared lock, and once that happens all IO to btrfs_pool2 stops for a long
time. It does recover eventually without a reboot, but the wait times are
ridiculous (it could be 1h or more).

As I write this, I have a killall -9 rsync that waited for over 10 min
before these processes finally died:
23555       07:36 wait_current_trans.isra.15     rsync -av -SH --delete (...)
23556       07:36 exit                           [rsync] <defunct>
25387  2-04:41:22 wait_current_trans.isra.15     rsync --password-file  (...)
27481       31:26 wait_current_trans.isra.15     rsync --password-file  (...)
29268    04:41:34 wait_current_trans.isra.15     rsync --password-file  (...)
29343    04:41:31 exit                           [rsync] <defunct>
29492    04:41:27 wait_current_trans.isra.15     rsync --password-file  (...)

14559    07:14:49 wait_current_trans.isra.15     cp -i -al current 20140312-feisty

This is all stuck in btrfs kernel code.
If someone wants the sysrq-w output, here it is:
http://marc.merlins.org/tmp/btrfs_full.txt
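The stuck-process listing above (pid, elapsed time, wait channel, command) can be reproduced without sysrq; a sketch, assuming a Linux box with procps installed:

```shell
# Show pid, elapsed time, kernel wait channel, state, and command for
# every task in uninterruptible sleep (D state), plus the header line.
# Tasks with wchan=wait_current_trans* are blocked waiting for the
# current btrfs transaction to finish.
ps -eo pid,etime,wchan:30,state,cmd | awk 'NR == 1 || $4 == "D"'
```

This is the same information the listing above shows, just pulled live instead of from a saved log.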

A quick summary:
SysRq : Show Blocked State
  task                        PC stack   pid father
btrfs-cleaner   D ffff8802126b0840     0  3332      2 0x00000000
 ffff8800c5dc9d00 0000000000000046 ffff8800c5dc9fd8 ffff8800c69f6310
 00000000000141c0 ffff8800c69f6310 ffff88017574c170 ffff880211e671e8
 0000000000000000 ffff880211e67000 ffff8801e5936e20 ffff8800c5dc9d10
Call Trace:
 [<ffffffff8160b0d9>] schedule+0x73/0x75
 [<ffffffff8122a3c7>] wait_current_trans.isra.15+0x98/0xf4
 [<ffffffff81085062>] ? finish_wait+0x65/0x65
 [<ffffffff8122b86c>] start_transaction+0x48e/0x4f2
 [<ffffffff8122bc4f>] ? __btrfs_end_transaction+0x2a1/0x2c6
 [<ffffffff8122b8eb>] btrfs_start_transaction+0x1b/0x1d
 [<ffffffff8121c5cd>] btrfs_drop_snapshot+0x443/0x610
 [<ffffffff8160d7b3>] ? _raw_spin_unlock+0x17/0x2a
 [<ffffffff81074efb>] ? finish_task_switch+0x51/0xdb
 [<ffffffff8160afbf>] ? __schedule+0x537/0x5de
 [<ffffffff8122c08d>] btrfs_clean_one_deleted_snapshot+0x103/0x10f
 [<ffffffff81224859>] cleaner_kthread+0x103/0x136
 [<ffffffff81224756>] ? btrfs_alloc_root+0x26/0x26
 [<ffffffff8106bc1b>] kthread+0xae/0xb6
 [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
 [<ffffffff816141bc>] ret_from_fork+0x7c/0xb0
 [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
btrfs-transacti D ffff88021387eb00     0  3333      2 0x00000000
 ffff8800c5dcb890 0000000000000046 ffff8800c5dcbfd8 ffff88021387e5d0
 00000000000141c0 ffff88021387e5d0 ffff88021f2141c0 ffff88021387e5d0
 ffff8800c5dcb930 ffffffff810fe574 0000000000000002 ffff8800c5dcb8a0
Call Trace:
 [<ffffffff810fe574>] ? wait_on_page_read+0x3c/0x3c
 [<ffffffff8160b0d9>] schedule+0x73/0x75
 [<ffffffff8160b27e>] io_schedule+0x60/0x7a
 [<ffffffff810fe582>] sleep_on_page+0xe/0x12
 [<ffffffff8160b510>] __wait_on_bit+0x48/0x7a
 [<ffffffff810fe522>] wait_on_page_bit+0x7a/0x7c
 [<ffffffff81085096>] ? autoremove_wake_function+0x34/0x34
 [<ffffffff81245c70>] read_extent_buffer_pages+0x1bf/0x204
 [<ffffffff81223710>] ? free_root_pointers+0x5b/0x5b
 [<ffffffff81224412>] btree_read_extent_buffer_pages.constprop.45+0x66/0x100
 [<ffffffff81225367>] read_tree_block+0x2f/0x47
 [<ffffffff8120e4b6>] read_block_for_search.isra.26+0x24a/0x287
 [<ffffffff8120fcf7>] btrfs_search_slot+0x4f4/0x6bb
 [<ffffffff81214c3d>] lookup_inline_extent_backref+0xda/0x3fb
 [<ffffffff812167e1>] __btrfs_free_extent+0xf4/0x712
 [<ffffffff8121ba57>] __btrfs_run_delayed_refs+0x939/0xbdf
 [<ffffffff8121d896>] btrfs_run_delayed_refs+0x81/0x18f
 [<ffffffff8122af3e>] btrfs_commit_transaction+0x3a9/0x849
 [<ffffffff81085062>] ? finish_wait+0x65/0x65
 [<ffffffff81227598>] transaction_kthread+0xf8/0x1ab
 [<ffffffff812274a0>] ? btrfs_cleanup_transaction+0x43f/0x43f
 [<ffffffff8106bc1b>] kthread+0xae/0xb6
 [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
 [<ffffffff816141bc>] ret_from_fork+0x7c/0xb0
 [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61


Worse, taking that dump gave me:
gargamel:/etc/udev/rules.d# echo w > /proc/sysrq-trigger 
Message from syslogd@gargamel at Mar 12 07:13:16 ...
 kernel:[1234536.531251] BUG: soft lockup - CPU#1 stuck for 22s! [mysqld:12552]
Message from syslogd@gargamel at Mar 12 07:13:16 ...
 kernel:[1234536.559276] BUG: soft lockup - CPU#2 stuck for 22s! [mysqld:4955]
Message from syslogd@gargamel at Mar 12 07:13:16 ...
 kernel:[1234540.538636] BUG: soft lockup - CPU#0 stuck for 22s! [kswapd0:48]
(there are more in the logs attached on what those lockups are)

Thankfully my btrfs send/receive is still working and should finish
today, but the amount of time it's been taking has been painful, and
the effect it's been having on the rest of my system, making it hang
or rendering its devices unusable for long periods of time, has been
punishing.

Can someone figure out from the kernel logs what is causing those near deadlocks?

Actually, this was so bad that the sysrq-w output didn't even all make it
to syslog/disk (also on btrfs), but thankfully I captured it on the serial
console.
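One way to make sure a sysrq-w dump survives even when syslog itself is stuck on the affected filesystem is to send it straight to the console and ring buffer; a sketch (requires root, and assumes a serial or other console is configured):

```shell
# Raise the console loglevel so the blocked-task dump goes straight to
# the console and the kernel ring buffer, without depending on syslog
# being able to write to a possibly-stuck btrfs filesystem.
dmesg -n 8                       # console_loglevel = 8: print everything
echo w > /proc/sysrq-trigger     # dump all blocked (D state) tasks
dmesg | tail -n 200              # read the dump back from the ring buffer
```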


I also found this during sysrq. Should it be reported to someone else?
INFO: rcu_preempt detected stalls on CPUs/tasks:
	3: (1 GPs behind) idle=395/140000000000000/0 softirq=284540927/284540928 last_accelerate: ed62/2821, nonlazy_posted: 1, ..
	(detected by 0, t=15002 jiffies, g=100566635, c=100566634, q=87438)
sending NMI to all CPUs:
NMI backtrace for cpu 3
CPU: 3 PID: 21730 Comm: bash Not tainted 3.14.0-rc3-amd64-i915-preempt-20140216 #2
Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 3806 08/20/2012
task: ffff88001cf3a710 ti: ffff880037f78000 task.ti: ffff880037f78000
RIP: 0010:[<ffffffff81309a80>]  [<ffffffff81309a80>] paravirt_read_tsc+0x0/0xd
RSP: 0018:ffff880037f79ac0  EFLAGS: 00000046
RAX: 0000000000000003 RBX: 0000000094b043ff RCX: 0000000000000000
RDX: 0000000000000004 RSI: 00000000000003fd RDI: 0000000000000001
RBP: ffff880037f79ae8 R08: ffffffff81cf24d0 R09: 00000000fffffffe
R10: 0000000000001a18 R11: 0000000000000000 R12: 00000000000009fb
R13: 0000000000000003 R14: 0000000094b047d7 R15: 0000000000000036
FS:  0000000000000000(0000) GS:ffff88021f380000(0063) knlGS:00000000f754b6c0
CS:  0010 DS: 002b ES: 002b CR0: 0000000080050033
CR2: 00000000ffc13412 CR3: 0000000145622000 CR4: 00000000000407e0
Stack:
 ffffffff81309b59 ffffffff81f27560 00000000000026f0 0000000000000020
 ffffffff813c9e1b ffff880037f79af8 ffffffff81309ac9 ffff880037f79b08
 ffffffff81309aef ffff880037f79b30 ffffffff813c9cc6 ffffffff81f27560
Call Trace:
 [<ffffffff81309b59>] ? delay_tsc+0x3d/0xa4
 [<ffffffff813c9e1b>] ? serial8250_console_write+0x10d/0x10d
 [<ffffffff81309ac9>] __delay+0xf/0x11
 [<ffffffff81309aef>] __const_udelay+0x24/0x26
 [<ffffffff813c9cc6>] wait_for_xmitr+0x49/0x91
 [<ffffffff813c9e37>] serial8250_console_putchar+0x1c/0x2e
 [<ffffffff813c5d97>] uart_console_write+0x3f/0x54
 [<ffffffff813c9dc8>] serial8250_console_write+0xba/0x10d
 [<ffffffff8109363b>] call_console_drivers.constprop.6+0xbc/0xf0
 [<ffffffff81093bf7>] console_unlock+0x269/0x302
 [<ffffffff8109405e>] vprintk_emit+0x3ce/0x3ff
 [<ffffffff81604702>] printk+0x54/0x56
 [<ffffffff81089799>] ? arch_local_irq_save+0x15/0x1b
 [<ffffffff8108752e>] print_cfs_rq+0x4fc/0xd71
 [<ffffffff81080fff>] print_cfs_stats+0x5a/0x9e
 [<ffffffff81086c65>] print_cpu+0x538/0x8e3
 [<ffffffff81087f7e>] sysrq_sched_debug_show+0x1f/0x42
 [<ffffffff81078874>] show_state_filter+0x92/0x9f
 [<ffffffff813b7c7a>] sysrq_handle_showstate_blocked+0x13/0x15
 [<ffffffff813b82c3>] __handle_sysrq+0xa0/0x138
 [<ffffffff813b8630>] write_sysrq_trigger+0x28/0x37
 [<ffffffff811a565a>] proc_reg_write+0x5a/0x7c
 [<ffffffff81155417>] vfs_write+0xab/0x107
 [<ffffffff81155b19>] SyS_write+0x46/0x79
 [<ffffffff81615f6c>] sysenter_dispatch+0x7/0x21
Code: 89 e5 e8 a2 fe ff ff 89 c2 66 31 c0 c1 e2 10 01 d0 15 ff ff 00 00 f7 d0 c1 e8 10 5d c3 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <55> 48 89 e5 e8 9c c4 d0 ff 66 90 5d c3 66 66 66 66 90 55 48 89 


Thanks,
Marc
-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
                                      .... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/                         | PGP 1024R/763BE901

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks)
  2014-03-12 15:18 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks) Marc MERLIN
@ 2014-03-14  1:48 ` Marc MERLIN
  2014-03-10 10:39   ` [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots Wang Shilong
  2014-03-14  4:54   ` 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks) Duncan
  2014-03-14 14:42 ` Josef Bacik
  1 sibling, 2 replies; 19+ messages in thread
From: Marc MERLIN @ 2014-03-14  1:48 UTC (permalink / raw)
  To: linux-btrfs

Can anyone comment on this?

Are others seeing some btrfs operations on filesystem/diskA hang/deadlock
other btrfs operations on filesystem/diskB?

I just spent time fixing near data corruption in one of my systems due to
a 7h delay between when the timestamp was written and the actual data was
written, and traced it down to a btrfs hang that should never have happened
on that filesystem.

Surely, it's not a single queue for all filesystems and devices, right?

If not, does anyone know what bugs I've been hitting then?

Is the full report below, which I spent quite a while putting together for
you :), useful in any way for seeing where the hangs are?

To be honest, I'm looking at moving some important filesystems back to ext4
because I can't afford such long hangs on my root filesystem when I have a
media device that is doing heavy btrfs IO or a send/receive.

Mmmh, is it maybe just btrfs send/receive that is taking a btrfs-wide lock?
Or btrfs scrub maybe?
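One generic way to see whether unrelated processes really are all stuck in the same btrfs wait path, without needing sysrq (a sketch using only /proc, nothing btrfs-specific):

```shell
# List every task currently in uninterruptible sleep (D state) together
# with its kernel wait channel, read straight from /proc.  If everything
# reports wchan=wait_current_trans*, they are all waiting on the same
# transaction commit of one filesystem.
for pid in /proc/[0-9]*; do
    state=$(awk '/^State:/ {print $2}' "$pid/status" 2>/dev/null)
    if [ "$state" = "D" ]; then
        printf '%s %s wchan=%s\n' "${pid#/proc/}" \
            "$(cat "$pid/comm" 2>/dev/null)" \
            "$(cat "$pid/wchan" 2>/dev/null)"
    fi
done
```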

Thanks,
Marc

On Wed, Mar 12, 2014 at 08:18:08AM -0700, Marc MERLIN wrote:
> I have a file server with 4 cpu cores and 5 btrfs devices:
> Label: btrfs_boot  uuid: e4c1daa8-9c39-4a59-b0a9-86297d397f3b
>         Total devices 1 FS bytes used 48.92GiB
>         devid    1 size 79.93GiB used 73.04GiB path /dev/mapper/cryptroot
> 
> Label: varlocalspace  uuid: 9f46dbe2-1344-44c3-b0fb-af2888c34f18
>         Total devices 1 FS bytes used 1.10TiB
>         devid    1 size 1.63TiB used 1.50TiB path /dev/mapper/cryptraid0
> 
> Label: btrfs_pool1  uuid: 6358304a-2234-4243-b02d-4944c9af47d7
>         Total devices 1 FS bytes used 7.16TiB
>         devid    1 size 14.55TiB used 7.50TiB path /dev/mapper/dshelf1
> 
> Label: btrfs_pool2  uuid: cb9df6d3-a528-4afc-9a45-4fed5ec358d6
>         Total devices 1 FS bytes used 3.34TiB
>         devid    1 size 7.28TiB used 3.42TiB path /dev/mapper/dshelf2
> 
> Label: bigbackup  uuid: 024ba4d0-dacb-438d-9f1b-eeb34083fe49
>         Total devices 5 FS bytes used 6.02TiB
>         devid    1 size 1.82TiB used 1.43TiB path /dev/dm-9
>         devid    2 size 1.82TiB used 1.43TiB path /dev/dm-6
>         devid    3 size 1.82TiB used 1.43TiB path /dev/dm-5
>         devid    4 size 1.82TiB used 1.43TiB path /dev/dm-7
>         devid    5 size 1.82TiB used 1.43TiB path /dev/dm-8
> 
> 
> I have a very long running btrfs send/receive from btrfs_pool1 to bigbackup
> (long running meaning that it's been slowly copying over 5 days)
> 
> The problem is that this is blocking IO to btrfs_pool2 which is using
> totally different drives.
> By blocking IO I mean that IO to pool2 kind of works sometimes, and
> hangs for very long times at other times.
> 
> It looks as if one rsync to btrfs_pool2 or one piece of IO hangs on a shared lock
> and once that happens, all IO to btrfs_pool2 stops for a long time.
> It does recover eventually without reboot, but the wait times are ridiculous (it 
> could be 1H or more).
> 
> As I write this, I have a killall -9 rsync that waited for over 10mn before
> these processes would finally die:
> 23555       07:36 wait_current_trans.isra.15     rsync -av -SH --delete (...)
> 23556       07:36 exit                           [rsync] <defunct>
> 25387  2-04:41:22 wait_current_trans.isra.15     rsync --password-file  (...)
> 27481       31:26 wait_current_trans.isra.15     rsync --password-file  (...)
> 29268    04:41:34 wait_current_trans.isra.15     rsync --password-file  (...)
> 29343    04:41:31 exit                           [rsync] <defunct>
> 29492    04:41:27 wait_current_trans.isra.15     rsync --password-file  (...)
> 
> 14559    07:14:49 wait_current_trans.isra.15     cp -i -al current 20140312-feisty
> 
> This is all stuck in btrfs kernel code.
> If someeone wants sysrq-w, there it is.
> http://marc.merlins.org/tmp/btrfs_full.txt
> 
> A quick summary:
> SysRq : Show Blocked State
>   task                        PC stack   pid father
> btrfs-cleaner   D ffff8802126b0840     0  3332      2 0x00000000
>  ffff8800c5dc9d00 0000000000000046 ffff8800c5dc9fd8 ffff8800c69f6310
>  00000000000141c0 ffff8800c69f6310 ffff88017574c170 ffff880211e671e8
>  0000000000000000 ffff880211e67000 ffff8801e5936e20 ffff8800c5dc9d10
> Call Trace:
>  [<ffffffff8160b0d9>] schedule+0x73/0x75
>  [<ffffffff8122a3c7>] wait_current_trans.isra.15+0x98/0xf4
>  [<ffffffff81085062>] ? finish_wait+0x65/0x65
>  [<ffffffff8122b86c>] start_transaction+0x48e/0x4f2
>  [<ffffffff8122bc4f>] ? __btrfs_end_transaction+0x2a1/0x2c6
>  [<ffffffff8122b8eb>] btrfs_start_transaction+0x1b/0x1d
>  [<ffffffff8121c5cd>] btrfs_drop_snapshot+0x443/0x610
>  [<ffffffff8160d7b3>] ? _raw_spin_unlock+0x17/0x2a
>  [<ffffffff81074efb>] ? finish_task_switch+0x51/0xdb
>  [<ffffffff8160afbf>] ? __schedule+0x537/0x5de
>  [<ffffffff8122c08d>] btrfs_clean_one_deleted_snapshot+0x103/0x10f
>  [<ffffffff81224859>] cleaner_kthread+0x103/0x136
>  [<ffffffff81224756>] ? btrfs_alloc_root+0x26/0x26
>  [<ffffffff8106bc1b>] kthread+0xae/0xb6
>  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
>  [<ffffffff816141bc>] ret_from_fork+0x7c/0xb0
>  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
> btrfs-transacti D ffff88021387eb00     0  3333      2 0x00000000
>  ffff8800c5dcb890 0000000000000046 ffff8800c5dcbfd8 ffff88021387e5d0
>  00000000000141c0 ffff88021387e5d0 ffff88021f2141c0 ffff88021387e5d0
>  ffff8800c5dcb930 ffffffff810fe574 0000000000000002 ffff8800c5dcb8a0
> Call Trace:
>  [<ffffffff810fe574>] ? wait_on_page_read+0x3c/0x3c
>  [<ffffffff8160b0d9>] schedule+0x73/0x75
>  [<ffffffff8160b27e>] io_schedule+0x60/0x7a
>  [<ffffffff810fe582>] sleep_on_page+0xe/0x12
>  [<ffffffff8160b510>] __wait_on_bit+0x48/0x7a
>  [<ffffffff810fe522>] wait_on_page_bit+0x7a/0x7c
>  [<ffffffff81085096>] ? autoremove_wake_function+0x34/0x34
>  [<ffffffff81245c70>] read_extent_buffer_pages+0x1bf/0x204
>  [<ffffffff81223710>] ? free_root_pointers+0x5b/0x5b
>  [<ffffffff81224412>] btree_read_extent_buffer_pages.constprop.45+0x66/0x100
>  [<ffffffff81225367>] read_tree_block+0x2f/0x47
>  [<ffffffff8120e4b6>] read_block_for_search.isra.26+0x24a/0x287
>  [<ffffffff8120fcf7>] btrfs_search_slot+0x4f4/0x6bb
>  [<ffffffff81214c3d>] lookup_inline_extent_backref+0xda/0x3fb
>  [<ffffffff812167e1>] __btrfs_free_extent+0xf4/0x712
>  [<ffffffff8121ba57>] __btrfs_run_delayed_refs+0x939/0xbdf
>  [<ffffffff8121d896>] btrfs_run_delayed_refs+0x81/0x18f
>  [<ffffffff8122af3e>] btrfs_commit_transaction+0x3a9/0x849
>  [<ffffffff81085062>] ? finish_wait+0x65/0x65
>  [<ffffffff81227598>] transaction_kthread+0xf8/0x1ab
>  [<ffffffff812274a0>] ? btrfs_cleanup_transaction+0x43f/0x43f
>  [<ffffffff8106bc1b>] kthread+0xae/0xb6
>  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
>  [<ffffffff816141bc>] ret_from_fork+0x7c/0xb0
>  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
> 
> 
> Worse, taking that dump gave me:
> gargamel:/etc/udev/rules.d# echo w > /proc/sysrq-trigger 
> Message from syslogd@gargamel at Mar 12 07:13:16 ...
>  kernel:[1234536.531251] BUG: soft lockup - CPU#1 stuck for 22s! [mysqld:12552]
> Message from syslogd@gargamel at Mar 12 07:13:16 ...
>  kernel:[1234536.559276] BUG: soft lockup - CPU#2 stuck for 22s! [mysqld:4955]
> Message from syslogd@gargamel at Mar 12 07:13:16 ...
>  kernel:[1234540.538636] BUG: soft lockup - CPU#0 stuck for 22s! [kswapd0:48]
> (there are more in the logs attached on what those lockups are)
> 
> Thankfully my btrfs send/receive is still working and should finish
> today, but the amount of time it's been taking has been painful, and
> the effect it's been having on the rest of my system, making it hang
> or rendering its devices unusable for long periods of time, has been
> punishing.
> 
> Can someone figure out from the kernel logs what is causing those near deadlocks?
> 
> Apparently this was so bad that the sysrq-w output didn't even all make it
> to syslog/disk (also on btrfs), but thankfully I captured it on the serial console.
> 
> 
> I also found this during sysrq. Should it be reported to someone else?
> INFO: rcu_preempt detected stalls on CPUs/tasks:
> 	3: (1 GPs behind) idle=395/140000000000000/0 softirq=284540927/284540928 last_accelerate: ed62/2821, nonlazy_posted: 1, ..
> 	(detected by 0, t=15002 jiffies, g=100566635, c=100566634, q=87438)
> sending NMI to all CPUs:
> NMI backtrace for cpu 3
> CPU: 3 PID: 21730 Comm: bash Not tainted 3.14.0-rc3-amd64-i915-preempt-20140216 #2
> Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 3806 08/20/2012
> task: ffff88001cf3a710 ti: ffff880037f78000 task.ti: ffff880037f78000
> RIP: 0010:[<ffffffff81309a80>]  [<ffffffff81309a80>] paravirt_read_tsc+0x0/0xd
> RSP: 0018:ffff880037f79ac0  EFLAGS: 00000046
> RAX: 0000000000000003 RBX: 0000000094b043ff RCX: 0000000000000000
> RDX: 0000000000000004 RSI: 00000000000003fd RDI: 0000000000000001
> RBP: ffff880037f79ae8 R08: ffffffff81cf24d0 R09: 00000000fffffffe
> R10: 0000000000001a18 R11: 0000000000000000 R12: 00000000000009fb
> R13: 0000000000000003 R14: 0000000094b047d7 R15: 0000000000000036
> FS:  0000000000000000(0000) GS:ffff88021f380000(0063) knlGS:00000000f754b6c0
> CS:  0010 DS: 002b ES: 002b CR0: 0000000080050033
> CR2: 00000000ffc13412 CR3: 0000000145622000 CR4: 00000000000407e0
> Stack:
>  ffffffff81309b59 ffffffff81f27560 00000000000026f0 0000000000000020
>  ffffffff813c9e1b ffff880037f79af8 ffffffff81309ac9 ffff880037f79b08
>  ffffffff81309aef ffff880037f79b30 ffffffff813c9cc6 ffffffff81f27560
> Call Trace:
>  [<ffffffff81309b59>] ? delay_tsc+0x3d/0xa4
>  [<ffffffff813c9e1b>] ? serial8250_console_write+0x10d/0x10d
>  [<ffffffff81309ac9>] __delay+0xf/0x11
>  [<ffffffff81309aef>] __const_udelay+0x24/0x26
>  [<ffffffff813c9cc6>] wait_for_xmitr+0x49/0x91
>  [<ffffffff813c9e37>] serial8250_console_putchar+0x1c/0x2e
>  [<ffffffff813c5d97>] uart_console_write+0x3f/0x54
>  [<ffffffff813c9dc8>] serial8250_console_write+0xba/0x10d
>  [<ffffffff8109363b>] call_console_drivers.constprop.6+0xbc/0xf0
>  [<ffffffff81093bf7>] console_unlock+0x269/0x302
>  [<ffffffff8109405e>] vprintk_emit+0x3ce/0x3ff
>  [<ffffffff81604702>] printk+0x54/0x56
>  [<ffffffff81089799>] ? arch_local_irq_save+0x15/0x1b
>  [<ffffffff8108752e>] print_cfs_rq+0x4fc/0xd71
>  [<ffffffff81080fff>] print_cfs_stats+0x5a/0x9e
>  [<ffffffff81086c65>] print_cpu+0x538/0x8e3
>  [<ffffffff81087f7e>] sysrq_sched_debug_show+0x1f/0x42
>  [<ffffffff81078874>] show_state_filter+0x92/0x9f
>  [<ffffffff813b7c7a>] sysrq_handle_showstate_blocked+0x13/0x15
>  [<ffffffff813b82c3>] __handle_sysrq+0xa0/0x138
>  [<ffffffff813b8630>] write_sysrq_trigger+0x28/0x37
>  [<ffffffff811a565a>] proc_reg_write+0x5a/0x7c
>  [<ffffffff81155417>] vfs_write+0xab/0x107
>  [<ffffffff81155b19>] SyS_write+0x46/0x79
>  [<ffffffff81615f6c>] sysenter_dispatch+0x7/0x21
> Code: 89 e5 e8 a2 fe ff ff 89 c2 66 31 c0 c1 e2 10 01 d0 15 ff ff 00 00 f7 d0 c1 e8 10 5d c3 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <55> 48 89 e5 e8 9c c4 d0 ff 66 90 5d c3 66 66 66 66 90 55 48 89 
> 
> 
> Thanks,
> Marc
> -- 
> "A mouse is a device used to point at the xterm you want to type in" - A.S.R.
> Microsoft is to operating systems ....
>                                       .... what McDonalds is to gourmet cooking
> Home page: http://marc.merlins.org/                         | PGP 1024R/763BE901
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
                                      .... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/  

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: send/receive locking
  2014-03-08 21:53           ` send/receive locking Hugo Mills
  2014-03-08 21:55             ` Josef Bacik
@ 2014-03-14  2:19             ` Marc MERLIN
  1 sibling, 0 replies; 19+ messages in thread
From: Marc MERLIN @ 2014-03-14  2:19 UTC (permalink / raw)
  To: Hugo Mills, Btrfs mailing list, Wang Shilong; +Cc: Josef Bacik, Shilong Wang

On Sat, Mar 08, 2014 at 09:53:50PM +0000, Hugo Mills wrote:
>    Is there anything that can be done about the issues of btrfs send
> blocking? I've been writing a backup script (slowly), and several
> times I've managed to hit a situation where large chunks of the
> machine grind to a complete halt in D state because the backup script
> has jammed up.
 
Ah, we're doing the exact same thing then :)

>    Now, I'm aware that you can't send and receive to the same
> filesystem at the same time, and that's a restriction I can live with.
> However, having things that aren't related to the backup process
> suddenly stop working because the backup script is trying to log its
> progress to the same FS it's backing up is... umm... somewhat vexing,
> to say the least.

Mmmh, my backup doesn't log to disk, just to a screen buffer, but I've seen
extensive hangs too, and my 6TB send/receive priming has been taking 6 days
on local disks. I think it stops all the time due to locks.
But as per my other message below, it's very bad when it deadlocks other
filesystems not involved in the backup, like my root filesystem.
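The wait_current_trans stalls in the traces below fit btrfs's one-transaction-per-filesystem design: while the current transaction is committing, any new writer on that same filesystem has to wait for the commit to finish. A minimal Python sketch of that serialization follows; it is a toy model under that assumption, not btrfs code, and every name in it is made up. (Note that per this design each filesystem has its own transaction, which is exactly why stalls crossing to an unrelated filesystem look like a bug.)

```python
import threading
import time

class ToyTransaction:
    """Toy per-filesystem transaction state: writers that try to join
    while a commit is in flight must wait (cf. wait_current_trans)."""
    def __init__(self):
        self.cond = threading.Condition()
        self.committing = False
        self.blocked = []          # records which writers had to wait

    def start_transaction(self, name):
        # analogue of wait_current_trans(): block while a commit runs
        with self.cond:
            if self.committing:
                self.blocked.append(name)
            while self.committing:
                self.cond.wait()

    def commit(self, seconds):
        with self.cond:
            self.committing = True
        time.sleep(seconds)        # a slow commit, e.g. under heavy send/receive load
        with self.cond:
            self.committing = False
            self.cond.notify_all()

fs = ToyTransaction()
committer = threading.Thread(target=fs.commit, args=(0.3,))
committer.start()
time.sleep(0.05)                   # let the commit get under way
writers = [threading.Thread(target=fs.start_transaction, args=("rsync-%d" % i,))
           for i in range(3)]
for w in writers:
    w.start()
for w in writers:
    w.join()
committer.join()
print(sorted(fs.blocked))          # every writer stalled behind the one commit
```

In this toy model all three writers pile up behind the single in-flight commit, which matches the shape of the rsync process list below: many unrelated writers, all parked in the same wait.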

See the other thread I appended to before seeing your message:
Subject: Re: 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks)                                

attached below.
I'll be happy to try new stuff, but I want that 6 day running send/receive
to finish first. It took so long that I don't want to do it again :)

Marc

On Thu, Mar 13, 2014 at 06:48:13PM -0700, Marc MERLIN wrote:
> Can anyone comment on this?
> 
> Are others seeing some btrfs operations on filesystem/diskA hang/deadlock
> other btrfs operations on filesystem/diskB ?
> 
> I just spent time fixing near data corruption in one of my systems due to
> a 7h delay between when the timestamp was written and the actual data was
> written, and traced it down to a btrfs hang that should never have happened
> on that filesystem.
> 
> Surely, it's not a single queue for all filesystems and devices, right?
> 
> If not, does anyone know what bugs I've been hitting then?
> 
> Is the full report below (which I spent quite a while putting together
> for you :)) useful in any way for seeing where the hangs are?
> 
> To be honest, I'm looking at moving some important filesystems back to ext4
> because I can't afford such long hangs on my root filesystem when I have a
> media device that is doing heavy btrfs IO or a send/receive.
> 
> Mmmh, is it maybe just btrfs send/receive that is taking a btrfs-wide lock?
> Or btrfs scrub maybe?
> 
> Thanks,
> Marc
> 
> On Wed, Mar 12, 2014 at 08:18:08AM -0700, Marc MERLIN wrote:
> > I have a file server with 4 cpu cores and 5 btrfs devices:
> > Label: btrfs_boot  uuid: e4c1daa8-9c39-4a59-b0a9-86297d397f3b
> >         Total devices 1 FS bytes used 48.92GiB
> >         devid    1 size 79.93GiB used 73.04GiB path /dev/mapper/cryptroot
> > 
> > Label: varlocalspace  uuid: 9f46dbe2-1344-44c3-b0fb-af2888c34f18
> >         Total devices 1 FS bytes used 1.10TiB
> >         devid    1 size 1.63TiB used 1.50TiB path /dev/mapper/cryptraid0
> > 
> > Label: btrfs_pool1  uuid: 6358304a-2234-4243-b02d-4944c9af47d7
> >         Total devices 1 FS bytes used 7.16TiB
> >         devid    1 size 14.55TiB used 7.50TiB path /dev/mapper/dshelf1
> > 
> > Label: btrfs_pool2  uuid: cb9df6d3-a528-4afc-9a45-4fed5ec358d6
> >         Total devices 1 FS bytes used 3.34TiB
> >         devid    1 size 7.28TiB used 3.42TiB path /dev/mapper/dshelf2
> > 
> > Label: bigbackup  uuid: 024ba4d0-dacb-438d-9f1b-eeb34083fe49
> >         Total devices 5 FS bytes used 6.02TiB
> >         devid    1 size 1.82TiB used 1.43TiB path /dev/dm-9
> >         devid    2 size 1.82TiB used 1.43TiB path /dev/dm-6
> >         devid    3 size 1.82TiB used 1.43TiB path /dev/dm-5
> >         devid    4 size 1.82TiB used 1.43TiB path /dev/dm-7
> >         devid    5 size 1.82TiB used 1.43TiB path /dev/dm-8
> > 
> > 
> > I have a very long running btrfs send/receive from btrfs_pool1 to bigbackup
> > (long running meaning that it's been slowly copying over 5 days)
> > 
> > The problem is that this is blocking IO to btrfs_pool2 which is using
> > totally different drives.
> > By blocking IO I mean that IO to pool2 kind of works sometimes, and
> > hangs for very long times at other times.
> > 
> > It looks as if one rsync to btrfs_pool2 or one piece of IO hangs on a shared lock
> > and once that happens, all IO to btrfs_pool2 stops for a long time.
> > It does recover eventually without reboot, but the wait times are ridiculous (it 
> > could be 1H or more).
> > 
> > As I write this, I have a killall -9 rsync that waited for over 10 minutes
> > before these processes would finally die:
> > 23555       07:36 wait_current_trans.isra.15     rsync -av -SH --delete (...)
> > 23556       07:36 exit                           [rsync] <defunct>
> > 25387  2-04:41:22 wait_current_trans.isra.15     rsync --password-file  (...)
> > 27481       31:26 wait_current_trans.isra.15     rsync --password-file  (...)
> > 29268    04:41:34 wait_current_trans.isra.15     rsync --password-file  (...)
> > 29343    04:41:31 exit                           [rsync] <defunct>
> > 29492    04:41:27 wait_current_trans.isra.15     rsync --password-file  (...)
> > 
> > 14559    07:14:49 wait_current_trans.isra.15     cp -i -al current 20140312-feisty
> > 
> > This is all stuck in btrfs kernel code.
> > If someone wants the sysrq-w output, here it is:
> > http://marc.merlins.org/tmp/btrfs_full.txt
> > 
> > A quick summary:
> > SysRq : Show Blocked State
> >   task                        PC stack   pid father
> > btrfs-cleaner   D ffff8802126b0840     0  3332      2 0x00000000
> >  ffff8800c5dc9d00 0000000000000046 ffff8800c5dc9fd8 ffff8800c69f6310
> >  00000000000141c0 ffff8800c69f6310 ffff88017574c170 ffff880211e671e8
> >  0000000000000000 ffff880211e67000 ffff8801e5936e20 ffff8800c5dc9d10
> > Call Trace:
> >  [<ffffffff8160b0d9>] schedule+0x73/0x75
> >  [<ffffffff8122a3c7>] wait_current_trans.isra.15+0x98/0xf4
> >  [<ffffffff81085062>] ? finish_wait+0x65/0x65
> >  [<ffffffff8122b86c>] start_transaction+0x48e/0x4f2
> >  [<ffffffff8122bc4f>] ? __btrfs_end_transaction+0x2a1/0x2c6
> >  [<ffffffff8122b8eb>] btrfs_start_transaction+0x1b/0x1d
> >  [<ffffffff8121c5cd>] btrfs_drop_snapshot+0x443/0x610
> >  [<ffffffff8160d7b3>] ? _raw_spin_unlock+0x17/0x2a
> >  [<ffffffff81074efb>] ? finish_task_switch+0x51/0xdb
> >  [<ffffffff8160afbf>] ? __schedule+0x537/0x5de
> >  [<ffffffff8122c08d>] btrfs_clean_one_deleted_snapshot+0x103/0x10f
> >  [<ffffffff81224859>] cleaner_kthread+0x103/0x136
> >  [<ffffffff81224756>] ? btrfs_alloc_root+0x26/0x26
> >  [<ffffffff8106bc1b>] kthread+0xae/0xb6
> >  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
> >  [<ffffffff816141bc>] ret_from_fork+0x7c/0xb0
> >  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
> > btrfs-transacti D ffff88021387eb00     0  3333      2 0x00000000
> >  ffff8800c5dcb890 0000000000000046 ffff8800c5dcbfd8 ffff88021387e5d0
> >  00000000000141c0 ffff88021387e5d0 ffff88021f2141c0 ffff88021387e5d0
> >  ffff8800c5dcb930 ffffffff810fe574 0000000000000002 ffff8800c5dcb8a0
> > Call Trace:
> >  [<ffffffff810fe574>] ? wait_on_page_read+0x3c/0x3c
> >  [<ffffffff8160b0d9>] schedule+0x73/0x75
> >  [<ffffffff8160b27e>] io_schedule+0x60/0x7a
> >  [<ffffffff810fe582>] sleep_on_page+0xe/0x12
> >  [<ffffffff8160b510>] __wait_on_bit+0x48/0x7a
> >  [<ffffffff810fe522>] wait_on_page_bit+0x7a/0x7c
> >  [<ffffffff81085096>] ? autoremove_wake_function+0x34/0x34
> >  [<ffffffff81245c70>] read_extent_buffer_pages+0x1bf/0x204
> >  [<ffffffff81223710>] ? free_root_pointers+0x5b/0x5b
> >  [<ffffffff81224412>] btree_read_extent_buffer_pages.constprop.45+0x66/0x100
> >  [<ffffffff81225367>] read_tree_block+0x2f/0x47
> >  [<ffffffff8120e4b6>] read_block_for_search.isra.26+0x24a/0x287
> >  [<ffffffff8120fcf7>] btrfs_search_slot+0x4f4/0x6bb
> >  [<ffffffff81214c3d>] lookup_inline_extent_backref+0xda/0x3fb
> >  [<ffffffff812167e1>] __btrfs_free_extent+0xf4/0x712
> >  [<ffffffff8121ba57>] __btrfs_run_delayed_refs+0x939/0xbdf
> >  [<ffffffff8121d896>] btrfs_run_delayed_refs+0x81/0x18f
> >  [<ffffffff8122af3e>] btrfs_commit_transaction+0x3a9/0x849
> >  [<ffffffff81085062>] ? finish_wait+0x65/0x65
> >  [<ffffffff81227598>] transaction_kthread+0xf8/0x1ab
> >  [<ffffffff812274a0>] ? btrfs_cleanup_transaction+0x43f/0x43f
> >  [<ffffffff8106bc1b>] kthread+0xae/0xb6
> >  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
> >  [<ffffffff816141bc>] ret_from_fork+0x7c/0xb0
> >  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
> > 
> > 
> > Worse, taking that dump gave me:
> > gargamel:/etc/udev/rules.d# echo w > /proc/sysrq-trigger 
> > Message from syslogd@gargamel at Mar 12 07:13:16 ...
> >  kernel:[1234536.531251] BUG: soft lockup - CPU#1 stuck for 22s! [mysqld:12552]
> > Message from syslogd@gargamel at Mar 12 07:13:16 ...
> >  kernel:[1234536.559276] BUG: soft lockup - CPU#2 stuck for 22s! [mysqld:4955]
> > Message from syslogd@gargamel at Mar 12 07:13:16 ...
> >  kernel:[1234540.538636] BUG: soft lockup - CPU#0 stuck for 22s! [kswapd0:48]
> > (there are more in the logs attached on what those lockups are)
> > 
> > Thankfully my btrfs send/receive is still working and should finish
> > today, but the amount of time it's been taking has been painful, and
> > the effect it's been having on the rest of my system, making it hang
> > or rendering its devices unusable for long periods of time, has been
> > punishing.
> > 
> > Can someone figure out from the kernel logs what is causing those near deadlocks?
> > 
> > Apparently this was so bad that the sysrq-w output didn't even all make it
> > to syslog/disk (also on btrfs), but thankfully I captured it on the serial console.
> > 
> > 
> > I also found this during sysrq. Should it be reported to someone else?
> > INFO: rcu_preempt detected stalls on CPUs/tasks:
> > 	3: (1 GPs behind) idle=395/140000000000000/0 softirq=284540927/284540928 last_accelerate: ed62/2821, nonlazy_posted: 1, ..
> > 	(detected by 0, t=15002 jiffies, g=100566635, c=100566634, q=87438)
> > sending NMI to all CPUs:
> > NMI backtrace for cpu 3
> > CPU: 3 PID: 21730 Comm: bash Not tainted 3.14.0-rc3-amd64-i915-preempt-20140216 #2
> > Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 3806 08/20/2012
> > task: ffff88001cf3a710 ti: ffff880037f78000 task.ti: ffff880037f78000
> > RIP: 0010:[<ffffffff81309a80>]  [<ffffffff81309a80>] paravirt_read_tsc+0x0/0xd
> > RSP: 0018:ffff880037f79ac0  EFLAGS: 00000046
> > RAX: 0000000000000003 RBX: 0000000094b043ff RCX: 0000000000000000
> > RDX: 0000000000000004 RSI: 00000000000003fd RDI: 0000000000000001
> > RBP: ffff880037f79ae8 R08: ffffffff81cf24d0 R09: 00000000fffffffe
> > R10: 0000000000001a18 R11: 0000000000000000 R12: 00000000000009fb
> > R13: 0000000000000003 R14: 0000000094b047d7 R15: 0000000000000036
> > FS:  0000000000000000(0000) GS:ffff88021f380000(0063) knlGS:00000000f754b6c0
> > CS:  0010 DS: 002b ES: 002b CR0: 0000000080050033
> > CR2: 00000000ffc13412 CR3: 0000000145622000 CR4: 00000000000407e0
> > Stack:
> >  ffffffff81309b59 ffffffff81f27560 00000000000026f0 0000000000000020
> >  ffffffff813c9e1b ffff880037f79af8 ffffffff81309ac9 ffff880037f79b08
> >  ffffffff81309aef ffff880037f79b30 ffffffff813c9cc6 ffffffff81f27560
> > Call Trace:
> >  [<ffffffff81309b59>] ? delay_tsc+0x3d/0xa4
> >  [<ffffffff813c9e1b>] ? serial8250_console_write+0x10d/0x10d
> >  [<ffffffff81309ac9>] __delay+0xf/0x11
> >  [<ffffffff81309aef>] __const_udelay+0x24/0x26
> >  [<ffffffff813c9cc6>] wait_for_xmitr+0x49/0x91
> >  [<ffffffff813c9e37>] serial8250_console_putchar+0x1c/0x2e
> >  [<ffffffff813c5d97>] uart_console_write+0x3f/0x54
> >  [<ffffffff813c9dc8>] serial8250_console_write+0xba/0x10d
> >  [<ffffffff8109363b>] call_console_drivers.constprop.6+0xbc/0xf0
> >  [<ffffffff81093bf7>] console_unlock+0x269/0x302
> >  [<ffffffff8109405e>] vprintk_emit+0x3ce/0x3ff
> >  [<ffffffff81604702>] printk+0x54/0x56
> >  [<ffffffff81089799>] ? arch_local_irq_save+0x15/0x1b
> >  [<ffffffff8108752e>] print_cfs_rq+0x4fc/0xd71
> >  [<ffffffff81080fff>] print_cfs_stats+0x5a/0x9e
> >  [<ffffffff81086c65>] print_cpu+0x538/0x8e3
> >  [<ffffffff81087f7e>] sysrq_sched_debug_show+0x1f/0x42
> >  [<ffffffff81078874>] show_state_filter+0x92/0x9f
> >  [<ffffffff813b7c7a>] sysrq_handle_showstate_blocked+0x13/0x15
> >  [<ffffffff813b82c3>] __handle_sysrq+0xa0/0x138
> >  [<ffffffff813b8630>] write_sysrq_trigger+0x28/0x37
> >  [<ffffffff811a565a>] proc_reg_write+0x5a/0x7c
> >  [<ffffffff81155417>] vfs_write+0xab/0x107
> >  [<ffffffff81155b19>] SyS_write+0x46/0x79
> >  [<ffffffff81615f6c>] sysenter_dispatch+0x7/0x21
> > Code: 89 e5 e8 a2 fe ff ff 89 c2 66 31 c0 c1 e2 10 01 d0 15 ff ff 00 00 f7 d0 c1 e8 10 5d c3 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <55> 48 89 e5 e8 9c c4 d0 ff 66 90 5d c3 66 66 66 66 90 55 48 89 
> > 
> > 
> > Thanks,
> > Marc
> > -- 
> > "A mouse is a device used to point at the xterm you want to type in" - A.S.R.
> > Microsoft is to operating systems ....
> >                                       .... what McDonalds is to gourmet cooking
> > Home page: http://marc.merlins.org/                         | PGP 1024R/763BE901
> > 
> 
> -- 
> "A mouse is a device used to point at the xterm you want to type in" - A.S.R.
> Microsoft is to operating systems ....
>                                       .... what McDonalds is to gourmet cooking
> Home page: http://marc.merlins.org/  
> 


-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
                                      .... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/  

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks)
  2014-03-14  1:48 ` Marc MERLIN
  2014-03-10 10:39   ` [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots Wang Shilong
@ 2014-03-14  4:54   ` Duncan
  1 sibling, 0 replies; 19+ messages in thread
From: Duncan @ 2014-03-14  4:54 UTC (permalink / raw)
  To: linux-btrfs

Marc MERLIN posted on Thu, 13 Mar 2014 18:48:13 -0700 as excerpted:

> Are others seeing some btrfs operations on filesystem/diskA
> hang/deadlock other btrfs operations on filesystem/diskB ?

Well, if the filesystem in filesystem/diskA and filesystem/diskB is the 
same (multi-device) filesystem, as the above definitely implies...  Though 
based on the context I don't believe that's what you actually meant.

Meanwhile, send/receive is intensely focused on bug-finding and fixing 
at the moment.  The basic concept is there, but to this point it has 
definitely been at development/testing reliability (as befitted btrfs's 
overall state, with the eat-your-babies kconfig warning only recently 
toned down to what I'd call semi-stable) rather than enterprise 
reliability.  Hopefully by the time they're done with all this 
bug-stomping it'll be rather closer to the latter.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots
  2014-03-11  1:23         ` Wang Shilong
  2014-03-08 21:53           ` send/receive locking Hugo Mills
@ 2014-03-14 13:36           ` Wang Shilong
  2014-03-14 14:36             ` Josef Bacik
  1 sibling, 1 reply; 19+ messages in thread
From: Wang Shilong @ 2014-03-14 13:36 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs

Hi Josef,

Just ping this again.

Do you have any good ideas for rebuilding the extent tree if a broken
filesystem is filled with snapshots?

I was working on this recently, but I was blocked because I cannot verify
whether an extent is in *FULL BACKREF* mode or not, as a *FULL BACKREF*
extent's refs can be 1 or more than 1.

I am willing to test code or have a try if you could give me some advice.

-Wang
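The ambiguity Wang describes above can be illustrated with a toy table of extent records (all numbers and field names here are hypothetical, not btrfs on-disk structures): both the normal and the *FULL BACKREF* mode allow a ref count of 1 as well as more than 1, so the count alone cannot identify the mode.

```python
# Toy illustration: an extent's ref count alone cannot tell you whether
# it uses FULL_BACKREF, because refs == 1 and refs > 1 occur in both
# modes.  (Hypothetical extent records, not real btrfs items.)
extents = [
    {"bytenr": 0x1000, "refs": 1, "backrefs": [("TREE_BLOCK_REF", 256)]},
    {"bytenr": 0x2000, "refs": 2, "backrefs": [("TREE_BLOCK_REF", 256),
                                               ("TREE_BLOCK_REF", 257)]},
    {"bytenr": 0x3000, "refs": 1, "backrefs": [("SHARED_BLOCK_REF", 0x500)]},
    {"bytenr": 0x4000, "refs": 2, "backrefs": [("SHARED_BLOCK_REF", 0x500),
                                               ("SHARED_BLOCK_REF", 0x600)]},
]

def is_full_backref(extent):
    # The mode is only recoverable from the backref items themselves,
    # never from the ref count.
    return any(kind == "SHARED_BLOCK_REF" for kind, _ in extent["backrefs"])

# Group by ref count: each count maps to both modes, so counting is not enough.
by_refs = {}
for e in extents:
    by_refs.setdefault(e["refs"], set()).add(is_full_backref(e))
print(by_refs)
```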

> On 03/10/2014 11:50 PM, Josef Bacik wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>> 
>> On 03/10/2014 08:12 AM, Shilong Wang wrote:
>>> Hi Josef,
>>> 
>>> As I haven't thought of any better way to rebuild an extent tree
>>> that contains extents with the 'FULL BACKREF' flag:
>>> 
>>> Considering that an extent's refs can be equal to or more than 1 when
>>> the extent has the *FULL BACKREF* flag, we can no longer determine an
>>> extent's flag by only searching the fs/file trees.
>>> 
>>> So for now, I just disable this option if snapshots exist; please
>>> correct me if I missed something here, or share any better ideas to
>>> solve this problem. ~_~
>>> 
>>> 
>> I thought the fsck stuff rebuilds full backref refs properly, does it
>> not?  If it doesn't we need to fix that, however I'm fine with
>> disabling the option if snapshots exist for the time being.  Thanks,
> If there are no snapshots, --init-extent-tree works as expected.
> I just have not thought of a better way to rebuild the extent tree if we
> do have snapshots, which means we may have an extent with the
> *FULL BACKREF* flag.
> 
> Thanks,
> Wang
>> 
>> Josef
>> -----BEGIN PGP SIGNATURE-----
>> Version: GnuPG v1
>> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>> 
>> iQIcBAEBAgAGBQJTHd9NAAoJEANb+wAKly3BYCYP/0iTaaa7w0SnfXtgjoVyX+nT
>> +e0Pa46zeKzpTujotCDb9E/2PBesCAvA4Psog3rkfsqJ2nXN9cERN4E6/JG4nAHh
>> Hv4KPo+w+tCkC4U2wSoDivYrVk9G5SH25ewkgW6iheSYNIlm+PLbOQz9DzGjCFDp
>> 51J9tG5E010siOyhlLCyGj8ZTj+gXuoQVWKCS8dOpCLMrbYYjMDXa562hqWaLoS/
>> t3eSfP7Tnnpl43NiMZI4fWrzmlFa5lba5iJmG59FeyiseRH4Zrhee4St1L1xDL5A
>> /6f3tJJT7DJjRRJFv0nJAOvOPyFaK8bMaYmOQJg6VrhcyPKM3BxBVEab3HrmQ7jt
>> LCMWobpIcM7e5BugmbTGGsFymhv05SQgvYGzpzRVXdsSzqubuqTcXwloNU5RyyFF
>> sXT9IiW9wAibHe7mDN7V6nfo1bVfHsjvSVi1rqz4/zFOWyh8oqxfEhxUJIWhfFsn
>> j0WJevvqKnjBJujyyuQpL13tzh69qei0AHOEme3R46BSRMnyuacy/WOeyo4VXPcn
>> 0GIeWbngAIWF/quhoQGkvofRMlPgftiDge8uz9pbm3IEKeiP9dQ/HvKsIHMKjnKW
>> 3dEBvMV/CSUQNek4VjO1ALefTRZQvJVL8Wxdij4W+djJw/uVX7fOhuqdkqyfM3FY
>> CKSB3HUSUtDCammsvgQA
>> =OT98
>> -----END PGP SIGNATURE-----
>> 
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots
  2014-03-14 13:36           ` [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots Wang Shilong
@ 2014-03-14 14:36             ` Josef Bacik
  2014-03-17 12:21               ` Shilong Wang
  0 siblings, 1 reply; 19+ messages in thread
From: Josef Bacik @ 2014-03-14 14:36 UTC (permalink / raw)
  To: Wang Shilong; +Cc: linux-btrfs

On 03/14/2014 09:36 AM, Wang Shilong wrote:
> Hi Josef,
> 
> Just ping this again.
> 
> Do you have any good ideas for rebuilding the extent tree if a broken
> filesystem is filled with snapshots?
> 
> I was working on this recently, but I was blocked because I cannot
> verify whether an extent is in *FULL BACKREF* mode or not, as a *FULL
> BACKREF* extent's refs can be 1 or more than 1.
> 
> I am willing to test code or have a try if you could give me some
> advice.
> 

Full backrefs aren't too hard.  Basically all you have to do is walk
down the fs tree and keep track of btrfs_header_owner(eb) for
everything we walk into.  If btrfs_header_owner(eb) == root->objectid
for the tree we are walking down, then we need a ye olde normal backref
for this block.  If btrfs_header_owner(eb) != root->objectid we _may_
need a full backref; it depends on who owns the parent block.  The
following may be incomplete; I'm kind of sick.

1) We walk down the original tree, every eb we encounter has
btrfs_header_owner(eb) == root->objectid.  We add normal references
for this root (BTRFS_TREE_BLOCK_REF_KEY) for this root.  World peace
is achieved.

2) We walk down the snapshotted tree.  Say we didn't change anything
at all; it was just a clean snapshot and then boom.  So
btrfs_header_owner(root->node) == root->objectid: normal backref.
We walk down to the next level, where btrfs_header_owner(eb) !=
root->objectid, but the level above did match, so we add normal refs
for all of these blocks.  We go down to the next level; now
btrfs_header_owner(parent) != root->objectid and
btrfs_header_owner(eb) != root->objectid.  This is where we need to
go back and see whether the root named by btrfs_header_owner(eb)
currently has a ref on eb.  If it does we are done; move on to the
next block in this same level, we don't have to go further down.

3) Harder case: we snapshotted and then changed things in the original
root.  Do the same thing as in step 2, but now we get down to
btrfs_header_owner(eb) != root->objectid && btrfs_header_owner(parent)
!= root->objectid.  We look up the references we have for eb and notice
that btrfs_header_owner(eb) no longer holds a ref on eb.  So now we must
set FULL_BACKREF on this extent reference and add a
SHARED_BLOCK_REF_KEY for this eb using the parent->start as the
offset.  And we need to keep walking down and doing the same thing
until we either hit level 0 or btrfs_header_owner(eb) has a ref on the
block.

4) Not really a whole special case, just something to keep in mind: if
btrfs_header_owner(parent) == root->objectid but
btrfs_header_owner(eb) != root->objectid, that means we have a normal
TREE_BLOCK_REF on eb; it's only when the parent doesn't match our
current root that it's a problem.
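The four cases above can be sketched as a small recursive walk. This is a toy model under the stated rules, not btrfs-progs code; the Node structure, the `classify` helper, and the `owner_has_ref` callback are all made-up illustrations:

```python
# Toy sketch of the walk described above: for each block in the tree we
# are rebuilding refs for, decide whether it needs a normal
# TREE_BLOCK_REF or a FULL_BACKREF-style SHARED_BLOCK_REF.
class Node:
    def __init__(self, owner, start, children=()):
        self.owner = owner        # stand-in for btrfs_header_owner(eb)
        self.start = start        # stand-in for eb->start
        self.children = children

def classify(root_id, node, parent, owner_has_ref, refs):
    """owner_has_ref(node) answers: does the root named by node.owner
    still hold a ref on node?  (distinguishes cases 2 and 3 above)"""
    if node.owner == root_id or (parent is not None and parent.owner == root_id):
        # cases 1, 2 and 4: a plain per-root backref
        refs[node.start] = ("TREE_BLOCK_REF", root_id)
    elif owner_has_ref(node):
        # case 2, deeper levels: the existing ref covers it; stop descending
        refs[node.start] = ("TREE_BLOCK_REF", node.owner)
        return refs
    else:
        # case 3: the original owner dropped it; set FULL_BACKREF and add
        # a SHARED_BLOCK_REF keyed on the parent's start
        refs[node.start] = ("SHARED_BLOCK_REF", parent.start)
    for child in node.children:
        classify(root_id, child, node, owner_has_ref, refs)
    return refs

# Original subvolume 256; snapshot root 257 whose top node was CoWed.
leaf = Node(owner=256, start=100)
mid  = Node(owner=256, start=50, children=(leaf,))
top  = Node(owner=257, start=10, children=(mid,))

clean   = classify(257, top, None, lambda n: True,  {})   # untouched snapshot
changed = classify(257, top, None, lambda n: False, {})   # original CoWed away
print(clean[100], changed[100])
```

In the clean-snapshot walk the leaf keeps its existing ref from root 256 and the walk stops there; in the changed case the leaf instead gets a SHARED_BLOCK_REF keyed on its parent's start, mirroring case 3.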


Does that make sense?  Thanks,

Josef

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks)
  2014-03-12 15:18 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks) Marc MERLIN
  2014-03-14  1:48 ` Marc MERLIN
@ 2014-03-14 14:42 ` Josef Bacik
  1 sibling, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2014-03-14 14:42 UTC (permalink / raw)
  To: Marc MERLIN, linux-btrfs

On 03/12/2014 11:18 AM, Marc MERLIN wrote:
> I have a file server with 4 cpu cores and 5 btrfs devices:
> 
> Label: btrfs_boot  uuid: e4c1daa8-9c39-4a59-b0a9-86297d397f3b
>         Total devices 1 FS bytes used 48.92GiB
>         devid    1 size 79.93GiB used 73.04GiB path /dev/mapper/cryptroot
> 
> Label: varlocalspace  uuid: 9f46dbe2-1344-44c3-b0fb-af2888c34f18
>         Total devices 1 FS bytes used 1.10TiB
>         devid    1 size 1.63TiB used 1.50TiB path /dev/mapper/cryptraid0
> 
> Label: btrfs_pool1  uuid: 6358304a-2234-4243-b02d-4944c9af47d7
>         Total devices 1 FS bytes used 7.16TiB
>         devid    1 size 14.55TiB used 7.50TiB path /dev/mapper/dshelf1
> 
> Label: btrfs_pool2  uuid: cb9df6d3-a528-4afc-9a45-4fed5ec358d6
>         Total devices 1 FS bytes used 3.34TiB
>         devid    1 size 7.28TiB used 3.42TiB path /dev/mapper/dshelf2
> 
> Label: bigbackup  uuid: 024ba4d0-dacb-438d-9f1b-eeb34083fe49
>         Total devices 5 FS bytes used 6.02TiB
>         devid    1 size 1.82TiB used 1.43TiB path /dev/dm-9
>         devid    2 size 1.82TiB used 1.43TiB path /dev/dm-6
>         devid    3 size 1.82TiB used 1.43TiB path /dev/dm-5
>         devid    4 size 1.82TiB used 1.43TiB path /dev/dm-7
>         devid    5 size 1.82TiB used 1.43TiB path /dev/dm-8
> 
> 
> I have a very long running btrfs send/receive from btrfs_pool1 to
> bigbackup (long running meaning that it's been slowly copying over
> 5 days)
> 
> The problem is that this is blocking IO to btrfs_pool2 which is
> using totally different drives. By blocking IO I mean that IO to
> pool2 kind of works sometimes, and hangs for very long times at
> other times.
> 
> It looks as if one rsync to btrfs_pool2 or one piece of IO hangs on
> a shared lock and once that happens, all IO to btrfs_pool2 stops
> for a long time. It does recover eventually without reboot, but the
> wait times are ridiculous (it could be 1H or more).
> 
> As I write this, I have a killall -9 rsync that waited for over
> 10mn before these processes would finally die:
> 
> 23555       07:36 wait_current_trans.isra.15     rsync -av -SH --delete (...)
> 23556       07:36 exit                           [rsync] <defunct>
> 25387  2-04:41:22 wait_current_trans.isra.15     rsync --password-file (...)
> 27481       31:26 wait_current_trans.isra.15     rsync --password-file (...)
> 29268    04:41:34 wait_current_trans.isra.15     rsync --password-file (...)
> 29343    04:41:31 exit                           [rsync] <defunct>
> 29492    04:41:27 wait_current_trans.isra.15     rsync --password-file (...)
> 
> 14559    07:14:49 wait_current_trans.isra.15     cp -i -al current
> 
> This is all stuck in btrfs kernel code. If someone wants sysrq-w,
> there it is. 
> http://marc.merlins.org/tmp/btrfs_full.txt
>
> A quick summary:
> 
> SysRq : Show Blocked State
>   task                        PC stack   pid father
> btrfs-cleaner   D ffff8802126b0840     0  3332      2 0x00000000
>  ffff8800c5dc9d00 0000000000000046 ffff8800c5dc9fd8 ffff8800c69f6310
>  00000000000141c0 ffff8800c69f6310 ffff88017574c170 ffff880211e671e8
>  0000000000000000 ffff880211e67000 ffff8801e5936e20 ffff8800c5dc9d10
> Call Trace:
>  [<ffffffff8160b0d9>] schedule+0x73/0x75
>  [<ffffffff8122a3c7>] wait_current_trans.isra.15+0x98/0xf4
>  [<ffffffff81085062>] ? finish_wait+0x65/0x65
>  [<ffffffff8122b86c>] start_transaction+0x48e/0x4f2
>  [<ffffffff8122bc4f>] ? __btrfs_end_transaction+0x2a1/0x2c6
>  [<ffffffff8122b8eb>] btrfs_start_transaction+0x1b/0x1d
>  [<ffffffff8121c5cd>] btrfs_drop_snapshot+0x443/0x610
>  [<ffffffff8160d7b3>] ? _raw_spin_unlock+0x17/0x2a
>  [<ffffffff81074efb>] ? finish_task_switch+0x51/0xdb
>  [<ffffffff8160afbf>] ? __schedule+0x537/0x5de
>  [<ffffffff8122c08d>] btrfs_clean_one_deleted_snapshot+0x103/0x10f
>  [<ffffffff81224859>] cleaner_kthread+0x103/0x136
>  [<ffffffff81224756>] ? btrfs_alloc_root+0x26/0x26
>  [<ffffffff8106bc1b>] kthread+0xae/0xb6
>  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
>  [<ffffffff816141bc>] ret_from_fork+0x7c/0xb0
>  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
> btrfs-transacti D ffff88021387eb00     0  3333      2 0x00000000
>  ffff8800c5dcb890 0000000000000046 ffff8800c5dcbfd8 ffff88021387e5d0
>  00000000000141c0 ffff88021387e5d0 ffff88021f2141c0 ffff88021387e5d0
>  ffff8800c5dcb930 ffffffff810fe574 0000000000000002 ffff8800c5dcb8a0
> Call Trace:
>  [<ffffffff810fe574>] ? wait_on_page_read+0x3c/0x3c
>  [<ffffffff8160b0d9>] schedule+0x73/0x75
>  [<ffffffff8160b27e>] io_schedule+0x60/0x7a
>  [<ffffffff810fe582>] sleep_on_page+0xe/0x12
>  [<ffffffff8160b510>] __wait_on_bit+0x48/0x7a
>  [<ffffffff810fe522>] wait_on_page_bit+0x7a/0x7c
>  [<ffffffff81085096>] ? autoremove_wake_function+0x34/0x34
>  [<ffffffff81245c70>] read_extent_buffer_pages+0x1bf/0x204
>  [<ffffffff81223710>] ? free_root_pointers+0x5b/0x5b
>  [<ffffffff81224412>] btree_read_extent_buffer_pages.constprop.45+0x66/0x100
>  [<ffffffff81225367>] read_tree_block+0x2f/0x47
>  [<ffffffff8120e4b6>] read_block_for_search.isra.26+0x24a/0x287
>  [<ffffffff8120fcf7>] btrfs_search_slot+0x4f4/0x6bb
>  [<ffffffff81214c3d>] lookup_inline_extent_backref+0xda/0x3fb
>  [<ffffffff812167e1>] __btrfs_free_extent+0xf4/0x712
>  [<ffffffff8121ba57>] __btrfs_run_delayed_refs+0x939/0xbdf
>  [<ffffffff8121d896>] btrfs_run_delayed_refs+0x81/0x18f
>  [<ffffffff8122af3e>] btrfs_commit_transaction+0x3a9/0x849
>  [<ffffffff81085062>] ? finish_wait+0x65/0x65
>  [<ffffffff81227598>] transaction_kthread+0xf8/0x1ab
>  [<ffffffff812274a0>] ? btrfs_cleanup_transaction+0x43f/0x43f
>  [<ffffffff8106bc1b>] kthread+0xae/0xb6
>  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
>  [<ffffffff816141bc>] ret_from_fork+0x7c/0xb0
>  [<ffffffff8106bb6d>] ? __kthread_parkme+0x61/0x61
> 
> 
> Worse, taking that dump gave me:
> 
> gargamel:/etc/udev/rules.d# echo w > /proc/sysrq-trigger
> 
> Message from syslogd@gargamel at Mar 12 07:13:16 ...
>  kernel:[1234536.531251] BUG: soft lockup - CPU#1 stuck for 22s! [mysqld:12552]
> 
> Message from syslogd@gargamel at Mar 12 07:13:16 ...
>  kernel:[1234536.559276] BUG: soft lockup - CPU#2 stuck for 22s! [mysqld:4955]
> 
> Message from syslogd@gargamel at Mar 12 07:13:16 ...
>  kernel:[1234540.538636] BUG: soft lockup - CPU#0 stuck for 22s! [kswapd0:48]
> 
> (there are more in the logs attached on what those lockups are)
> 
> Thankfully my btrfs send/receive is still working and should
> finish today, but the amount of time it's been taking has been
> painful, and the effect it's been having on the rest of my system,
> making it hang or rendering its devices unusable for long periods
> of time, has been punishing.
> 
> Can someone figure out from the kernel logs what is causing those
> near deadlocks?
> 
> Actually this was so bad apparently that sysrq w didn't even all
> make it to syslog/disk (also on btrfs) but thankfully I got it on
> serial console.
> 
> 
> I also found this during sysrq. Should it be reported to someone
> else?
> 
> INFO: rcu_preempt detected stalls on CPUs/tasks:
>  3: (1 GPs behind) idle=395/140000000000000/0 softirq=284540927/284540928
>  last_accelerate: ed62/2821, nonlazy_posted: 1, ..
>  (detected by 0, t=15002 jiffies, g=100566635, c=100566634, q=87438)
> sending NMI to all CPUs:
> NMI backtrace for cpu 3
> CPU: 3 PID: 21730 Comm: bash Not tainted 3.14.0-rc3-amd64-i915-preempt-20140216 #2
> Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 3806 08/20/2012
> task: ffff88001cf3a710 ti: ffff880037f78000 task.ti: ffff880037f78000
> RIP: 0010:[<ffffffff81309a80>]  [<ffffffff81309a80>] paravirt_read_tsc+0x0/0xd
> RSP: 0018:ffff880037f79ac0  EFLAGS: 00000046
> RAX: 0000000000000003 RBX: 0000000094b043ff RCX: 0000000000000000
> RDX: 0000000000000004 RSI: 00000000000003fd RDI: 0000000000000001
> RBP: ffff880037f79ae8 R08: ffffffff81cf24d0 R09: 00000000fffffffe
> R10: 0000000000001a18 R11: 0000000000000000 R12: 00000000000009fb
> R13: 0000000000000003 R14: 0000000094b047d7 R15: 0000000000000036
> FS:  0000000000000000(0000) GS:ffff88021f380000(0063) knlGS:00000000f754b6c0
> CS:  0010 DS: 002b ES: 002b CR0: 0000000080050033
> CR2: 00000000ffc13412 CR3: 0000000145622000 CR4: 00000000000407e0
> Stack:
>  ffffffff81309b59 ffffffff81f27560 00000000000026f0 0000000000000020
>  ffffffff813c9e1b ffff880037f79af8 ffffffff81309ac9 ffff880037f79b08
>  ffffffff81309aef ffff880037f79b30 ffffffff813c9cc6 ffffffff81f27560
> Call Trace:
>  [<ffffffff81309b59>] ? delay_tsc+0x3d/0xa4
>  [<ffffffff813c9e1b>] ? serial8250_console_write+0x10d/0x10d
>  [<ffffffff81309ac9>] __delay+0xf/0x11
>  [<ffffffff81309aef>] __const_udelay+0x24/0x26
>  [<ffffffff813c9cc6>] wait_for_xmitr+0x49/0x91
>  [<ffffffff813c9e37>] serial8250_console_putchar+0x1c/0x2e
>  [<ffffffff813c5d97>] uart_console_write+0x3f/0x54
>  [<ffffffff813c9dc8>] serial8250_console_write+0xba/0x10d
>  [<ffffffff8109363b>] call_console_drivers.constprop.6+0xbc/0xf0
>  [<ffffffff81093bf7>] console_unlock+0x269/0x302
>  [<ffffffff8109405e>] vprintk_emit+0x3ce/0x3ff
>  [<ffffffff81604702>] printk+0x54/0x56
>  [<ffffffff81089799>] ? arch_local_irq_save+0x15/0x1b
>  [<ffffffff8108752e>] print_cfs_rq+0x4fc/0xd71
>  [<ffffffff81080fff>] print_cfs_stats+0x5a/0x9e
>  [<ffffffff81086c65>] print_cpu+0x538/0x8e3
>  [<ffffffff81087f7e>] sysrq_sched_debug_show+0x1f/0x42
>  [<ffffffff81078874>] show_state_filter+0x92/0x9f
>  [<ffffffff813b7c7a>] sysrq_handle_showstate_blocked+0x13/0x15
>  [<ffffffff813b82c3>] __handle_sysrq+0xa0/0x138
>  [<ffffffff813b8630>] write_sysrq_trigger+0x28/0x37
>  [<ffffffff811a565a>] proc_reg_write+0x5a/0x7c
>  [<ffffffff81155417>] vfs_write+0xab/0x107
>  [<ffffffff81155b19>] SyS_write+0x46/0x79
>  [<ffffffff81615f6c>] sysenter_dispatch+0x7/0x21
> Code: 89 e5 e8 a2 fe ff ff 89 c2 66 31 c0 c1 e2 10 01 d0 15 ff ff 00 00 f7 d0 c1 e8 10 5d c3 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <55> 48 89 e5 e8 9c c4 d0 ff 66 90 5d c3 66 66 66 66 90 55 48 89
> 
> 

I'm working on a deadlock with send/receive and then I'll turn my
attention to this.  Thanks,

Josef
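
[Editor's note: the rsync and cp processes quoted above are all stuck in D
(uninterruptible) state, which is exactly what sysrq-w enumerates. For
illustration, a rough userspace approximation of that listing can be sketched
by scanning /proc. This is a sketch only, with invented helper names; unlike
sysrq-w it cannot show the kernel stacks that make the report useful for
debugging.]

```python
# Sketch: list tasks stuck in D (uninterruptible) state by scanning /proc,
# roughly the set of tasks sysrq-w reports (minus their kernel stacks).
# Helper names here are invented for illustration, not part of any tool.
import os

def task_state(stat_line: str) -> str:
    """Return the one-letter state field from a /proc/<pid>/stat line.

    The comm field is parenthesised and may itself contain spaces or
    parens, so split on the LAST ')' instead of naively on whitespace.
    """
    return stat_line.rsplit(")", 1)[1].split()[0]

def blocked_tasks(proc: str = "/proc"):
    """Return (pid, comm) for every task currently in D state."""
    hung = []
    for pid in filter(str.isdigit, os.listdir(proc)):
        try:
            with open(f"{proc}/{pid}/stat") as f:
                line = f.read()
        except OSError:
            continue  # the task exited while we were scanning
        if task_state(line) == "D":
            comm = line[line.index("(") + 1:line.rindex(")")]
            hung.append((int(pid), comm))
    return hung
```
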


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots
  2014-03-14 14:36             ` Josef Bacik
@ 2014-03-17 12:21               ` Shilong Wang
  0 siblings, 0 replies; 19+ messages in thread
From: Shilong Wang @ 2014-03-17 12:21 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs

Hi Josef,

Thanks for your information ^_^. I have just finished the code and it
passed my simple test, and I will do more tests on rebuilding the
extent tree with snapshots before I send out the patches.

Thanks,
Wang

2014-03-14 22:36 GMT+08:00 Josef Bacik <jbacik@fb.com>:
> On 03/14/2014 09:36 AM, Wang Shilong wrote:
>> Hi Josef,
>>
>> Just ping this again.
>>
>> Do you have any good ideas on how to rebuild the extent tree if a
>> broken filesystem is filled with snapshots?
>>
>> I was working on this recently, but I was blocked because I cannot
>> verify whether an extent is in *FULL BACKREF* mode or not, as a
>> *FULL BACKREF* extent's refs can be 1 or more than 1.
>>
>> I am willing to test code or have a try if you could give me some
>> advice.
>>
>
> Full backrefs aren't too hard.  Basically all you have to do is walk
> down the fs tree and keep track of btrfs_header_owner(eb) for
> everything we walk into.  If btrfs_header_owner(eb) == root->objectid
> for the tree we are walking down then we need a ye olde normal backref
> for this block.  If btrfs_header_owner(eb) != root->objectid we _may_
> need a full backref, it depends on who owns the parent block.  The
> following may be incomplete, I'm kind of sick
>
> 1) We walk down the original tree, every eb we encounter has
> btrfs_header_owner(eb) == root->objectid.  We add normal references
> (BTRFS_TREE_BLOCK_REF_KEY) for this root.  World peace is achieved.
>
> 2) We walk down the snapshotted tree.  Say we didn't change anything
> at all, it was just a clean snapshot and then boom.  So the
> btrfs_header_owner(root->node) == root->objectid, so normal backref.
> We walk down to the next level, where btrfs_header_owner(eb) !=
> root->objectid, but the level above did, so we add normal refs for all
> of these blocks.  We go down the next level, now our
> btrfs_header_owner(parent) != root->objectid and
> btrfs_header_owner(eb) != root->objectid.  This is where we need to
> now go back and see if btrfs_header_owner(eb) currently has a ref on
> eb.  If it does we are done, move on to the next block in this same
> level, we don't have to go further down.
>
> 3) Harder case, we snapshotted and then changed things in the original
> root.  Do the same thing as in step 2, but now we get down to
> btrfs_header_owner(eb) != root->objectid && btrfs_header_owner(parent)
> != root->objectid.  We look up the references we have for eb and notice
> that btrfs_header_owner(eb) no longer refers to eb.  So now we must
> set FULL_BACKREF on this extent reference and add a
> SHARED_BLOCK_REF_KEY for this eb using the parent->start as the
> offset.  And we need to keep walking down and doing the same thing
> until we either hit level 0 or btrfs_header_owner(eb) has a ref on the
> block.
>
> 4) Not really a whole special case, just something to keep in mind, if
> btrfs_header_owner(parent) == root->objectid but
> btrfs_header_owner(eb) != root->objectid that means we have a normal
> TREE_BLOCK_REF on eb, it's only when the parent doesn't match our
> current root that it's a problem.
>
>
> Does that make sense?  Thanks,
>
> Josef
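
[Editor's note: the four cases Josef describes can be condensed into a small
recursive walk. The toy model below is a hedged sketch, not btrfs-progs code:
the dict layout and the `owner_has_ref` callback are invented stand-ins for
`btrfs_header_owner()` and the real extent-tree lookup that checks whether the
owning root still holds a ref on the block.]

```python
# Toy model of the backref classification above -- NOT btrfs-progs code.
# An extent buffer (eb) is a dict with just the fields we need; the
# owner_has_ref(eb) callback stands in for the extent-tree lookup
# "does btrfs_header_owner(eb) still hold a TREE_BLOCK_REF on eb?".

NORMAL = "TREE_BLOCK_REF"
FULL = "SHARED_BLOCK_REF"  # implies the FULL_BACKREF flag on the extent item

def classify(eb, root_id, owner_has_ref, parent=None, out=None):
    """Walk down from eb, recording which backref each block needs."""
    if out is None:
        out = {}
    if eb["owner"] == root_id or (parent is not None
                                  and parent["owner"] == root_id):
        # Cases 1 and 4: the block (or its parent) belongs to this root,
        # so a normal per-root backref is enough; keep walking down.
        out[eb["start"]] = NORMAL
    elif owner_has_ref(eb):
        # Case 2: the owning root still references this block directly;
        # nothing to add here, and we can stop descending this subtree.
        return out
    else:
        # Case 3: the owner lost its ref, so this block needs FULL_BACKREF
        # keyed by parent->start; keep going down until case 2 or level 0.
        out[eb["start"]] = (FULL, parent["start"])
    for child in eb.get("children", []):
        classify(child, root_id, owner_has_ref, eb, out)
    return out
```
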

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2014-03-17 12:21 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-03-12 15:18 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks) Marc MERLIN
2014-03-14  1:48 ` Marc MERLIN
2014-03-10 10:39   ` [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots Wang Shilong
2014-03-10 12:12     ` Shilong Wang
2014-03-10 15:50       ` Josef Bacik
2014-03-11  1:23         ` Wang Shilong
2014-03-08 21:53           ` send/receive locking Hugo Mills
2014-03-08 21:55             ` Josef Bacik
2014-03-08 22:00               ` Hugo Mills
2014-03-08 22:02                 ` Josef Bacik
2014-03-08 22:16                   ` Hugo Mills
2014-03-09 16:43                     ` Hugo Mills
2014-03-10 22:28                       ` Hugo Mills
2014-03-14  2:19             ` Marc MERLIN
2014-03-14 13:36           ` [PATCH] Btrfs-progs: fsck: disable --init-extent-tree option when using snapshots Wang Shilong
2014-03-14 14:36             ` Josef Bacik
2014-03-17 12:21               ` Shilong Wang
2014-03-14  4:54   ` 3.14.0-rc3: btrfs send/receive blocks btrfs IO on other devices (near deadlocks) Duncan
2014-03-14 14:42 ` Josef Bacik
