From: Jerome Ibanes <Jerome@ops.zillow.com>
To: Chris Mason <chris.mason@oracle.com>
Cc: Shaohua Li <shaohua.li@intel.com>,
	"Yan, Zheng " <yanzheng@21cn.com>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
	"Zhang, Yanmin" <yanmin.zhang@intel.com>,
	"Chen, Tim C" <tim.c.chen@intel.com>
Subject: Re: btrfs: hanging processes - race condition?
Date: Mon, 14 Jun 2010 12:13:52 -0700 (PDT)
Message-ID: <Pine.LNX.4.64.1006141211560.12518@lyn-del-uti-015>
In-Reply-To: <20100614190832.GK18266@think>


> On Mon, Jun 14, 2010 at 11:12:53AM -0700, Jerome Ibanes wrote:
>> On Mon, 14 Jun 2010, Chris Mason wrote:
>>
>>> On Sun, Jun 13, 2010 at 02:50:06PM +0800, Shaohua Li wrote:
>>>> On Fri, Jun 11, 2010 at 10:32:07AM +0800, Yan, Zheng  wrote:
>>>>> On Fri, Jun 11, 2010 at 9:12 AM, Shaohua Li <shaohua.li@intel.com> wrote:
>>>>>> On Fri, Jun 11, 2010 at 01:41:41AM +0800, Jerome Ibanes wrote:
>>>>>>> List,
>>>>>>>
>>>>>>> I ran into a hang issue (a race condition: CPU usage is high
>>>>>>> while the server is idle, meaning that btrfs is hanging, and
>>>>>>> I/O wait is high as well) running 2.6.34 on Debian lenny on an
>>>>>>> x86_64 server (dual Opteron 275 w/ 16GB RAM). The btrfs
>>>>>>> filesystem lives on 18x 300GB SCSI spindles, configured as
>>>>>>> RAID-0, as shown below:
>>>>>>>
>>>>>>> Label: none  uuid: bc6442c6-2fe2-4236-a5aa-6b7841234c52
>>>>>>>          Total devices 18 FS bytes used 2.94TB
>>>>>>>          devid    5 size 279.39GB used 208.33GB path /dev/cciss/c1d0
>>>>>>>          devid   17 size 279.39GB used 208.34GB path /dev/cciss/c1d8
>>>>>>>          devid   16 size 279.39GB used 209.33GB path /dev/cciss/c1d7
>>>>>>>          devid    4 size 279.39GB used 208.33GB path /dev/cciss/c0d4
>>>>>>>          devid    1 size 279.39GB used 233.72GB path /dev/cciss/c0d1
>>>>>>>          devid   13 size 279.39GB used 208.33GB path /dev/cciss/c1d4
>>>>>>>          devid    8 size 279.39GB used 208.33GB path /dev/cciss/c1d11
>>>>>>>          devid   12 size 279.39GB used 208.33GB path /dev/cciss/c1d3
>>>>>>>          devid    3 size 279.39GB used 208.33GB path /dev/cciss/c0d3
>>>>>>>          devid    9 size 279.39GB used 208.33GB path /dev/cciss/c1d12
>>>>>>>          devid    6 size 279.39GB used 208.33GB path /dev/cciss/c1d1
>>>>>>>          devid   11 size 279.39GB used 208.33GB path /dev/cciss/c1d2
>>>>>>>          devid   14 size 279.39GB used 208.33GB path /dev/cciss/c1d5
>>>>>>>          devid    2 size 279.39GB used 233.70GB path /dev/cciss/c0d2
>>>>>>>          devid   15 size 279.39GB used 209.33GB path /dev/cciss/c1d6
>>>>>>>          devid   10 size 279.39GB used 208.33GB path /dev/cciss/c1d13
>>>>>>>          devid    7 size 279.39GB used 208.33GB path /dev/cciss/c1d10
>>>>>>>          devid   18 size 279.39GB used 208.34GB path /dev/cciss/c1d9
>>>>>>> Btrfs v0.19-16-g075587c-dirty
>>>>>>>
>>>>>>> The filesystem, mounted at /mnt/btrfs, is hanging: no existing
>>>>>>> or new process can access it, although 'df' still displays the
>>>>>>> disk usage (3TB out of 5TB). The disks appear to be physically
>>>>>>> healthy. Please note that a significant number of files were
>>>>>>> placed on this filesystem, between 20 and 30 million.
>>>>>>>
>>>>>>> The relevant kernel messages are displayed below:
>>>>>>>
>>>>>>> INFO: task btrfs-submit-0:4220 blocked for more than 120 seconds.
>>>>>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>>> btrfs-submit- D 000000010042e12f     0  4220      2 0x00000000
>>>>>>>   ffff8803e584ac70 0000000000000046 0000000000004000 0000000000011680
>>>>>>>   ffff8803f7349fd8 ffff8803f7349fd8 ffff8803e584ac70 0000000000011680
>>>>>>>   0000000000000001 ffff8803ff99d250 ffffffff8149f020 0000000081150ab0
>>>>>>> Call Trace:
>>>>>>>   [<ffffffff813089f3>] ? io_schedule+0x71/0xb1
>>>>>>>   [<ffffffff811470be>] ? get_request_wait+0xab/0x140
>>>>>>>   [<ffffffff810406f4>] ? autoremove_wake_function+0x0/0x2e
>>>>>>>   [<ffffffff81143a4d>] ? elv_rq_merge_ok+0x89/0x97
>>>>>>>   [<ffffffff8114a245>] ? blk_recount_segments+0x17/0x27
>>>>>>>   [<ffffffff81147429>] ? __make_request+0x2d6/0x3fc
>>>>>>>   [<ffffffff81145b16>] ? generic_make_request+0x207/0x268
>>>>>>>   [<ffffffff81145c12>] ? submit_bio+0x9b/0xa2
>>>>>>>   [<ffffffffa01aa081>] ? btrfs_requeue_work+0xd7/0xe1 [btrfs]
>>>>>>>   [<ffffffffa01a5365>] ? run_scheduled_bios+0x297/0x48f [btrfs]
>>>>>>>   [<ffffffffa01aa687>] ? worker_loop+0x17c/0x452 [btrfs]
>>>>>>>   [<ffffffffa01aa50b>] ? worker_loop+0x0/0x452 [btrfs]
>>>>>>>   [<ffffffff81040331>] ? kthread+0x79/0x81
>>>>>>>   [<ffffffff81003674>] ? kernel_thread_helper+0x4/0x10
>>>>>>>   [<ffffffff810402b8>] ? kthread+0x0/0x81
>>>>>>>   [<ffffffff81003670>] ? kernel_thread_helper+0x0/0x10
>>>>>> This looks like the issue we saw too: http://lkml.org/lkml/2010/6/8/375.
>>>>>> It is reproducible in our setup.
>>>>>
>>>>> I think I know the cause of http://lkml.org/lkml/2010/6/8/375.
>>>>> The code in the first do-while loop in btrfs_commit_transaction
>>>>> sets the current process to the TASK_UNINTERRUPTIBLE state, then
>>>>> calls btrfs_start_delalloc_inodes, btrfs_wait_ordered_extents and
>>>>> btrfs_run_ordered_operations(). All of these functions may call
>>>>> cond_resched(), so the task can be scheduled out while still in
>>>>> the uninterruptible state, with no wake-up pending.
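
For context, here is a minimal sketch of the pattern Zheng describes,
in illustrative pseudo-kernel C (arguments omitted and the loop
condition simplified; this is not the actual btrfs source):

	/* simplified sketch of the first do-while loop in
	 * btrfs_commit_transaction */
	do {
		set_current_state(TASK_UNINTERRUPTIBLE);

		/* each of these helpers may call cond_resched() internally */
		btrfs_run_ordered_operations(...);
		btrfs_start_delalloc_inodes(...);
		btrfs_wait_ordered_extents(...);

		/*
		 * If cond_resched() actually schedules above, the task is
		 * still TASK_UNINTERRUPTIBLE and goes to sleep with no
		 * wake-up queued -- a classic lost-wakeup hang.
		 */
		schedule();
	} while (more_writers_joined);

The usual idiom is to set the task state only immediately before
schedule(), after any call that may itself reschedule.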
>>>> Hi,
>>>> When I test random writes, I saw a lot of threads jump into
>>>> btree_writepages() and do nothing, and I/O throughput was zero for
>>>> some time. It looks like there is a livelock. See the code of
>>>> btree_writepages():
>>>> 	if (wbc->sync_mode == WB_SYNC_NONE) {
>>>> 		struct btrfs_root *root = BTRFS_I(mapping->host)->root;
>>>> 		u64 num_dirty;
>>>> 		unsigned long thresh = 32 * 1024 * 1024;
>>>>
>>>> 		if (wbc->for_kupdate)
>>>> 			return 0;
>>>>
>>>> 		/* this is a bit racy, but that's ok */
>>>> 		num_dirty = root->fs_info->dirty_metadata_bytes;
>>>>>>>>>> 		if (num_dirty < thresh)
>>>> 			return 0;
>>>> 	}
>>>> The marked line is quite intrusive. In my test, the livelock is
>>>> caused by the thresh check: dirty_metadata_bytes stays below 32MB,
>>>> so the function returns without writing anything. Without the
>>>> check, I can't see the livelock. I'm not sure whether this is
>>>> related to the hang.
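
In other words, assuming the flusher simply keeps retrying while pages
stay dirty (a simplified, hypothetical view of the caller, not the
actual mm writeback code), the early return means no forward progress
is ever made:

	/* hypothetical caller loop, for illustration only */
	while (mapping_has_dirty_pages(mapping)) {
		/*
		 * With sync_mode == WB_SYNC_NONE and
		 * dirty_metadata_bytes < 32MB, btree_writepages()
		 * returns 0 without writing a single page...
		 */
		btree_writepages(mapping, &wbc);
		/*
		 * ...so the dirty count never drops, the loop comes
		 * straight back, and I/O throughput sits at zero.
		 */
	}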
>>>
>>> How much RAM do you have?  The goal of the check is to avoid
>>> writing metadata blocks, because once we write them we have to do
>>> more I/O to COW them again if they are changed later.
>>
>> This server has 16GB of RAM on an x86_64 (dual Opteron 275, ECC memory).
>>
>>> It shouldn't be looping hard in btrfs there; what was the workload?
>>
>> The workload was the extraction of large tarballs (one at a time,
>> about 300+ files extracted per second from a single tarball, which
>> is pretty good). As you might expect, the disks were tested (read
>> and write) for physical errors before I reported this bug.
>
> I think Zheng is right and this one will get fixed by the latest code.
> The spinning writepage part should be a different problem.

I'm trying to reproduce this with 2.6.35-rc3; expect results within 24 hours.


Jerome J. Ibanes


Thread overview: 12+ messages
2010-06-10 17:41 btrfs: hanging processes - race condition? Jerome Ibanes
2010-06-11  1:12 ` Shaohua Li
2010-06-11  2:32   ` Yan, Zheng 
2010-06-13  6:50     ` Shaohua Li
2010-06-14 13:28       ` Chris Mason
2010-06-14 18:12         ` Jerome Ibanes
2010-06-14 19:08           ` Chris Mason
2010-06-14 19:13             ` Jerome Ibanes [this message]
2010-06-16 18:12               ` Jerome Ibanes
2010-06-17  1:41         ` Shaohua Li
2010-06-18  0:57           ` Shaohua Li
2010-06-14 13:26     ` Chris Mason
