From: Al Viro <viro@ZenIV.linux.org.uk>
To: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Miklos Szeredi <mszeredi@suse.cz>,
linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: fs/dcache.c - BUG: soft lockup - CPU#5 stuck for 22s! [systemd-udevd:1667]
Date: Wed, 28 May 2014 12:57:01 +0100
Message-ID: <20140528115701.GY18016@ZenIV.linux.org.uk>
In-Reply-To: <20140528073751.GB1757@lahna.fi.intel.com>
On Wed, May 28, 2014 at 10:37:51AM +0300, Mika Westerberg wrote:
> I sent you the whole log privately so that I don't spam everyone.
>
> Summary is here:
>
> May 28 10:24:23 lahna kernel: scsi 14:0:0:0: Direct-Access JetFlash Transcend 4GB 8.07 PQ: 0 ANSI: 4
> May 28 10:24:23 lahna kernel: sd 14:0:0:0: Attached scsi generic sg4 type 0
> May 28 10:24:23 lahna kernel: sd 14:0:0:0: [sdc] 7864320 512-byte logical blocks: (4.02 GB/3.75 GiB)
> May 28 10:24:23 lahna kernel: sd 14:0:0:0: [sdc] Write Protect is off
> May 28 10:24:23 lahna kernel: sd 14:0:0:0: [sdc] Mode Sense: 23 00 00 00
> May 28 10:24:23 lahna kernel: sd 14:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
> May 28 10:24:23 lahna kernel: sdc: sdc1
> May 28 10:24:23 lahna kernel: sd 14:0:0:0: [sdc] Attached SCSI removable disk
>
> Here I detached the USB stick:
>
> May 28 10:24:32 lahna kernel: usb 3-10.4: USB disconnect, device number 6
> May 28 10:24:32 lahna kernel: check_submounts_and_drop[/dev/block/8:33]; CPU 4 PID 576 [systemd-udevd]
> May 28 10:24:32 lahna kernel: dput[/dev/block/8:33]; CPU 4 PID 576 [systemd-udevd]
> May 28 10:24:32 lahna kernel: check_submounts_and_drop[3-10/3-10.4/3-10.4:1.0/host14]; CPU 1 PID 1683 [systemd-udevd]
> May 28 10:24:32 lahna kernel: shrink[3-10.4:1.0/host14/target14:0:0/subsystem]; CPU 1 PID 1683 [systemd-udevd]
> May 28 10:24:32 lahna kernel: shrink[host14/target14:0:0/14:0:0:0/rev]; CPU 1 PID 1683 [systemd-udevd]
> May 28 10:24:32 lahna kernel: shrink[host14/target14:0:0/14:0:0:0/vendor]; CPU 1 PID 1683 [systemd-udevd]
> May 28 10:24:32 lahna kernel: shrink[host14/target14:0:0/14:0:0:0/model]; CPU 1 PID 1683 [systemd-udevd]
> May 28 10:24:32 lahna kernel: check_submounts_and_drop[/dev/block/8:32]; CPU 4 PID 576 [systemd-udevd]
> ...
> May 28 10:24:32 lahna kernel: check_submounts_and_drop[0000:00:14.0/usb3/3-10/3-10.4]; CPU 6 PID 1684 [systemd-udevd]
> May 28 10:24:32 lahna kernel: check_submounts_and_drop[0000:00:14.0/usb3/3-10/3-10.4]; CPU 7 PID 1685 [systemd-udevd]
> May 28 10:24:32 lahna kernel: shrink[usb3/3-10/3-10.4/ltm_capable]; CPU 7 PID 1685 [systemd-udevd]
...
Hmm... As it is, we have d_walk() trying _very_ hard in those situations.
Could you add this on top of the previous patch and see if the livelock behaviour changes?
diff --git a/fs/dcache.c b/fs/dcache.c
index 42ae01e..4ce58d3 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1209,7 +1209,7 @@ static enum d_walk_ret select_collect(void *_data, struct dentry *dentry)
* ensures forward progress). We'll be coming back to find
* the rest.
*/
- if (!list_empty(&data->dispose))
+ if (data->found)
ret = need_resched() ? D_WALK_QUIT : D_WALK_NORETRY;
out:
return ret;