From: Marc Zyngier <marc.zyngier@arm.com>
To: YASUAKI ISHIMATSU <yasu.isimatu@gmail.com>
Cc: tglx@linutronix.de, axboe@kernel.dk, mpe@ellerman.id.au,
keith.busch@intel.com, peterz@infradead.org,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: system hung up when offlining CPUs
Date: Wed, 9 Aug 2017 12:42:13 +0100 [thread overview]
Message-ID: <20170809124213.0d9518bb@why.wild-wind.fr.eu.org> (raw)
In-Reply-To: <c55a33b4-a886-8882-dd8d-5c488f94ee06@gmail.com>
On Tue, 8 Aug 2017 15:25:35 -0400
YASUAKI ISHIMATSU <yasu.isimatu@gmail.com> wrote:
> Hi Thomas,
>
> When offlining all CPUs except cpu0, the system hung up with the following message.
>
> [...] INFO: task kworker/u384:1:1234 blocked for more than 120 seconds.
> [...] Not tainted 4.12.0-rc6+ #19
> [...] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [...] kworker/u384:1 D 0 1234 2 0x00000000
> [...] Workqueue: writeback wb_workfn (flush-253:0)
> [...] Call Trace:
> [...] __schedule+0x28a/0x880
> [...] schedule+0x36/0x80
> [...] schedule_timeout+0x249/0x300
> [...] ? __schedule+0x292/0x880
> [...] __down_common+0xfc/0x132
> [...] ? _xfs_buf_find+0x2bb/0x510 [xfs]
> [...] __down+0x1d/0x1f
> [...] down+0x41/0x50
> [...] xfs_buf_lock+0x3c/0xf0 [xfs]
> [...] _xfs_buf_find+0x2bb/0x510 [xfs]
> [...] xfs_buf_get_map+0x2a/0x280 [xfs]
> [...] xfs_buf_read_map+0x2d/0x180 [xfs]
> [...] xfs_trans_read_buf_map+0xf5/0x310 [xfs]
> [...] xfs_btree_read_buf_block.constprop.35+0x78/0xc0 [xfs]
> [...] xfs_btree_lookup_get_block+0x88/0x160 [xfs]
> [...] xfs_btree_lookup+0xd0/0x3b0 [xfs]
> [...] ? xfs_allocbt_init_cursor+0x41/0xe0 [xfs]
> [...] xfs_alloc_ag_vextent_near+0xaf/0xaa0 [xfs]
> [...] xfs_alloc_ag_vextent+0x13c/0x150 [xfs]
> [...] xfs_alloc_vextent+0x425/0x590 [xfs]
> [...] xfs_bmap_btalloc+0x448/0x770 [xfs]
> [...] xfs_bmap_alloc+0xe/0x10 [xfs]
> [...] xfs_bmapi_write+0x61d/0xc10 [xfs]
> [...] ? kmem_zone_alloc+0x96/0x100 [xfs]
> [...] xfs_iomap_write_allocate+0x199/0x3a0 [xfs]
> [...] xfs_map_blocks+0x1e8/0x260 [xfs]
> [...] xfs_do_writepage+0x1ca/0x680 [xfs]
> [...] write_cache_pages+0x26f/0x510
> [...] ? xfs_vm_set_page_dirty+0x1d0/0x1d0 [xfs]
> [...] ? blk_mq_dispatch_rq_list+0x305/0x410
> [...] ? deadline_remove_request+0x7d/0xc0
> [...] xfs_vm_writepages+0xb6/0xd0 [xfs]
> [...] do_writepages+0x1c/0x70
> [...] __writeback_single_inode+0x45/0x320
> [...] writeback_sb_inodes+0x280/0x570
> [...] __writeback_inodes_wb+0x8c/0xc0
> [...] wb_writeback+0x276/0x310
> [...] ? get_nr_dirty_inodes+0x4d/0x80
> [...] wb_workfn+0x2d4/0x3b0
> [...] process_one_work+0x149/0x360
> [...] worker_thread+0x4d/0x3c0
> [...] kthread+0x109/0x140
> [...] ? rescuer_thread+0x380/0x380
> [...] ? kthread_park+0x60/0x60
> [...] ret_from_fork+0x25/0x30
>
>
> I bisected the upstream kernel and found that the following commit
> introduced the issue.
>
> commit c5cb83bb337c25caae995d992d1cdf9b317f83de
> Author: Thomas Gleixner <tglx@linutronix.de>
> Date: Tue Jun 20 01:37:51 2017 +0200
>
> genirq/cpuhotplug: Handle managed IRQs on CPU hotplug
Can you please post your /proc/interrupts and details of which
interrupt you think goes wrong? This backtrace is not telling us much
in terms of where to start looking...
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
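(Editor's aside, not part of the original mail: the data Marc asks for is usually gathered by snapshotting /proc/interrupts before and after offlining and comparing per-CPU counts. A rough sketch of such a comparison helper follows; the function name and sample text are illustrative, and the parser assumes the standard layout of a CPU header row followed by "IRQ: count count ... description" lines.)

```python
def parse_interrupts(text):
    """Parse /proc/interrupts-style text into {irq_number: [per-CPU counts]}.

    Only numeric IRQ lines are kept; summary rows such as NMI/LOC are skipped.
    """
    table = {}
    lines = text.strip().splitlines()
    ncpus = len(lines[0].split())  # header row: CPU0 CPU1 ...
    for line in lines[1:]:
        label, _, rest = line.partition(":")
        label = label.strip()
        if not label.isdigit():
            continue  # skip NMI, LOC, and other named summary rows
        table[int(label)] = [int(tok) for tok in rest.split()[:ncpus]]
    return table

sample = """\
        CPU0       CPU1
  24:    1000        20   PCI-MSI 65536-edge  nvme0q0
 NMI:       5         3   Non-maskable interrupts
"""
print(parse_interrupts(sample))  # {24: [1000, 20]}
```

In practice one would call this on open("/proc/interrupts").read() taken before and after the offline operation, then look for IRQ lines whose counts stop advancing on the surviving CPU.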
Thread overview: 28+ messages
2017-08-08 19:25 system hung up when offlining CPUs YASUAKI ISHIMATSU
2017-08-09 11:42 ` Marc Zyngier [this message]
2017-08-09 19:09 ` YASUAKI ISHIMATSU
2017-08-10 11:54 ` Marc Zyngier
2017-08-21 12:07 ` Christoph Hellwig
2017-08-21 13:18 ` Christoph Hellwig
2017-08-21 13:37 ` Marc Zyngier
2017-09-07 20:23 ` YASUAKI ISHIMATSU
2017-09-12 18:15 ` YASUAKI ISHIMATSU
2017-09-13 11:13 ` Hannes Reinecke
2017-09-13 11:35 ` Kashyap Desai
2017-09-13 13:33 ` Thomas Gleixner
2017-09-14 16:28 ` YASUAKI ISHIMATSU
2017-09-16 10:15 ` Thomas Gleixner
2017-09-16 15:02 ` Thomas Gleixner
2017-10-02 16:36 ` YASUAKI ISHIMATSU
2017-10-03 21:44 ` Thomas Gleixner
2017-10-04 21:04 ` Thomas Gleixner
2017-10-09 11:35 ` [tip:irq/urgent] genirq/cpuhotplug: Add sanity check for effective affinity mask tip-bot for Thomas Gleixner
2017-10-09 11:35 ` [tip:irq/urgent] genirq/cpuhotplug: Enforce affinity setting on startup of managed irqs tip-bot for Thomas Gleixner
2017-10-10 16:30 ` system hung up when offlining CPUs YASUAKI ISHIMATSU
2017-10-16 18:59 ` YASUAKI ISHIMATSU
2017-10-16 20:27 ` Thomas Gleixner
2017-10-30 9:08 ` Shivasharan Srikanteshwara
2017-11-01 0:47 ` Thomas Gleixner
2017-11-01 11:01 ` Hannes Reinecke
2017-10-04 21:10 ` Thomas Gleixner