linux-kernel.vger.kernel.org archive mirror
From: "Long, Wai Man" <waiman.long@hpe.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
	Jan Kara <jack@suse.com>, "Jeff Layton" <jlayton@poochiereds.net>,
	"J. Bruce Fields" <bfields@fieldses.org>,
	Tejun Heo <tj@kernel.org>,
	Christoph Lameter <cl@linux-foundation.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Andi Kleen <andi@firstfloor.org>,
	Dave Chinner <dchinner@redhat.com>,
	Scott J Norton <scott.norton@hp.com>,
	Douglas Hatch <doug.hatch@hp.com>
Subject: Re: [RFC PATCH 0/2] vfs: Use per-cpu list for SB's s_inodes list
Date: Fri, 19 Feb 2016 21:04:35 +0000
Message-ID: <DF4PR84MB0138B7EEA85742D4078879DDF1A00@DF4PR84MB0138.NAMPRD84.PROD.OUTLOOK.COM>
In-Reply-To: <20160218235829.GV14668@dastard>

On 02/18/2016 06:58 PM, Dave Chinner wrote:
> On Tue, Feb 16, 2016 at 08:31:18PM -0500, Waiman Long wrote:
>> This patch is a replacement of my previous list batching patch -
>> https://lwn.net/Articles/674105/. Compared with the previous patch,
>> this one provides better performance and fairness. However, it also
>> requires a bit more changes in the VFS layer.
>>
>> This patchset is a derivative of Andi Kleen's patch on "Initial per
>> cpu list for the per sb inode list"
>>
>> https://git.kernel.org/cgit/linux/kernel/git/ak/linux-misc.git/commit/?h=hle315/combined&id=f1cf9e715a40f44086662ae3b29f123cf059cbf4
>>
>> Patch 1 introduces the per-cpu list.
>>
>> Patch 2 modifies the superblock and inode structures to use the per-cpu
>> list. The corresponding functions that reference those structures are
>> modified.
>>
>> Waiman Long (2):
>>    lib/percpu-list: Per-cpu list with associated per-cpu locks
>>    vfs: Use per-cpu list for superblock's inode list
> xfstests:generic/013 deadlocks (running on 4GB ram disks on a 16p VM):
>
> [135478.644495] run fstests generic/013 at 2016-02-19 10:54:51
> [135479.058833] XFS (ram0): Unmounting Filesystem
> [135479.149472] XFS (ram0): Mounting V5 Filesystem
> [135479.154056] XFS (ram0): Ending clean mount
> [135479.461571] XFS (ram0): Unmounting Filesystem
> [135479.548060] XFS (ram0): Mounting V5 Filesystem
> [135479.553103] XFS (ram0): Ending clean mount
> [135507.633377] NMI watchdog: BUG: soft lockup - CPU#4 stuck for 23s! [fsstress:3390]
> [135507.634572] Modules linked in:
> [135507.635059] CPU: 4 PID: 3390 Comm: fsstress Not tainted 4.5.0-rc2-dgc+ #683
> [135507.636108] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
> [135507.637442] task: ffff88023a420000 ti: ffff88023758c000 task.ti: ffff88023758c000
> [135507.638566] RIP: 0010:[<ffffffff810f4982>]  [<ffffffff810f4982>] do_raw_spin_lock+0x52/0x120
> [135507.639863] RSP: 0018:ffff88023758fe10  EFLAGS: 00000246
> [135507.640669] RAX: 0000000000000000 RBX: ffff880306e05ca8 RCX: 0000000000000024
> [135507.641751] RDX: 0000000000000001 RSI: 00010f15431142d8 RDI: ffff880306e05ca8
> [135507.642825] RBP: ffff88023758fe30 R08: 0000000000000004 R09: 0000000000000007
> [135507.643897] R10: 000000000002e400 R11: 0000000000000001 R12: ffffe8fefbd01800
> [135507.644966] R13: ffff880306e05db8 R14: ffff880306e05ca8 R15: ffff880306e05c20
> [135507.646051] FS:  00007f3f4244f700(0000) GS:ffff88013bc80000(0000) knlGS:0000000000000000
> [135507.647269] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [135507.648140] CR2: 00007f3f415cb008 CR3: 000000018c9ab000 CR4: 00000000000006e0
> [135507.649232] Stack:
> [135507.649559]  ffff880306e05c20 ffffe8fefbd01800 ffff880306e05db8 ffff880306e05ca8
> [135507.650740]  ffff88023758fe40 ffffffff81e01d75 ffff88023758fee8 ffffffff8121727f
> [135507.651924]  ffff8802b2fb76b8 ffff8802b2fb7000 0000000a00000246 ffffe8fefbd01810
> [135507.653108] Call Trace:
> [135507.653504]  [<ffffffff81e01d75>] _raw_spin_lock+0x15/0x20
> [135507.654343]  [<ffffffff8121727f>] sync_inodes_sb+0x1af/0x280
> [135507.655204]  [<ffffffff8121d8f0>] ? SyS_tee+0x3d0/0x3d0
> [135507.656003]  [<ffffffff8121d905>] sync_inodes_one_sb+0x15/0x20
> [135507.656891]  [<ffffffff811efede>] iterate_supers+0xae/0x100
> [135507.657748]  [<ffffffff8121dc35>] sys_sync+0x35/0x90
> [135507.658507]  [<ffffffff81e0232e>] entry_SYSCALL_64_fastpath+0x12/0x71
> [135507.659483] Code: 00 00 48 39 43 10 0f 84 b7 00 00 00 65 8b 05 de 57 f1 7e 39 43 08 0f 84 bb 00 00 00 8b 03 85 c0 75 0d ba 01 00 00 00 f0 0f b1 13<85>  c0 74 63 4c
> [135507.733374] NMI watchdog: BUG: soft lockup - CPU#10 stuck for 22s! [fsstress:3380]
> [135507.734665] Modules linked in:
> [135507.735195] CPU: 10 PID: 3380 Comm: fsstress Tainted: G             L  4.5.0-rc2-dgc+ #683
> [135507.736525] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
> [135507.737966] task: ffff8803814a23c0 ti: ffff880239264000 task.ti: ffff880239264000
> [135507.739176] RIP: 0010:[<ffffffff817da7b9>]  [<ffffffff817da7b9>] delay_tsc+0x39/0x80
> [135507.740457] RSP: 0018:ffff880239267a00  EFLAGS: 00000206
> [135507.741338] RAX: 00010f23910bf4f8 RBX: ffffe8fefbd01810 RCX: 0000000000000024
> [135507.742523] RDX: 00010f2300000000 RSI: 00010f23910bf4d4 RDI: 0000000000000001
> [135507.743708] RBP: ffff880239267a00 R08: 000000000000000a R09: ffff880269028460
> [135507.744896] R10: 00000000c383dd1a R11: 00000000001f07e0 R12: 00000000334293fd
> [135507.746087] R13: 0000000000000001 R14: 0000000083214e30 R15: ffff880265b51900
> [135507.747271] FS:  00007f3f4244f700(0000) GS:ffff88033bd00000(0000) knlGS:0000000000000000
> [135507.748604] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [135507.749568] CR2: 00007f3f4159d008 CR3: 0000000143196000 CR4: 00000000000006e0
> [135507.750753] Stack:
> [135507.751111]  ffff880239267a10 ffffffff817da6bf ffff880239267a40 ffffffff810f49bc
> [135507.752400]  000060fbc0001800 ffffe8fefbd01810 ffff880306e07058 0000000000000000
> [135507.753697]  ffff880239267a50 ffffffff81e01d75 ffff880239267a78 ffffffff817e529a
> [135507.754991] Call Trace:
> [135507.755424]  [<ffffffff817da6bf>] __delay+0xf/0x20
> [135507.756234]  [<ffffffff810f49bc>] do_raw_spin_lock+0x8c/0x120
> [135507.757204]  [<ffffffff81e01d75>] _raw_spin_lock+0x15/0x20
> [135507.758130]  [<ffffffff817e529a>] percpu_list_add+0x2a/0x70
> [135507.759064]  [<ffffffff81206830>] inode_sb_list_add+0x20/0x30
> [135507.760025]  [<ffffffff814f9744>] xfs_setup_inode+0x34/0x240
> [135507.760972]  [<ffffffff814fb22b>] xfs_ialloc+0x36b/0x550
> [135507.761874]  [<ffffffff814fb49d>] xfs_dir_ialloc+0x8d/0x260
> [135507.762805]  [<ffffffff814fb92f>] xfs_create+0x25f/0x6a0
> [135507.763700]  [<ffffffff814f881d>] xfs_generic_create+0xcd/0x2a0
> [135507.764686]  [<ffffffff81e01dce>] ? _raw_spin_unlock+0xe/0x20
> [135507.765652]  [<ffffffff814f8a26>] xfs_vn_create+0x16/0x20
> [135507.766556]  [<ffffffff811f8ff2>] vfs_create+0xc2/0x120
> [135507.767430]  [<ffffffff811fb6f9>] path_openat+0x1239/0x1370
> [135507.768364]  [<ffffffff812b973c>] ? ext4_file_write_iter+0x21c/0x420
> [135507.769430]  [<ffffffff810d2a89>] ? __might_sleep+0x49/0x80
> [135507.770362]  [<ffffffff811fc8ee>] do_filp_open+0x7e/0xe0
> [135507.771256]  [<ffffffff811e41f2>] ? kmem_cache_alloc+0x42/0x170
> [135507.772250]  [<ffffffff81e01dce>] ? _raw_spin_unlock+0xe/0x20
> [135507.773225]  [<ffffffff8120a2dc>] ? __alloc_fd+0xbc/0x170
> [135507.774135]  [<ffffffff811eb2e6>] do_sys_open+0x116/0x1f0
> [135507.775043]  [<ffffffff811eb41e>] SyS_creat+0x1e/0x20
> [135507.775895]  [<ffffffff81e0232e>] entry_SYSCALL_64_fastpath+0x12/0x71
>
> Cheers,
>
> Dave.

Thanks for reporting this problem.

PeterZ had actually noticed an issue in my patch: I used 
list_for_each_entry_safe() for all the modified s_inodes iteration 
functions, including those that originally used list_for_each_entry(). 
That may not be safe in some cases, and it may be the cause of the 
hangup you saw. I am going to send out an updated patch that uses the 
correct list_for_each_entry() macro for those iteration functions. 
Please try it out and let me know if you still see the same hangup.

Cheers,
Longman


Thread overview: 27+ messages
2016-02-17  1:31 [RFC PATCH 0/2] vfs: Use per-cpu list for SB's s_inodes list Waiman Long
2016-02-17  1:31 ` [RFC PATCH 1/2] lib/percpu-list: Per-cpu list with associated per-cpu locks Waiman Long
2016-02-17  9:53   ` Dave Chinner
2016-02-17 11:00     ` Peter Zijlstra
2016-02-17 11:05       ` Peter Zijlstra
2016-02-17 16:16         ` Waiman Long
2016-02-17 16:22           ` Peter Zijlstra
2016-02-17 16:27           ` Christoph Lameter
2016-02-17 17:12             ` Waiman Long
2016-02-17 17:18               ` Peter Zijlstra
2016-02-17 17:41                 ` Waiman Long
2016-02-17 18:22                   ` Peter Zijlstra
2016-02-17 18:45                     ` Waiman Long
2016-02-17 19:39                       ` Peter Zijlstra
2016-02-17 11:10       ` Dave Chinner
2016-02-17 11:26         ` Peter Zijlstra
2016-02-17 11:36           ` Peter Zijlstra
2016-02-17 15:56     ` Waiman Long
2016-02-17 16:02       ` Peter Zijlstra
2016-02-17 15:13   ` Christoph Lameter
2016-02-17  1:31 ` [RFC PATCH 2/2] vfs: Use per-cpu list for superblock's inode list Waiman Long
2016-02-17  7:16   ` Ingo Molnar
2016-02-17 15:40     ` Waiman Long
2016-02-17 10:37   ` Dave Chinner
2016-02-17 16:08     ` Waiman Long
2016-02-18 23:58 ` [RFC PATCH 0/2] vfs: Use per-cpu list for SB's s_inodes list Dave Chinner
2016-02-19 21:04   ` Long, Wai Man [this message]
