From: Oleksandr Natalenko <oleksandr@natalenko.name>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Steven Rostedt <rostedt@goodmis.org>,
Mike Galbraith <efault@gmx.de>,
Thomas Gleixner <tglx@linutronix.de>,
linux-rt-users@vger.kernel.org
Subject: Re: scheduling while atomic in z3fold
Date: Sat, 28 Nov 2020 15:09:24 +0100 [thread overview]
Message-ID: <20201128140924.iyqr2h52z2olt6zb@spock.localdomain> (raw)
In-Reply-To: <20201128140523.ovmqon5fjetvpby4@spock.localdomain>
On Sat, Nov 28, 2020 at 03:05:24PM +0100, Oleksandr Natalenko wrote:
> Hi.
>
> While running v5.10-rc5-rt11 I bumped into the following:
>
> ```
> BUG: scheduling while atomic: git/18695/0x00000002
> Preemption disabled at:
> [<ffffffffbb93fcb3>] z3fold_zpool_malloc+0x463/0x6e0
> …
> Call Trace:
> dump_stack+0x6d/0x88
> __schedule_bug.cold+0x88/0x96
> __schedule+0x69e/0x8c0
> preempt_schedule_lock+0x51/0x150
> rt_spin_lock_slowlock_locked+0x117/0x2c0
> rt_spin_lock_slowlock+0x58/0x80
> rt_spin_lock+0x2a/0x40
> z3fold_zpool_malloc+0x4c1/0x6e0
> zswap_frontswap_store+0x39c/0x980
> __frontswap_store+0x6e/0xf0
> swap_writepage+0x39/0x70
> shmem_writepage+0x31b/0x490
> pageout+0xf4/0x350
> shrink_page_list+0xa28/0xcc0
> shrink_inactive_list+0x300/0x690
> shrink_lruvec+0x59a/0x770
> shrink_node+0x2d6/0x8d0
> do_try_to_free_pages+0xda/0x530
> try_to_free_pages+0xff/0x260
> __alloc_pages_slowpath.constprop.0+0x3d5/0x1230
> __alloc_pages_nodemask+0x2f6/0x350
> allocate_slab+0x3da/0x660
> ___slab_alloc+0x4ff/0x760
> __slab_alloc.constprop.0+0x7a/0x100
> kmem_cache_alloc+0x27b/0x2c0
> __d_alloc+0x22/0x230
> d_alloc_parallel+0x67/0x5e0
> __lookup_slow+0x5c/0x150
> path_lookupat+0x2ea/0x4d0
> filename_lookup+0xbf/0x210
> vfs_statx.constprop.0+0x4d/0x110
> __do_sys_newlstat+0x3d/0x80
> do_syscall_64+0x33/0x40
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
> ```
>
> Preemption seems to be disabled here:
>
> ```
> $ scripts/faddr2line mm/z3fold.o z3fold_zpool_malloc+0x463
> z3fold_zpool_malloc+0x463/0x6e0:
> add_to_unbuddied at mm/z3fold.c:645
> (inlined by) z3fold_alloc at mm/z3fold.c:1195
> (inlined by) z3fold_zpool_malloc at mm/z3fold.c:1737
> ```
>
> The call to rt_spin_lock() seems to be here:
>
> ```
> $ scripts/faddr2line mm/z3fold.o z3fold_zpool_malloc+0x4c1
> z3fold_zpool_malloc+0x4c1/0x6e0:
> add_to_unbuddied at mm/z3fold.c:649
> (inlined by) z3fold_alloc at mm/z3fold.c:1195
> (inlined by) z3fold_zpool_malloc at mm/z3fold.c:1737
> ```
>
> Or, in source code:
>
> ```
> 639 /* Add to the appropriate unbuddied list */
> 640 static inline void add_to_unbuddied(struct z3fold_pool *pool,
> 641 struct z3fold_header *zhdr)
> 642 {
> 643 if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
> 644 zhdr->middle_chunks == 0) {
> 645 struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
> 646
> 647 int freechunks = num_free_chunks(zhdr);
> 648 spin_lock(&pool->lock);
> 649 list_add(&zhdr->buddy, &unbuddied[freechunks]);
> 650 spin_unlock(&pool->lock);
> 651 zhdr->cpu = smp_processor_id();
> 652 put_cpu_ptr(pool->unbuddied);
> 653 }
> 654 }
> ```
>
> get_cpu_ptr() disables preemption, but on PREEMPT_RT the subsequent
> spin_lock() is a sleeping lock, hence the splat. Shouldn't the per-CPU
> list manipulation be protected with local_lock()+this_cpu_ptr() instead
> of get_cpu_ptr()+spin_lock()?
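>
> Something along these lines is what I have in mind (an untested sketch
> to illustrate the idea; the `unbuddied_lock` name is made up here, not
> taken from any existing code):
>
> ```
> /* Hypothetical: guard the per-CPU unbuddied lists with a local_lock.
>  * On !RT this compiles down to preempt_disable(); on PREEMPT_RT it is
>  * a per-CPU sleeping lock, so the spin_lock() below may sleep safely.
>  */
> static DEFINE_PER_CPU(local_lock_t, unbuddied_lock) =
> 	INIT_LOCAL_LOCK(unbuddied_lock);
>
> static inline void add_to_unbuddied(struct z3fold_pool *pool,
> 				struct z3fold_header *zhdr)
> {
> 	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
> 			zhdr->middle_chunks == 0) {
> 		struct list_head *unbuddied;
> 		int freechunks = num_free_chunks(zhdr);
>
> 		local_lock(&unbuddied_lock);
> 		unbuddied = this_cpu_ptr(pool->unbuddied);
> 		spin_lock(&pool->lock);
> 		list_add(&zhdr->buddy, &unbuddied[freechunks]);
> 		spin_unlock(&pool->lock);
> 		zhdr->cpu = smp_processor_id();
> 		local_unlock(&unbuddied_lock);
> 	}
> }
> ```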
>
> Thanks.
>
> --
> Oleksandr Natalenko (post-factum)
Forgot to Cc linux-rt-users@, sorry.
--
Oleksandr Natalenko (post-factum)