From: Qian Cai <cai@lca.pw>
To: Peter Zijlstra <peterz@infradead.org>
Cc: akpm@linux-foundation.org, bigeasy@linutronix.de,
	tglx@linutronix.de, thgarnie@google.com, tytso@mit.edu,
	cl@linux.com, penberg@kernel.org, rientjes@google.com,
	mingo@redhat.com, will@kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, keescook@chromium.org
Subject: Re: [PATCH] mm/slub: fix a deadlock in shuffle_freelist()
Date: Thu, 26 Sep 2019 08:29:34 -0400
Message-ID: <1569500974.5576.234.camel@lca.pw>
In-Reply-To: <20190925164527.GG4553@hirez.programming.kicks-ass.net>

On Wed, 2019-09-25 at 18:45 +0200, Peter Zijlstra wrote:
> On Wed, Sep 25, 2019 at 11:18:47AM -0400, Qian Cai wrote:
> > On Wed, 2019-09-25 at 11:31 +0200, Peter Zijlstra wrote:
> > > On Fri, Sep 13, 2019 at 12:27:44PM -0400, Qian Cai wrote:
> > > > -> #3 (batched_entropy_u32.lock){-.-.}:
> > > >        lock_acquire+0x31c/0x360
> > > >        _raw_spin_lock_irqsave+0x7c/0x9c
> > > >        get_random_u32+0x6c/0x1dc
> > > >        new_slab+0x234/0x6c0
> > > >        ___slab_alloc+0x3c8/0x650
> > > >        kmem_cache_alloc+0x4b0/0x590
> > > >        __debug_object_init+0x778/0x8b4
> > > >        debug_object_init+0x40/0x50
> > > >        debug_init+0x30/0x29c
> > > >        hrtimer_init+0x30/0x50
> > > >        init_dl_task_timer+0x24/0x44
> > > >        __sched_fork+0xc0/0x168
> > > >        init_idle+0x78/0x26c
> > > >        fork_idle+0x12c/0x178
> > > >        idle_threads_init+0x108/0x178
> > > >        smp_init+0x20/0x1bc
> > > >        kernel_init_freeable+0x198/0x26c
> > > >        kernel_init+0x18/0x334
> > > >        ret_from_fork+0x10/0x18
> > > > 
> > > > -> #2 (&rq->lock){-.-.}:
> > > 
> > > This relation is silly..
> > > 
> > > I suspect the below 'works'...
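(For context: the get_random_u32() in the #3 trace above comes from SLUB's
freelist randomization, which new_slab() applies when
CONFIG_SLAB_FREELIST_RANDOM is enabled. A rough sketch of the relevant path,
modelled on ~v5.3 mm/slub.c -- details may differ:

    /* Called from new_slab() to randomize the initial freelist. */
    static bool shuffle_freelist(struct kmem_cache *s, struct page *page)
    {
            unsigned long pos;

            if (page->objects < 2 || !s->random_seq)
                    return false;

            /*
             * get_random_int() is get_random_u32(), which takes
             * batched_entropy_u32.lock with IRQs disabled -- the #3
             * lock in the trace above.
             */
            pos = get_random_int() % oo_objects(s->oo);

            /* ... walk s->random_seq from pos to rebuild the freelist ... */
            return true;
    }

So any allocation path that ends in new_slab() can take the entropy lock.)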
> > 
> > Unfortunately, the relation is still there,
> > 
> > copy_process()->rt_mutex_init_task()->"&p->pi_lock"
> > 
> > [24438.676716][    T2] -> #2 (&rq->lock){-.-.}:
> > [24438.676727][    T2]        __lock_acquire+0x5b4/0xbf0
> > [24438.676736][    T2]        lock_acquire+0x130/0x360
> > [24438.676754][    T2]        _raw_spin_lock+0x54/0x80
> > [24438.676771][    T2]        task_fork_fair+0x60/0x190
> > [24438.676788][    T2]        sched_fork+0x128/0x270
> > [24438.676806][    T2]        copy_process+0x7a4/0x1bf0
> > [24438.676823][    T2]        _do_fork+0xac/0xac0
> > [24438.676841][    T2]        kernel_thread+0x70/0xa0
> > [24438.676858][    T2]        rest_init+0x4c/0x42c
> > [24438.676884][    T2]        start_kernel+0x778/0x7c0
> > [24438.676902][    T2]        start_here_common+0x1c/0x334
> 
> That's the 'where we took #2 while holding #1' stacktrace and not
> relevant to our discussion.

Oh, you were talking about taking #3 while holding #2. Anyway, your patch has
been working fine so far. Care to post/merge it officially, or do you want me
to post it?
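
For reference, the fix Peter later posted as "[PATCH] sched: Avoid spurious
lock dependencies" (see the thread overview below) essentially moves
__sched_fork() out from under pi_lock/rq->lock in init_idle(), so the
hrtimer_init() -> debugobjects allocation can no longer nest inside rq->lock.
A sketch of the shape of the change, not the literal diff:

    void init_idle(struct task_struct *idle, int cpu)
    {
            struct rq *rq = cpu_rq(cpu);
            unsigned long flags;

            /*
             * Moved out from under the locks: __sched_fork() does
             * hrtimer_init(), which with DEBUG_OBJECTS can allocate and
             * thus reach get_random_u32() via new_slab().
             */
            __sched_fork(0, idle);

            raw_spin_lock_irqsave(&idle->pi_lock, flags);
            raw_spin_lock(&rq->lock);

            /* ... remaining idle-task setup, still under the locks ... */

            raw_spin_unlock(&rq->lock);
            raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
    }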

> 
> > [24438.675836][    T2] -> #4 (batched_entropy_u64.lock){-...}:
> > [24438.675860][    T2]        __lock_acquire+0x5b4/0xbf0
> > [24438.675878][    T2]        lock_acquire+0x130/0x360
> > [24438.675906][    T2]        _raw_spin_lock_irqsave+0x70/0xa0
> > [24438.675923][    T2]        get_random_u64+0x60/0x100
> > [24438.675944][    T2]        add_to_free_area_random+0x164/0x1b0
> > [24438.675962][    T2]        free_one_page+0xb24/0xcf0
> > [24438.675980][    T2]        __free_pages_ok+0x448/0xbf0
> > [24438.675999][    T2]        deferred_init_maxorder+0x404/0x4a4
> > [24438.676018][    T2]        deferred_grow_zone+0x158/0x1f0
> > [24438.676035][    T2]        get_page_from_freelist+0x1dc8/0x1e10
> > [24438.676063][    T2]        __alloc_pages_nodemask+0x1d8/0x1940
> > [24438.676083][    T2]        allocate_slab+0x130/0x2740
> > [24438.676091][    T2]        new_slab+0xa8/0xe0
> > [24438.676101][    T2]        kmem_cache_open+0x254/0x660
> > [24438.676119][    T2]        __kmem_cache_create+0x44/0x2a0
> > [24438.676136][    T2]        create_boot_cache+0xcc/0x110
> > [24438.676154][    T2]        kmem_cache_init+0x90/0x1f0
> > [24438.676173][    T2]        start_kernel+0x3b8/0x7c0
> > [24438.676191][    T2]        start_here_common+0x1c/0x334
> > [24438.676208][    T2] 
> > [24438.676208][    T2] -> #3 (&(&zone->lock)->rlock){-.-.}:
> > [24438.676221][    T2]        __lock_acquire+0x5b4/0xbf0
> > [24438.676247][    T2]        lock_acquire+0x130/0x360
> > [24438.676264][    T2]        _raw_spin_lock+0x54/0x80
> > [24438.676282][    T2]        rmqueue_bulk.constprop.23+0x64/0xf20
> > [24438.676300][    T2]        get_page_from_freelist+0x718/0x1e10
> > [24438.676319][    T2]        __alloc_pages_nodemask+0x1d8/0x1940
> > [24438.676339][    T2]        alloc_page_interleave+0x34/0x170
> > [24438.676356][    T2]        allocate_slab+0xd1c/0x2740
> > [24438.676374][    T2]        new_slab+0xa8/0xe0
> > [24438.676391][    T2]        ___slab_alloc+0x580/0xef0
> > [24438.676408][    T2]        __slab_alloc+0x64/0xd0
> > [24438.676426][    T2]        kmem_cache_alloc+0x5c4/0x6c0
> > [24438.676444][    T2]        fill_pool+0x280/0x540
> > [24438.676461][    T2]        __debug_object_init+0x60/0x6b0
> > [24438.676479][    T2]        hrtimer_init+0x5c/0x310
> > [24438.676497][    T2]        init_dl_task_timer+0x34/0x60
> > [24438.676516][    T2]        __sched_fork+0x8c/0x110
> > [24438.676535][    T2]        init_idle+0xb4/0x3c0
> > [24438.676553][    T2]        idle_thread_get+0x78/0x120
> > [24438.676572][    T2]        bringup_cpu+0x30/0x230
> > [24438.676590][    T2]        cpuhp_invoke_callback+0x190/0x1580
> > [24438.676618][    T2]        do_cpu_up+0x248/0x460
> > [24438.676636][    T2]        smp_init+0x118/0x1c0
> > [24438.676662][    T2]        kernel_init_freeable+0x3f8/0x8dc
> > [24438.676681][    T2]        kernel_init+0x2c/0x154
> > [24438.676699][    T2]        ret_from_kernel_thread+0x5c/0x74
> > [24438.676716][    T2] 
> > [24438.676716][    T2] -> #2 (&rq->lock){-.-.}:
> 
> This then shows we now have:
> 
> 	rq->lock
> 	  zone->lock.rlock
> 	    batched_entropy_u64.lock
> 
> Which, to me, appears to be distinctly different from the last time,
> which was:
> 
> 	rq->lock
> 	  batched_entropy_u32.lock
> 
> Notable: "u32" != "u64".
> 
> But #3 has:
> 
> > [24438.676516][    T2]        __sched_fork+0x8c/0x110
> > [24438.676535][    T2]        init_idle+0xb4/0x3c0
> 
> Which seems to suggest you didn't actually apply the patch; or rather,
> if you did, I'm not immediately seeing where #2 is acquired.
> 
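(For context: the rq->lock -> zone->lock -> batched_entropy_u64.lock chain
Peter spells out arises because, with page shuffling enabled, the page
allocator calls get_random_u64() while zone->lock is held. A sketch modelled
on ~v5.3 mm/shuffle.c -- details may differ:

    /*
     * Called with zone->lock held; adds the page to a random end of
     * the free list.
     */
    void add_to_free_area_random(struct page *page, struct free_area *area,
                                 int migratetype)
    {
            static u64 rand;
            static u8 rand_bits;

            if (rand_bits == 0) {
                    rand_bits = 64;
                    /* Takes batched_entropy_u64.lock under zone->lock. */
                    rand = get_random_u64();
            }

            if (rand & 1)
                    add_to_free_area(page, area, migratetype);
            else
                    add_to_free_area_tail(page, area, migratetype);
            rand >>= 1;
            rand_bits--;
    }

That supplies the #4/#3 links; the remaining question above is where #2,
rq->lock, is still acquired with zone->lock held.)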


Thread overview: 23+ messages
2019-09-13 16:27 [PATCH] mm/slub: fix a deadlock in shuffle_freelist() Qian Cai
2019-09-16  9:03 ` Sebastian Andrzej Siewior
2019-09-16 14:01   ` Qian Cai
2019-09-16 19:51     ` Sebastian Andrzej Siewior
2019-09-16 21:31       ` Qian Cai
2019-09-17  7:16         ` Sebastian Andrzej Siewior
2019-09-18 19:59           ` Qian Cai
2019-09-25  9:31 ` Peter Zijlstra
2019-09-25 15:18   ` Qian Cai
2019-09-25 16:45     ` Peter Zijlstra
2019-09-26 12:29       ` Qian Cai [this message]
2019-10-01  9:18         ` [PATCH] sched: Avoid spurious lock dependencies Peter Zijlstra
2019-10-01 10:01           ` Valentin Schneider
2019-10-01 11:22           ` Qian Cai
2019-10-01 11:36           ` Srikar Dronamraju
2019-10-01 13:44             ` Peter Zijlstra
2019-10-29 11:10           ` Qian Cai
2019-10-29 12:44             ` Peter Zijlstra
2019-11-12  0:54               ` Qian Cai
2019-11-13 10:06           ` [tip: sched/urgent] sched/core: " tip-bot2 for Peter Zijlstra
2019-11-22 20:01             ` Sebastian Andrzej Siewior
2019-11-22 20:20               ` Peter Zijlstra
2019-11-22 21:03                 ` Qian Cai
