From: Peter Zijlstra <peterz@infradead.org>
To: Michal Hocko <mhocko@kernel.org>
Cc: Byungchul Park <byungchul.park@lge.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	syzbot 
	<bot+e7353c7141ff7cbb718e4c888a14fa92de41ebaa@syzkaller.appspotmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Johannes Weiner <hannes@cmpxchg.org>, Jan Kara <jack@suse.cz>,
	jglisse@redhat.com, LKML <linux-kernel@vger.kernel.org>,
	linux-mm@kvack.org, shli@fb.com, syzkaller-bugs@googlegroups.com,
	Thomas Gleixner <tglx@linutronix.de>,
	Vlastimil Babka <vbabka@suse.cz>,
	ying.huang@intel.com, kernel-team@lge.com
Subject: Re: possible deadlock in lru_add_drain_all
Date: Tue, 31 Oct 2017 14:51:05 +0100	[thread overview]
Message-ID: <20171031135104.rnlytzawi2xzuih3@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20171031131333.pr2ophwd2bsvxc3l@dhcp22.suse.cz>

On Tue, Oct 31, 2017 at 02:13:33PM +0100, Michal Hocko wrote:
> On Mon 30-10-17 16:10:09, Peter Zijlstra wrote:

> > However, that splat translates like:
> > 
> > 	__cpuhp_setup_state()
> > #0	  cpus_read_lock()
> > 	  __cpuhp_setup_state_cpuslocked()
> > #1	    mutex_lock(&cpuhp_state_mutex)
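
For reference, the setup side looks roughly like this (a simplified
sketch of kernel/cpu.c, not verbatim; arguments elided):

	int __cpuhp_setup_state(...)
	{
		int ret;

		cpus_read_lock();			/* #0 */
		ret = __cpuhp_setup_state_cpuslocked(...);
		cpus_read_unlock();
		return ret;
	}

	static int __cpuhp_setup_state_cpuslocked(...)
	{
		mutex_lock(&cpuhp_state_mutex);		/* #1, taken while holding #0 */
		...
		mutex_unlock(&cpuhp_state_mutex);
	}

So registering a hotplug state records the dependency #0 -> #1.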
> > 
> > 
> > 
> > 	__cpuhp_state_add_instance()
> > #2	  mutex_lock(&cpuhp_state_mutex)
> 
> this should be #1 right?

Yes

> > 	  cpuhp_issue_call()
> > 	    cpuhp_invoke_ap_callback()
> > #3	      wait_for_completion()
> > 
> > 						msr_device_create()
> > 						  ...
> > #4						    filename_create()
> > #3						complete()
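
The key detail is that the task above still holds #1 while it sits in
wait_for_completion(), and the completion is only signalled once the
callback on the other side has made it all the way through #4. Roughly
(a simplified sketch; msr_device_create() ends up in filename_create()
via device_create() creating the devtmpfs node):

	/* registering task, still holding cpuhp_state_mutex (#1) */
	wait_for_completion(...);		/* #3: wait for the callback to finish */

	/* hotplug thread, running the startup callback */
	msr_device_create(cpu)
	  device_create(...)			/* create the MSR device node */
	    ...
	      filename_create(...)		/* #4: devtmpfs/filesystem write side */
	complete(...);				/* #3: only reached after #4 */

Which gives #1 -> #3, and makes finishing the wait at #3 depend on
getting through #4.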
> > 
> > 
> > 
> > 	do_splice()
> > #4	  file_start_write()
> > 	  do_splice_from()
> > 	    iter_file_splice_write()
> > #5	      pipe_lock()
> > 	      vfs_iter_write()
> > 	        ...
> > #6		  inode_lock()
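
That one is just the normal splice-into-a-file write path; annotated
(simplified sketch of fs/splice.c, arguments elided):

	do_splice(...)
	{
		...
		file_start_write(out);		/* #4: sb_writers / freeze protection */
		do_splice_from(...)
		  iter_file_splice_write(...)
		    pipe_lock(pipe);		/* #5: pipe->mutex */
		    vfs_iter_write(...)
		      ...
			inode_lock(inode);	/* #6: i_rwsem of the output file */
		file_end_write(out);
	}

So a splice write records #4 -> #5 -> #6.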
> > 
> > 
> > 
> > 	sys_fcntl()
> > 	  do_fcntl()
> > 	    shmem_fcntl()
> > #5	      inode_lock()

And that one should be #6.

> > 	      shmem_wait_for_pins()
> > 	        if (!scan)
> > 		  lru_add_drain_all()
> > #0		    cpus_read_lock()
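
And the sealing side closes the cycle back to #0; at the time
lru_add_drain_all() took the hotplug lock around the per-cpu drain
(simplified sketch of mm/shmem.c and mm/swap.c, not verbatim):

	/* fcntl(fd, F_ADD_SEALS, ...) on a shmem/memfd file */
	shmem_add_seals(file, ...)
	{
		inode_lock(inode);			/* #6 */
		...
		shmem_wait_for_pins(mapping)
		  if (!scan)
		    lru_add_drain_all()
		      cpus_read_lock();			/* #0: back to the start */
		      /* queue + flush the lru drain work on each online CPU */
		      cpus_read_unlock();
		...
		inode_unlock(inode);
	}

i.e. #6 -> #0, completing the cycle.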
> > 
> > 
> > 
> > Which is an actual real deadlock, there is no mixing of up and down.
> 
> thanks a lot, this made it clearer to me. It took a while to
> actually see the 0 -> 1 -> 3 -> 4 -> 5 -> 0 cycle. I had only focused
> on lru_add_drain_all while it was holding the cpus lock.

Yeah, these things are a pain to read, which is why I always construct
something like the above first.
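
FWIW, written out as a plain dependency list (with the corrected
numbering, i.e. inode_lock in the fcntl path being #6), the cycle is:

	#0 -> #1	__cpuhp_setup_state(): cpuhp_state_mutex under cpus_read_lock()
	#1 -> #3	__cpuhp_state_add_instance(): wait_for_completion() under cpuhp_state_mutex
	#3 -> #4	the completion is only signalled once msr_device_create() got through filename_create()
	#4 -> #5	do_splice(): pipe_lock() under file_start_write()
	#5 -> #6	iter_file_splice_write(): inode_lock() under pipe_lock()
	#6 -> #0	shmem_fcntl(): lru_add_drain_all() -> cpus_read_lock() under inode_lock()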


Thread overview: 27+ messages
     [not found] <089e0825eec8955c1f055c83d476@google.com>
2017-10-27  9:34 ` possible deadlock in lru_add_drain_all Michal Hocko
2017-10-27  9:44   ` Dmitry Vyukov
2017-10-27  9:47     ` Dmitry Vyukov
2017-10-27 13:42     ` Michal Hocko
2017-10-30  8:22       ` Michal Hocko
2017-10-30 10:09         ` Byungchul Park
2017-10-30 15:10           ` Peter Zijlstra
2017-10-30 15:22             ` Peter Zijlstra
2017-10-31 13:13             ` Michal Hocko
2017-10-31 13:51               ` Peter Zijlstra [this message]
2017-10-31 13:55                 ` Dmitry Vyukov
2017-10-31 14:52                   ` Peter Zijlstra
2017-10-31 14:58                     ` Michal Hocko
2017-10-31 15:10                       ` Peter Zijlstra
2017-11-01  8:59                         ` Byungchul Park
2017-11-01 12:01                           ` Peter Zijlstra
2017-11-01 23:54                             ` Byungchul Park
2018-02-14 14:01                               ` Dmitry Vyukov
2018-02-14 15:44                                 ` Michal Hocko
2018-02-14 15:57                                   ` Dmitry Vyukov
2017-10-31 15:25               ` Peter Zijlstra
2017-10-31 15:45                 ` Michal Hocko
2017-10-31 16:30                   ` Peter Zijlstra
2017-11-01  8:31                 ` Byungchul Park
2017-10-30 10:26         ` Byungchul Park
2017-10-30 11:48           ` Michal Hocko
2017-10-27 11:27   ` Vlastimil Babka
