From: Byungchul Park <byungchul.park@lge.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>,
	Ingo Molnar <mingo@kernel.org>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	LKP <lkp@01.org>, Josh Poimboeuf <jpoimboe@redhat.com>,
	kernel-team@lge.com
Subject: Re: [lockdep] b09be676e0 BUG: unable to handle kernel NULL pointer dereference at 000001f2
Date: Thu, 12 Oct 2017 10:15:33 +0900
Message-ID: <20171012011533.GJ3323@X58A-UD3R>
In-Reply-To: <20171011005605.GF3323@X58A-UD3R>

On Wed, Oct 11, 2017 at 09:56:05AM +0900, Byungchul Park wrote:
> Thank you very much for explaining it in detail.
> 
> But let's shift the viewpoint. Precisely speaking, I didn't want to work
> on locks but on *waiters*, because the dependencies that cause deadlocks
> can only be created by waiters - though I have no better name for my
> feature.
> 
> Strictly speaking, lockdep should also have worked on waiters instead of
> locks. Having said that, we can work on locks to detect deadlocks one
> way or another, because typical locks - trylocks excepted - implicitly
> include a wait operation, and even a trylock makes others wait once it
> has been acquired successfully.
> 
> I mean, all we have to do to detect deadlocks is identify dependencies.
> *That's all*. IMHO, we don't need to consider "transferring and
> receiving locks", or even lock protection. We only have to focus on the
> dependencies created by waiters and on how to identify them.

Lastly, please let me explain one more thing.

There are many "wait_for_event and event" pairs in the kernel. These
pairs build dependencies, and dependencies are the sole cause of
deadlocks.
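To make that concrete, here is a minimal, illustrative sketch (made-up
code, not from any patch) of a classic ABBA deadlock restated in waiter
terms - each blocked mutex_lock() is a wait whose event is the other
context's unlock:

#include <linux/mutex.h>

static DEFINE_MUTEX(a);
static DEFINE_MUTEX(b);

static void context_x(void)
{
        mutex_lock(&a);
        mutex_lock(&b);         /* waits for the event "unlock of b" */
        /* ... critical section ... */
        mutex_unlock(&b);
        mutex_unlock(&a);
}

static void context_y(void)
{
        mutex_lock(&b);
        mutex_lock(&a);         /* waits for the event "unlock of a" */
        /* ... critical section ... */
        mutex_unlock(&a);
        mutex_unlock(&b);
}

Run concurrently, X waits for an event only Y can generate and vice
versa. The circular dependency exists between the waiters; the locks
matter only because their unlocks are the awaited events.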

Typical locks roughly have the following two functionalities:

   1. protection - The only goal of this functionality is to prevent
      other accessors from entering a critical section, whether by making
      them wait or by making them fail. By doing so, it provides
      *ownership* of the critical section.

   2. synchronization - I mean synchronization between the entry and exit
      points of critical sections. Normally, using a "wait_for_event and
      event" pair, it controls the flow under contention, where the event
      is the unlock.

What I want to note is that *only* the second one participates in
creating dependencies and deadlocks.
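A trylock illustrates this distinction well: it performs only the
protection part and never waits, so the acquisition itself creates no
dependency - the dependencies come from whoever later blocks waiting for
the unlock. A sketch (with a made-up mutex 'm'):

        if (mutex_trylock(&m)) {        /* no wait: no dependency created here */
                /* ... critical section; others may now block on m ... */
                mutex_unlock(&m);       /* the event the blocked waiters wait for */
        }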

In addition, wait_for_completion() is an operation that does exactly and
only the synchronization. Therefore, it is of course itself a basic
element of dependencies, just like the second functionality of typical
locks.
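For example (an illustrative sketch with made-up names), this is the kind
of dependency wait_for_completion() builds, which a lock-only view cannot
see because neither context ever acquires two locks:

#include <linux/completion.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(m);
static DECLARE_COMPLETION(done);

static void context_a(void)
{
        mutex_lock(&m);
        wait_for_completion(&done);     /* A waits for B's complete() */
        mutex_unlock(&m);
}

static void context_b(void)
{
        mutex_lock(&m);                 /* B waits for A's unlock */
        complete(&done);
        mutex_unlock(&m);
}

A holds m while waiting on done, and B must take m before it can signal
done: a deadlock built purely from waiters, with wait_for_completion()
contributing a dependency exactly the way a blocking lock acquisition
would.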

I wonder if I have successfully delivered my original intention. Please
let me explain further if not.

Thanks,
Byungchul

> > This is kind of similar to my opinion on the C "volatile" keyword, and
> > why we do not generally use it in the kernel. It's not the *data* that
> > is volatile, because the data itself might be stable or volatile
> > depending on whether you hold a lock or not. It's the _code_access_
> > that is either volatile or not, and rather than using volatile on data
> > structures, we use volatile in code (although not explicitly as such -
> > we hide it inside the accessors like "READ_ONCE()" etc).
> 
> I like it. I agree with you.
> 
> > I agree wholeheartedly that it can often be much more convenient to
> > just mark one particular lock as being special, but at the same time
> > it's really not the lock itself that is interesting, it's the
> > _handoff_ of the lock that is interesting.
> > 
> > And particularly for cross-thread lock/unlock sequences, the hand-over
> > really is special. For a normal lock/unlock sequence, the lock itself
> > is the thing that protects the data. But that is simply not true if
> > you have a cross-thread hand-over of the lock: you also need to make
> > sure that the hand-over itself is safe. That's generally very easy to
> > do, you just make sure that the original owner of the lock has done
> > everything the lock protects and then make the lock available with
> > smp_store_release() and then the receiving end should do
> > smp_load_acquire() to read the lock pointer (or lock transfer status,
> > or whatever). Because *within* a thread, memory ordering is guaranteed
> > on its own. Between two threads? Memory ordering comes into play even
> > when you *hold* the lock.
> 
> Peter and I have handled memory ordering carefully when identifying
> dependencies between waiters. That was where we had to consider memory
> ordering.
> 
> Thanks,
> Byungchul
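
P.S. As a side note on the hand-over Linus describes above, a minimal
sketch of the release/acquire pairing (all names made up for
illustration, not code from any patch):

#include <asm/barrier.h>        /* smp_store_release(), smp_load_acquire() */
#include <asm/processor.h>      /* cpu_relax() */

struct handoff {
        int ready;
        /* ... state the lock protects ... */
};

/* original owner: finish everything the lock protects, then publish */
static void hand_over(struct handoff *h)
{
        /* ... last writes to the protected state ... */
        smp_store_release(&h->ready, 1);        /* orders prior writes before 'ready' */
}

/* receiving end: observe the hand-over before touching the state */
static void take_over(struct handoff *h)
{
        while (!smp_load_acquire(&h->ready))    /* orders later reads after 'ready' */
                cpu_relax();
        /* ... now safe to use the protected state ... */
}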
