From: Dave Chinner <david@fromorbit.com>
To: Theodore Ts'o <tytso@mit.edu>
Cc: Byungchul Park <byungchul.park@lge.com>,
	fstests@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	peterz@infradead.org, mingo@redhat.com, kernel-team@lge.com
Subject: Re: False lockdep completion splats with loop device
Date: Wed, 6 Dec 2017 08:09:56 +1100
Message-ID: <20171205210956.GZ5858@dastard>
In-Reply-To: <20171205150741.xsbp4mtilqfrsukl@thunk.org>

On Tue, Dec 05, 2017 at 10:07:41AM -0500, Theodore Ts'o wrote:
> On Tue, Dec 05, 2017 at 02:16:45PM +0900, Byungchul Park wrote:
> > 
> > Hello,
> > 
> > I believe that the commit e319e1fbd9d42 "block, locking/lockdep: Assign
> > a lock_class per gendisk used for wait_for_completion()" solved the
> > false positive.
> > 
> > Could you tell me if it doesn't handle it, with the report? Then, I
> > will follow up and try to solve it.
> 
> No, it doesn't handle it.  And there was some discussion in the linked
> thread on the xfs mailing list that seemed to indicate that it was not
> a complete fix.

Well, it uses a static key hidden inside the macro
alloc_disk_node(), so every disk allocated from the same callsite
points to the same static lockdep map. IOWs, every caller of
alloc_disk() (i.e. the vast majority of storage devices) is
configured to point at the same static lockdep map, regardless of
its location in the storage stack.

The loop device uses alloc_disk().

IOWs, it doesn't address the problem of false positives due to
layering in the IO stack at all because we can still have
filesystems both above and below the lockdep map that has been
attached to the devices...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 21+ messages
2017-12-05  3:03 False lockdep completion splats with loop device Theodore Ts'o
2017-12-05  5:16 ` Byungchul Park
2017-12-05 15:07   ` Theodore Ts'o
2017-12-05 21:09     ` Dave Chinner [this message]
2017-12-06  5:01       ` Byungchul Park
2017-12-06  6:08         ` Dave Chinner
2017-12-06  6:31         ` Amir Goldstein
2017-12-06  7:01           ` Byungchul Park
2017-12-07  2:46             ` Amir Goldstein
2017-12-07  4:18               ` Amir Goldstein
2017-12-07 14:33                 ` Theodore Ts'o
2017-12-07 14:53                   ` Peter Zijlstra
2017-12-08  1:51                 ` Byungchul Park
2017-12-07 23:59               ` Dave Chinner
2017-12-08  0:13                 ` Al Viro
2017-12-08  8:15                   ` Amir Goldstein
2017-12-08 22:57                     ` Dave Chinner
2017-12-09  8:44                       ` Amir Goldstein
2017-12-09 16:02                         ` Theodore Ts'o
2017-12-09 20:08                           ` Amir Goldstein
2017-12-06  6:23     ` Byungchul Park
