* [overlayfs] lockdep splat after mounting overlayfs over overlayfs
From: Konstantin Khlebnikov @ 2015-06-24 15:55 UTC
  To: Miklos Szeredi; +Cc: linux-unionfs, linux-kernel

I've accidentally mounted one overlayfs over another and got an obvious 
warning from lockdep: i_mutex lockdep classes are per-fs-type.

# mount -t overlay overlay 1 -o upperdir=1_upper,workdir=1_work,lowerdir=1_lower
# mount -t overlay overlay 2 -o upperdir=2_upper,workdir=2_work,lowerdir=1
# ls 2

[  305.098705] =============================================
[  305.099011] [ INFO: possible recursive locking detected ]
[  305.099011] 4.1.0+ #76 Not tainted
[  305.099011] ---------------------------------------------
[  305.099011] ls/3070 is trying to acquire lock:
[  305.099011]  (&sb->s_type->i_mutex_key#13){+.+.+.}, at: 
[<ffffffff811b9ac0>] iterate_dir+0x70/0x130
[  305.099011]
[  305.099011] but task is already holding lock:
[  305.099011]  (&sb->s_type->i_mutex_key#13){+.+.+.}, at: 
[<ffffffff811b9ac0>] iterate_dir+0x70/0x130
[  305.099011]
[  305.099011] other info that might help us debug this:
[  305.099011]  Possible unsafe locking scenario:
[  305.099011]
[  305.099011]        CPU0
[  305.099011]        ----
[  305.099011]   lock(&sb->s_type->i_mutex_key#13);
[  305.099011]   lock(&sb->s_type->i_mutex_key#13);
[  305.099011]
[  305.099011]  *** DEADLOCK ***
[  305.099011]
[  305.099011]  May be due to missing lock nesting notation
[  305.099011]
[  305.099011] 1 lock held by ls/3070:
[  305.099011]  #0:  (&sb->s_type->i_mutex_key#13){+.+.+.}, at: 
[<ffffffff811b9ac0>] iterate_dir+0x70/0x130
[  305.099011]
[  305.099011] stack backtrace:
[  305.099011] CPU: 2 PID: 3070 Comm: ls Not tainted 4.1.0+ #76
[  305.099011] Hardware name: OpenStack Foundation OpenStack Nova, BIOS 
Bochs 01/01/2011
[  305.099011]  ffff880037f40000 ffff880037fa3b48 ffffffff81991179 
0000000000000007
[  305.099011]  ffffffff825ac1a0 ffff880037fa3c38 ffffffff810a92ea 
0000000000000000
[  305.099011]  0000000000000000 ffff8802110eaea0 ffff88021118c720 
ffff880214576710
[  305.099011] Call Trace:
[  305.099011]  [<ffffffff81991179>] dump_stack+0x4c/0x65
[  305.099011]  [<ffffffff810a92ea>] __lock_acquire+0x91a/0x1f60
[  305.099011]  [<ffffffff811d0a1b>] ? inode_to_bdi+0x2b/0x60
[  305.099011]  [<ffffffff810ab0d1>] lock_acquire+0xd1/0x290
[  305.099011]  [<ffffffff811b9ac0>] ? iterate_dir+0x70/0x130
[  305.099011]  [<ffffffff8199b2c8>] mutex_lock_killable_nested+0x68/0x440
[  305.099011]  [<ffffffff811b9ac0>] ? iterate_dir+0x70/0x130
[  305.099011]  [<ffffffff811b9ac0>] ? iterate_dir+0x70/0x130
[  305.099011]  [<ffffffff811b9ac0>] iterate_dir+0x70/0x130
[  305.099011]  [<ffffffff812fcf05>] ovl_dir_read_merged+0x125/0x160
[  305.099011]  [<ffffffff812fcc10>] ? ovl_cache_entry_new+0x110/0x110
[  305.099011]  [<ffffffff812fd45d>] ovl_iterate+0x14d/0x1e0
[  305.099011]  [<ffffffff811b9afc>] iterate_dir+0xac/0x130
[  305.099011]  [<ffffffff811c4d64>] ? __fget_light+0x74/0xa0
[  305.099011]  [<ffffffff811b9c87>] SyS_getdents+0x87/0x100
[  305.099011]  [<ffffffff811b9920>] ? filldir64+0x140/0x140
[  305.099011]  [<ffffffff8199e5ae>] entry_SYSCALL_64_fastpath+0x12/0x76

-- 
Konstantin


* Re: [overlayfs] lockdep splat after mounting overlayfs over overlayfs
From: Xu Wang @ 2015-06-25  7:24 UTC
  To: Konstantin Khlebnikov; +Cc: Miklos Szeredi, linux-unionfs, linux-kernel

> I've accidentally mounted one overlayfs over another and got an obvious
> warning from lockdep: i_mutex lockdep classes are per-fs-type.
> 
> # mount -t overlay overlay 1 -o upperdir=1_upper,workdir=1_work,lowerdir=1_lower
> # mount -t overlay overlay 2 -o upperdir=2_upper,workdir=2_work,lowerdir=1
> # ls 2

This report is a false positive; we are not actually in a deadlock situation.
iterate_dir() on the "2" overlayfs dir recurses into iterate_dir() on the "1"
overlayfs dir, and the nested iterate_dir() happens on the same file system,
so the warning is emitted.

We'd better put the lower and upper dirs on different fs instances, and this
warning will disappear.

This lockdep warning fires when the nested iterate_dir() call is on the same
fs (I mean the same super block). The function check_deadlock() in lockdep.c
reports nested locking within the same process. If we put 2_upper and 2_work
on a different fs, no warning comes out.
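
(For reference, a simplified sketch of the check that fires here, paraphrased
from check_deadlock() in kernel/locking/lockdep.c; the helper below is not the
exact kernel source, and hlock_class()/print_deadlock_bug() are lockdep.c
internals:)

/*
 * Walk the locks the current task already holds and complain if one of them
 * belongs to the same lockdep class as the lock being acquired.  Because all
 * i_mutexes of one fs type share a class, the nested iterate_dir() above
 * trips this check.
 */
static int check_deadlock_sketch(struct task_struct *curr,
                                 struct held_lock *next)
{
        int i;

        for (i = 0; i < curr->lockdep_depth; i++) {
                struct held_lock *prev = curr->held_locks + i;

                if (hlock_class(prev) != hlock_class(next))
                        continue;

                /* same class already held by this task: report it */
                return print_deadlock_bug(curr, prev, next);
        }
        return 1;       /* no same-class lock held */
}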

Thanks,

-- 
George Wang 王旭

Kernel Quality Engineer
Red Hat Software (Beijing) Co.,Ltd
IRC:xuw
Tel:+86-010-62608041
Phone:15901231579
9/F, Tower C, Raycom


* Re: [overlayfs] lockdep splat after mounting overlayfs over overlayfs
From: Konstantin Khlebnikov @ 2015-06-25  8:21 UTC
  To: Xu Wang
  Cc: Konstantin Khlebnikov, Miklos Szeredi, linux-unionfs,
	Linux Kernel Mailing List, Al Viro

On Thu, Jun 25, 2015 at 10:24 AM, Xu Wang <xuw@redhat.com> wrote:
>> I've accidentally mounted one overlayfs over another and got an obvious
>> warning from lockdep: i_mutex lockdep classes are per-fs-type.
>>
>> # mount -t overlay overlay 1 -o upperdir=1_upper,workdir=1_work,lowerdir=1_lower
>> # mount -t overlay overlay 2 -o upperdir=2_upper,workdir=2_work,lowerdir=1
>> # ls 2
>
> This report is a false positive; we are not actually in a deadlock situation.
> iterate_dir() on the "2" overlayfs dir recurses into iterate_dir() on the "1"
> overlayfs dir, and the nested iterate_dir() happens on the same file system,
> so the warning is emitted.
>
> We'd better put the lower and upper dirs on different fs instances, and this
> warning will disappear.
>
> This lockdep warning fires when the nested iterate_dir() call is on the same
> fs (I mean the same super block). The function check_deadlock() in lockdep.c
> reports nested locking within the same process. If we put 2_upper and 2_work
> on a different fs, no warning comes out.

Yep, it's not a deadlock. As I mentioned, lockdep classes are per-fs-type, so
lockdep cannot see the difference between i_mutexes on different sbs of the
same type.
But anyway this looks messy.
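
(For context, this is roughly where that shared class comes from; a paraphrased
sketch of the inode setup in fs/inode.c with a made-up helper name, not the
exact kernel source:)

#include <linux/fs.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>

/*
 * Every inode's i_mutex takes its lockdep class from a key embedded in the
 * file_system_type, so all superblocks of one fs type share a single class;
 * that is the "&sb->s_type->i_mutex_key#13" shown in the splat.
 */
static void inode_lockdep_class_sketch(struct inode *inode,
                                       struct super_block *sb)
{
        mutex_init(&inode->i_mutex);
        lockdep_set_class(&inode->i_mutex, &sb->s_type->i_mutex_key);
}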

Probably it's safer to forbid overlayfs as a lower or upper layer for another
overlayfs, because this makes no sense. Nesting is limited anyway by the depth
of the kernel stack and by sb->s_stack_depth.

Or overlayfs could detect this situation and substitute the layers of the
underlying overlayfs into its own lower layers in the appropriate places.
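
(A minimal sketch of how the sb->s_stack_depth limit looks at mount time, with
a made-up helper name and simplified from what ovl_fill_super() does; not the
exact overlayfs code:)

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/printk.h>

/*
 * The new superblock's stacking depth is one more than the deepest underlying
 * layer; the mount is refused once it exceeds FILESYSTEM_MAX_STACK_DEPTH from
 * include/linux/fs.h.
 */
static int ovl_check_stack_depth_sketch(struct super_block *sb,
                                        struct super_block *upper_sb,
                                        struct super_block *lower_sb)
{
        sb->s_stack_depth = max(upper_sb->s_stack_depth,
                                lower_sb->s_stack_depth) + 1;

        if (sb->s_stack_depth > FILESYSTEM_MAX_STACK_DEPTH) {
                pr_err("overlayfs: maximum fs stacking depth exceeded\n");
                return -EINVAL;
        }
        return 0;
}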

>
> Thanks,
>
> --
> George Wang 王旭
>
> Kernel Quality Engineer
> Red Hat Software (Beijing) Co.,Ltd
> IRC:xuw
> Tel:+86-010-62608041
> Phone:15901231579
> 9/F, Tower C, Raycom


* Re: [overlayfs] lockdep splat after mounting overlayfs over overlayfs
From: Xu Wang @ 2015-06-26  3:28 UTC
  To: Konstantin Khlebnikov
  Cc: Konstantin Khlebnikov, Miklos Szeredi, linux-unionfs,
	Linux Kernel Mailing List, Al Viro

> On Thu, Jun 25, 2015 at 10:24 AM, Xu Wang <xuw@redhat.com> wrote:
> >> I've accidentally mounted one overlayfs over another and got an obvious
> >> warning from lockdep: i_mutex lockdep classes are per-fs-type.
> >>
> >> # mount -t overlay overlay 1 -o upperdir=1_upper,workdir=1_work,lowerdir=1_lower
> >> # mount -t overlay overlay 2 -o upperdir=2_upper,workdir=2_work,lowerdir=1
> >> # ls 2
> >
> > This report is a false positive; we are not actually in a deadlock
> > situation. iterate_dir() on the "2" overlayfs dir recurses into
> > iterate_dir() on the "1" overlayfs dir, and the nested iterate_dir()
> > happens on the same file system, so the warning is emitted.
> >
> > We'd better put the lower and upper dirs on different fs instances, and
> > this warning will disappear.
> >
> > This lockdep warning fires when the nested iterate_dir() call is on the
> > same fs (I mean the same super block). The function check_deadlock() in
> > lockdep.c reports nested locking within the same process. If we put
> > 2_upper and 2_work on a different fs, no warning comes out.
> 
> Yep, it's not a deadlock. As I mentioned, lockdep classes are per-fs-type, so
> lockdep cannot see the difference between i_mutexes on different sbs of the
> same type.
> But anyway this looks messy.
>
 
Yes, you are right. The i_mutex lockdep class is per file_system_type. I was
puzzled by the debug_locks mechanism during my quick tests. The nested
iterate_dir() is on an overlay dir, neither the upper nor the lower one.

> Probably it's safer to forbid overlayfs as a lower or upper layer for another
> overlayfs, because this makes no sense. Nesting is limited anyway by the depth
> of the kernel stack and by sb->s_stack_depth.
> 
> Or overlayfs could detect this situation and substitute the layers of the
> underlying overlayfs into its own lower layers in the appropriate places.
> 

Can we add lockdep_off()/lockdep_on() in this situation? After all, we know
this is just a false-positive report from lockdep.
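
(A rough sketch of that idea, with a made-up helper name; note that
lockdep_off()/lockdep_on() disable lock tracking for the whole task, so a real
deadlock in the covered region would go unreported as well:)

#include <linux/fs.h>
#include <linux/lockdep.h>

/*
 * Silence lockdep around the nested iterate_dir() that overlayfs issues on
 * the underlying directory.  This hides the false positive, but also any
 * genuine lock ordering problem hit while lockdep is off.
 */
static int ovl_iterate_dir_nolockdep(struct file *realfile,
                                     struct dir_context *ctx)
{
        int err;

        lockdep_off();
        err = iterate_dir(realfile, ctx);
        lockdep_on();

        return err;
}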


Thanks,

Xu Wang

