* BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying
@ 2020-02-19 9:38 Xiubo Li
2020-02-19 10:53 ` Ilya Dryomov
0 siblings, 1 reply; 7+ messages in thread
From: Xiubo Li @ 2020-02-19 9:38 UTC (permalink / raw)
To: Jeff Layton, Ilya Dryomov; +Cc: Patrick Donnelly, Yan, Zheng, Ceph Development
Hi Jeff, Ilya, and all,
I hit the call traces below while running some test cases that unmount
the fs mount points.
It seems some inodes or dentries are still not destroyed when the
module is unloaded. Will this be a problem? Any ideas?
<6>[ 3336.729015] libceph: mon1 (1)192.168.195.165:40291 session established
<6>[ 3336.732380] libceph: client4297 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
<6>[ 3336.768752] rbd: rbd0: capacity 209715200 features 0x3d
<6>[ 3571.749795] libceph: mon1 (1)192.168.195.165:40291 session established
<6>[ 3571.758259] libceph: client4300 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
<6>[ 3571.792768] rbd: rbd0: capacity 209715200 features 0x3d
<6>[ 3927.396784] libceph: mon2 (1)192.168.195.165:40293 session established
<6>[ 3927.397900] libceph: client4307 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
<3>[ 3943.896176]
=============================================================================
<3>[ 3943.896179] BUG ceph_inode_info (Tainted: G E ): Objects
remaining in ceph_inode_info on __kmem_cache_shutdown()
<3>[ 3943.896180]
-----------------------------------------------------------------------------
<3>[ 3943.896180]
<4>[ 3943.896181] Disabling lock debugging due to kernel taint
<3>[ 3943.896184] INFO: Slab 0x0000000005d371ba objects=23 used=1
fp=0x00000000347baa56 flags=0x17ffe000010200
<4>[ 3943.896187] CPU: 0 PID: 26423 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[ 3943.896188] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[ 3943.896189] Call Trace:
<4>[ 3943.896197] dump_stack+0x66/0x90
<4>[ 3943.896201] slab_err+0xb7/0xdc
<4>[ 3943.896205] ? ksm_migrate_page+0xe0/0xe0
<4>[ 3943.896207] ? slub_cpu_dead+0xb0/0xb0
<4>[ 3943.896209] __kmem_cache_shutdown.cold+0x29/0x153
<4>[ 3943.896213] shutdown_cache+0x13/0x1b0
<4>[ 3943.896215] kmem_cache_destroy+0x239/0x260
<4>[ 3943.896310] destroy_caches+0x16/0x57 [ceph]
<4>[ 3943.896316] __x64_sys_delete_module+0x13d/0x290
<4>[ 3943.896320] ? exit_to_usermode_loop+0x94/0xd0
<4>[ 3943.896323] do_syscall_64+0x5b/0x1b0
<4>[ 3943.896327] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[ 3943.896329] RIP: 0033:0x7fbbb91fc97b
<4>[ 3943.896332] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[ 3943.896334] RSP: 002b:00007ffef23f7368 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[ 3943.896336] RAX: ffffffffffffffda RBX: 000055f423b5e7a0 RCX:
00007fbbb91fc97b
<4>[ 3943.896336] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055f423b5e808
<4>[ 3943.896337] RBP: 00007ffef23f73b8 R08: 000000000000000a R09:
00007ffef23f62e1
<4>[ 3943.896338] R10: 00007fbbb9271ac0 R11: 0000000000000206 R12:
00007ffef23f7580
<4>[ 3943.896339] R13: 00007ffef23f8f17 R14: 000055f423b5e260 R15:
00007ffef23f73c0
<3>[ 3943.896346] INFO: Object 0x000000005792a1ca @offset=14080
<3>[ 3943.896348]
=============================================================================
<3>[ 3943.896349] BUG ceph_inode_info (Tainted: G B E ): Objects
remaining in ceph_inode_info on __kmem_cache_shutdown()
<3>[ 3943.896350]
-----------------------------------------------------------------------------
<3>[ 3943.896350]
<3>[ 3943.896352] INFO: Slab 0x0000000048f8188c objects=23 used=1
fp=0x00000000a5d1ff93 flags=0x17ffe000010200
<4>[ 3943.896354] CPU: 0 PID: 26423 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[ 3943.896354] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[ 3943.896355] Call Trace:
<4>[ 3943.896358] dump_stack+0x66/0x90
<4>[ 3943.896360] slab_err+0xb7/0xdc
<4>[ 3943.896364] ? printk+0x58/0x6f
<4>[ 3943.896366] ? slub_cpu_dead+0xb0/0xb0
<4>[ 3943.896368] __kmem_cache_shutdown.cold+0x29/0x153
<4>[ 3943.896371] shutdown_cache+0x13/0x1b0
<4>[ 3943.896374] kmem_cache_destroy+0x239/0x260
<4>[ 3943.896388] destroy_caches+0x16/0x57 [ceph]
<4>[ 3943.896391] __x64_sys_delete_module+0x13d/0x290
<4>[ 3943.896393] ? exit_to_usermode_loop+0x94/0xd0
<4>[ 3943.896396] do_syscall_64+0x5b/0x1b0
<4>[ 3943.896398] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[ 3943.896400] RIP: 0033:0x7fbbb91fc97b
<4>[ 3943.896401] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[ 3943.896402] RSP: 002b:00007ffef23f7368 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[ 3943.896404] RAX: ffffffffffffffda RBX: 000055f423b5e7a0 RCX:
00007fbbb91fc97b
<4>[ 3943.896405] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055f423b5e808
<4>[ 3943.896406] RBP: 00007ffef23f73b8 R08: 000000000000000a R09:
00007ffef23f62e1
<4>[ 3943.896407] R10: 00007fbbb9271ac0 R11: 0000000000000206 R12:
00007ffef23f7580
<4>[ 3943.896407] R13: 00007ffef23f8f17 R14: 000055f423b5e260 R15:
00007ffef23f73c0
<3>[ 3943.896412] INFO: Object 0x00000000376f6bfe @offset=15488
<3>[ 3943.896429]
=============================================================================
<3>[ 3943.896431] BUG ceph_inode_info (Tainted: G B E ): Objects
remaining in ceph_inode_info on __kmem_cache_shutdown()
<3>[ 3943.896431]
-----------------------------------------------------------------------------
<3>[ 3943.896431]
<3>[ 3943.896433] INFO: Slab 0x00000000b9901e11 objects=23 used=1
fp=0x0000000039e61a30 flags=0x17ffe000010200
<4>[ 3943.896434] CPU: 0 PID: 26423 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[ 3943.896435] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[ 3943.896436] Call Trace:
<4>[ 3943.896439] dump_stack+0x66/0x90
<4>[ 3943.896441] slab_err+0xb7/0xdc
<4>[ 3943.896445] ? printk+0x58/0x6f
<4>[ 3943.896446] ? slub_cpu_dead+0xb0/0xb0
<4>[ 3943.896448] __kmem_cache_shutdown.cold+0x29/0x153
<4>[ 3943.896451] shutdown_cache+0x13/0x1b0
<4>[ 3943.896452] kmem_cache_destroy+0x239/0x260
<4>[ 3943.896466] destroy_caches+0x16/0x57 [ceph]
<4>[ 3943.896469] __x64_sys_delete_module+0x13d/0x290
<4>[ 3943.896472] ? exit_to_usermode_loop+0x94/0xd0
<4>[ 3943.896474] do_syscall_64+0x5b/0x1b0
<4>[ 3943.896477] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[ 3943.896478] RIP: 0033:0x7fbbb91fc97b
<4>[ 3943.896479] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[ 3943.896480] RSP: 002b:00007ffef23f7368 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[ 3943.896482] RAX: ffffffffffffffda RBX: 000055f423b5e7a0 RCX:
00007fbbb91fc97b
<4>[ 3943.896483] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055f423b5e808
<4>[ 3943.896483] RBP: 00007ffef23f73b8 R08: 000000000000000a R09:
00007ffef23f62e1
<4>[ 3943.896484] R10: 00007fbbb9271ac0 R11: 0000000000000206 R12:
00007ffef23f7580
<4>[ 3943.896485] R13: 00007ffef23f8f17 R14: 000055f423b5e260 R15:
00007ffef23f73c0
<3>[ 3943.896489] INFO: Object 0x0000000090e93ce6 @offset=16896
<3>[ 3943.896550] kmem_cache_destroy ceph_inode_info: Slab cache still
has objects
<4>[ 3943.896553] CPU: 0 PID: 26423 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[ 3943.896554] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[ 3943.896554] Call Trace:
<4>[ 3943.896558] dump_stack+0x66/0x90
<4>[ 3943.896560] kmem_cache_destroy.cold+0x15/0x1a
<4>[ 3943.896575] destroy_caches+0x16/0x57 [ceph]
<4>[ 3943.896578] __x64_sys_delete_module+0x13d/0x290
<4>[ 3943.896581] ? exit_to_usermode_loop+0x94/0xd0
<4>[ 3943.896583] do_syscall_64+0x5b/0x1b0
<4>[ 3943.896586] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[ 3943.896589] RIP: 0033:0x7fbbb91fc97b
<4>[ 3943.896593] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[ 3943.896595] RSP: 002b:00007ffef23f7368 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[ 3943.896597] RAX: ffffffffffffffda RBX: 000055f423b5e7a0 RCX:
00007fbbb91fc97b
<4>[ 3943.896600] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055f423b5e808
<4>[ 3943.896601] RBP: 00007ffef23f73b8 R08: 000000000000000a R09:
00007ffef23f62e1
<4>[ 3943.896606] R10: 00007fbbb9271ac0 R11: 0000000000000206 R12:
00007ffef23f7580
<4>[ 3943.896609] R13: 00007ffef23f8f17 R14: 000055f423b5e260 R15:
00007ffef23f73c0
<3>[ 3943.914328]
=============================================================================
<3>[ 3943.914330] BUG ceph_dentry_info (Tainted: G B E ):
Objects remaining in ceph_dentry_info on __kmem_cache_shutdown()
<3>[ 3943.914331]
-----------------------------------------------------------------------------
<3>[ 3943.914331]
<3>[ 3943.914333] INFO: Slab 0x00000000713366a2 objects=51 used=2
fp=0x00000000c5c96d72 flags=0x17ffe000000200
<4>[ 3943.914335] CPU: 0 PID: 26423 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[ 3943.914336] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[ 3943.914336] Call Trace:
<4>[ 3943.914343] dump_stack+0x66/0x90
<4>[ 3943.914345] slab_err+0xb7/0xdc
<4>[ 3943.914349] ? ksm_migrate_page+0xe0/0xe0
<4>[ 3943.914350] ? slub_cpu_dead+0xb0/0xb0
<4>[ 3943.914351] __kmem_cache_shutdown.cold+0x29/0x153
<4>[ 3943.914353] shutdown_cache+0x13/0x1b0
<4>[ 3943.914354] kmem_cache_destroy+0x239/0x260
<4>[ 3943.914367] destroy_caches+0x3a/0x57 [ceph]
<4>[ 3943.914370] __x64_sys_delete_module+0x13d/0x290
<4>[ 3943.914373] ? exit_to_usermode_loop+0x94/0xd0
<4>[ 3943.914374] do_syscall_64+0x5b/0x1b0
<4>[ 3943.914376] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[ 3943.914378] RIP: 0033:0x7fbbb91fc97b
<4>[ 3943.914380] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[ 3943.914381] RSP: 002b:00007ffef23f7368 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[ 3943.914382] RAX: ffffffffffffffda RBX: 000055f423b5e7a0 RCX:
00007fbbb91fc97b
<4>[ 3943.914383] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055f423b5e808
<4>[ 3943.914383] RBP: 00007ffef23f73b8 R08: 000000000000000a R09:
00007ffef23f62e1
<4>[ 3943.914383] R10: 00007fbbb9271ac0 R11: 0000000000000206 R12:
00007ffef23f7580
<4>[ 3943.914384] R13: 00007ffef23f8f17 R14: 000055f423b5e260 R15:
00007ffef23f73c0
<3>[ 3943.914387] INFO: Object 0x000000000917f90f @offset=2800
<3>[ 3943.914387] INFO: Object 0x00000000cea9f98e @offset=2880
<3>[ 3943.914388]
=============================================================================
<3>[ 3943.914389] BUG ceph_dentry_info (Tainted: G B E ):
Objects remaining in ceph_dentry_info on __kmem_cache_shutdown()
<3>[ 3943.914389]
-----------------------------------------------------------------------------
<3>[ 3943.914389]
<3>[ 3943.914390] INFO: Slab 0x00000000d49f198a objects=51 used=1
fp=0x000000007a03922c flags=0x17ffe000000200
<4>[ 3943.914391] CPU: 0 PID: 26423 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[ 3943.914391] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[ 3943.914392] Call Trace:
<4>[ 3943.914393] dump_stack+0x66/0x90
<4>[ 3943.914394] slab_err+0xb7/0xdc
<4>[ 3943.914397] ? printk+0x58/0x6f
<4>[ 3943.914397] ? slub_cpu_dead+0xb0/0xb0
<4>[ 3943.914398] __kmem_cache_shutdown.cold+0x29/0x153
<4>[ 3943.914400] shutdown_cache+0x13/0x1b0
<4>[ 3943.914401] kmem_cache_destroy+0x239/0x260
<4>[ 3943.914409] destroy_caches+0x3a/0x57 [ceph]
<4>[ 3943.914411] __x64_sys_delete_module+0x13d/0x290
<4>[ 3943.914413] ? exit_to_usermode_loop+0x94/0xd0
<4>[ 3943.914414] do_syscall_64+0x5b/0x1b0
<4>[ 3943.914416] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[ 3943.914416] RIP: 0033:0x7fbbb91fc97b
<4>[ 3943.914417] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[ 3943.914418] RSP: 002b:00007ffef23f7368 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[ 3943.914418] RAX: ffffffffffffffda RBX: 000055f423b5e7a0 RCX:
00007fbbb91fc97b
<4>[ 3943.914419] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055f423b5e808
<4>[ 3943.914419] RBP: 00007ffef23f73b8 R08: 000000000000000a R09:
00007ffef23f62e1
<4>[ 3943.914419] R10: 00007fbbb9271ac0 R11: 0000000000000206 R12:
00007ffef23f7580
<4>[ 3943.914420] R13: 00007ffef23f8f17 R14: 000055f423b5e260 R15:
00007ffef23f73c0
<3>[ 3943.914422] INFO: Object 0x00000000a465a019 @offset=240
<3>[ 3943.914423] kmem_cache_destroy ceph_dentry_info: Slab cache still
has objects
<4>[ 3943.914424] CPU: 0 PID: 26423 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[ 3943.914425] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[ 3943.914425] Call Trace:
<4>[ 3943.914426] dump_stack+0x66/0x90
<4>[ 3943.914427] kmem_cache_destroy.cold+0x15/0x1a
<4>[ 3943.914434] destroy_caches+0x3a/0x57 [ceph]
<4>[ 3943.914436] __x64_sys_delete_module+0x13d/0x290
<4>[ 3943.914437] ? exit_to_usermode_loop+0x94/0xd0
<4>[ 3943.914438] do_syscall_64+0x5b/0x1b0
<4>[ 3943.914440] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[ 3943.914440] RIP: 0033:0x7fbbb91fc97b
<4>[ 3943.914441] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[ 3943.914441] RSP: 002b:00007ffef23f7368 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[ 3943.914442] RAX: ffffffffffffffda RBX: 000055f423b5e7a0 RCX:
00007fbbb91fc97b
<4>[ 3943.914442] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055f423b5e808
<4>[ 3943.914442] RBP: 00007ffef23f73b8 R08: 000000000000000a R09:
00007ffef23f62e1
<4>[ 3943.914443] R10: 00007fbbb9271ac0 R11: 0000000000000206 R12:
00007ffef23f7580
<4>[ 3943.914443] R13: 00007ffef23f8f17 R14: 000055f423b5e260 R15:
00007ffef23f73c0
<5>[ 3943.923089] FS-Cache: Netfs 'ceph' unregistered from caching
<5>[ 4022.394090] Key type ceph unregistered
<5>[ 4028.645127] Key type ceph registered
<6>[ 4028.645522] libceph: loaded (mon/osd proto 15/24)
<5>[ 4028.658549] FS-Cache: Netfs 'ceph' registered for caching
<6>[ 4028.658558] ceph: loaded (mds proto 32)
<6>[ 4028.662334] libceph: mon1 (1)192.168.195.165:40291 session established
<6>[ 4028.663998] libceph: client4303 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
<3>[11275.766909]
=============================================================================
<3>[11275.766910] BUG ceph_dentry_info (Tainted: G B E ):
Objects remaining in ceph_dentry_info on __kmem_cache_shutdown()
<3>[11275.766911]
-----------------------------------------------------------------------------
<3>[11275.766911]
<3>[11275.766912] INFO: Slab 0x00000000d49f198a objects=51 used=1
fp=0x000000007a03922c flags=0x17ffe000000200
<4>[11275.766915] CPU: 0 PID: 40095 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[11275.766916] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[11275.766916] Call Trace:
<4>[11275.767023] dump_stack+0x66/0x90
<4>[11275.767043] slab_err+0xb7/0xdc
<4>[11275.767046] ? ksm_migrate_page+0xe0/0xe0
<4>[11275.767047] ? slub_cpu_dead+0xb0/0xb0
<4>[11275.767048] __kmem_cache_shutdown.cold+0x29/0x153
<4>[11275.767050] shutdown_cache+0x13/0x1b0
<4>[11275.767051] kmem_cache_destroy+0x239/0x260
<4>[11275.767083] destroy_caches+0x3a/0x57 [ceph]
<4>[11275.767086] __x64_sys_delete_module+0x13d/0x290
<4>[11275.767108] ? exit_to_usermode_loop+0x94/0xd0
<4>[11275.767109] do_syscall_64+0x5b/0x1b0
<4>[11275.767129] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[11275.767164] RIP: 0033:0x7f6da227797b
<4>[11275.767167] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[11275.767168] RSP: 002b:00007ffdb75aa098 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[11275.767169] RAX: ffffffffffffffda RBX: 000055de019007a0 RCX:
00007f6da227797b
<4>[11275.767169] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055de01900808
<4>[11275.767170] RBP: 00007ffdb75aa0e8 R08: 000000000000000a R09:
00007ffdb75a9011
<4>[11275.767170] R10: 00007f6da22ecac0 R11: 0000000000000206 R12:
00007ffdb75aa2b0
<4>[11275.767171] R13: 00007ffdb75abf17 R14: 000055de01900260 R15:
00007ffdb75aa0f0
<3>[11275.767175] INFO: Object 0x00000000a465a019 @offset=240
<3>[11275.767177]
=============================================================================
<3>[11275.767177] BUG ceph_dentry_info (Tainted: G B E ):
Objects remaining in ceph_dentry_info on __kmem_cache_shutdown()
<3>[11275.767178]
-----------------------------------------------------------------------------
<3>[11275.767178]
<3>[11275.767178] INFO: Slab 0x00000000713366a2 objects=51 used=2
fp=0x0000000062e48697 flags=0x17ffe000000200
<4>[11275.767180] CPU: 0 PID: 40095 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[11275.767180] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[11275.767180] Call Trace:
<4>[11275.767182] dump_stack+0x66/0x90
<4>[11275.767183] slab_err+0xb7/0xdc
<4>[11275.767185] ? printk+0x58/0x6f
<4>[11275.767186] ? slub_cpu_dead+0xb0/0xb0
<4>[11275.767188] __kmem_cache_shutdown.cold+0x29/0x153
<4>[11275.767189] shutdown_cache+0x13/0x1b0
<4>[11275.767190] kmem_cache_destroy+0x239/0x260
<4>[11275.767198] destroy_caches+0x3a/0x57 [ceph]
<4>[11275.767200] __x64_sys_delete_module+0x13d/0x290
<4>[11275.767202] ? exit_to_usermode_loop+0x94/0xd0
<4>[11275.767203] do_syscall_64+0x5b/0x1b0
<4>[11275.767205] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[11275.767205] RIP: 0033:0x7f6da227797b
<4>[11275.767206] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[11275.767207] RSP: 002b:00007ffdb75aa098 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[11275.767208] RAX: ffffffffffffffda RBX: 000055de019007a0 RCX:
00007f6da227797b
<4>[11275.767208] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055de01900808
<4>[11275.767208] RBP: 00007ffdb75aa0e8 R08: 000000000000000a R09:
00007ffdb75a9011
<4>[11275.767209] R10: 00007f6da22ecac0 R11: 0000000000000206 R12:
00007ffdb75aa2b0
<4>[11275.767209] R13: 00007ffdb75abf17 R14: 000055de01900260 R15:
00007ffdb75aa0f0
<3>[11275.767212] INFO: Object 0x000000000917f90f @offset=2800
<3>[11275.767212] INFO: Object 0x00000000cea9f98e @offset=2880
<3>[11275.767213] kmem_cache_destroy ceph_dentry_info: Slab cache still
has objects
<4>[11275.767214] CPU: 0 PID: 40095 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[11275.767214] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[11275.767215] Call Trace:
<4>[11275.767215] dump_stack+0x66/0x90
<4>[11275.767217] kmem_cache_destroy.cold+0x15/0x1a
<4>[11275.767223] destroy_caches+0x3a/0x57 [ceph]
<4>[11275.767225] __x64_sys_delete_module+0x13d/0x290
<4>[11275.767226] ? exit_to_usermode_loop+0x94/0xd0
<4>[11275.767227] do_syscall_64+0x5b/0x1b0
<4>[11275.767229] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[11275.767229] RIP: 0033:0x7f6da227797b
<4>[11275.767230] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[11275.767230] RSP: 002b:00007ffdb75aa098 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[11275.767231] RAX: ffffffffffffffda RBX: 000055de019007a0 RCX:
00007f6da227797b
<4>[11275.767231] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055de01900808
<4>[11275.767232] RBP: 00007ffdb75aa0e8 R08: 000000000000000a R09:
00007ffdb75a9011
<4>[11275.767232] R10: 00007f6da22ecac0 R11: 0000000000000206 R12:
00007ffdb75aa2b0
<4>[11275.767232] R13: 00007ffdb75abf17 R14: 000055de01900260 R15:
00007ffdb75aa0f0
<5>[11275.767361] FS-Cache: Netfs 'ceph' unregistered from caching
<5>[11275.807037] Key type ceph unregistered
<4>[11594.856257] hrtimer: interrupt took 3786932 ns
<5>[11842.570801] Key type ceph registered
<6>[11842.571477] libceph: loaded (mon/osd proto 15/24)
<5>[11842.671795] FS-Cache: Netfs 'ceph' registered for caching
<6>[11842.671803] ceph: loaded (mds proto 32)
<6>[11842.705475] libceph: mon2 (1)192.168.195.165:40293 session established
<6>[11842.708894] libceph: client4310 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
<3>[12247.488188]
=============================================================================
<3>[12247.488189] BUG ceph_dentry_info (Tainted: G B E ):
Objects remaining in ceph_dentry_info on __kmem_cache_shutdown()
<3>[12247.488228]
-----------------------------------------------------------------------------
<3>[12247.488228]
<3>[12247.488231] INFO: Slab 0x00000000713366a2 objects=51 used=2
fp=0x0000000062e48697 flags=0x17ffe000000200
<4>[12247.488233] CPU: 2 PID: 42854 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[12247.488234] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[12247.488234] Call Trace:
<4>[12247.488241] dump_stack+0x66/0x90
<4>[12247.488244] slab_err+0xb7/0xdc
<4>[12247.488246] ? ksm_migrate_page+0xe0/0xe0
<4>[12247.488247] ? slub_cpu_dead+0xb0/0xb0
<4>[12247.488249] __kmem_cache_shutdown.cold+0x29/0x153
<4>[12247.488251] shutdown_cache+0x13/0x1b0
<4>[12247.488252] kmem_cache_destroy+0x239/0x260
<4>[12247.488265] destroy_caches+0x3a/0x57 [ceph]
<4>[12247.488268] __x64_sys_delete_module+0x13d/0x290
<4>[12247.488271] ? exit_to_usermode_loop+0x94/0xd0
<4>[12247.488272] do_syscall_64+0x5b/0x1b0
<4>[12247.488299] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[12247.488301] RIP: 0033:0x7fd1c5bb797b
<4>[12247.488304] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[12247.488304] RSP: 002b:00007ffd37a293f8 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[12247.488306] RAX: ffffffffffffffda RBX: 0000559e2ce707a0 RCX:
00007fd1c5bb797b
<4>[12247.488306] RDX: 000000000000000a RSI: 0000000000000800 RDI:
0000559e2ce70808
<4>[12247.488307] RBP: 00007ffd37a29448 R08: 000000000000000a R09:
00007ffd37a28371
<4>[12247.488307] R10: 00007fd1c5c2cac0 R11: 0000000000000206 R12:
00007ffd37a29610
<4>[12247.488307] R13: 00007ffd37a2af17 R14: 0000559e2ce70260 R15:
00007ffd37a29450
<3>[12247.488312] INFO: Object 0x000000000917f90f @offset=2800
<3>[12247.488313] INFO: Object 0x00000000cea9f98e @offset=2880
<3>[12247.488314]
=============================================================================
<3>[12247.488315] BUG ceph_dentry_info (Tainted: G B E ):
Objects remaining in ceph_dentry_info on __kmem_cache_shutdown()
<3>[12247.488315]
-----------------------------------------------------------------------------
<3>[12247.488315]
<3>[12247.488316] INFO: Slab 0x00000000d49f198a objects=51 used=1
fp=0x000000001b4111af flags=0x17ffe000000200
<4>[12247.488317] CPU: 2 PID: 42854 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[12247.488317] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[12247.488318] Call Trace:
<4>[12247.488319] dump_stack+0x66/0x90
<4>[12247.488321] slab_err+0xb7/0xdc
<4>[12247.488324] ? printk+0x58/0x6f
<4>[12247.488324] ? slub_cpu_dead+0xb0/0xb0
<4>[12247.488326] __kmem_cache_shutdown.cold+0x29/0x153
<4>[12247.488327] shutdown_cache+0x13/0x1b0
<4>[12247.488329] kmem_cache_destroy+0x239/0x260
<4>[12247.488337] destroy_caches+0x3a/0x57 [ceph]
<4>[12247.488339] __x64_sys_delete_module+0x13d/0x290
<4>[12247.488341] ? exit_to_usermode_loop+0x94/0xd0
<4>[12247.488342] do_syscall_64+0x5b/0x1b0
<4>[12247.488344] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[12247.488345] RIP: 0033:0x7fd1c5bb797b
<4>[12247.488346] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[12247.488346] RSP: 002b:00007ffd37a293f8 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[12247.488347] RAX: ffffffffffffffda RBX: 0000559e2ce707a0 RCX:
00007fd1c5bb797b
<4>[12247.488347] RDX: 000000000000000a RSI: 0000000000000800 RDI:
0000559e2ce70808
<4>[12247.488348] RBP: 00007ffd37a29448 R08: 000000000000000a R09:
00007ffd37a28371
<4>[12247.488348] R10: 00007fd1c5c2cac0 R11: 0000000000000206 R12:
00007ffd37a29610
<4>[12247.488349] R13: 00007ffd37a2af17 R14: 0000559e2ce70260 R15:
00007ffd37a29450
<3>[12247.488352] INFO: Object 0x00000000a465a019 @offset=240
<3>[12247.488353] kmem_cache_destroy ceph_dentry_info: Slab cache still
has objects
<4>[12247.488354] CPU: 2 PID: 42854 Comm: rmmod Tainted: G B
E 5.6.0-rc1+ #23
<4>[12247.488354] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[12247.488354] Call Trace:
<4>[12247.488355] dump_stack+0x66/0x90
<4>[12247.488357] kmem_cache_destroy.cold+0x15/0x1a
<4>[12247.488364] destroy_caches+0x3a/0x57 [ceph]
<4>[12247.488366] __x64_sys_delete_module+0x13d/0x290
<4>[12247.488367] ? exit_to_usermode_loop+0x94/0xd0
<4>[12247.488369] do_syscall_64+0x5b/0x1b0
<4>[12247.488370] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[12247.488371] RIP: 0033:0x7fd1c5bb797b
<4>[12247.488372] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[12247.488372] RSP: 002b:00007ffd37a293f8 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[12247.488373] RAX: ffffffffffffffda RBX: 0000559e2ce707a0 RCX:
00007fd1c5bb797b
<4>[12247.488373] RDX: 000000000000000a RSI: 0000000000000800 RDI:
0000559e2ce70808
<4>[12247.488374] RBP: 00007ffd37a29448 R08: 000000000000000a R09:
00007ffd37a28371
<4>[12247.488374] R10: 00007fd1c5c2cac0 R11: 0000000000000206 R12:
00007ffd37a29610
<4>[12247.488375] R13: 00007ffd37a2af17 R14: 0000559e2ce70260 R15:
00007ffd37a29450
<5>[12247.499349] FS-Cache: Netfs 'ceph' unregistered from caching
<5>[12247.524579] Key type ceph unregistered
<5>[12403.035063] Key type ceph registered
<6>[12403.040353] libceph: loaded (mon/osd proto 15/24)
<5>[12403.100932] FS-Cache: Netfs 'ceph' registered for caching
<6>[12403.100939] ceph: loaded (mds proto 32)
<6>[12403.117931] libceph: mon1 (1)192.168.195.165:40291 session established
<6>[12403.124988] libceph: client4306 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
<3>[12577.319568]
=============================================================================
<3>[12577.319572] BUG ceph_dentry_info (Tainted: G B E ):
Objects remaining in ceph_dentry_info on __kmem_cache_shutdown()
<3>[12577.319572]
-----------------------------------------------------------------------------
<3>[12577.319572]
<3>[12577.319575] INFO: Slab 0x00000000d49f198a objects=51 used=1
fp=0x000000001b4111af flags=0x17ffe000000200
<4>[12577.319579] CPU: 1 PID: 1919 Comm: rmmod Tainted: G B E
5.6.0-rc1+ #23
<4>[12577.319580] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[12577.319581] Call Trace:
<4>[12577.319590] dump_stack+0x66/0x90
<4>[12577.319593] slab_err+0xb7/0xdc
<4>[12577.319596] ? slub_cpu_dead+0xb0/0xb0
<4>[12577.319599] ? ksm_migrate_page+0xe0/0xe0
<4>[12577.319601] ? ksm_migrate_page+0xe0/0xe0
<4>[12577.319603] __kmem_cache_shutdown.cold+0x29/0x153
<4>[12577.319606] shutdown_cache+0x13/0x1b0
<4>[12577.319609] kmem_cache_destroy+0x239/0x260
<4>[12577.319628] destroy_caches+0x3a/0x57 [ceph]
<4>[12577.319632] __x64_sys_delete_module+0x13d/0x290
<4>[12577.319636] ? exit_to_usermode_loop+0x94/0xd0
<4>[12577.319638] do_syscall_64+0x5b/0x1b0
<4>[12577.319641] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[12577.319644] RIP: 0033:0x7eff79c6997b
<4>[12577.319647] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[12577.319648] RSP: 002b:00007ffd9d0f24c8 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[12577.319650] RAX: ffffffffffffffda RBX: 000055a5357457a0 RCX:
00007eff79c6997b
<4>[12577.319651] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055a535745808
<4>[12577.319652] RBP: 00007ffd9d0f2518 R08: 000000000000000a R09:
00007ffd9d0f1441
<4>[12577.319653] R10: 00007eff79cdeac0 R11: 0000000000000206 R12:
00007ffd9d0f26e0
<4>[12577.319654] R13: 00007ffd9d0f3f17 R14: 000055a535745260 R15:
00007ffd9d0f2520
<3>[12577.319660] INFO: Object 0x00000000a465a019 @offset=240
<3>[12577.319662]
=============================================================================
<3>[12577.319664] BUG ceph_dentry_info (Tainted: G B E ):
Objects remaining in ceph_dentry_info on __kmem_cache_shutdown()
<3>[12577.319664]
-----------------------------------------------------------------------------
<3>[12577.319664]
<3>[12577.319666] INFO: Slab 0x00000000713366a2 objects=51 used=2
fp=0x00000000c5c96d72 flags=0x17ffe000000200
<4>[12577.319668] CPU: 1 PID: 1919 Comm: rmmod Tainted: G B E
5.6.0-rc1+ #23
<4>[12577.319669] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[12577.319669] Call Trace:
<4>[12577.319671] dump_stack+0x66/0x90
<4>[12577.319673] slab_err+0xb7/0xdc
<4>[12577.319677] ? printk+0x58/0x6f
<4>[12577.319679] ? ksm_migrate_page+0xe0/0xe0
<4>[12577.319682] __kmem_cache_shutdown.cold+0x29/0x153
<4>[12577.319684] shutdown_cache+0x13/0x1b0
<4>[12577.319687] kmem_cache_destroy+0x239/0x260
<4>[12577.319701] destroy_caches+0x3a/0x57 [ceph]
<4>[12577.319703] __x64_sys_delete_module+0x13d/0x290
<4>[12577.319706] ? exit_to_usermode_loop+0x94/0xd0
<4>[12577.319709] do_syscall_64+0x5b/0x1b0
<4>[12577.319711] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[12577.319712] RIP: 0033:0x7eff79c6997b
<4>[12577.319714] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[12577.319715] RSP: 002b:00007ffd9d0f24c8 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[12577.319716] RAX: ffffffffffffffda RBX: 000055a5357457a0 RCX:
00007eff79c6997b
<4>[12577.319717] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055a535745808
<4>[12577.319718] RBP: 00007ffd9d0f2518 R08: 000000000000000a R09:
00007ffd9d0f1441
<4>[12577.319719] R10: 00007eff79cdeac0 R11: 0000000000000206 R12:
00007ffd9d0f26e0
<4>[12577.319720] R13: 00007ffd9d0f3f17 R14: 000055a535745260 R15:
00007ffd9d0f2520
<3>[12577.319724] INFO: Object 0x000000000917f90f @offset=2800
<3>[12577.319725] INFO: Object 0x00000000cea9f98e @offset=2880
<3>[12577.319727] kmem_cache_destroy ceph_dentry_info: Slab cache still
has objects
<4>[12577.319728] CPU: 1 PID: 1919 Comm: rmmod Tainted: G B E
5.6.0-rc1+ #23
<4>[12577.319729] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
<4>[12577.319729] Call Trace:
<4>[12577.319731] dump_stack+0x66/0x90
<4>[12577.319733] kmem_cache_destroy.cold+0x15/0x1a
<4>[12577.319747] destroy_caches+0x3a/0x57 [ceph]
<4>[12577.319750] __x64_sys_delete_module+0x13d/0x290
<4>[12577.319752] ? exit_to_usermode_loop+0x94/0xd0
<4>[12577.319754] do_syscall_64+0x5b/0x1b0
<4>[12577.319757] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4>[12577.319758] RIP: 0033:0x7eff79c6997b
<4>[12577.319759] Code: 73 01 c3 48 8b 0d 0d 45 0c 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00
0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 44 0c 00 f7 d8 64 89 01 48
<4>[12577.319760] RSP: 002b:00007ffd9d0f24c8 EFLAGS: 00000206 ORIG_RAX:
00000000000000b0
<4>[12577.319761] RAX: ffffffffffffffda RBX: 000055a5357457a0 RCX:
00007eff79c6997b
<4>[12577.319762] RDX: 000000000000000a RSI: 0000000000000800 RDI:
000055a535745808
<4>[12577.319763] RBP: 00007ffd9d0f2518 R08: 000000000000000a R09:
00007ffd9d0f1441
<4>[12577.319764] R10: 00007eff79cdeac0 R11: 0000000000000206 R12:
00007ffd9d0f26e0
<4>[12577.319765] R13: 00007ffd9d0f3f17 R14: 000055a535745260 R15:
00007ffd9d0f2520
<5>[12577.343429] FS-Cache: Netfs 'ceph' unregistered from caching
<5>[12577.377374] Key type ceph unregistered
<5>[12824.742825] Key type ceph registered
<6>[12824.743522] libceph: loaded (mon/osd proto 15/24)
<5>[12824.754924] FS-Cache: Netfs 'ceph' registered for caching
<6>[12824.754931] ceph: loaded (mds proto 32)
<6>[12824.759841] libceph: mon0 (1)192.168.195.165:40289 session established
<6>[12824.762760] libceph: client4296 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
<4>[12891.829780] ceph: mdsmap_decode got incorrect state(up:creating)
<4>[12892.874795] ceph: mdsmap_decode got incorrect state(up:creating)
<6>[13362.740912] libceph: mon2 (1)192.168.195.165:40293 session established
<6>[13362.743519] libceph: client4316 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
<6>[13480.045907] libceph: mon2 (1)192.168.195.165:40293 session established
<6>[13480.046889] libceph: client4319 fsid
f7621edd-ef06-4ca3-8a5b-1ba8c52ae15f
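For a log like the above, one quick check before `rmmod` is whether the ceph slab caches still report active objects. A diagnostic sketch (the cache names come from the warnings above; reading /proc/slabinfo normally requires root, and slab merging or an unloaded module can make the caches invisible):

```shell
#!/bin/sh
# Report the number of live objects in the ceph slab caches.
# Column 2 of /proc/slabinfo is <active_objs>; reading it needs root,
# and the caches only exist while the ceph module is loaded.

check_cache() {
    line=$(grep "^$1 " /proc/slabinfo 2>/dev/null) || {
        echo "$1: cache not present (or slabinfo not readable)"
        return 0
    }
    echo "$1: $(echo "$line" | awk '{print $2}') active objects"
}

check_cache ceph_inode_info
check_cache ceph_dentry_info
```

A non-zero active count after the final umount would point at pinned inodes or dentries before the module is even unloaded.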
Thanks,
BRs
* Re: BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying
2020-02-19 9:38 BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying Xiubo Li
@ 2020-02-19 10:53 ` Ilya Dryomov
2020-02-19 11:01 ` Xiubo Li
0 siblings, 1 reply; 7+ messages in thread
From: Ilya Dryomov @ 2020-02-19 10:53 UTC (permalink / raw)
To: Xiubo Li; +Cc: Jeff Layton, Patrick Donnelly, Yan, Zheng, Ceph Development
On Wed, Feb 19, 2020 at 10:39 AM Xiubo Li <xiubli@redhat.com> wrote:
>
> Hi Jeff, Ilya and all
>
> I hit these call traces while running some test cases and unmounting the fs
> mount points.
>
> It seems some inodes or dentries are still not destroyed.
>
> Will this be a problem? Any ideas?
Hi Xiubo,
Of course it is a problem ;)
These are all in ceph_inode_info and ceph_dentry_info caches, but
I see traces of rbd mappings as well. Could you please share your
test cases? How are you unloading modules?
Thanks,
Ilya
* Re: BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying
2020-02-19 10:53 ` Ilya Dryomov
@ 2020-02-19 11:01 ` Xiubo Li
2020-02-19 11:27 ` Ilya Dryomov
0 siblings, 1 reply; 7+ messages in thread
From: Xiubo Li @ 2020-02-19 11:01 UTC (permalink / raw)
To: Ilya Dryomov; +Cc: Jeff Layton, Patrick Donnelly, Yan, Zheng, Ceph Development
On 2020/2/19 18:53, Ilya Dryomov wrote:
> On Wed, Feb 19, 2020 at 10:39 AM Xiubo Li <xiubli@redhat.com> wrote:
>> Hi Jeff, Ilya and all
>>
>> I hit these call traces while running some test cases and unmounting the fs
>> mount points.
>>
>> It seems some inodes or dentries are still not destroyed.
>>
>> Will this be a problem? Any ideas?
> Hi Xiubo,
>
> Of course it is a problem ;)
>
> These are all in ceph_inode_info and ceph_dentry_info caches, but
> I see traces of rbd mappings as well. Could you please share your
> test cases? How are you unloading modules?
I am not sure exactly which one triggered it; mostly I was running the
following commands.
1, ./bin/rbd map share -o mount_timeout=30
2, ./bin/rbd unmap share
3, ./bin/mount.ceph :/ /mnt/cephfs/
4, `for i in {0..1000}; do mkdir /mnt/cephfs/dir$i; done` and `for i in
{0..1000}; do rm -rf /mnt/cephfs/dir$i; done`
5, umount /mnt/cephfs/
6, rmmod ceph; rmmod rbd; rmmod libceph
It seems this has nothing to do with the rbd mappings.
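Put together as a single script, the cycle above looks roughly like this. A sketch only: the `share` image, the `./bin/` vstart wrappers, and `/mnt/cephfs` are taken from the commands above and assumed to exist in the test environment.

```shell
#!/bin/sh
# Sketch of the test cycle from the steps above. The directory churn is
# split out so it can be pointed at any mount point.

churn() {
    dir="$1"
    for i in $(seq 0 1000); do mkdir "$dir/dir$i"; done
    for i in $(seq 0 1000); do rm -rf "$dir/dir$i"; done
}

run_repro() {
    ./bin/rbd map share -o mount_timeout=30    # step 1
    ./bin/rbd unmap share                      # step 2
    ./bin/mount.ceph :/ /mnt/cephfs/           # step 3
    churn /mnt/cephfs                          # step 4
    umount /mnt/cephfs/                        # step 5
    rmmod ceph; rmmod rbd; rmmod libceph       # step 6
}

# On a disposable test node: run run_repro, then check dmesg for
# "Objects remaining in ceph_inode_info on __kmem_cache_shutdown()".
```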
Thanks.
BRs
Xiubo
> Thanks,
>
> Ilya
>
* Re: BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying
2020-02-19 11:01 ` Xiubo Li
@ 2020-02-19 11:27 ` Ilya Dryomov
2020-02-19 11:29 ` Xiubo Li
0 siblings, 1 reply; 7+ messages in thread
From: Ilya Dryomov @ 2020-02-19 11:27 UTC (permalink / raw)
To: Xiubo Li; +Cc: Jeff Layton, Patrick Donnelly, Yan, Zheng, Ceph Development
On Wed, Feb 19, 2020 at 12:01 PM Xiubo Li <xiubli@redhat.com> wrote:
>
> On 2020/2/19 18:53, Ilya Dryomov wrote:
> > On Wed, Feb 19, 2020 at 10:39 AM Xiubo Li <xiubli@redhat.com> wrote:
> >> Hi Jeff, Ilya and all
> >>
> >> I hit these call traces while running some test cases and unmounting the fs
> >> mount points.
> >>
> >> It seems some inodes or dentries are still not destroyed.
> >>
> >> Will this be a problem? Any ideas?
> > Hi Xiubo,
> >
> > Of course it is a problem ;)
> >
> > These are all in ceph_inode_info and ceph_dentry_info caches, but
> > I see traces of rbd mappings as well. Could you please share your
> > test cases? How are you unloading modules?
>
> I am not sure exactly which one triggered it; mostly I was running the
> following commands.
>
> 1, ./bin/rbd map share -o mount_timeout=30
>
> 2, ./bin/rbd unmap share
>
> 3, ./bin/mount.ceph :/ /mnt/cephfs/
>
> 4, `for i in {0..1000}; do mkdir /mnt/cephfs/dir$i; done` and `for i in
> {0..1000}; do rm -rf /mnt/cephfs/dir$i; done`
>
> 5, umount /mnt/cephfs/
>
> 6, rmmod ceph; rmmod rbd; rmmod libceph
>
> It seems this has nothing to do with the rbd mappings.
Is this on more or less plain upstream or with async unlink and
possibly other filesystem patches applied?
Thanks,
Ilya
* Re: BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying
2020-02-19 11:27 ` Ilya Dryomov
@ 2020-02-19 11:29 ` Xiubo Li
2020-02-19 12:33 ` Jeff Layton
0 siblings, 1 reply; 7+ messages in thread
From: Xiubo Li @ 2020-02-19 11:29 UTC (permalink / raw)
To: Ilya Dryomov; +Cc: Jeff Layton, Patrick Donnelly, Yan, Zheng, Ceph Development
On 2020/2/19 19:27, Ilya Dryomov wrote:
> On Wed, Feb 19, 2020 at 12:01 PM Xiubo Li <xiubli@redhat.com> wrote:
>> On 2020/2/19 18:53, Ilya Dryomov wrote:
>>> On Wed, Feb 19, 2020 at 10:39 AM Xiubo Li <xiubli@redhat.com> wrote:
>>>> Hi Jeff, Ilya and all
>>>>
>>>> I hit these call traces while running some test cases and unmounting the fs
>>>> mount points.
>>>>
>>>> It seems some inodes or dentries are still not destroyed.
>>>>
>>>> Will this be a problem? Any ideas?
>>> Hi Xiubo,
>>>
>>> Of course it is a problem ;)
>>>
>>> These are all in ceph_inode_info and ceph_dentry_info caches, but
>>> I see traces of rbd mappings as well. Could you please share your
>>> test cases? How are you unloading modules?
>> I am not sure exactly which one triggered it; mostly I was running the
>> following commands.
>>
>> 1, ./bin/rbd map share -o mount_timeout=30
>>
>> 2, ./bin/rbd unmap share
>>
>> 3, ./bin/mount.ceph :/ /mnt/cephfs/
>>
>> 4, `for i in {0..1000}; do mkdir /mnt/cephfs/dir$i; done` and `for i in
>> {0..1000}; do rm -rf /mnt/cephfs/dir$i; done`
>>
>> 5, umount /mnt/cephfs/
>>
>> 6, rmmod ceph; rmmod rbd; rmmod libceph
>>
>> It seems this has nothing to do with the rbd mappings.
> Is this on more or less plain upstream or with async unlink and
> possibly other filesystem patches applied?
I am using the latest testing branch:
https://github.com/ceph/ceph-client/tree/testing
Thanks
> Thanks,
>
> Ilya
>
* Re: BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying
2020-02-19 11:29 ` Xiubo Li
@ 2020-02-19 12:33 ` Jeff Layton
2020-02-19 12:43 ` Xiubo Li
0 siblings, 1 reply; 7+ messages in thread
From: Jeff Layton @ 2020-02-19 12:33 UTC (permalink / raw)
To: Xiubo Li, Ilya Dryomov; +Cc: Patrick Donnelly, Yan, Zheng, Ceph Development
On Wed, 2020-02-19 at 19:29 +0800, Xiubo Li wrote:
> On 2020/2/19 19:27, Ilya Dryomov wrote:
> > On Wed, Feb 19, 2020 at 12:01 PM Xiubo Li <xiubli@redhat.com> wrote:
> > > On 2020/2/19 18:53, Ilya Dryomov wrote:
> > > > On Wed, Feb 19, 2020 at 10:39 AM Xiubo Li <xiubli@redhat.com> wrote:
> > > > > Hi Jeff, Ilya and all
> > > > >
> > > > > I hit these call traces while running some test cases and unmounting the fs
> > > > > mount points.
> > > > >
> > > > > It seems some inodes or dentries are still not destroyed.
> > > > >
> > > > > Will this be a problem? Any ideas?
> > > > Hi Xiubo,
> > > >
> > > > Of course it is a problem ;)
> > > >
> > > > These are all in ceph_inode_info and ceph_dentry_info caches, but
> > > > I see traces of rbd mappings as well. Could you please share your
> > > > test cases? How are you unloading modules?
> > > I am not sure exactly which one triggered it; mostly I was running the
> > > following commands.
> > >
> > > 1, ./bin/rbd map share -o mount_timeout=30
> > >
> > > 2, ./bin/rbd unmap share
> > >
> > > 3, ./bin/mount.ceph :/ /mnt/cephfs/
> > >
> > > 4, `for i in {0..1000}; do mkdir /mnt/cephfs/dir$i; done` and `for i in
> > > {0..1000}; do rm -rf /mnt/cephfs/dir$i; done`
> > >
> > > 5, umount /mnt/cephfs/
> > >
> > > 6, rmmod ceph; rmmod rbd; rmmod libceph
> > >
> > > It seems this has nothing to do with the rbd mappings.
> > Is this on more or less plain upstream or with async unlink and
> > possibly other filesystem patches applied?
>
> Using the latest test branch:
> https://github.com/ceph/ceph-client/tree/testing.
>
> thanks
>
I've run a lot of tests like this and haven't seen this at all. Did you
see any "Busy inodes after umount" messages in dmesg?
I note that your kernel is tainted -- sometimes if you're plugging in
modules that have subtle ABI incompatibilities, you can end up with
memory corruption like this.
What would be ideal would be to come up with a reliable reproducer if
possible.
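Since this has only reproduced once, one way to hunt for a reliable reproducer is to loop the test cycle and stop as soon as the slab warning appears. A sketch: `do_one_cycle` is a placeholder for the mount/churn/umount/rmmod steps from Xiubo's earlier message, and the grep patterns come from the log in this thread.

```shell
#!/bin/sh
# Repro-hunting harness sketch: repeat a test cycle until the slab
# shutdown warning from this report shows up in the kernel log.
# do_one_cycle is a placeholder for the mount/churn/umount/rmmod steps.

check_dmesg() {
    # Match the "Objects remaining in ceph_inode_info ..." and
    # "kmem_cache_destroy ceph_dentry_info" lines from the report.
    dmesg | grep -E 'Objects remaining in ceph_|kmem_cache_destroy ceph_'
}

hunt() {
    n=0
    while [ "$n" -lt 100 ]; do
        do_one_cycle || return 1
        n=$((n + 1))
        if check_dmesg >/dev/null 2>&1; then
            echo "hit after $n cycles"
            return 0
        fi
    done
    echo "no hit in $n cycles"
}
```

The cycle count and patterns are illustrative; `dmesg --clear` between cycles (as root) would make each hit unambiguous.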
--
Jeff Layton <jlayton@kernel.org>
* Re: BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying
2020-02-19 12:33 ` Jeff Layton
@ 2020-02-19 12:43 ` Xiubo Li
0 siblings, 0 replies; 7+ messages in thread
From: Xiubo Li @ 2020-02-19 12:43 UTC (permalink / raw)
To: Jeff Layton, Ilya Dryomov; +Cc: Patrick Donnelly, Yan, Zheng, Ceph Development
On 2020/2/19 20:33, Jeff Layton wrote:
> On Wed, 2020-02-19 at 19:29 +0800, Xiubo Li wrote:
>> On 2020/2/19 19:27, Ilya Dryomov wrote:
>>> On Wed, Feb 19, 2020 at 12:01 PM Xiubo Li <xiubli@redhat.com> wrote:
>>>> On 2020/2/19 18:53, Ilya Dryomov wrote:
>>>>> On Wed, Feb 19, 2020 at 10:39 AM Xiubo Li <xiubli@redhat.com> wrote:
>>>>>> Hi Jeff, Ilya and all
>>>>>>
>>>>>> I hit these call traces while running some test cases and unmounting the fs
>>>>>> mount points.
>>>>>>
>>>>>> It seems some inodes or dentries are still not destroyed.
>>>>>>
>>>>>> Will this be a problem? Any ideas?
>>>>> Hi Xiubo,
>>>>>
>>>>> Of course it is a problem ;)
>>>>>
>>>>> These are all in ceph_inode_info and ceph_dentry_info caches, but
>>>>> I see traces of rbd mappings as well. Could you please share your
>>>>> test cases? How are you unloading modules?
>>>> I am not sure exactly which one triggered it; mostly I was running the
>>>> following commands.
>>>>
>>>> 1, ./bin/rbd map share -o mount_timeout=30
>>>>
>>>> 2, ./bin/rbd unmap share
>>>>
>>>> 3, ./bin/mount.ceph :/ /mnt/cephfs/
>>>>
>>>> 4, `for i in {0..1000}; do mkdir /mnt/cephfs/dir$i; done` and `for i in
>>>> {0..1000}; do rm -rf /mnt/cephfs/dir$i; done`
>>>>
>>>> 5, umount /mnt/cephfs/
>>>>
>>>> 6, rmmod ceph; rmmod rbd; rmmod libceph
>>>>
>>>> It seems this has nothing to do with the rbd mappings.
>>> Is this on more or less plain upstream or with async unlink and
>>> possibly other filesystem patches applied?
>> Using the latest test branch:
>> https://github.com/ceph/ceph-client/tree/testing.
>>
>> thanks
>>
> I've run a lot of tests like this and haven't seen this at all. Did you
> see any "Busy inodes after umount" messages in dmesg?
>
> I note that your kernel is tainted -- sometimes if you're plugging in
> modules that have subtle ABI incompatibilities, you can end up with
> memory corruption like this.
>
> What would be ideal would be to come up with a reliable reproducer if
> possible.
The code is a clean checkout of the testing branch pulled yesterday. I have
hit this only once locally, and only noticed it in dmesg while checking
other logs.
Thanks.
2020-02-19 9:38 BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying Xiubo Li
2020-02-19 10:53 ` Ilya Dryomov
2020-02-19 11:01 ` Xiubo Li
2020-02-19 11:27 ` Ilya Dryomov
2020-02-19 11:29 ` Xiubo Li
2020-02-19 12:33 ` Jeff Layton
2020-02-19 12:43 ` Xiubo Li