* Tasks blocking forever with XFS stack traces
@ 2019-11-05  7:27 Sitsofe Wheeler
  2019-11-05  8:54 ` Carlos Maiolino
  0 siblings, 1 reply; 10+ messages in thread
From: Sitsofe Wheeler @ 2019-11-05  7:27 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs, linux-fsdevel

Hi,

We have a system that has been seeing tasks hang with XFS calls in
their stacks. Once these tasks start hanging in uninterruptible sleep,
any write I/O to the directory they were writing to will also hang
forever. Their I/O is being done to a bind mounted directory atop an
XFS filesystem on top of an MD device (the MD device seems to be still
functional and isn't offline). The kernel is fairly old, but I thought
I'd post a stack in case anyone can explain this or has seen it
before:

kernel: [425684.110424] INFO: task kworker/u162:0:58843 blocked for
more than 120 seconds.
kernel: [425684.110800]       Tainted: G           OE
4.15.0-64-generic #73-Ubuntu
kernel: [425684.111164] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
kernel: [425684.111568] kworker/u162:0  D    0 58843      2 0x80000080
kernel: [425684.111581] Workqueue: writeback wb_workfn (flush-9:126)
kernel: [425684.111585] Call Trace:
kernel: [425684.111595]  __schedule+0x24e/0x880
kernel: [425684.111664]  ? xfs_map_blocks+0x82/0x250 [xfs]
kernel: [425684.111668]  schedule+0x2c/0x80
kernel: [425684.111671]  rwsem_down_read_failed+0xf0/0x160
kernel: [425684.111675]  ? bitmap_startwrite+0x9f/0x1f0
kernel: [425684.111679]  call_rwsem_down_read_failed+0x18/0x30
kernel: [425684.111682]  ? call_rwsem_down_read_failed+0x18/0x30
kernel: [425684.111685]  down_read+0x20/0x40
kernel: [425684.111736]  xfs_ilock+0xd5/0x100 [xfs]
kernel: [425684.111782]  xfs_map_blocks+0x82/0x250 [xfs]
kernel: [425684.111823]  xfs_do_writepage+0x167/0x6a0 [xfs]
kernel: [425684.111830]  ? clear_page_dirty_for_io+0x19f/0x1f0
kernel: [425684.111834]  write_cache_pages+0x207/0x4e0
kernel: [425684.111869]  ? xfs_vm_writepages+0xf0/0xf0 [xfs]
kernel: [425684.111875]  ? submit_bio+0x73/0x140
kernel: [425684.111878]  ? submit_bio+0x73/0x140
kernel: [425684.111911]  ? xfs_setfilesize_trans_alloc.isra.13+0x3e/0x90 [xfs]
kernel: [425684.111944]  xfs_vm_writepages+0xbe/0xf0 [xfs]
kernel: [425684.111949]  do_writepages+0x4b/0xe0
kernel: [425684.111954]  ? fprop_fraction_percpu+0x2f/0x80
kernel: [425684.111958]  ? __wb_calc_thresh+0x3e/0x130
kernel: [425684.111963]  __writeback_single_inode+0x45/0x350
kernel: [425684.111966]  ? __writeback_single_inode+0x45/0x350
kernel: [425684.111970]  writeback_sb_inodes+0x1e1/0x510
kernel: [425684.111975]  __writeback_inodes_wb+0x67/0xb0
kernel: [425684.111979]  wb_writeback+0x271/0x300
kernel: [425684.111983]  wb_workfn+0x1bb/0x400
kernel: [425684.111986]  ? wb_workfn+0x1bb/0x400
kernel: [425684.111992]  process_one_work+0x1de/0x420
kernel: [425684.111996]  worker_thread+0x32/0x410
kernel: [425684.111999]  kthread+0x121/0x140
kernel: [425684.112003]  ? process_one_work+0x420/0x420
kernel: [425684.112005]  ? kthread_create_worker_on_cpu+0x70/0x70
kernel: [425684.112009]  ret_from_fork+0x35/0x40
kernel: [425684.112024] INFO: task kworker/74:0:9623 blocked for more
than 120 seconds.
kernel: [425684.112461]       Tainted: G           OE
4.15.0-64-generic #73-Ubuntu
kernel: [425684.112925] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
kernel: [425684.113438] kworker/74:0    D    0  9623      2 0x80000080
kernel: [425684.113500] Workqueue: xfs-cil/md126 xlog_cil_push_work [xfs]
kernel: [425684.113502] Call Trace:
kernel: [425684.113508]  __schedule+0x24e/0x880
kernel: [425684.113559]  ? xlog_bdstrat+0x2b/0x60 [xfs]
kernel: [425684.113564]  schedule+0x2c/0x80
kernel: [425684.113609]  xlog_state_get_iclog_space+0x105/0x2d0 [xfs]
kernel: [425684.113614]  ? wake_up_q+0x80/0x80
kernel: [425684.113656]  xlog_write+0x163/0x6e0 [xfs]
kernel: [425684.113699]  xlog_cil_push+0x2a7/0x410 [xfs]
kernel: [425684.113740]  xlog_cil_push_work+0x15/0x20 [xfs]
kernel: [425684.113743]  process_one_work+0x1de/0x420
kernel: [425684.113747]  worker_thread+0x32/0x410
kernel: [425684.113750]  kthread+0x121/0x140
kernel: [425684.113753]  ? process_one_work+0x420/0x420
kernel: [425684.113756]  ? kthread_create_worker_on_cpu+0x70/0x70
kernel: [425684.113759]  ret_from_fork+0x35/0x40

Other directories on the same filesystem seem fine as do other XFS
filesystems on the same system.

-- 
Sitsofe | http://sucs.org/~sits/


* Re: Tasks blocking forever with XFS stack traces
  2019-11-05  7:27 Tasks blocking forever with XFS stack traces Sitsofe Wheeler
@ 2019-11-05  8:54 ` Carlos Maiolino
  2019-11-05  9:32   ` Sitsofe Wheeler
  0 siblings, 1 reply; 10+ messages in thread
From: Carlos Maiolino @ 2019-11-05  8:54 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: linux-xfs

Hi.

On Tue, Nov 05, 2019 at 07:27:16AM +0000, Sitsofe Wheeler wrote:
> Hi,
> 
> We have a system that has been seeing tasks hang with XFS calls in
> their stacks. Once these tasks start hanging in uninterruptible sleep,
> any write I/O to the directory they were writing to will also hang
> forever. Their I/O is being done to a bind mounted directory atop an
> XFS filesystem on top of an MD device (the MD device seems to be still
> functional and isn't offline). The kernel is fairly old, but I thought
> I'd post a stack in case anyone can explain this or has seen it
> before:
> 
> kernel: [425684.110424] INFO: task kworker/u162:0:58843 blocked for
> more than 120 seconds.
> kernel: [425684.110800]       Tainted: G           OE
> 4.15.0-64-generic #73-Ubuntu
> kernel: [425684.111164] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> kernel: [425684.111568] kworker/u162:0  D    0 58843      2 0x80000080
> kernel: [425684.111581] Workqueue: writeback wb_workfn (flush-9:126)
> kernel: [425684.111585] Call Trace:
> kernel: [425684.111595]  __schedule+0x24e/0x880
> kernel: [425684.111664]  ? xfs_map_blocks+0x82/0x250 [xfs]
> kernel: [425684.111668]  schedule+0x2c/0x80
> kernel: [425684.111671]  rwsem_down_read_failed+0xf0/0x160
> kernel: [425684.111675]  ? bitmap_startwrite+0x9f/0x1f0
> kernel: [425684.111679]  call_rwsem_down_read_failed+0x18/0x30
> kernel: [425684.111682]  ? call_rwsem_down_read_failed+0x18/0x30
> kernel: [425684.111685]  down_read+0x20/0x40
> kernel: [425684.111736]  xfs_ilock+0xd5/0x100 [xfs]
> kernel: [425684.111782]  xfs_map_blocks+0x82/0x250 [xfs]
> kernel: [425684.111823]  xfs_do_writepage+0x167/0x6a0 [xfs]
> kernel: [425684.111830]  ? clear_page_dirty_for_io+0x19f/0x1f0
> kernel: [425684.111834]  write_cache_pages+0x207/0x4e0
> kernel: [425684.111869]  ? xfs_vm_writepages+0xf0/0xf0 [xfs]
> kernel: [425684.111875]  ? submit_bio+0x73/0x140
> kernel: [425684.111878]  ? submit_bio+0x73/0x140
> kernel: [425684.111911]  ? xfs_setfilesize_trans_alloc.isra.13+0x3e/0x90 [xfs]
> kernel: [425684.111944]  xfs_vm_writepages+0xbe/0xf0 [xfs]
> kernel: [425684.111949]  do_writepages+0x4b/0xe0
> kernel: [425684.111954]  ? fprop_fraction_percpu+0x2f/0x80
> kernel: [425684.111958]  ? __wb_calc_thresh+0x3e/0x130
> kernel: [425684.111963]  __writeback_single_inode+0x45/0x350
> kernel: [425684.111966]  ? __writeback_single_inode+0x45/0x350
> kernel: [425684.111970]  writeback_sb_inodes+0x1e1/0x510
> kernel: [425684.111975]  __writeback_inodes_wb+0x67/0xb0
> kernel: [425684.111979]  wb_writeback+0x271/0x300
> kernel: [425684.111983]  wb_workfn+0x1bb/0x400
> kernel: [425684.111986]  ? wb_workfn+0x1bb/0x400
> kernel: [425684.111992]  process_one_work+0x1de/0x420
> kernel: [425684.111996]  worker_thread+0x32/0x410
> kernel: [425684.111999]  kthread+0x121/0x140
> kernel: [425684.112003]  ? process_one_work+0x420/0x420
> kernel: [425684.112005]  ? kthread_create_worker_on_cpu+0x70/0x70
> kernel: [425684.112009]  ret_from_fork+0x35/0x40
> kernel: [425684.112024] INFO: task kworker/74:0:9623 blocked for more
> than 120 seconds.
> kernel: [425684.112461]       Tainted: G           OE
> 4.15.0-64-generic #73-Ubuntu
> kernel: [425684.112925] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> kernel: [425684.113438] kworker/74:0    D    0  9623      2 0x80000080
> kernel: [425684.113500] Workqueue: xfs-cil/md126 xlog_cil_push_work [xfs]
> kernel: [425684.113502] Call Trace:
> kernel: [425684.113508]  __schedule+0x24e/0x880
> kernel: [425684.113559]  ? xlog_bdstrat+0x2b/0x60 [xfs]
> kernel: [425684.113564]  schedule+0x2c/0x80
> kernel: [425684.113609]  xlog_state_get_iclog_space+0x105/0x2d0 [xfs]
> kernel: [425684.113614]  ? wake_up_q+0x80/0x80
> kernel: [425684.113656]  xlog_write+0x163/0x6e0 [xfs]
> kernel: [425684.113699]  xlog_cil_push+0x2a7/0x410 [xfs]
> kernel: [425684.113740]  xlog_cil_push_work+0x15/0x20 [xfs]
> kernel: [425684.113743]  process_one_work+0x1de/0x420
> kernel: [425684.113747]  worker_thread+0x32/0x410
> kernel: [425684.113750]  kthread+0x121/0x140
> kernel: [425684.113753]  ? process_one_work+0x420/0x420
> kernel: [425684.113756]  ? kthread_create_worker_on_cpu+0x70/0x70
> kernel: [425684.113759]  ret_from_fork+0x35/0x40
> 
> Other directories on the same filesystem seem fine as do other XFS
> filesystems on the same system.

Given that you mention other directories seem to work, and given the first stack
trace you posted, it sounds like you've been keeping a single AG so busy that it
is almost unusable. But you didn't provide enough information for us to really
make any progress here, and to be honest I'm more inclined to point the finger
at your MD device.

Can you describe your MD device? RAID array? What kind? How many disks?
What's your filesystem configuration? (xfs_info <mount point>) 
Do you have anything else on your dmesg other than these two stack traces? I'd
suggest posting the whole dmesg, not only what you think is relevant.
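
If you can catch the system while tasks are stuck, a dump of every blocked
task's stack is also handy. This is the standard kernel sysrq facility
(assuming sysrq is enabled on your box), nothing XFS specific:

  # dump all uninterruptible (D state) task stacks into dmesg
  echo w > /proc/sysrq-trigger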

Better yet:

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

Cheers.

> 
> -- 
> Sitsofe | http://sucs.org/~sits/

-- 
Carlos


P.S. I'm removing Darrick and linux-fsdevel from CC to avoid spamming too many people.



* Re: Tasks blocking forever with XFS stack traces
  2019-11-05  8:54 ` Carlos Maiolino
@ 2019-11-05  9:32   ` Sitsofe Wheeler
  2019-11-05 10:36     ` Carlos Maiolino
  0 siblings, 1 reply; 10+ messages in thread
From: Sitsofe Wheeler @ 2019-11-05  9:32 UTC (permalink / raw)
  To: Carlos Maiolino; +Cc: linux-xfs

Hi,

On Tue, 5 Nov 2019 at 08:54, Carlos Maiolino <cmaiolino@redhat.com> wrote:
>
> Hi.
>
> On Tue, Nov 05, 2019 at 07:27:16AM +0000, Sitsofe Wheeler wrote:
> > Hi,
> >
> > We have a system that has been seeing tasks hang with XFS calls in
> > their stacks. Once these tasks start hanging in uninterruptible sleep,
> > any write I/O to the directory they were writing to will also hang
> > forever. Their I/O is being done to a bind mounted directory atop an
> > XFS filesystem on top of an MD device (the MD device seems to be still
> > functional and isn't offline). The kernel is fairly old, but I thought
> > I'd post a stack in case anyone can explain this or has seen it
> > before:
> >
> > kernel: [425684.110424] INFO: task kworker/u162:0:58843 blocked for
> > more than 120 seconds.
> > kernel: [425684.110800]       Tainted: G           OE
> > 4.15.0-64-generic #73-Ubuntu
> > kernel: [425684.111164] "echo 0 >
> > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > kernel: [425684.111568] kworker/u162:0  D    0 58843      2 0x80000080
> > kernel: [425684.111581] Workqueue: writeback wb_workfn (flush-9:126)
> > kernel: [425684.111585] Call Trace:
> > kernel: [425684.111595]  __schedule+0x24e/0x880
> > kernel: [425684.111664]  ? xfs_map_blocks+0x82/0x250 [xfs]

<snip>
> >
> > Other directories on the same filesystem seem fine as do other XFS
> > filesystems on the same system.
>
> Given that you mention other directories seem to work, and given the first
> stack trace you posted, it sounds like you've been keeping a single AG so busy
> that it is almost unusable. But you didn't provide enough information for us
> to really make any progress here, and to be honest I'm more inclined to point
> the finger at your MD device.

Let's see if we can pinpoint something :-)

> Can you describe your MD device? RAID array? What kind? How many disks?

RAID6 8 disks.

> What's your filesystem configuration? (xfs_info <mount point>)

meta-data=/dev/md126             isize=512    agcount=32, agsize=43954432 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=1406538240, imaxpct=5
         =                       sunit=128    swidth=768 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

> Do you have anything else on your dmesg other than these two stack traces? I'd
> suggest posting the whole dmesg, not only what you think is relevant.

Yes there's more. See a slightly elided dmesg from a longer run on
https://sucs.org/~sits/test/kern-20191024.log.gz .

>
> Better yet:
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

Note that most of the following was gathered while the system was not hanging:

kernel: was 4.15.0-64-generic from Ubuntu 18.04 but we're now testing
5.0.0-32-generic

xfsprogs version: xfs_repair version 4.9.0
CPUs: 80
cat /proc/meminfo
MemTotal:       791232512 kB
MemFree:        616987432 kB
MemAvailable:   781352708 kB
Buffers:            5520 kB
Cached:         113300540 kB
SwapCached:            0 kB
Active:         28385760 kB
Inactive:       85358040 kB
Active(anon):     436084 kB
Inactive(anon):     3476 kB
Active(file):   27949676 kB
Inactive(file): 85354564 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      31248380 kB
SwapFree:       31248380 kB
Dirty:               688 kB
Writeback:             0 kB
AnonPages:        436396 kB
Mapped:           206652 kB
Shmem:              6944 kB
KReclaimable:   56047960 kB
Slab:           58126044 kB
SReclaimable:   56047960 kB
SUnreclaim:      2078084 kB
KernelStack:       22240 kB
PageTables:        17552 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    426864636 kB
Committed_AS:    4147112 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
Percpu:            61760 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:     3245828 kB
DirectMap2M:    100208640 kB
DirectMap1G:    702545920 kB

cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs
rw,nosuid,relatime,size=395591264k,nr_inodes=98897816,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=79123252k,mode=755 0 0
/dev/mapper/vgsys-root / xfs rw,relatime,attr2,inode64,noquota 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/unified cgroup2
rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0
cgroup /sys/fs/cgroup/systemd cgroup
rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup
rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup
rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/perf_event cgroup
rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs
rw,relatime,fd=38,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=66154
0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev 0 0
/dev/md0 /boot ext2 rw,relatime 0 0
/dev/md126 /localdata xfs
rw,relatime,attr2,inode64,sunit=1024,swidth=6144,noquota 0 0
/dev/md126 /var/lib/docker xfs
rw,relatime,attr2,inode64,sunit=1024,swidth=6144,noquota 0 0
/dev/mapper/vgsys-home /home xfs rw,relatime,attr2,inode64,noquota 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
overlay /var/lib/docker/overlay2/c86b0eab253a97ffe75b0661886337322c558386083bcb2d4823446025131b0a/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/XPFD5GLZ7YBMUP7S3E6W5OUE6A:/var/lib/docker/overlay2/l/GJVZ2MXOD5AOLUELAEYCSYCXLK:/var/lib/docker/overlay2/l/JEYWOT7MNNHX2DAE4AQ5XO674I:/var/lib/docker/overlay2/l/YAS2YWA4FTAWNEKRAJQY47TQDY,upperdir=/var/lib/docker/overlay2/c86b0eab253a97ffe75b0661886337322c558386083bcb2d4823446025131b0a/diff,workdir=/var/lib/docker/overlay2/c86b0eab253a97ffe75b0661886337322c558386083bcb2d4823446025131b0a/work,xino=off
0 0
overlay /localdata/docker/overlay2/c86b0eab253a97ffe75b0661886337322c558386083bcb2d4823446025131b0a/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/XPFD5GLZ7YBMUP7S3E6W5OUE6A:/var/lib/docker/overlay2/l/GJVZ2MXOD5AOLUELAEYCSYCXLK:/var/lib/docker/overlay2/l/JEYWOT7MNNHX2DAE4AQ5XO674I:/var/lib/docker/overlay2/l/YAS2YWA4FTAWNEKRAJQY47TQDY,upperdir=/var/lib/docker/overlay2/c86b0eab253a97ffe75b0661886337322c558386083bcb2d4823446025131b0a/diff,workdir=/var/lib/docker/overlay2/c86b0eab253a97ffe75b0661886337322c558386083bcb2d4823446025131b0a/work,xino=off
0 0
nsfs /run/docker/netns/160ed5c707bb nsfs rw 0 0
overlay /var/lib/docker/overlay2/551458a050177ebbc7b7e43646bc5cb645455cb6e9a5b1f420dc6b1a4322504d/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/ECXX2YJFYUMBVKTP7OTRSAJVWE:/var/lib/docker/overlay2/l/E4BBLB3NCC34KONYP23RP7VJ2X:/var/lib/docker/overlay2/l/SVYAOAODE6MEJVAEK2OO4SFF2E:/var/lib/docker/overlay2/l/A7TNW2Z7KHULNAU4BDB4GYRJ4A:/var/lib/docker/overlay2/l/SJ637O5BUZNAJSXNT27BO3CQGO:/var/lib/docker/overlay2/l/PYVRDDP7ABBFVD3PY2QGTJFQEM:/var/lib/docker/overlay2/l/OGFQOLFLSU27UIRKWXRZQ43OAP:/var/lib/docker/overlay2/l/KCOSL4MV3WQXKQZIZTQNTY4QEU:/var/lib/docker/overlay2/l/YTEXTILIATA6VFSWCQBWUHDY2D:/var/lib/docker/overlay2/l/4BAQ5SVXAVZWLTKZ6FH6VHJLWA:/var/lib/docker/overlay2/l/MUZSGTDT2THJSZEPFBG5NFWRGW:/var/lib/docker/overlay2/l/I6BCWJFX34IQ33OMCKNEHUUJU5:/var/lib/docker/overlay2/l/IRGYEAIEWEA4UJUYV3KEX3P4TI:/var/lib/docker/overlay2/l/J2PDWFCIYIFMH63PCXDJ6P2V7S:/var/lib/docker/overlay2/l/RC6FRWC3WRMRDRMCQM4L6R4VGA:/var/lib/docker/overlay2/l/HJM7E2PHDYPHGWF6RWP7R6OOZI:/var/lib/docker/overlay2/l/JI5RMXGTTBAM4NYEDR4FMNWV25:/var/lib/docker/overlay2/l/2TKWRPIAHOTDHLTGEYFRN4OUWL:/var/lib/docker/overlay2/l/6KCFDR62MDJOQ3ZA54IDNLUI7M:/var/lib/docker/overlay2/l/AN3SVYKAI6L4F54FKFSZMFDPUJ:/var/lib/docker/overlay2/l/YVJF7YEVLHXGC4L27UPEUK47HF:/var/lib/docker/overlay2/l/3NF7EYNTMPB7FFNI7POOBKXJPX:/var/lib/docker/overlay2/l/WAA6KYOATJLN6EP2PYYRQWEGOR:/var/lib/docker/overlay2/l/PHGIYF5LT5FKNUPFVSMEVHWNDU:/var/lib/docker/overlay2/l/KY5BSB7LSJPUNYBISCA4KYF7KS:/var/lib/docker/overlay2/l/HYDHRQJPMUKG4AXLIVBDPSUXJK:/var/lib/docker/overlay2/l/YI26DO7GTXPYQJSZ6BXHJUV5AR,upperdir=/var/lib/docker/overlay2/551458a050177ebbc7b7e43646bc5cb645455cb6e9a5b1f420dc6b1a4322504d/diff,workdir=/var/lib/docker/overlay2/551458a050177ebbc7b7e43646bc5cb645455cb6e9a5b1f420dc6b1a4322504d/work,xino=off
0 0
overlay /localdata/docker/overlay2/551458a050177ebbc7b7e43646bc5cb645455cb6e9a5b1f420dc6b1a4322504d/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/ECXX2YJFYUMBVKTP7OTRSAJVWE:/var/lib/docker/overlay2/l/E4BBLB3NCC34KONYP23RP7VJ2X:/var/lib/docker/overlay2/l/SVYAOAODE6MEJVAEK2OO4SFF2E:/var/lib/docker/overlay2/l/A7TNW2Z7KHULNAU4BDB4GYRJ4A:/var/lib/docker/overlay2/l/SJ637O5BUZNAJSXNT27BO3CQGO:/var/lib/docker/overlay2/l/PYVRDDP7ABBFVD3PY2QGTJFQEM:/var/lib/docker/overlay2/l/OGFQOLFLSU27UIRKWXRZQ43OAP:/var/lib/docker/overlay2/l/KCOSL4MV3WQXKQZIZTQNTY4QEU:/var/lib/docker/overlay2/l/YTEXTILIATA6VFSWCQBWUHDY2D:/var/lib/docker/overlay2/l/4BAQ5SVXAVZWLTKZ6FH6VHJLWA:/var/lib/docker/overlay2/l/MUZSGTDT2THJSZEPFBG5NFWRGW:/var/lib/docker/overlay2/l/I6BCWJFX34IQ33OMCKNEHUUJU5:/var/lib/docker/overlay2/l/IRGYEAIEWEA4UJUYV3KEX3P4TI:/var/lib/docker/overlay2/l/J2PDWFCIYIFMH63PCXDJ6P2V7S:/var/lib/docker/overlay2/l/RC6FRWC3WRMRDRMCQM4L6R4VGA:/var/lib/docker/overlay2/l/HJM7E2PHDYPHGWF6RWP7R6OOZI:/var/lib/docker/overlay2/l/JI5RMXGTTBAM4NYEDR4FMNWV25:/var/lib/docker/overlay2/l/2TKWRPIAHOTDHLTGEYFRN4OUWL:/var/lib/docker/overlay2/l/6KCFDR62MDJOQ3ZA54IDNLUI7M:/var/lib/docker/overlay2/l/AN3SVYKAI6L4F54FKFSZMFDPUJ:/var/lib/docker/overlay2/l/YVJF7YEVLHXGC4L27UPEUK47HF:/var/lib/docker/overlay2/l/3NF7EYNTMPB7FFNI7POOBKXJPX:/var/lib/docker/overlay2/l/WAA6KYOATJLN6EP2PYYRQWEGOR:/var/lib/docker/overlay2/l/PHGIYF5LT5FKNUPFVSMEVHWNDU:/var/lib/docker/overlay2/l/KY5BSB7LSJPUNYBISCA4KYF7KS:/var/lib/docker/overlay2/l/HYDHRQJPMUKG4AXLIVBDPSUXJK:/var/lib/docker/overlay2/l/YI26DO7GTXPYQJSZ6BXHJUV5AR,upperdir=/var/lib/docker/overlay2/551458a050177ebbc7b7e43646bc5cb645455cb6e9a5b1f420dc6b1a4322504d/diff,workdir=/var/lib/docker/overlay2/551458a050177ebbc7b7e43646bc5cb645455cb6e9a5b1f420dc6b1a4322504d/work,xino=off
0 0
nsfs /run/docker/netns/cc8ad7e2cc51 nsfs rw 0 0
overlay /var/lib/docker/overlay2/77096fc6ca39461683809377f6efa83957e73cdb91eb5f08957a64f75d829356/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/S5DDQ53MEAP37J6723CYPVDTO6:/var/lib/docker/overlay2/l/E4BBLB3NCC34KONYP23RP7VJ2X:/var/lib/docker/overlay2/l/SVYAOAODE6MEJVAEK2OO4SFF2E:/var/lib/docker/overlay2/l/A7TNW2Z7KHULNAU4BDB4GYRJ4A:/var/lib/docker/overlay2/l/SJ637O5BUZNAJSXNT27BO3CQGO:/var/lib/docker/overlay2/l/PYVRDDP7ABBFVD3PY2QGTJFQEM:/var/lib/docker/overlay2/l/OGFQOLFLSU27UIRKWXRZQ43OAP:/var/lib/docker/overlay2/l/KCOSL4MV3WQXKQZIZTQNTY4QEU:/var/lib/docker/overlay2/l/YTEXTILIATA6VFSWCQBWUHDY2D:/var/lib/docker/overlay2/l/4BAQ5SVXAVZWLTKZ6FH6VHJLWA:/var/lib/docker/overlay2/l/MUZSGTDT2THJSZEPFBG5NFWRGW:/var/lib/docker/overlay2/l/I6BCWJFX34IQ33OMCKNEHUUJU5:/var/lib/docker/overlay2/l/IRGYEAIEWEA4UJUYV3KEX3P4TI:/var/lib/docker/overlay2/l/J2PDWFCIYIFMH63PCXDJ6P2V7S:/var/lib/docker/overlay2/l/RC6FRWC3WRMRDRMCQM4L6R4VGA:/var/lib/docker/overlay2/l/HJM7E2PHDYPHGWF6RWP7R6OOZI:/var/lib/docker/overlay2/l/JI5RMXGTTBAM4NYEDR4FMNWV25:/var/lib/docker/overlay2/l/2TKWRPIAHOTDHLTGEYFRN4OUWL:/var/lib/docker/overlay2/l/6KCFDR62MDJOQ3ZA54IDNLUI7M:/var/lib/docker/overlay2/l/AN3SVYKAI6L4F54FKFSZMFDPUJ:/var/lib/docker/overlay2/l/YVJF7YEVLHXGC4L27UPEUK47HF:/var/lib/docker/overlay2/l/3NF7EYNTMPB7FFNI7POOBKXJPX:/var/lib/docker/overlay2/l/WAA6KYOATJLN6EP2PYYRQWEGOR:/var/lib/docker/overlay2/l/PHGIYF5LT5FKNUPFVSMEVHWNDU:/var/lib/docker/overlay2/l/KY5BSB7LSJPUNYBISCA4KYF7KS:/var/lib/docker/overlay2/l/HYDHRQJPMUKG4AXLIVBDPSUXJK:/var/lib/docker/overlay2/l/YI26DO7GTXPYQJSZ6BXHJUV5AR,upperdir=/var/lib/docker/overlay2/77096fc6ca39461683809377f6efa83957e73cdb91eb5f08957a64f75d829356/diff,workdir=/var/lib/docker/overlay2/77096fc6ca39461683809377f6efa83957e73cdb91eb5f08957a64f75d829356/work,xino=off
0 0
overlay /localdata/docker/overlay2/77096fc6ca39461683809377f6efa83957e73cdb91eb5f08957a64f75d829356/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/S5DDQ53MEAP37J6723CYPVDTO6:/var/lib/docker/overlay2/l/E4BBLB3NCC34KONYP23RP7VJ2X:/var/lib/docker/overlay2/l/SVYAOAODE6MEJVAEK2OO4SFF2E:/var/lib/docker/overlay2/l/A7TNW2Z7KHULNAU4BDB4GYRJ4A:/var/lib/docker/overlay2/l/SJ637O5BUZNAJSXNT27BO3CQGO:/var/lib/docker/overlay2/l/PYVRDDP7ABBFVD3PY2QGTJFQEM:/var/lib/docker/overlay2/l/OGFQOLFLSU27UIRKWXRZQ43OAP:/var/lib/docker/overlay2/l/KCOSL4MV3WQXKQZIZTQNTY4QEU:/var/lib/docker/overlay2/l/YTEXTILIATA6VFSWCQBWUHDY2D:/var/lib/docker/overlay2/l/4BAQ5SVXAVZWLTKZ6FH6VHJLWA:/var/lib/docker/overlay2/l/MUZSGTDT2THJSZEPFBG5NFWRGW:/var/lib/docker/overlay2/l/I6BCWJFX34IQ33OMCKNEHUUJU5:/var/lib/docker/overlay2/l/IRGYEAIEWEA4UJUYV3KEX3P4TI:/var/lib/docker/overlay2/l/J2PDWFCIYIFMH63PCXDJ6P2V7S:/var/lib/docker/overlay2/l/RC6FRWC3WRMRDRMCQM4L6R4VGA:/var/lib/docker/overlay2/l/HJM7E2PHDYPHGWF6RWP7R6OOZI:/var/lib/docker/overlay2/l/JI5RMXGTTBAM4NYEDR4FMNWV25:/var/lib/docker/overlay2/l/2TKWRPIAHOTDHLTGEYFRN4OUWL:/var/lib/docker/overlay2/l/6KCFDR62MDJOQ3ZA54IDNLUI7M:/var/lib/docker/overlay2/l/AN3SVYKAI6L4F54FKFSZMFDPUJ:/var/lib/docker/overlay2/l/YVJF7YEVLHXGC4L27UPEUK47HF:/var/lib/docker/overlay2/l/3NF7EYNTMPB7FFNI7POOBKXJPX:/var/lib/docker/overlay2/l/WAA6KYOATJLN6EP2PYYRQWEGOR:/var/lib/docker/overlay2/l/PHGIYF5LT5FKNUPFVSMEVHWNDU:/var/lib/docker/overlay2/l/KY5BSB7LSJPUNYBISCA4KYF7KS:/var/lib/docker/overlay2/l/HYDHRQJPMUKG4AXLIVBDPSUXJK:/var/lib/docker/overlay2/l/YI26DO7GTXPYQJSZ6BXHJUV5AR,upperdir=/var/lib/docker/overlay2/77096fc6ca39461683809377f6efa83957e73cdb91eb5f08957a64f75d829356/diff,workdir=/var/lib/docker/overlay2/77096fc6ca39461683809377f6efa83957e73cdb91eb5f08957a64f75d829356/work,xino=off
0 0
nsfs /run/docker/netns/e892b0d9fdea nsfs rw 0 0
overlay /var/lib/docker/overlay2/77b8012caabd1b32e965ba6258c4a41788a7e86e11205ec719d993f30a8e6257/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/2MTUND5M3MS3FZWZCZVXTBIB5K:/var/lib/docker/overlay2/l/E4BBLB3NCC34KONYP23RP7VJ2X:/var/lib/docker/overlay2/l/SVYAOAODE6MEJVAEK2OO4SFF2E:/var/lib/docker/overlay2/l/A7TNW2Z7KHULNAU4BDB4GYRJ4A:/var/lib/docker/overlay2/l/SJ637O5BUZNAJSXNT27BO3CQGO:/var/lib/docker/overlay2/l/PYVRDDP7ABBFVD3PY2QGTJFQEM:/var/lib/docker/overlay2/l/OGFQOLFLSU27UIRKWXRZQ43OAP:/var/lib/docker/overlay2/l/KCOSL4MV3WQXKQZIZTQNTY4QEU:/var/lib/docker/overlay2/l/YTEXTILIATA6VFSWCQBWUHDY2D:/var/lib/docker/overlay2/l/4BAQ5SVXAVZWLTKZ6FH6VHJLWA:/var/lib/docker/overlay2/l/MUZSGTDT2THJSZEPFBG5NFWRGW:/var/lib/docker/overlay2/l/I6BCWJFX34IQ33OMCKNEHUUJU5:/var/lib/docker/overlay2/l/IRGYEAIEWEA4UJUYV3KEX3P4TI:/var/lib/docker/overlay2/l/J2PDWFCIYIFMH63PCXDJ6P2V7S:/var/lib/docker/overlay2/l/RC6FRWC3WRMRDRMCQM4L6R4VGA:/var/lib/docker/overlay2/l/HJM7E2PHDYPHGWF6RWP7R6OOZI:/var/lib/docker/overlay2/l/JI5RMXGTTBAM4NYEDR4FMNWV25:/var/lib/docker/overlay2/l/2TKWRPIAHOTDHLTGEYFRN4OUWL:/var/lib/docker/overlay2/l/6KCFDR62MDJOQ3ZA54IDNLUI7M:/var/lib/docker/overlay2/l/AN3SVYKAI6L4F54FKFSZMFDPUJ:/var/lib/docker/overlay2/l/YVJF7YEVLHXGC4L27UPEUK47HF:/var/lib/docker/overlay2/l/3NF7EYNTMPB7FFNI7POOBKXJPX:/var/lib/docker/overlay2/l/WAA6KYOATJLN6EP2PYYRQWEGOR:/var/lib/docker/overlay2/l/PHGIYF5LT5FKNUPFVSMEVHWNDU:/var/lib/docker/overlay2/l/KY5BSB7LSJPUNYBISCA4KYF7KS:/var/lib/docker/overlay2/l/HYDHRQJPMUKG4AXLIVBDPSUXJK:/var/lib/docker/overlay2/l/YI26DO7GTXPYQJSZ6BXHJUV5AR,upperdir=/var/lib/docker/overlay2/77b8012caabd1b32e965ba6258c4a41788a7e86e11205ec719d993f30a8e6257/diff,workdir=/var/lib/docker/overlay2/77b8012caabd1b32e965ba6258c4a41788a7e86e11205ec719d993f30a8e6257/work,xino=off
0 0
overlay /localdata/docker/overlay2/77b8012caabd1b32e965ba6258c4a41788a7e86e11205ec719d993f30a8e6257/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/2MTUND5M3MS3FZWZCZVXTBIB5K:/var/lib/docker/overlay2/l/E4BBLB3NCC34KONYP23RP7VJ2X:/var/lib/docker/overlay2/l/SVYAOAODE6MEJVAEK2OO4SFF2E:/var/lib/docker/overlay2/l/A7TNW2Z7KHULNAU4BDB4GYRJ4A:/var/lib/docker/overlay2/l/SJ637O5BUZNAJSXNT27BO3CQGO:/var/lib/docker/overlay2/l/PYVRDDP7ABBFVD3PY2QGTJFQEM:/var/lib/docker/overlay2/l/OGFQOLFLSU27UIRKWXRZQ43OAP:/var/lib/docker/overlay2/l/KCOSL4MV3WQXKQZIZTQNTY4QEU:/var/lib/docker/overlay2/l/YTEXTILIATA6VFSWCQBWUHDY2D:/var/lib/docker/overlay2/l/4BAQ5SVXAVZWLTKZ6FH6VHJLWA:/var/lib/docker/overlay2/l/MUZSGTDT2THJSZEPFBG5NFWRGW:/var/lib/docker/overlay2/l/I6BCWJFX34IQ33OMCKNEHUUJU5:/var/lib/docker/overlay2/l/IRGYEAIEWEA4UJUYV3KEX3P4TI:/var/lib/docker/overlay2/l/J2PDWFCIYIFMH63PCXDJ6P2V7S:/var/lib/docker/overlay2/l/RC6FRWC3WRMRDRMCQM4L6R4VGA:/var/lib/docker/overlay2/l/HJM7E2PHDYPHGWF6RWP7R6OOZI:/var/lib/docker/overlay2/l/JI5RMXGTTBAM4NYEDR4FMNWV25:/var/lib/docker/overlay2/l/2TKWRPIAHOTDHLTGEYFRN4OUWL:/var/lib/docker/overlay2/l/6KCFDR62MDJOQ3ZA54IDNLUI7M:/var/lib/docker/overlay2/l/AN3SVYKAI6L4F54FKFSZMFDPUJ:/var/lib/docker/overlay2/l/YVJF7YEVLHXGC4L27UPEUK47HF:/var/lib/docker/overlay2/l/3NF7EYNTMPB7FFNI7POOBKXJPX:/var/lib/docker/overlay2/l/WAA6KYOATJLN6EP2PYYRQWEGOR:/var/lib/docker/overlay2/l/PHGIYF5LT5FKNUPFVSMEVHWNDU:/var/lib/docker/overlay2/l/KY5BSB7LSJPUNYBISCA4KYF7KS:/var/lib/docker/overlay2/l/HYDHRQJPMUKG4AXLIVBDPSUXJK:/var/lib/docker/overlay2/l/YI26DO7GTXPYQJSZ6BXHJUV5AR,upperdir=/var/lib/docker/overlay2/77b8012caabd1b32e965ba6258c4a41788a7e86e11205ec719d993f30a8e6257/diff,workdir=/var/lib/docker/overlay2/77b8012caabd1b32e965ba6258c4a41788a7e86e11205ec719d993f30a8e6257/work,xino=off
0 0
nsfs /run/docker/netns/e9d00dfcaa30 nsfs rw 0 0
overlay /var/lib/docker/overlay2/28b0f26ad2c4dd1eccd966d1dc59499be968205a00572715db840abbbcc2789d/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/SLHFVMXTCIQY5TYHXX3XY2QUTX:/var/lib/docker/overlay2/l/E4BBLB3NCC34KONYP23RP7VJ2X:/var/lib/docker/overlay2/l/SVYAOAODE6MEJVAEK2OO4SFF2E:/var/lib/docker/overlay2/l/A7TNW2Z7KHULNAU4BDB4GYRJ4A:/var/lib/docker/overlay2/l/SJ637O5BUZNAJSXNT27BO3CQGO:/var/lib/docker/overlay2/l/PYVRDDP7ABBFVD3PY2QGTJFQEM:/var/lib/docker/overlay2/l/OGFQOLFLSU27UIRKWXRZQ43OAP:/var/lib/docker/overlay2/l/KCOSL4MV3WQXKQZIZTQNTY4QEU:/var/lib/docker/overlay2/l/YTEXTILIATA6VFSWCQBWUHDY2D:/var/lib/docker/overlay2/l/4BAQ5SVXAVZWLTKZ6FH6VHJLWA:/var/lib/docker/overlay2/l/MUZSGTDT2THJSZEPFBG5NFWRGW:/var/lib/docker/overlay2/l/I6BCWJFX34IQ33OMCKNEHUUJU5:/var/lib/docker/overlay2/l/IRGYEAIEWEA4UJUYV3KEX3P4TI:/var/lib/docker/overlay2/l/J2PDWFCIYIFMH63PCXDJ6P2V7S:/var/lib/docker/overlay2/l/RC6FRWC3WRMRDRMCQM4L6R4VGA:/var/lib/docker/overlay2/l/HJM7E2PHDYPHGWF6RWP7R6OOZI:/var/lib/docker/overlay2/l/JI5RMXGTTBAM4NYEDR4FMNWV25:/var/lib/docker/overlay2/l/2TKWRPIAHOTDHLTGEYFRN4OUWL:/var/lib/docker/overlay2/l/6KCFDR62MDJOQ3ZA54IDNLUI7M:/var/lib/docker/overlay2/l/AN3SVYKAI6L4F54FKFSZMFDPUJ:/var/lib/docker/overlay2/l/YVJF7YEVLHXGC4L27UPEUK47HF:/var/lib/docker/overlay2/l/3NF7EYNTMPB7FFNI7POOBKXJPX:/var/lib/docker/overlay2/l/WAA6KYOATJLN6EP2PYYRQWEGOR:/var/lib/docker/overlay2/l/PHGIYF5LT5FKNUPFVSMEVHWNDU:/var/lib/docker/overlay2/l/KY5BSB7LSJPUNYBISCA4KYF7KS:/var/lib/docker/overlay2/l/HYDHRQJPMUKG4AXLIVBDPSUXJK:/var/lib/docker/overlay2/l/YI26DO7GTXPYQJSZ6BXHJUV5AR,upperdir=/var/lib/docker/overlay2/28b0f26ad2c4dd1eccd966d1dc59499be968205a00572715db840abbbcc2789d/diff,workdir=/var/lib/docker/overlay2/28b0f26ad2c4dd1eccd966d1dc59499be968205a00572715db840abbbcc2789d/work,xino=off
0 0
overlay /localdata/docker/overlay2/28b0f26ad2c4dd1eccd966d1dc59499be968205a00572715db840abbbcc2789d/merged
overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/SLHFVMXTCIQY5TYHXX3XY2QUTX:/var/lib/docker/overlay2/l/E4BBLB3NCC34KONYP23RP7VJ2X:/var/lib/docker/overlay2/l/SVYAOAODE6MEJVAEK2OO4SFF2E:/var/lib/docker/overlay2/l/A7TNW2Z7KHULNAU4BDB4GYRJ4A:/var/lib/docker/overlay2/l/SJ637O5BUZNAJSXNT27BO3CQGO:/var/lib/docker/overlay2/l/PYVRDDP7ABBFVD3PY2QGTJFQEM:/var/lib/docker/overlay2/l/OGFQOLFLSU27UIRKWXRZQ43OAP:/var/lib/docker/overlay2/l/KCOSL4MV3WQXKQZIZTQNTY4QEU:/var/lib/docker/overlay2/l/YTEXTILIATA6VFSWCQBWUHDY2D:/var/lib/docker/overlay2/l/4BAQ5SVXAVZWLTKZ6FH6VHJLWA:/var/lib/docker/overlay2/l/MUZSGTDT2THJSZEPFBG5NFWRGW:/var/lib/docker/overlay2/l/I6BCWJFX34IQ33OMCKNEHUUJU5:/var/lib/docker/overlay2/l/IRGYEAIEWEA4UJUYV3KEX3P4TI:/var/lib/docker/overlay2/l/J2PDWFCIYIFMH63PCXDJ6P2V7S:/var/lib/docker/overlay2/l/RC6FRWC3WRMRDRMCQM4L6R4VGA:/var/lib/docker/overlay2/l/HJM7E2PHDYPHGWF6RWP7R6OOZI:/var/lib/docker/overlay2/l/JI5RMXGTTBAM4NYEDR4FMNWV25:/var/lib/docker/overlay2/l/2TKWRPIAHOTDHLTGEYFRN4OUWL:/var/lib/docker/overlay2/l/6KCFDR62MDJOQ3ZA54IDNLUI7M:/var/lib/docker/overlay2/l/AN3SVYKAI6L4F54FKFSZMFDPUJ:/var/lib/docker/overlay2/l/YVJF7YEVLHXGC4L27UPEUK47HF:/var/lib/docker/overlay2/l/3NF7EYNTMPB7FFNI7POOBKXJPX:/var/lib/docker/overlay2/l/WAA6KYOATJLN6EP2PYYRQWEGOR:/var/lib/docker/overlay2/l/PHGIYF5LT5FKNUPFVSMEVHWNDU:/var/lib/docker/overlay2/l/KY5BSB7LSJPUNYBISCA4KYF7KS:/var/lib/docker/overlay2/l/HYDHRQJPMUKG4AXLIVBDPSUXJK:/var/lib/docker/overlay2/l/YI26DO7GTXPYQJSZ6BXHJUV5AR,upperdir=/var/lib/docker/overlay2/28b0f26ad2c4dd1eccd966d1dc59499be968205a00572715db840abbbcc2789d/diff,workdir=/var/lib/docker/overlay2/28b0f26ad2c4dd1eccd966d1dc59499be968205a00572715db840abbbcc2789d/work,xino=off
0 0
nsfs /run/docker/netns/2d3a60de14ae nsfs rw 0 0
tmpfs /run/user/2266 tmpfs
rw,nosuid,nodev,relatime,size=79123248k,mode=700,uid=2266,gid=501 0 0
tmpfs /run/user/2042 tmpfs
rw,nosuid,nodev,relatime,size=79123248k,mode=700,uid=2042,gid=501 0 0

cat /proc/partitions
major minor  #blocks  name

   8        0  937692504 sda
   8       16  937692504 sdb
   8       32  937692504 sdc
   8       48  937692504 sdd
   8       64  234431064 sde
   8       65     999424 sde1
   8       66          1 sde2
   8       69  233428992 sde5
   8       80  234431064 sdf
   8       81     999424 sdf1
   8       82          1 sdf2
   8       85  233428992 sdf5
   9      126 5626152960 md126
   9        0     998848 md0
   9        1  233297920 md1
   8       96  937692504 sdg
   8      112  937692504 sdh
   8      128  937692504 sdi
   8      144  937692504 sdj
 253        0  104857600 dm-0
 253        1   31248384 dm-1
 253        2   52428800 dm-2

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath]
[raid0] [raid10]
md1 : active raid1 sdf5[1] sde5[0]
      233297920 blocks super 1.2 [2/2] [UU]
      bitmap: 1/2 pages [4KB], 65536KB chunk

md0 : active raid1 sdf1[1] sde1[0]
      998848 blocks super 1.2 [2/2] [UU]

md126 : active raid6 sdj[6] sdg[3] sdi[2] sdh[7] sdc[4] sdd[0] sda[5] sdb[1]
      5626152960 blocks level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 0/7 pages [0KB], 65536KB chunk

unused devices: <none>

All disks are SATA Micron 5200 SSDs
No Battery Backed Write Cache

Workload:
Mixture of compiles and, later on, accelerator device I/O through
multiple Docker containers. It usually takes days before the problem
is triggered.
I'm afraid I don't have iostat/vmstat output recorded from the time of
the problem.

If there's key information missing that I can supply let me know and
I'll try and get it to you.

--
Sitsofe | http://sucs.org/~sits/


* Re: Tasks blocking forever with XFS stack traces
  2019-11-05  9:32   ` Sitsofe Wheeler
@ 2019-11-05 10:36     ` Carlos Maiolino
  2019-11-05 11:58       ` Carlos Maiolino
                         ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Carlos Maiolino @ 2019-11-05 10:36 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: linux-xfs


Hi Sitsofe.

...
> <snip>
> > >
> > > Other directories on the same filesystem seem fine as do other XFS
> > > filesystems on the same system.
> >
> > Given that you mention other directories seem to work, and given the first
> > stack trace you posted, it sounds like you've been keeping a single AG so
> > busy that it is almost unusable. But you didn't provide enough information
> > for us to really make any progress here, and to be honest I'm more inclined
> > to point the finger at your MD device.
> 
> Let's see if we can pinpoint something :-)
> 
> > Can you describe your MD device? RAID array? What kind? How many disks?
> 
> RAID6 8 disks.

> 
> > What's your filesystem configuration? (xfs_info <mount point>)
> 
> meta-data=/dev/md126             isize=512    agcount=32, agsize=43954432 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=1        finobt=1 spinodes=0 rmapbt=0
>          =                       reflink=0
> data     =                       bsize=4096   blocks=1406538240, imaxpct=5
>          =                       sunit=128    swidth=768 blks

> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
						^^^^^^  This should have been
							configured to 8 blocks, not 1

> Yes there's more. See a slightly elided dmesg from a longer run on
> https://sucs.org/~sits/test/kern-20191024.log.gz .

At a first quick look, it looks like you are having lots of IO contention in the
log, and this is slowing down the rest of the filesystem. What caught my
attention at first was the misconfigured log stripe unit for the filesystem, and
I wonder if this isn't responsible for the amount of IO contention you are
having in the log. It might well be generating lots of RMW cycles while writing
to the log, which would create the IO contention and slow down the rest of the
filesystem. I'll try to take a more careful look later on.
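
If you want to check for RMW while the workload is running, watching iostat on
the array and its member disks should show it (just a suggestion, adjust the
device names to yours): sustained reads on the members during a write-mostly
workload are a good hint of RMW.

  # -x gives extended stats; watch r/s on the member disks while writing
  iostat -x 1 md126 sda sdb sdc sdd sdg sdh sdi sdj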

I can't say whether there is any bug related to the issue, first because I
honestly don't remember, and second because you are using an old distro kernel
and I have no idea which bug fixes have been backported to it. Maybe
somebody else can remember a bug that might be related, but the number of
threads you have waiting for log IO, plus that misconfigured log stripe unit,
smells like smoke to me.

I'll let you know if I can identify anything else later.

Cheers.

-- 
Carlos



* Re: Tasks blocking forever with XFS stack traces
  2019-11-05 10:36     ` Carlos Maiolino
@ 2019-11-05 11:58       ` Carlos Maiolino
  2019-11-05 14:12       ` Sitsofe Wheeler
  2019-11-13 10:04       ` Sitsofe Wheeler
  2 siblings, 0 replies; 10+ messages in thread
From: Carlos Maiolino @ 2019-11-05 11:58 UTC (permalink / raw)
  To: Sitsofe Wheeler, linux-xfs

Just to make my previous point clear:


> > > What's your filesystem configuration? (xfs_info <mount point>)
> > 
> > meta-data=/dev/md126             isize=512    agcount=32, agsize=43954432 blks
> >          =                       sectsz=4096  attr=2, projid32bit=1
> >          =                       crc=1        finobt=1 spinodes=0 rmapbt=0
> >          =                       reflink=0
> > data     =                       bsize=4096   blocks=1406538240, imaxpct=5
> >          =                       sunit=128    swidth=768 blks
> 
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> > log      =internal               bsize=4096   blocks=521728, version=2
> >          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> 						^^^^^^  This should have been
> 							configured to 8 blocks, not 1
> 

8 blocks here assuming you used the default configuration, i.e. XFS defaults the
log stripe unit to 32KiB when the device stripe unit is larger than 256KiB.
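
For reference, if the filesystem is ever recreated, the log stripe unit can
also be set explicitly at mkfs time. A sketch (untested here, device name taken
from your report):

  # force a 32KiB log stripe unit (= 8 x 4096-byte blocks)
  mkfs.xfs -l su=32k /dev/md126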

-- 
Carlos



* Re: Tasks blocking forever with XFS stack traces
  2019-11-05 10:36     ` Carlos Maiolino
  2019-11-05 11:58       ` Carlos Maiolino
@ 2019-11-05 14:12       ` Sitsofe Wheeler
  2019-11-05 16:09         ` Carlos Maiolino
  2019-11-07  0:12         ` Chris Murphy
  2019-11-13 10:04       ` Sitsofe Wheeler
  2 siblings, 2 replies; 10+ messages in thread
From: Sitsofe Wheeler @ 2019-11-05 14:12 UTC (permalink / raw)
  To: Carlos Maiolino; +Cc: linux-xfs

On Tue, 5 Nov 2019 at 10:37, Carlos Maiolino <cmaiolino@redhat.com> wrote:
>
>
> Hi Sitsofe.
>
> ...
> > <snip>
> > > >
> > > > Other directories on the same filesystem seem fine as do other XFS
> > > > filesystems on the same system.
> > >
> > > Given that you mention other directories seem to work, and given the first
> > > stack trace you posted, it sounds like you've been keeping a single AG so
> > > busy that it is almost unusable. But you didn't provide enough information
> > > for us to really make any progress here, and to be honest I'm more
> > > inclined to point the finger at your MD device.
> >
> > Let's see if we can pinpoint something :-)
> >
> > > Can you describe your MD device? RAID array? What kind? How many disks?
> >
> > RAID6 8 disks.
>
> >
> > > What's your filesystem configuration? (xfs_info <mount point>)
> >
> > meta-data=/dev/md126             isize=512    agcount=32, agsize=43954432 blks
> >          =                       sectsz=4096  attr=2, projid32bit=1
> >          =                       crc=1        finobt=1 spinodes=0 rmapbt=0
> >          =                       reflink=0
> > data     =                       bsize=4096   blocks=1406538240, imaxpct=5
> >          =                       sunit=128    swidth=768 blks
>
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> > log      =internal               bsize=4096   blocks=521728, version=2
> >          =                       sectsz=4096  sunit=1 blks, lazy-count=1
>                                                 ^^^^^^  This should have been
>                                                         configured to 8 blocks, not 1
>
> > Yes there's more. See a slightly elided dmesg from a longer run on
> > https://sucs.org/~sits/test/kern-20191024.log.gz .
>
> At a first quick look, it looks like you are having lots of IO contention in the
> log, and this is slowing down the rest of the filesystem. What caught my

Should it become so slow that a task freezes entirely and never
finishes? Once the problem hits, it's not like anything makes any more
progress on those directories, nor was there very much generating dirty
data.

If this were to happen again, though, what extra information would be
helpful (I'm guessing things like /proc/meminfo output)?

> attention at first was the misconfigured log stripe unit for the filesystem,
> and I wonder if this isn't responsible for the amount of IO contention you are
> having in the log. It might well be generating lots of RMW cycles while
> writing to the log, which would create the IO contention and slow down the
> rest of the filesystem. I'll try to take a more careful look later on.

My understanding is that the md "chunk size" is 64k, so basically
you're saying the sectsz should have been manually set as big as
possible at mkfs time? I never realised this didn't happen by default
(I see the sunit seems to be correct given the block size of 4096, but
I'm unsure about swidth)...

> I can't say whether there is any bug related to the issue, first because I
> honestly don't remember, and second because you are using an old distro kernel
> and I have no idea which bug fixes have been backported to it. Maybe

Very true.

> somebody else can remember a bug that might be related, but the number of
> threads you have waiting for log IO, plus that misconfigured log stripe unit,
> smells like smoke to me.
>
> I'll let you know if I can identify anything else later.

Thanks.

--
Sitsofe | http://sucs.org/~sits/


* Re: Tasks blocking forever with XFS stack traces
  2019-11-05 14:12       ` Sitsofe Wheeler
@ 2019-11-05 16:09         ` Carlos Maiolino
  2019-11-07  0:12         ` Chris Murphy
  1 sibling, 0 replies; 10+ messages in thread
From: Carlos Maiolino @ 2019-11-05 16:09 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: linux-xfs

Hello.

> > > <snip>
> > > > >
> > > > > Other directories on the same filesystem seem fine as do other XFS
> > > > > filesystems on the same system.
> > > >
> > > > Given that you mention other directories seem to work, and given the
> > > > first stack trace you posted, it sounds like you've been keeping a
> > > > single AG so busy that it is almost unusable. But you didn't provide
> > > > enough information for us to really make any progress here, and to be
> > > > honest I'm more inclined to point the finger at your MD device.
> > >
> > > Let's see if we can pinpoint something :-)
> > >
> > > > Can you describe your MD device? RAID array? What kind? How many disks?
> > >
> > > RAID6 8 disks.
> >
> > >
> > > > What's your filesystem configuration? (xfs_info <mount point>)
> > >
> > > meta-data=/dev/md126             isize=512    agcount=32, agsize=43954432 blks
> > >          =                       sectsz=4096  attr=2, projid32bit=1
> > >          =                       crc=1        finobt=1 spinodes=0 rmapbt=0
> > >          =                       reflink=0
> > > data     =                       bsize=4096   blocks=1406538240, imaxpct=5
> > >          =                       sunit=128    swidth=768 blks
> >
> > > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> > > log      =internal               bsize=4096   blocks=521728, version=2
> > >          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> >                                                 ^^^^^^  This should have been
> >                                                         configured to 8 blocks, not 1
> >
> > > Yes there's more. See a slightly elided dmesg from a longer run on
> > > https://sucs.org/~sits/test/kern-20191024.log.gz .
> >
> > At a first quick look, it looks like you are having lots of IO contention in the
> > log, and this is slowing down the rest of the filesystem. What caught my
> 
> Should it become so slow that a task freezes entirely and never
> finishes? Once the problem hits, it's not like anything makes any more
> progress on those directories, nor was there very much generating dirty
> data.
> 

I am not sure how long you waited before assuming it 'never finishes' :),
but...
I've seen systems starve to death due to large amounts of RMW being generated,
mainly when disks are slow or problematic. Since you are using SSDs and not
spindles, I wonder if the SSDs' write cycles might play a role here, but that's
just a question I'm asking myself; I am not sure whether it can make things
even worse.

In the file you sent above, you basically have:

the xfs-sync worker thread, which forced a log flush and is now sleeping,
waiting for the flush to complete:

Oct 24 16:27:11 <host> kernel: [115151.674164] Call Trace:
Oct 24 16:27:11 <host> kernel: [115151.674170]  __schedule+0x24e/0x880
Oct 24 16:27:11 <host> kernel: [115151.674175]  schedule+0x2c/0x80
Oct 24 16:27:11 <host> kernel: [115151.674178]  schedule_timeout+0x1cf/0x350
Oct 24 16:27:11 <host> kernel: [115151.674184]  ? sched_clock+0x9/0x10
Oct 24 16:27:11 <host> kernel: [115151.674187]  ? sched_clock+0x9/0x10
Oct 24 16:27:11 <host> kernel: [115151.674191]  ? sched_clock_cpu+0x11/0xb0
Oct 24 16:27:11 <host> kernel: [115151.674196]  wait_for_completion+0xba/0x140
Oct 24 16:27:11 <host> kernel: [115151.674199]  ? wake_up_q+0x80/0x80
Oct 24 16:27:11 <host> kernel: [115151.674203]  __flush_work+0x15b/0x210
Oct 24 16:27:11 <host> kernel: [115151.674206]  ? worker_detach_from_pool+0xa0/0xa0
Oct 24 16:27:11 <host> kernel: [115151.674210]  flush_work+0x10/0x20
Oct 24 16:27:11 <host> kernel: [115151.674250]  xlog_cil_force_lsn+0x7b/0x210 [xfs]
Oct 24 16:27:11 <host> kernel: [115151.674253]  ? __switch_to_asm+0x41/0x70
Oct 24 16:27:11 <host> kernel: [115151.674256]  ? __switch_to_asm+0x35/0x70
Oct 24 16:27:11 <host> kernel: [115151.674259]  ? __switch_to_asm+0x41/0x70
Oct 24 16:27:11 <host> kernel: [115151.674262]  ? __switch_to_asm+0x35/0x70
Oct 24 16:27:11 <host> kernel: [115151.674264]  ? __switch_to_asm+0x41/0x70
Oct 24 16:27:11 <host> kernel: [115151.674267]  ? __switch_to_asm+0x35/0x70
Oct 24 16:27:11 <host> kernel: [115151.674306]  _xfs_log_force+0x8f/0x2a0 [xfs]


This wakes up, and then waits for, xlog_cil_push on the following thread:

Oct 24 16:29:12 <host> kernel: [115272.503324] kworker/52:2    D    0 56479      2 0x80000080
Oct 24 16:29:12 <host> kernel: [115272.503381] Workqueue: xfs-cil/md126 xlog_cil_push_work [xfs]
Oct 24 16:29:12 <host> kernel: [115272.503383] Call Trace:
Oct 24 16:29:12 <host> kernel: [115272.503389]  __schedule+0x24e/0x880
Oct 24 16:29:12 <host> kernel: [115272.503394]  schedule+0x2c/0x80
Oct 24 16:29:12 <host> kernel: [115272.503444]  xlog_state_get_iclog_space+0x105/0x2d0 [xfs]
Oct 24 16:29:12 <host> kernel: [115272.503449]  ? wake_up_q+0x80/0x80
Oct 24 16:29:12 <host> kernel: [115272.503492]  xlog_write+0x163/0x6e0 [xfs]
Oct 24 16:29:12 <host> kernel: [115272.503536]  xlog_cil_push+0x2a7/0x410 [xfs]
Oct 24 16:29:12 <host> kernel: [115272.503577]  xlog_cil_push_work+0x15/0x20 [xfs]
Oct 24 16:29:12 <host> kernel: [115272.503581]  process_one_work+0x1de/0x420
Oct 24 16:29:12 <host> kernel: [115272.503584]  worker_thread+0x32/0x410
Oct 24 16:29:12 <host> kernel: [115272.503587]  kthread+0x121/0x140
Oct 24 16:29:12 <host> kernel: [115272.503590]  ? process_one_work+0x420/0x420
Oct 24 16:29:12 <host> kernel: [115272.503593]  ? kthread_create_worker_on_cpu+0x70/0x70
Oct 24 16:29:12 <host> kernel: [115272.503596]  ret_from_fork+0x35/0x40


xlog_state_get_iclog_space() is waiting at:

xlog_wait(&log->l_flush_wait, &log->l_icloglock);

This should be awakened by xlog_state_do_callback(), which is called by
xlog_state_done_syncing(), which, in short, IIRC, is reached from the bio
end io callbacks.

I'm saying this based on a quick look at the current code, though. So, I think
it comes down to your system waiting for journal IO completion to make progress
(or maybe your storage stack missed an IO and XFS is stuck waiting for a
completion which will never come, or maybe it's deadlocked somewhere I am not
seeing).
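
One thing worth checking the next time it hangs is whether the block layer
still thinks IO is outstanding on the array. A suggestion from my side (the
per-device inflight counters should exist on your kernels, but verify):

  # first column: reads in flight, second column: writes in flight
  cat /sys/block/md126/inflight
  grep -H "" /sys/block/sd?/inflight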

I may very well be wrong though; it's been a while since I last worked on this
part of the code, so I'll wait for somebody else to give a different POV rather
than risk pointing you in the wrong direction.

I'll try to keep looking at it when I get some extra time, but in the end you
will probably need to update your kernel to a recent version and try to
reproduce the problem again; chasing bugs on old kernels is not really in the
scope of this list. Still, I'll give it some more thought (if nobody spots
what's wrong before then :)


> If this were to happen again, though, what extra information would be
> helpful (I'm guessing things like /proc/meminfo output)?
>

> > attention at first was the misconfigured log stripe unit for the filesystem,
> > and I wonder if this isn't responsible for the amount of IO contention you
> > are having in the log. It might well be generating lots of RMW cycles while
> > writing to the log, which would create the IO contention and slow down the
> > rest of the filesystem. I'll try to take a more careful look later on.
> 
> My understanding is that the md "chunk size" is 64k, so basically
> you're saying the sectsz should have been manually set as big as
> possible at mkfs time? I never realised this didn't happen by default
> (I see the sunit seems to be correct given the block size of 4096, but
> I'm unsure about swidth)...

Your RAID chunk is 512k according to the information you provided previously:

md126 : active raid6 sdj[6] sdg[3] sdi[2] sdh[7] sdc[4] sdd[0] sda[5] sdb[1]
      5626152960 blocks level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 0/7 pages [0KB], 65536KB chunk

And I believe you used default mkfs options when you created the filesystem, so
xfs properly identified and automatically configured it for you:

sunit=128    swidth=768 blks

(128 * 4096) = 512KiB

And the swidth matches exactly the number of data disks I'd expect from your
array, i.e. 6 data disks (+ 2 parity):
(768 / 128) = 6

What I meant is that the log sunit is weird in your configuration; see the
'sunit' in the log section. The XFS log stripe unit can range from 32KiB to
256KiB, and mkfs configures it to match the data sunit UNLESS the data sunit is
bigger than 256KiB, in which case mkfs sets it to 32KiB by default. So in your
case, with the default mkfs configuration, I was expecting to see a log
sunit=8:
(8 * 4096) = 32KiB

Maybe your xfsprogs version had a bug, or maybe somebody set it manually.
I can't really say.
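
You can also cross-check what MD and XFS each believe (example commands; the
exact mdadm output format varies between versions):

  # MD's idea of the chunk size (should report 512K here)
  mdadm --detail /dev/md126 | grep -i chunk
  # XFS's idea, in 4096-byte blocks: data sunit=128, log sunit currently 1
  xfs_info /localdata | grep sunit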

-- 
Carlos



* Re: Tasks blocking forever with XFS stack traces
  2019-11-05 14:12       ` Sitsofe Wheeler
  2019-11-05 16:09         ` Carlos Maiolino
@ 2019-11-07  0:12         ` Chris Murphy
  1 sibling, 0 replies; 10+ messages in thread
From: Chris Murphy @ 2019-11-07  0:12 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: Carlos Maiolino, xfs list

On Tue, Nov 5, 2019 at 2:13 PM Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>
> My understanding is that the md "chunk size" is 64k, so basically
> you're saying the sectsz should have been manually set as big as
> possible at mkfs time? I never realised this didn't happen by default
> (I see the sunit seems to be correct given the block size of 4096, but
> I'm unsure about swidth)...

Check with
# mdadm -E /dev/sdXY

Chunk size depends on the version of mdadm at the time of creation.
512KiB has been the default for a while now, leading to a rather large
full stripe write size, and a lot of RMW if a significant number of
writes aren't full stripe writes.
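
For reference, my arithmetic based on the mdstat posted earlier: RAID6 over 8
disks leaves 6 data disks, so the full stripe write size here is

  6 * 512KiB = 3MiB

and any write smaller than (or not aligned to) a full stripe can trigger a
RMW cycle.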



-- 
Chris Murphy


* Re: Tasks blocking forever with XFS stack traces
  2019-11-05 10:36     ` Carlos Maiolino
  2019-11-05 11:58       ` Carlos Maiolino
  2019-11-05 14:12       ` Sitsofe Wheeler
@ 2019-11-13 10:04       ` Sitsofe Wheeler
  2020-12-23  8:45         ` Sitsofe Wheeler
  2 siblings, 1 reply; 10+ messages in thread
From: Sitsofe Wheeler @ 2019-11-13 10:04 UTC (permalink / raw)
  To: Carlos Maiolino; +Cc: linux-xfs

On Tue, 5 Nov 2019 at 10:37, Carlos Maiolino <cmaiolino@redhat.com> wrote:
>
> I can't say whether there is any bug related to the issue, first because I
> honestly don't remember, and second because you are using an old distro kernel
> and I have no idea which bug fixes have been backported to it. Maybe
> somebody else can remember a bug that might be related, but the number of
> threads you have waiting for log IO, plus that misconfigured log stripe unit,
> smells like smoke to me.
>
> I'll let you know if I can identify anything else later.

So just to let anyone who might be following this know, going to a 5.0
kernel didn't solve the issue:

Nov 12 16:45:02 <host> kernel: [27678.931551] INFO: task
kworker/50:0:20430 blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.931613]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.931667] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.931723] kworker/50:0    D    0
20430      2 0x80000080
Nov 12 16:45:02 <host> kernel: [27678.931801] Workqueue:
xfs-sync/md126 xfs_log_worker [xfs]
Nov 12 16:45:02 <host> kernel: [27678.931804] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.931814]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.931819]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.931823]  schedule_timeout+0x1db/0x360
Nov 12 16:45:02 <host> kernel: [27678.931829]  ? ttwu_do_activate+0x77/0x80
Nov 12 16:45:02 <host> kernel: [27678.931833]  wait_for_completion+0xba/0x140
Nov 12 16:45:02 <host> kernel: [27678.931837]  ? wake_up_q+0x80/0x80
Nov 12 16:45:02 <host> kernel: [27678.931843]  __flush_work+0x15c/0x210
Nov 12 16:45:02 <host> kernel: [27678.931847]  ?
worker_detach_from_pool+0xb0/0xb0
Nov 12 16:45:02 <host> kernel: [27678.931850]  flush_work+0x10/0x20
Nov 12 16:45:02 <host> kernel: [27678.931915]
xlog_cil_force_lsn+0x7b/0x210 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.931920]  ? __switch_to_asm+0x41/0x70
Nov 12 16:45:02 <host> kernel: [27678.931924]  ? __switch_to_asm+0x35/0x70
Nov 12 16:45:02 <host> kernel: [27678.931928]  ? __switch_to_asm+0x41/0x70
Nov 12 16:45:02 <host> kernel: [27678.931931]  ? __switch_to_asm+0x35/0x70
Nov 12 16:45:02 <host> kernel: [27678.931935]  ? __switch_to_asm+0x41/0x70
Nov 12 16:45:02 <host> kernel: [27678.931938]  ? __switch_to_asm+0x35/0x70
Nov 12 16:45:02 <host> kernel: [27678.931992]  ? xfs_log_worker+0x34/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932043]  xfs_log_force+0x95/0x2e0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932047]  ? __switch_to+0x96/0x4e0
Nov 12 16:45:02 <host> kernel: [27678.932051]  ? __switch_to_asm+0x35/0x70
Nov 12 16:45:02 <host> kernel: [27678.932054]  ? __switch_to_asm+0x41/0x70
Nov 12 16:45:02 <host> kernel: [27678.932058]  ? __switch_to_asm+0x35/0x70
Nov 12 16:45:02 <host> kernel: [27678.932107]  xfs_log_worker+0x34/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932111]  process_one_work+0x1fd/0x400
Nov 12 16:45:02 <host> kernel: [27678.932114]  worker_thread+0x34/0x410
Nov 12 16:45:02 <host> kernel: [27678.932120]  kthread+0x121/0x140
Nov 12 16:45:02 <host> kernel: [27678.932123]  ? process_one_work+0x400/0x400
Nov 12 16:45:02 <host> kernel: [27678.932127]  ? kthread_park+0xb0/0xb0
Nov 12 16:45:02 <host> kernel: [27678.932132]  ret_from_fork+0x35/0x40
Nov 12 16:45:02 <host> kernel: [27678.932146] INFO: task
kworker/u161:0:46903 blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.932200]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.932253] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.932309] kworker/u161:0  D    0
46903      2 0x80000080
Nov 12 16:45:02 <host> kernel: [27678.932316] Workqueue: writeback
wb_workfn (flush-9:126)
Nov 12 16:45:02 <host> kernel: [27678.932319] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.932323]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.932373]  ? xfs_map_blocks+0xab/0x450 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932376]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.932380]  rwsem_down_read_failed+0xe8/0x180
Nov 12 16:45:02 <host> kernel: [27678.932386]
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.932390]  ?
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.932393]  down_read+0x20/0x40
Nov 12 16:45:02 <host> kernel: [27678.932445]  xfs_ilock+0xd5/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932492]  xfs_map_blocks+0xab/0x450 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932497]  ? wait_woken+0x80/0x80
Nov 12 16:45:02 <host> kernel: [27678.932503]  ? blk_queue_split+0x10c/0x640
Nov 12 16:45:02 <host> kernel: [27678.932509]  ? kmem_cache_alloc+0x15f/0x1c0
Nov 12 16:45:02 <host> kernel: [27678.932561]  ? kmem_zone_alloc+0x6c/0xf0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932607]
xfs_do_writepage+0x110/0x410 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932614]  write_cache_pages+0x1bc/0x480
Nov 12 16:45:02 <host> kernel: [27678.932658]  ?
xfs_vm_writepages+0xa0/0xa0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932662]  ? submit_bio+0x73/0x140
Nov 12 16:45:02 <host> kernel: [27678.932703]  ?
xfs_setfilesize_trans_alloc.isra.16+0x41/0x90 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932745]  xfs_vm_writepages+0x6b/0xa0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.932750]  do_writepages+0x4b/0xe0
Nov 12 16:45:02 <host> kernel: [27678.932754]
__writeback_single_inode+0x40/0x330
Nov 12 16:45:02 <host> kernel: [27678.932756]  ?
__writeback_single_inode+0x40/0x330
Nov 12 16:45:02 <host> kernel: [27678.932759]  writeback_sb_inodes+0x1e6/0x510
Nov 12 16:45:02 <host> kernel: [27678.932763]  __writeback_inodes_wb+0x67/0xb0
Nov 12 16:45:02 <host> kernel: [27678.932766]  wb_writeback+0x265/0x2f0
Nov 12 16:45:02 <host> kernel: [27678.932770]  ? strp_read_sock+0x70/0xa0
Nov 12 16:45:02 <host> kernel: [27678.932772]  wb_workfn+0x180/0x400
Nov 12 16:45:02 <host> kernel: [27678.932775]  ? wb_workfn+0x180/0x400
Nov 12 16:45:02 <host> kernel: [27678.932779]  process_one_work+0x1fd/0x400
Nov 12 16:45:02 <host> kernel: [27678.932783]  worker_thread+0x34/0x410
Nov 12 16:45:02 <host> kernel: [27678.932787]  kthread+0x121/0x140
Nov 12 16:45:02 <host> kernel: [27678.932790]  ? process_one_work+0x400/0x400
Nov 12 16:45:02 <host> kernel: [27678.932794]  ? kthread_park+0xb0/0xb0
Nov 12 16:45:02 <host> kernel: [27678.932799]  ret_from_fork+0x35/0x40
Nov 12 16:45:02 <host> kernel: [27678.932843] INFO: task
kworker/58:0:48437 blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.932895]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.932948] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.933002] kworker/58:0    D    0
48437      2 0x80000080
Nov 12 16:45:02 <host> kernel: [27678.933059] Workqueue: xfs-cil/md126
xlog_cil_push_work [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933061] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.933065]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.933117]  ? xlog_bdstrat+0x37/0x70 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933120]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.933170]
xlog_state_get_iclog_space+0x105/0x2d0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933174]  ? wake_up_q+0x80/0x80
Nov 12 16:45:02 <host> kernel: [27678.933224]  xlog_write+0x163/0x6e0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933273]  xlog_cil_push+0x2a7/0x400 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933322]
xlog_cil_push_work+0x15/0x20 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933326]  process_one_work+0x1fd/0x400
Nov 12 16:45:02 <host> kernel: [27678.933329]  worker_thread+0x34/0x410
Nov 12 16:45:02 <host> kernel: [27678.933334]  kthread+0x121/0x140
Nov 12 16:45:02 <host> kernel: [27678.933337]  ? process_one_work+0x400/0x400
Nov 12 16:45:02 <host> kernel: [27678.933341]  ? kthread_park+0xb0/0xb0
Nov 12 16:45:02 <host> kernel: [27678.933345]  ret_from_fork+0x35/0x40
Nov 12 16:45:02 <host> kernel: [27678.933354] INFO: task ninja:53333
blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.933401]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.933453] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.933508] ninja           D    0
53333  53327 0x000003a0
Nov 12 16:45:02 <host> kernel: [27678.933511] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.933515]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.933565]  ?
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933568]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.933572]  rwsem_down_read_failed+0xe8/0x180
Nov 12 16:45:02 <host> kernel: [27678.933577]
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.933580]  ?
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.933631]  ? xfs_trans_roll+0xe0/0xe0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933635]  down_read+0x20/0x40
Nov 12 16:45:02 <host> kernel: [27678.933684]  xfs_ilock+0xd5/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933730]
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933762]  xfs_attr_get+0xbe/0x120 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933814]  xfs_xattr_get+0x4b/0x70 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.933818]  __vfs_getxattr+0x59/0x80
Nov 12 16:45:02 <host> kernel: [27678.933823]  get_vfs_caps_from_disk+0x6a/0x170
Nov 12 16:45:02 <host> kernel: [27678.933827]  ?
legitimize_path.isra.31+0x2e/0x60
Nov 12 16:45:02 <host> kernel: [27678.933832]  audit_copy_inode+0x6d/0xb0
Nov 12 16:45:02 <host> kernel: [27678.933837]  __audit_inode+0x17b/0x2f0
Nov 12 16:45:02 <host> kernel: [27678.933840]  filename_lookup+0x130/0x190
Nov 12 16:45:02 <host> kernel: [27678.933846]  ?
iomap_file_buffered_write+0x6e/0xa0
Nov 12 16:45:02 <host> kernel: [27678.933850]  ? __check_object_size+0xdb/0x1b0
Nov 12 16:45:02 <host> kernel: [27678.933854]  ? path_get+0x27/0x30
Nov 12 16:45:02 <host> kernel: [27678.933858]  ? __audit_getname+0x97/0xb0
Nov 12 16:45:02 <host> kernel: [27678.933861]  user_path_at_empty+0x36/0x40
Nov 12 16:45:02 <host> kernel: [27678.933864]  ? user_path_at_empty+0x36/0x40
Nov 12 16:45:02 <host> kernel: [27678.933867]  vfs_statx+0x76/0xe0
Nov 12 16:45:02 <host> kernel: [27678.933871]  __do_sys_newstat+0x3d/0x70
Nov 12 16:45:02 <host> kernel: [27678.933876]  ? syscall_trace_enter+0x1da/0x2d0
Nov 12 16:45:02 <host> kernel: [27678.933880]  __x64_sys_newstat+0x16/0x20
Nov 12 16:45:02 <host> kernel: [27678.933884]  do_syscall_64+0x5a/0x120
Nov 12 16:45:02 <host> kernel: [27678.933889]
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 12 16:45:02 <host> kernel: [27678.933892] RIP: 0033:0x7f0a393fc775
Nov 12 16:45:02 <host> kernel: [27678.933900] Code: Bad RIP value.
Nov 12 16:45:02 <host> kernel: [27678.933902] RSP:
002b:00007ffdaaa336b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000004
Nov 12 16:45:02 <host> kernel: [27678.933905] RAX: ffffffffffffffda
RBX: 00007ffdaaa33740 RCX: 00007f0a393fc775
Nov 12 16:45:02 <host> kernel: [27678.933906] RDX: 00007ffdaaa33740
RSI: 00007ffdaaa33740 RDI: 0000556f5acdf100
Nov 12 16:45:02 <host> kernel: [27678.933908] RBP: 00007ffdaaa336d0
R08: 00007ffdaaa33866 R09: 0000000000000036
Nov 12 16:45:02 <host> kernel: [27678.933909] R10: 00007ffdaaa33867
R11: 0000000000000246 R12: 00007ffdaaa33820
Nov 12 16:45:02 <host> kernel: [27678.933911] R13: 00007ffdaaa33840
R14: 0000000000000000 R15: 0000000057d60100
Nov 12 16:45:02 <host> kernel: [27678.933920] INFO: task c++:56846
blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.933967]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.934019] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.934074] c++             D    0
56846  56844 0x000003a0
Nov 12 16:45:02 <host> kernel: [27678.934077] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.934081]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.934084]  ? __switch_to+0x309/0x4e0
Nov 12 16:45:02 <host> kernel: [27678.934087]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.934091]  schedule_timeout+0x1db/0x360
Nov 12 16:45:02 <host> kernel: [27678.934093]  ? __schedule+0x2c8/0x870
Nov 12 16:45:02 <host> kernel: [27678.934097]  wait_for_completion+0xba/0x140
Nov 12 16:45:02 <host> kernel: [27678.934101]  ? wake_up_q+0x80/0x80
Nov 12 16:45:02 <host> kernel: [27678.934104]  __flush_work+0x15c/0x210
Nov 12 16:45:02 <host> kernel: [27678.934107]  ?
worker_detach_from_pool+0xb0/0xb0
Nov 12 16:45:02 <host> kernel: [27678.934111]  flush_work+0x10/0x20
Nov 12 16:45:02 <host> kernel: [27678.934162]
xlog_cil_force_lsn+0x7b/0x210 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934211]  ? xfs_buf_lock+0xe9/0xf0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934261]  xfs_log_force+0x95/0x2e0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934309]  ?
xfs_buf_find.isra.29+0x1fa/0x600 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934355]  xfs_buf_lock+0xe9/0xf0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934399]
xfs_buf_find.isra.29+0x1fa/0x600 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934442]  xfs_buf_get_map+0x43/0x2b0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934496]
xfs_trans_get_buf_map+0xec/0x170 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934534]  xfs_da_get_buf+0xbd/0xf0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934574]
xfs_dir3_data_init+0x6e/0x210 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934613]
xfs_dir2_sf_to_block+0x12e/0x6e0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934649]  ?
xfs_dir2_sf_to_block+0x12e/0x6e0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934700]  ? kmem_zone_alloc+0x6c/0xf0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934750]  ? kmem_zone_alloc+0x6c/0xf0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934797]  ?
xlog_grant_head_check+0x54/0xf0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934838]
xfs_dir2_sf_addname+0xd9/0x6c0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934843]  ? __kmalloc+0x178/0x210
Nov 12 16:45:02 <host> kernel: [27678.934891]  ? kmem_alloc+0x6c/0xf0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934930]
xfs_dir_createname+0x182/0x1d0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.934979]  xfs_rename+0x771/0x8d0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.935029]  xfs_vn_rename+0xd3/0x140 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.935033]  vfs_rename+0x383/0x920
Nov 12 16:45:02 <host> kernel: [27678.935037]  do_renameat2+0x4ca/0x590
Nov 12 16:45:02 <host> kernel: [27678.935041]  __x64_sys_rename+0x20/0x30
Nov 12 16:45:02 <host> kernel: [27678.935045]  do_syscall_64+0x5a/0x120
Nov 12 16:45:02 <host> kernel: [27678.935050]
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 12 16:45:02 <host> kernel: [27678.935052] RIP: 0033:0x7f5dc9e25d37
Nov 12 16:45:02 <host> kernel: [27678.935057] Code: Bad RIP value.
Nov 12 16:45:02 <host> kernel: [27678.935059] RSP:
002b:00007ffdcc39b908 EFLAGS: 00000213 ORIG_RAX: 0000000000000052
Nov 12 16:45:02 <host> kernel: [27678.935061] RAX: ffffffffffffffda
RBX: 00007ffdcc39b934 RCX: 00007f5dc9e25d37
Nov 12 16:45:02 <host> kernel: [27678.935063] RDX: 000055aee60eb010
RSI: 000055aee60f02d0 RDI: 000055aee60f01c0
Nov 12 16:45:02 <host> kernel: [27678.935064] RBP: 00007ffdcc39b9d0
R08: 0000000000000000 R09: 000055aee61321c0
Nov 12 16:45:02 <host> kernel: [27678.935066] R10: 000055aee60eb010
R11: 0000000000000213 R12: 0000000000000004
Nov 12 16:45:02 <host> kernel: [27678.935068] R13: 0000000000006e90
R14: 000055aee60f00d0 R15: 0000000000006e90
Nov 12 16:45:02 <host> kernel: [27678.935071] INFO: task c++:56847
blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.935118]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.935170] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.935269] c++             D    0
56847  56845 0x000003a0
Nov 12 16:45:02 <host> kernel: [27678.935287] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.935293]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.935347]  ?
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.935356]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.935364]  rwsem_down_read_failed+0xe8/0x180
Nov 12 16:45:02 <host> kernel: [27678.935372]
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.935381]  ?
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.935436]  ? xfs_trans_roll+0xe0/0xe0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.935448]  down_read+0x20/0x40
Nov 12 16:45:02 <host> kernel: [27678.935501]  xfs_ilock+0xd5/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.935551]
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.935588]  xfs_attr_get+0xbe/0x120 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.935643]  xfs_xattr_get+0x4b/0x70 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.935652]  __vfs_getxattr+0x59/0x80
Nov 12 16:45:02 <host> kernel: [27678.935664]  get_vfs_caps_from_disk+0x6a/0x170
Nov 12 16:45:02 <host> kernel: [27678.935675]  audit_copy_inode+0x6d/0xb0
Nov 12 16:45:02 <host> kernel: [27678.935684]  __audit_inode+0x17b/0x2f0
Nov 12 16:45:02 <host> kernel: [27678.935692]  path_openat+0x38f/0x1700
Nov 12 16:45:02 <host> kernel: [27678.935702]  ? insert_pfn+0x152/0x240
Nov 12 16:45:02 <host> kernel: [27678.935709]  ? vmf_insert_pfn_prot+0x9b/0x120
Nov 12 16:45:02 <host> kernel: [27678.935716]  do_filp_open+0x9b/0x110
Nov 12 16:45:02 <host> kernel: [27678.935723]  ? __check_object_size+0xdb/0x1b0
Nov 12 16:45:02 <host> kernel: [27678.935734]  ? path_get+0x27/0x30
Nov 12 16:45:02 <host> kernel: [27678.935746]  ? __alloc_fd+0x46/0x170
Nov 12 16:45:02 <host> kernel: [27678.935756]  do_sys_open+0x1bb/0x2d0
Nov 12 16:45:02 <host> kernel: [27678.935764]  ? do_sys_open+0x1bb/0x2d0
Nov 12 16:45:02 <host> kernel: [27678.935773]  __x64_sys_openat+0x20/0x30
Nov 12 16:45:02 <host> kernel: [27678.935783]  do_syscall_64+0x5a/0x120
Nov 12 16:45:02 <host> kernel: [27678.935793]
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 12 16:45:02 <host> kernel: [27678.935801] RIP: 0033:0x7fe4f94a5c8e
Nov 12 16:45:02 <host> kernel: [27678.935813] Code: Bad RIP value.
Nov 12 16:45:02 <host> kernel: [27678.935818] RSP:
002b:00007ffc4c6e2550 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Nov 12 16:45:02 <host> kernel: [27678.935828] RAX: ffffffffffffffda
RBX: 000000000003a2f8 RCX: 00007fe4f94a5c8e
Nov 12 16:45:02 <host> kernel: [27678.935836] RDX: 00000000000000c2
RSI: 0000557e925fdc80 RDI: 00000000ffffff9c
Nov 12 16:45:02 <host> kernel: [27678.935841] RBP: 0000000000000000
R08: 00007ffc4c7250a0 R09: 00007ffc4c725080
Nov 12 16:45:02 <host> kernel: [27678.935847] R10: 0000000000000180
R11: 0000000000000246 R12: 0000557e925fdc80
Nov 12 16:45:02 <host> kernel: [27678.935854] R13: 0000557e925fdcda
R14: 00007fe4f9551c80 R15: 8421084210842109
Nov 12 16:45:02 <host> kernel: [27678.935863] INFO: task c++:56849
blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.935917]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.935971] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.936028] c++             D    0
56849  56848 0x000003a0
Nov 12 16:45:02 <host> kernel: [27678.936031] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.936035]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.936084]  ?
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936087]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.936091]  rwsem_down_read_failed+0xe8/0x180
Nov 12 16:45:02 <host> kernel: [27678.936095]
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.936099]  ?
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.936149]  ? xfs_trans_roll+0xe0/0xe0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936154]  down_read+0x20/0x40
Nov 12 16:45:02 <host> kernel: [27678.936202]  xfs_ilock+0xd5/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936250]
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936283]  xfs_attr_get+0xbe/0x120 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936335]  xfs_xattr_get+0x4b/0x70 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936340]  __vfs_getxattr+0x59/0x80
Nov 12 16:45:02 <host> kernel: [27678.936344]  get_vfs_caps_from_disk+0x6a/0x170
Nov 12 16:45:02 <host> kernel: [27678.936348]  audit_copy_inode+0x6d/0xb0
Nov 12 16:45:02 <host> kernel: [27678.936352]  __audit_inode+0x17b/0x2f0
Nov 12 16:45:02 <host> kernel: [27678.936356]  path_openat+0x38f/0x1700
Nov 12 16:45:02 <host> kernel: [27678.936360]  ? insert_pfn+0x152/0x240
Nov 12 16:45:02 <host> kernel: [27678.936364]  ? vmf_insert_pfn_prot+0x9b/0x120
Nov 12 16:45:02 <host> kernel: [27678.936368]  do_filp_open+0x9b/0x110
Nov 12 16:45:02 <host> kernel: [27678.936371]  ? __check_object_size+0xdb/0x1b0
Nov 12 16:45:02 <host> kernel: [27678.936375]  ? path_get+0x27/0x30
Nov 12 16:45:02 <host> kernel: [27678.936379]  ? __alloc_fd+0x46/0x170
Nov 12 16:45:02 <host> kernel: [27678.936383]  do_sys_open+0x1bb/0x2d0
Nov 12 16:45:02 <host> kernel: [27678.936386]  ? do_sys_open+0x1bb/0x2d0
Nov 12 16:45:02 <host> kernel: [27678.936390]  __x64_sys_openat+0x20/0x30
Nov 12 16:45:02 <host> kernel: [27678.936394]  do_syscall_64+0x5a/0x120
Nov 12 16:45:02 <host> kernel: [27678.936399]
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 12 16:45:02 <host> kernel: [27678.936401] RIP: 0033:0x7fe5c6575c8e
Nov 12 16:45:02 <host> kernel: [27678.936407] Code: Bad RIP value.
Nov 12 16:45:02 <host> kernel: [27678.936409] RSP:
002b:00007fffbd824140 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Nov 12 16:45:02 <host> kernel: [27678.936411] RAX: ffffffffffffffda
RBX: 000000000003a2f8 RCX: 00007fe5c6575c8e
Nov 12 16:45:02 <host> kernel: [27678.936413] RDX: 00000000000000c2
RSI: 0000561423cc41c0 RDI: 00000000ffffff9c
Nov 12 16:45:02 <host> kernel: [27678.936414] RBP: 0000000000000000
R08: 00007fffbd8f80a0 R09: 00007fffbd8f8080
Nov 12 16:45:02 <host> kernel: [27678.936416] R10: 0000000000000180
R11: 0000000000000246 R12: 0000561423cc41c0
Nov 12 16:45:02 <host> kernel: [27678.936417] R13: 0000561423cc4218
R14: 00007fe5c6621c80 R15: 8421084210842109
Nov 12 16:45:02 <host> kernel: [27678.936421] INFO: task c++:56851
blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.936474]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.936535] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.936596] c++             D    0
56851  56850 0x000003a0
Nov 12 16:45:02 <host> kernel: [27678.936605] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.936615]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.936623]  ? __switch_to_asm+0x41/0x70
Nov 12 16:45:02 <host> kernel: [27678.936631]  ? __switch_to_asm+0x35/0x70
Nov 12 16:45:02 <host> kernel: [27678.936683]  ?
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936692]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.936703]  rwsem_down_read_failed+0xe8/0x180
Nov 12 16:45:02 <host> kernel: [27678.936712]  ? __switch_to_asm+0x35/0x70
Nov 12 16:45:02 <host> kernel: [27678.936722]
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.936731]  ?
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.936787]  ? xfs_trans_roll+0xe0/0xe0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936796]  down_read+0x20/0x40
Nov 12 16:45:02 <host> kernel: [27678.936852]  xfs_ilock+0xd5/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936903]
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936943]  xfs_attr_get+0xbe/0x120 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.936998]  xfs_xattr_get+0x4b/0x70 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.937002]  __vfs_getxattr+0x59/0x80
Nov 12 16:45:02 <host> kernel: [27678.937006]  get_vfs_caps_from_disk+0x6a/0x170
Nov 12 16:45:02 <host> kernel: [27678.937012]  ? inode_permission+0x63/0x1a0
Nov 12 16:45:02 <host> kernel: [27678.937016]  audit_copy_inode+0x6d/0xb0
Nov 12 16:45:02 <host> kernel: [27678.937020]  __audit_inode+0x17b/0x2f0
Nov 12 16:45:02 <host> kernel: [27678.937023]  filename_parentat+0x147/0x190
Nov 12 16:45:02 <host> kernel: [27678.937028]  ? radix_tree_lookup+0xd/0x10
Nov 12 16:45:02 <host> kernel: [27678.937032]  ? __check_object_size+0xdb/0x1b0
Nov 12 16:45:02 <host> kernel: [27678.937036]  ? path_get+0x27/0x30
Nov 12 16:45:02 <host> kernel: [27678.937040]  do_renameat2+0xc6/0x590
Nov 12 16:45:02 <host> kernel: [27678.937042]  ? do_renameat2+0xc6/0x590
Nov 12 16:45:02 <host> kernel: [27678.937046]  ?
__audit_syscall_entry+0xdd/0x130
Nov 12 16:45:02 <host> kernel: [27678.937050]  __x64_sys_rename+0x20/0x30
Nov 12 16:45:02 <host> kernel: [27678.937054]  do_syscall_64+0x5a/0x120
Nov 12 16:45:02 <host> kernel: [27678.937058]
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 12 16:45:02 <host> kernel: [27678.937060] RIP: 0033:0x7f83b31c6d37
Nov 12 16:45:02 <host> kernel: [27678.937065] Code: Bad RIP value.
Nov 12 16:45:02 <host> kernel: [27678.937067] RSP:
002b:00007ffd60610948 EFLAGS: 00000213 ORIG_RAX: 0000000000000052
Nov 12 16:45:02 <host> kernel: [27678.937069] RAX: ffffffffffffffda
RBX: 00007ffd60610974 RCX: 00007f83b31c6d37
Nov 12 16:45:02 <host> kernel: [27678.937070] RDX: 000056474e2f3010
RSI: 000056474e2f82d0 RDI: 000056474e2f81c0
Nov 12 16:45:02 <host> kernel: [27678.937074] RBP: 00007ffd60610a10
R08: 0000000000000000 R09: 000056474e33cc20
Nov 12 16:45:02 <host> kernel: [27678.937075] R10: 000056474e2f3010
R11: 0000000000000213 R12: 0000000000000004
Nov 12 16:45:02 <host> kernel: [27678.937077] R13: 0000000000003720
R14: 000056474e2f80d0 R15: 0000000000003720
Nov 12 16:45:02 <host> kernel: [27678.937081] INFO: task c++:56853
blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.937127]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.937179] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.937234] c++             D    0
56853  56852 0x000003a0
Nov 12 16:45:02 <host> kernel: [27678.937237] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.937241]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.937289]  ?
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.937292]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.937296]  rwsem_down_read_failed+0xe8/0x180
Nov 12 16:45:02 <host> kernel: [27678.937301]  ? xas_store+0x1e1/0x5b0
Nov 12 16:45:02 <host> kernel: [27678.937305]
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.937309]  ?
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.937358]  ? xfs_trans_roll+0xe0/0xe0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.937368]  down_read+0x20/0x40
Nov 12 16:45:02 <host> kernel: [27678.937424]  xfs_ilock+0xd5/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.937476]
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.937514]  xfs_attr_get+0xbe/0x120 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.937571]  xfs_xattr_get+0x4b/0x70 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.937581]  __vfs_getxattr+0x59/0x80
Nov 12 16:45:02 <host> kernel: [27678.937605]  get_vfs_caps_from_disk+0x6a/0x170
Nov 12 16:45:02 <host> kernel: [27678.937614]  ? inode_permission+0x63/0x1a0
Nov 12 16:45:02 <host> kernel: [27678.937623]  audit_copy_inode+0x6d/0xb0
Nov 12 16:45:02 <host> kernel: [27678.937631]  __audit_inode+0x17b/0x2f0
Nov 12 16:45:02 <host> kernel: [27678.937638]  filename_parentat+0x147/0x190
Nov 12 16:45:02 <host> kernel: [27678.937646]  ? radix_tree_lookup+0xd/0x10
Nov 12 16:45:02 <host> kernel: [27678.937652]  ? __check_object_size+0xdb/0x1b0
Nov 12 16:45:02 <host> kernel: [27678.937661]  ? path_get+0x27/0x30
Nov 12 16:45:02 <host> kernel: [27678.937669]  do_renameat2+0xc6/0x590
Nov 12 16:45:02 <host> kernel: [27678.937676]  ? do_renameat2+0xc6/0x590
Nov 12 16:45:02 <host> kernel: [27678.937683]  ?
__audit_syscall_entry+0xdd/0x130
Nov 12 16:45:02 <host> kernel: [27678.937690]  __x64_sys_rename+0x20/0x30
Nov 12 16:45:02 <host> kernel: [27678.937694]  do_syscall_64+0x5a/0x120
Nov 12 16:45:02 <host> kernel: [27678.937699]
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 12 16:45:02 <host> kernel: [27678.937701] RIP: 0033:0x7faba8af6d37
Nov 12 16:45:02 <host> kernel: [27678.937705] Code: Bad RIP value.
Nov 12 16:45:02 <host> kernel: [27678.937707] RSP:
002b:00007fff0e234358 EFLAGS: 00000213 ORIG_RAX: 0000000000000052
Nov 12 16:45:02 <host> kernel: [27678.937709] RAX: ffffffffffffffda
RBX: 00007fff0e234384 RCX: 00007faba8af6d37
Nov 12 16:45:02 <host> kernel: [27678.937710] RDX: 0000564342653010
RSI: 00005643426577a0 RDI: 0000564342659c80
Nov 12 16:45:02 <host> kernel: [27678.937712] RBP: 00007fff0e234420
R08: 0000000000000000 R09: 0000564342699f20
Nov 12 16:45:02 <host> kernel: [27678.937713] R10: 0000564342653010
R11: 0000000000000213 R12: 0000000000000004
Nov 12 16:45:02 <host> kernel: [27678.937715] R13: 000000000000f160
R14: 0000564342658030 R15: 000000000000f160
Nov 12 16:45:02 <host> kernel: [27678.937719] INFO: task c++:56855
blocked for more than 120 seconds.
Nov 12 16:45:02 <host> kernel: [27678.937765]       Tainted: G
  OE     5.0.0-32-generic #34~18.04.2-Ubuntu
Nov 12 16:45:02 <host> kernel: [27678.939382] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 12 16:45:02 <host> kernel: [27678.940988] c++             D    0
56855  56854 0x000003a0
Nov 12 16:45:02 <host> kernel: [27678.940991] Call Trace:
Nov 12 16:45:02 <host> kernel: [27678.940995]  __schedule+0x2c0/0x870
Nov 12 16:45:02 <host> kernel: [27678.941048]  ?
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.941052]  schedule+0x2c/0x70
Nov 12 16:45:02 <host> kernel: [27678.941055]  rwsem_down_read_failed+0xe8/0x180
Nov 12 16:45:02 <host> kernel: [27678.941062]
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.941066]  ?
call_rwsem_down_read_failed+0x18/0x30
Nov 12 16:45:02 <host> kernel: [27678.941116]  ? xfs_trans_roll+0xe0/0xe0 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.941121]  down_read+0x20/0x40
Nov 12 16:45:02 <host> kernel: [27678.941169]  xfs_ilock+0xd5/0x100 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.941217]
xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.941251]  xfs_attr_get+0xbe/0x120 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.941303]  xfs_xattr_get+0x4b/0x70 [xfs]
Nov 12 16:45:02 <host> kernel: [27678.941307]  __vfs_getxattr+0x59/0x80
Nov 12 16:45:02 <host> kernel: [27678.941312]  get_vfs_caps_from_disk+0x6a/0x170
Nov 12 16:45:02 <host> kernel: [27678.941317]  audit_copy_inode+0x6d/0xb0
Nov 12 16:45:02 <host> kernel: [27678.941322]  __audit_inode+0x17b/0x2f0
Nov 12 16:45:02 <host> kernel: [27678.941325]  path_openat+0x38f/0x1700
Nov 12 16:45:02 <host> kernel: [27678.941330]  ? insert_pfn+0x152/0x240
Nov 12 16:45:02 <host> kernel: [27678.941334]  ? vmf_insert_pfn_prot+0x9b/0x120
Nov 12 16:45:02 <host> kernel: [27678.941337]  do_filp_open+0x9b/0x110
Nov 12 16:45:02 <host> kernel: [27678.941341]  ? __check_object_size+0xdb/0x1b0
Nov 12 16:45:02 <host> kernel: [27678.941345]  ? path_get+0x27/0x30
Nov 12 16:45:02 <host> kernel: [27678.941349]  ? __alloc_fd+0x46/0x170
Nov 12 16:45:02 <host> kernel: [27678.941353]  do_sys_open+0x1bb/0x2d0
Nov 12 16:45:02 <host> kernel: [27678.941356]  ? do_sys_open+0x1bb/0x2d0
Nov 12 16:45:02 <host> kernel: [27678.941360]  __x64_sys_openat+0x20/0x30
Nov 12 16:45:02 <host> kernel: [27678.941364]  do_syscall_64+0x5a/0x120
Nov 12 16:45:02 <host> kernel: [27678.941369]
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 12 16:45:02 <host> kernel: [27678.941371] RIP: 0033:0x7f8bc5732c8e
Nov 12 16:45:02 <host> kernel: [27678.941376] Code: Bad RIP value.
Nov 12 16:45:02 <host> kernel: [27678.941379] RSP:
002b:00007ffe3e6fa850 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Nov 12 16:45:02 <host> kernel: [27678.941382] RAX: ffffffffffffffda
RBX: 000000000003a2f8 RCX: 00007f8bc5732c8e
Nov 12 16:45:02 <host> kernel: [27678.941383] RDX: 00000000000000c2
RSI: 000055b674710770 RDI: 00000000ffffff9c
Nov 12 16:45:02 <host> kernel: [27678.941385] RBP: 0000000000000000
R08: 00007ffe3e7960a0 R09: 00007ffe3e796080
Nov 12 16:45:02 <host> kernel: [27678.941386] R10: 0000000000000180
R11: 0000000000000246 R12: 000055b674710770
Nov 12 16:45:02 <host> kernel: [27678.941388] R13: 000055b6747107cd
R14: 00007f8bc57dec80 R15: 8421084210842109

--
Sitsofe | http://sucs.org/~sits/


* Re: Tasks blocking forever with XFS stack traces
  2019-11-13 10:04       ` Sitsofe Wheeler
@ 2020-12-23  8:45         ` Sitsofe Wheeler
  0 siblings, 0 replies; 10+ messages in thread
From: Sitsofe Wheeler @ 2020-12-23  8:45 UTC (permalink / raw)
  To: linux-xfs; +Cc: Carlos Maiolino

On Wed, 13 Nov 2019 at 10:04, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>
> Nov 12 16:45:02 <host> kernel: [27678.931551] INFO: task
> kworker/50:0:20430 blocked for more than 120 seconds.
> Nov 12 16:45:02 <host> kernel: [27678.931613]       Tainted: G
>   OE     5.0.0-32-generic #34~18.04.2-Ubuntu
> Nov 12 16:45:02 <host> kernel: [27678.931667] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Nov 12 16:45:02 <host> kernel: [27678.931723] kworker/50:0    D    0
> 20430      2 0x80000080
> Nov 12 16:45:02 <host> kernel: [27678.931801] Workqueue:
> xfs-sync/md126 xfs_log_worker [xfs]
> Nov 12 16:45:02 <host> kernel: [27678.931804] Call Trace:
> Nov 12 16:45:02 <host> kernel: [27678.931814]  __schedule+0x2c0/0x870
> Nov 12 16:45:02 <host> kernel: [27678.931819]  schedule+0x2c/0x70
> Nov 12 16:45:02 <host> kernel: [27678.931823]  schedule_timeout+0x1db/0x360
> Nov 12 16:45:02 <host> kernel: [27678.931829]  ? ttwu_do_activate+0x77/0x80
> Nov 12 16:45:02 <host> kernel: [27678.931833]  wait_for_completion+0xba/0x140
> Nov 12 16:45:02 <host> kernel: [27678.931837]  ? wake_up_q+0x80/0x80
> Nov 12 16:45:02 <host> kernel: [27678.931843]  __flush_work+0x15c/0x210

(Quite some time later) This issue went away after switching to an
Ubuntu 18.04 5.3 LTS kernel. Later kernels (e.g. 5.4) have not
manifested the issue either.
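
(For anyone still on an affected kernel, a couple of standard knobs may
help while investigating; the timeout value below is only illustrative:

# raise the hung task warning interval instead of disabling it outright
sysctl -w kernel.hung_task_timeout_secs=240

# dump all blocked (D state) tasks to the kernel log on demand;
# requires sysrq to be enabled, e.g. sysctl -w kernel.sysrq=1
echo w > /proc/sysrq-trigger
)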

-- 
Sitsofe


end of thread

Thread overview: 10+ messages
2019-11-05  7:27 Tasks blocking forever with XFS stack traces Sitsofe Wheeler
2019-11-05  8:54 ` Carlos Maiolino
2019-11-05  9:32   ` Sitsofe Wheeler
2019-11-05 10:36     ` Carlos Maiolino
2019-11-05 11:58       ` Carlos Maiolino
2019-11-05 14:12       ` Sitsofe Wheeler
2019-11-05 16:09         ` Carlos Maiolino
2019-11-07  0:12         ` Chris Murphy
2019-11-13 10:04       ` Sitsofe Wheeler
2020-12-23  8:45         ` Sitsofe Wheeler
