* [Bug 201331] New: deadlock (XFS?)
@ 2018-10-04 23:14 bugzilla-daemon
  2018-10-04 23:16 ` [Bug 201331] " bugzilla-daemon
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-04 23:14 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

            Bug ID: 201331
           Summary: deadlock (XFS?)
           Product: File System
           Version: 2.5
    Kernel Version: 4.18.12
          Hardware: All
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: high
          Priority: P1
         Component: XFS
          Assignee: filesystem_xfs@kernel-bugs.kernel.org
          Reporter: edo.rus@gmail.com
        Regression: No

Created attachment 278927
  --> https://bugzilla.kernel.org/attachment.cgi?id=278927&action=edit
dmesg output

I've set up a new server with ten 10TB disks.
The main volume is XFS on top of RAID6 (created with mdadm).

The filesystem is currently being filled with data. After several hours of
uptime, disk IO freezes with messages like these in the log:
[ 5679.900329] INFO: task tar:18235 blocked for more than 120 seconds.
[ 5679.900404]       Not tainted 4.18.12 #2
...
[ 5679.904044] INFO: task kworker/u24:3:18307 blocked for more than 120 seconds.
[ 5679.904137]       Not tainted 4.18.12 #2
and so on.
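
If it helps with debugging, the full backtraces of all blocked tasks can be
dumped on demand (a sketch only, assuming sysrq is enabled on this box):

  # dump stack traces of all uninterruptible (D-state) tasks to the kernel log
  echo w > /proc/sysrq-trigger
  # the 120s hung-task warning interval itself can be inspected here
  cat /proc/sys/kernel/hung_task_timeout_secs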

I'm unsure, but it seems to be XFS-related.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
@ 2018-10-04 23:16 ` bugzilla-daemon
  2018-10-04 23:17 ` bugzilla-daemon
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-04 23:16 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #1 from edo (edo.rus@gmail.com) ---
Created attachment 278929
  --> https://bugzilla.kernel.org/attachment.cgi?id=278929&action=edit
/proc/mdstat

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
  2018-10-04 23:16 ` [Bug 201331] " bugzilla-daemon
@ 2018-10-04 23:17 ` bugzilla-daemon
  2018-10-04 23:18 ` bugzilla-daemon
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-04 23:17 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #2 from edo (edo.rus@gmail.com) ---
Created attachment 278931
  --> https://bugzilla.kernel.org/attachment.cgi?id=278931&action=edit
xfs_info output

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
  2018-10-04 23:16 ` [Bug 201331] " bugzilla-daemon
  2018-10-04 23:17 ` bugzilla-daemon
@ 2018-10-04 23:18 ` bugzilla-daemon
  2018-10-04 23:25 ` bugzilla-daemon
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-04 23:18 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #3 from edo (edo.rus@gmail.com) ---
I have applied some RAID tuning:
echo 32768 >  /sys/block/md3/md/stripe_cache_size
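
For reference, a minimal sketch of how I check and re-apply that setting (the
value is just what I chose; the sysfs write is not persistent across reboots):

  # stripe cache size is in pages per device; check the current value
  cat /sys/block/md3/md/stripe_cache_size
  # re-apply after every boot, e.g. from a local startup script
  echo 32768 > /sys/block/md3/md/stripe_cache_size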

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (2 preceding siblings ...)
  2018-10-04 23:18 ` bugzilla-daemon
@ 2018-10-04 23:25 ` bugzilla-daemon
  2018-10-05  1:06 ` bugzilla-daemon
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-04 23:25 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #4 from edo (edo.rus@gmail.com) ---
I tested with prebuilt Debian 4.17 and 4.18 kernels; the behavior is the same:
Sep 30 16:01:23 storage10x10n1 kernel: [23683.218388] INFO: task kworker/u24:0:21848 blocked for more than 120 seconds.
Sep 30 16:01:23 storage10x10n1 kernel: [23683.218495]       Not tainted 4.18.0-0.bpo.1-amd64 #1 Debian 4.18.6-1~bpo9+1
Sep 30 16:01:23 storage10x10n1 kernel: [23683.218593] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 30 16:01:23 storage10x10n1 kernel: [23683.218712] kworker/u24:0   D    0 21848      2 0x80000000
Sep 30 16:01:23 storage10x10n1 kernel: [23683.218814] Workqueue: writeback wb_workfn (flush-9:3)
Sep 30 16:01:23 storage10x10n1 kernel: [23683.218910] Call Trace:
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219005]  ? __schedule+0x3f5/0x880
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219096]  schedule+0x32/0x80
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219192]  bitmap_startwrite+0x161/0x1e0 [md_mod]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219291]  ? remove_wait_queue+0x60/0x60
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219388]  add_stripe_bio+0x441/0x7d0 [raid456]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219484]  raid5_make_request+0x1ae/0xb10 [raid456]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219580]  ? remove_wait_queue+0x60/0x60
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219675]  ? blk_queue_split+0x222/0x5e0
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219770]  md_handle_request+0x116/0x190 [md_mod]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219867]  md_make_request+0x65/0x160 [md_mod]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.219962]  generic_make_request+0x1e7/0x410
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220058]  ? submit_bio+0x6c/0x140
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220148]  submit_bio+0x6c/0x140
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220294]  xfs_add_to_ioend+0x14c/0x280 [xfs]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220415]  ? xfs_map_buffer.isra.14+0x37/0x70 [xfs]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220534]  xfs_do_writepage+0x2bb/0x680 [xfs]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220632]  ? clear_page_dirty_for_io+0x20c/0x2a0
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220727]  write_cache_pages+0x1ed/0x430
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220852]  ? xfs_add_to_ioend+0x280/0x280 [xfs]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.220971]  xfs_vm_writepages+0x64/0xa0 [xfs]
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221068]  do_writepages+0x1a/0x60
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221161]  __writeback_single_inode+0x3d/0x320
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221255]  writeback_sb_inodes+0x221/0x4b0
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221349]  __writeback_inodes_wb+0x87/0xb0
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221442]  wb_writeback+0x288/0x320
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221534]  ? cpumask_next+0x16/0x20
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221626]  ? wb_workfn+0x37c/0x450
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221717]  wb_workfn+0x37c/0x450
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221811]  process_one_work+0x191/0x370
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221904]  worker_thread+0x4f/0x3b0
Sep 30 16:01:23 storage10x10n1 kernel: [23683.221995]  kthread+0xf8/0x130
Sep 30 16:01:23 storage10x10n1 kernel: [23683.222086]  ? rescuer_thread+0x340/0x340
Sep 30 16:01:23 storage10x10n1 kernel: [23683.222179]  ? kthread_create_worker_on_cpu+0x70/0x70
Sep 30 16:01:23 storage10x10n1 kernel: [23683.222276]  ret_from_fork+0x1f/0x40

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (3 preceding siblings ...)
  2018-10-04 23:25 ` bugzilla-daemon
@ 2018-10-05  1:06 ` bugzilla-daemon
  2018-10-05  8:20 ` bugzilla-daemon
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-05  1:06 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #5 from Dave Chinner (david@fromorbit.com) ---
On Thu, Oct 04, 2018 at 11:25:49PM +0000, bugzilla-daemon@bugzilla.kernel.org
wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=201331

> 
> --- Comment #4 from edo (edo.rus@gmail.com) ---
> I tested with prebuilt Debian 4.17 and 4.18 kernels; the behavior is the same:
> Sep 30 16:01:23 storage10x10n1 kernel: [23683.218388] INFO: task kworker/u24:0:21848 blocked for more than 120 seconds.

I think we need to rename XFS to "The Messenger: Please don't shoot
me"... :)

From the xfs_info:

sunit=4096   swidth=32768 blks

Ok, that looks wrong - why do you have an MD raid device with a
16MB stripe unit and a 128MB stripe width?

Yup:

md3 : active raid6 sda4[0] sdj4[9] sdg4[6] sdd4[3] sdi4[8] sdf4[5] sde4[4] sdh4[7] sdb4[2] sdc4[1]
      77555695616 blocks super 1.2 level 6, 16384k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
      bitmap: 9/73 pages [36KB], 65536KB chunk

You've configured your RAID6 device with a 16MB chunk size, which
gives the XFS su/sw noted above.

Basically, you've RMW'd your RAID device to death because every write
is a sub-stripe write.
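
For comparison, a rough sketch of a more conventional layout (example values
only, not a recommendation, and re-creating the array is destructive):

  # DESTRUCTIVE: rebuild the array with a much smaller chunk size
  mdadm --create /dev/md3 --level=6 --raid-devices=10 --chunk=512K /dev/sd[a-j]4
  # mkfs.xfs normally picks su/sw up from the md geometry automatically;
  # they can also be given explicitly: su = chunk size, sw = number of data disks
  mkfs.xfs -d su=512k,sw=8 /dev/md3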

>  Workqueue: writeback wb_workfn (flush-9:3)
>  Call Trace:
>   schedule+0x32/0x80
>  bitmap_startwrite+0x161/0x1e0 [md_mod]

MD blocks here when it has too many inflight bitmap updates and so
waits for IO to complete before starting another. This isn't XFS
filesystem IO - this is internal MD RAID consistency information
that it needs to write for crash recovery purposes.

This will be a direct result of the raid device configuration....
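
If you want to experiment with the bitmap overhead itself, a sketch (the chunk
value is illustrative; removing the bitmap trades away fast resync after a
crash):

  # remove and re-add the internal write-intent bitmap with a larger chunk
  mdadm --grow /dev/md3 --bitmap=none
  mdadm --grow /dev/md3 --bitmap=internal --bitmap-chunk=256M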

>  add_stripe_bio+0x441/0x7d0 [raid456]
>  raid5_make_request+0x1ae/0xb10 [raid456]
>  md_handle_request+0x116/0x190 [md_mod]
>  md_make_request+0x65/0x160 [md_mod]
>  generic_make_request+0x1e7/0x410
>   submit_bio+0x6c/0x140
>  xfs_add_to_ioend+0x14c/0x280 [xfs]
>  xfs_do_writepage+0x2bb/0x680 [xfs]
>  write_cache_pages+0x1ed/0x430
>  xfs_vm_writepages+0x64/0xa0 [xfs]
>   do_writepages+0x1a/0x60
>  __writeback_single_inode+0x3d/0x320
>  writeback_sb_inodes+0x221/0x4b0
>  __writeback_inodes_wb+0x87/0xb0
>   wb_writeback+0x288/0x320
>   wb_workfn+0x37c/0x450

... and this is just the writeback path - your problem has nothing
do with XFS...

Cheers,

Dave.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (4 preceding siblings ...)
  2018-10-05  1:06 ` bugzilla-daemon
@ 2018-10-05  8:20 ` bugzilla-daemon
  2018-10-05  9:08 ` bugzilla-daemon
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-05  8:20 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

edo (edo.rus@gmail.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
          Component|XFS                         |MD
            Product|File System                 |IO/Storage

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (5 preceding siblings ...)
  2018-10-05  8:20 ` bugzilla-daemon
@ 2018-10-05  9:08 ` bugzilla-daemon
  2018-10-05  9:11 ` bugzilla-daemon
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-05  9:08 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

Carlos Maiolino (cmaiolino@redhat.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |cmaiolino@redhat.com

--- Comment #6 from Carlos Maiolino (cmaiolino@redhat.com) ---
Edo, I'm closing this bug.

There is no evidence of a bug in what you described, only a poorly configured
storage array.

As Dave mentioned, you are RMWing your storage array to death. Please fix your
storage array configuration, and feel free to reopen the bug if you still hit
the problem with a proper configuration; for now, what you have is a large
number of huge RMW cycles.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (6 preceding siblings ...)
  2018-10-05  9:08 ` bugzilla-daemon
@ 2018-10-05  9:11 ` bugzilla-daemon
  2018-10-05  9:18 ` bugzilla-daemon
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-05  9:11 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #7 from Carlos Maiolino (cmaiolino@redhat.com) ---
D'oh, I have no permission to close MD bugs, but well, as Dave mentioned, your
problem is not a bug.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (7 preceding siblings ...)
  2018-10-05  9:11 ` bugzilla-daemon
@ 2018-10-05  9:18 ` bugzilla-daemon
  2018-10-05 10:15 ` bugzilla-daemon
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-05  9:18 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #8 from edo (edo.rus@gmail.com) ---
> Basically, you've RMW'd your RAID device to death because every write
> is a sub-stripe write.

Why is it bad?
Even with the default 512k chunk, almost every write is a sub-stripe write.
Anyway, a system lockup that lasts until a reset is an error, isn't it?

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (8 preceding siblings ...)
  2018-10-05  9:18 ` bugzilla-daemon
@ 2018-10-05 10:15 ` bugzilla-daemon
  2018-10-05 16:39 ` bugzilla-daemon
  2018-10-05 17:09 ` bugzilla-daemon
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-05 10:15 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #9 from Carlos Maiolino (cmaiolino@redhat.com) ---
(In reply to edo from comment #8)
> > Basically, you've RMW'd your RAID device to death because every write
> > is a sub-stripe write.
> 
> Why is it bad?
> Even with the default 512k chunk, almost every write is a sub-stripe write.

512k * 8 data disks = 4MiB

A sub-stripe write will require that 4MiB plus parity (which IIRC will be
another 1MiB), so a total of 5MiB needs to be read, modified and written for
each undersized write.

For a 16MiB chunk:

16MiB * 8 data disks = 128MiB, plus 32MiB for the parity chunks, so for every
undersized write you need to have 160MiB of data read, modified and written
back to the array.

Multiply that across several files and you will RMW your array to death really
fast, especially if your workload is mostly undersized IO.
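
The same arithmetic as a quick sketch (assuming the 8 data + 2 parity disks of
the array above):

  # worst-case data touched by a single sub-stripe write, per chunk size
  for chunk_kib in 512 16384; do
      stripe_kib=$(( chunk_kib * 8 ))     # 8 data disks
      parity_kib=$(( chunk_kib * 2 ))     # 2 parity disks (RAID6)
      echo "${chunk_kib}KiB chunk -> up to $(( (stripe_kib + parity_kib) / 1024 ))MiB per RMW"
  done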



> Anyway, a system lockup that lasts until a reset is an error, isn't it?

Your system is not locked up, it's just really slow due to the amount of time
being spent waiting for IO completion.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (9 preceding siblings ...)
  2018-10-05 10:15 ` bugzilla-daemon
@ 2018-10-05 16:39 ` bugzilla-daemon
  2018-10-05 17:09 ` bugzilla-daemon
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-05 16:39 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

--- Comment #10 from edo (edo.rus@gmail.com) ---
> A sub-stripe write will require that 4MiB plus parity (which IIRC will be another 1MiB), so a total of 5MiB needs to be read, modified and written for each undersized write.

I'm pretty sure md raid can do partial updates of chunks.

I just ran a 4KiB random write test (fio) on a freshly created array with a
4KiB chunk size, and then again with a 16MiB chunk size - no difference in
IOPS on the raid device (observed in fio), or in IOPS and transfer speed on
the raid members (observed in iostat).
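
The test was along these lines (a sketch only; the device name is hypothetical
and the exact fio parameters differed):

  # 4KiB random writes straight at a scratch md array (destructive!)
  fio --name=randwrite --filename=/dev/md4 --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based \
      --group_reporting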


> Your system is not locked up, it's just really slow due to the amount of time being spent waiting for IO completion.

No.
The array was filling at a good speed (80-250MiB/s, limited by the network)
for several hours after system startup.
Then writes unexpectedly stopped (zeros in iostat) and the "blocked for more
than 120 seconds" messages arrived. I managed to resume filling the disks only
with a hardware reset.
This behavior repeated many times; I tried several kernel versions and kernel
options.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

* [Bug 201331] deadlock (XFS?)
  2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
                   ` (10 preceding siblings ...)
  2018-10-05 16:39 ` bugzilla-daemon
@ 2018-10-05 17:09 ` bugzilla-daemon
  11 siblings, 0 replies; 13+ messages in thread
From: bugzilla-daemon @ 2018-10-05 17:09 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=201331

Eric Sandeen (sandeen@sandeen.net) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |sandeen@sandeen.net
           Assignee|filesystem_xfs@kernel-bugs. |io_md@kernel-bugs.osdl.org
                   |kernel.org                  |

--- Comment #11 from Eric Sandeen (sandeen@sandeen.net) ---
One thing that's kind of weird is this:

[ 1679.494859] md: md2: resync done.
[ 5679.900329] INFO: task tar:18235 blocked for more than 120 seconds.

almost exactly 4000 seconds?  Maybe a coincidence.

The message from md's bitmap_startwrite has almost the same timestamp, too:

[ 5679.904044] INFO: task kworker/u24:3:18307 blocked for more than 120 seconds.

md is scheduled out here:

                if (unlikely(COUNTER(*bmc) == COUNTER_MAX)) {
                        DEFINE_WAIT(__wait);
                        /* note that it is safe to do the prepare_to_wait
                         * after the test as long as we do it before dropping
                         * the spinlock.
                         */
                        prepare_to_wait(&bitmap->overflow_wait, &__wait,
                                        TASK_UNINTERRUPTIBLE);
                        spin_unlock_irq(&bitmap->counts.lock);
                        schedule();
                        finish_wait(&bitmap->overflow_wait, &__wait);
                        continue;
                }

So md is waiting to be woken up when the bitmap writer finishes.  Details
aside, I really do think that xfs is the victim/messenger here; we should at
least try to get some md eyes on this one as well.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

Thread overview: 13+ messages
2018-10-04 23:14 [Bug 201331] New: deadlock (XFS?) bugzilla-daemon
2018-10-04 23:16 ` [Bug 201331] " bugzilla-daemon
2018-10-04 23:17 ` bugzilla-daemon
2018-10-04 23:18 ` bugzilla-daemon
2018-10-04 23:25 ` bugzilla-daemon
2018-10-05  1:06 ` bugzilla-daemon
2018-10-05  8:20 ` bugzilla-daemon
2018-10-05  9:08 ` bugzilla-daemon
2018-10-05  9:11 ` bugzilla-daemon
2018-10-05  9:18 ` bugzilla-daemon
2018-10-05 10:15 ` bugzilla-daemon
2018-10-05 16:39 ` bugzilla-daemon
2018-10-05 17:09 ` bugzilla-daemon
