* Re: Wierd: Degrading while recovering raid5
@ 2015-02-11  6:23 Kyle Logue
  2015-02-11 14:28 ` Phil Turmel
From: Kyle Logue @ 2015-02-11  6:23 UTC (permalink / raw)
  To: linux-raid

Phil:

For a while I really thought that was going to work. I swapped out the
SATA cable and set the timeout to 10 minutes. At about 70% rebuilt I
got the dmesg below, which seems to indicate the death of my sdc
drive.
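
For the record, "set the timeout" means the per-device sysfs knob from
your snippet below, bumped to 600 seconds; something like this,
repeated for each member drive:

echo 600 > /sys/block/sdc/device/timeout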

Here is my question: I still have the sde drive that I manually failed,
and it hasn't been touched since. Can I force re-add it to the array
and just take the data corruption hit?

I'd rather lose part of my data than all of it. The event counts are
significantly different now, but I haven't mounted the array since this
started. I haven't tried it yet, but I saw someone else online get a
message like 'raid has failed so using --add cannot work and might
destroy data'. Is there a force add? What are my chances?
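
To make the question concrete, the sequence I'm picturing is roughly
the following - just a sketch, I haven't run any of it, and I'd
double-check the device letters against my current boot first:

mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1

i.e. leave the dying sdc out entirely and let --force bump sde's stale
event count, accepting whatever changed on the array after I failed it.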

Here is the dmesg in question; I started the rebuild at 20:24.

[Tue Feb 10 20:23:59 2015] md: md0 stopped.
[Tue Feb 10 20:23:59 2015] md: unbind<sdf1>
[Tue Feb 10 20:23:59 2015] md: export_rdev(sdf1)
[Tue Feb 10 20:23:59 2015] md: unbind<sde1>
[Tue Feb 10 20:23:59 2015] md: export_rdev(sde1)
[Tue Feb 10 20:23:59 2015] md: unbind<sdd1>
[Tue Feb 10 20:23:59 2015] md: export_rdev(sdd1)
[Tue Feb 10 20:23:59 2015] md: unbind<sdc1>
[Tue Feb 10 20:23:59 2015] md: export_rdev(sdc1)
[Tue Feb 10 20:23:59 2015] md: unbind<sdb1>
[Tue Feb 10 20:23:59 2015] md: export_rdev(sdb1)
[Tue Feb 10 20:23:59 2015] md: unbind<sda1>
[Tue Feb 10 20:23:59 2015] md: export_rdev(sda1)
[Tue Feb 10 20:24:59 2015] md: md0 stopped.
[Tue Feb 10 20:24:59 2015] md: bind<sdd1>
[Tue Feb 10 20:24:59 2015] md: bind<sde1>
[Tue Feb 10 20:24:59 2015] md: bind<sdf1>
[Tue Feb 10 20:24:59 2015] md: bind<sdb1>
[Tue Feb 10 20:24:59 2015] md: bind<sda1>
[Tue Feb 10 20:24:59 2015] md: bind<sdc1>
[Tue Feb 10 20:24:59 2015] md: kicking non-fresh sde1 from array!
[Tue Feb 10 20:24:59 2015] md: unbind<sde1>
[Tue Feb 10 20:24:59 2015] md: export_rdev(sde1)
[Tue Feb 10 20:24:59 2015] md/raid:md0: device sdc1 operational as raid disk 0
[Tue Feb 10 20:24:59 2015] md/raid:md0: device sdb1 operational as raid disk 4
[Tue Feb 10 20:24:59 2015] md/raid:md0: device sdf1 operational as raid disk 3
[Tue Feb 10 20:24:59 2015] md/raid:md0: device sdd1 operational as raid disk 1
[Tue Feb 10 20:24:59 2015] md/raid:md0: allocated 0kB
[Tue Feb 10 20:24:59 2015] md/raid:md0: raid level 5 active with 4 out
of 5 devices, algorithm 2
[Tue Feb 10 20:24:59 2015] RAID conf printout:
[Tue Feb 10 20:24:59 2015]  --- level:5 rd:5 wd:4
[Tue Feb 10 20:24:59 2015]  disk 0, o:1, dev:sdc1
[Tue Feb 10 20:24:59 2015]  disk 1, o:1, dev:sdd1
[Tue Feb 10 20:24:59 2015]  disk 3, o:1, dev:sdf1
[Tue Feb 10 20:24:59 2015]  disk 4, o:1, dev:sdb1
[Tue Feb 10 20:24:59 2015] md0: Warning: Device sda1 is misaligned
[Tue Feb 10 20:24:59 2015] md0: Warning: Device sdb1 is misaligned
[Tue Feb 10 20:24:59 2015] md0: Warning: Device sdb1 is misaligned
[Tue Feb 10 20:24:59 2015] md0: detected capacity change from 0 to 8001584889856
[Tue Feb 10 20:24:59 2015] RAID conf printout:
[Tue Feb 10 20:24:59 2015]  --- level:5 rd:5 wd:4
[Tue Feb 10 20:24:59 2015]  disk 0, o:1, dev:sdc1
[Tue Feb 10 20:24:59 2015]  disk 1, o:1, dev:sdd1
[Tue Feb 10 20:24:59 2015]  disk 2, o:1, dev:sda1
[Tue Feb 10 20:24:59 2015]  disk 3, o:1, dev:sdf1
[Tue Feb 10 20:24:59 2015]  disk 4, o:1, dev:sdb1
[Tue Feb 10 20:24:59 2015] md: recovery of RAID array md0
[Tue Feb 10 20:24:59 2015] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[Tue Feb 10 20:24:59 2015] md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for recovery.
[Tue Feb 10 20:24:59 2015] md: using 128k window, over a total of 1953511936k.
[Tue Feb 10 20:24:59 2015]  md0: unknown partition table
[Tue Feb 10 20:35:34 2015] perf samples too long (2505 > 2500),
lowering kernel.perf_event_max_sample_rate to 50000
[Wed Feb 11 01:02:15 2015] ata5.00: exception Emask 0x0 SAct 0x30 SErr
0x0 action 0x0
[Wed Feb 11 01:02:15 2015] ata5.00: irq_stat 0x40000008
[Wed Feb 11 01:02:15 2015] ata5.00: failed command: READ FPDMA QUEUED
[Wed Feb 11 01:02:15 2015] ata5.00: cmd
60/00:20:18:1d:1c/04:00:a4:00:00/40 tag 4 ncq 524288 in
[Wed Feb 11 01:02:15 2015]          res
41/40:00:e8:1d:1c/00:04:a4:00:00/00 Emask 0x409 (media error) <F>
[Wed Feb 11 01:02:15 2015] ata5.00: status: { DRDY ERR }
[Wed Feb 11 01:02:15 2015] ata5.00: error: { UNC }
[Wed Feb 11 01:02:15 2015] ata5.00: configured for UDMA/133
[Wed Feb 11 01:02:15 2015] sd 4:0:0:0: [sdc] Unhandled sense code
[Wed Feb 11 01:02:15 2015] sd 4:0:0:0: [sdc]
[Wed Feb 11 01:02:15 2015] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Wed Feb 11 01:02:15 2015] sd 4:0:0:0: [sdc]
[Wed Feb 11 01:02:15 2015] Sense Key : Medium Error [current] [descriptor]
[Wed Feb 11 01:02:15 2015] Descriptor sense data with sense
descriptors (in hex):
[Wed Feb 11 01:02:15 2015]         72 03 11 04 00 00 00 0c 00 0a 80 00
00 00 00 00
[Wed Feb 11 01:02:15 2015]         a4 1c 1d e8
[Wed Feb 11 01:02:15 2015] sd 4:0:0:0: [sdc]
[Wed Feb 11 01:02:15 2015] Add. Sense: Unrecovered read error - auto
reallocate failed
[Wed Feb 11 01:02:15 2015] sd 4:0:0:0: [sdc] CDB:
[Wed Feb 11 01:02:15 2015] Read(10): 28 00 a4 1c 1d 18 00 04 00 00
[Wed Feb 11 01:02:15 2015] end_request: I/O error, dev sdc, sector 2753306088
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304040 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304048 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304056 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304064 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304072 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304080 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304088 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304096 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304104 on sdc1).
[Wed Feb 11 01:02:15 2015] md/raid:md0: read error not correctable
(sector 2753304112 on sdc1).
[Wed Feb 11 01:02:15 2015] ata5: EH complete
[Wed Feb 11 01:02:18 2015] ata5.00: exception Emask 0x0 SAct 0xff80
SErr 0x0 action 0x0
[Wed Feb 11 01:02:18 2015] ata5.00: irq_stat 0x40000008
[Wed Feb 11 01:02:18 2015] ata5.00: failed command: READ FPDMA QUEUED
[Wed Feb 11 01:02:18 2015] ata5.00: cmd
60/80:38:e8:1d:1c/00:00:a4:00:00/40 tag 7 ncq 65536 in
[Wed Feb 11 01:02:18 2015]          res
41/40:80:e8:1d:1c/00:00:a4:00:00/00 Emask 0x409 (media error) <F>
[Wed Feb 11 01:02:18 2015] ata5.00: status: { DRDY ERR }
[Wed Feb 11 01:02:18 2015] ata5.00: error: { UNC }
[Wed Feb 11 01:02:18 2015] ata5.00: configured for UDMA/133
[Wed Feb 11 01:02:18 2015] sd 4:0:0:0: [sdc] Unhandled sense code
[Wed Feb 11 01:02:18 2015] sd 4:0:0:0: [sdc]
[Wed Feb 11 01:02:18 2015] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Wed Feb 11 01:02:18 2015] sd 4:0:0:0: [sdc]
[Wed Feb 11 01:02:18 2015] Sense Key : Medium Error [current] [descriptor]
[Wed Feb 11 01:02:18 2015] Descriptor sense data with sense
descriptors (in hex):
[Wed Feb 11 01:02:18 2015]         72 03 11 04 00 00 00 0c 00 0a 80 00
00 00 00 00
[Wed Feb 11 01:02:18 2015]         a4 1c 1d e8
[Wed Feb 11 01:02:18 2015] sd 4:0:0:0: [sdc]
[Wed Feb 11 01:02:18 2015] Add. Sense: Unrecovered read error - auto
reallocate failed
[Wed Feb 11 01:02:18 2015] sd 4:0:0:0: [sdc] CDB:
[Wed Feb 11 01:02:18 2015] Read(10): 28 00 a4 1c 1d e8 00 00 80 00
[Wed Feb 11 01:02:18 2015] end_request: I/O error, dev sdc, sector 2753306088
[Wed Feb 11 01:02:18 2015] md/raid:md0: Disk failure on sdc1, disabling device.
[Wed Feb 11 01:02:18 2015] md/raid:md0: Operation continuing on 3 devices.
[Wed Feb 11 01:02:18 2015] ata5: EH complete
[Wed Feb 11 01:02:18 2015] md: md0: recovery interrupted.
[Wed Feb 11 01:02:18 2015] RAID conf printout:
[Wed Feb 11 01:02:18 2015]  --- level:5 rd:5 wd:3
[Wed Feb 11 01:02:18 2015]  disk 0, o:0, dev:sdc1
[Wed Feb 11 01:02:18 2015]  disk 1, o:1, dev:sdd1
[Wed Feb 11 01:02:18 2015]  disk 2, o:1, dev:sda1
[Wed Feb 11 01:02:18 2015]  disk 3, o:1, dev:sdf1
[Wed Feb 11 01:02:18 2015]  disk 4, o:1, dev:sdb1
[Wed Feb 11 01:02:18 2015] RAID conf printout:
[Wed Feb 11 01:02:18 2015]  --- level:5 rd:5 wd:3
[Wed Feb 11 01:02:18 2015]  disk 1, o:1, dev:sdd1
[Wed Feb 11 01:02:18 2015]  disk 2, o:1, dev:sda1
[Wed Feb 11 01:02:18 2015]  disk 3, o:1, dev:sdf1
[Wed Feb 11 01:02:18 2015]  disk 4, o:1, dev:sdb1
[Wed Feb 11 01:02:18 2015] RAID conf printout:
[Wed Feb 11 01:02:18 2015]  --- level:5 rd:5 wd:3
[Wed Feb 11 01:02:18 2015]  disk 1, o:1, dev:sdd1
[Wed Feb 11 01:02:18 2015]  disk 2, o:1, dev:sda1
[Wed Feb 11 01:02:18 2015]  disk 3, o:1, dev:sdf1
[Wed Feb 11 01:02:18 2015]  disk 4, o:1, dev:sdb1
[Wed Feb 11 01:02:18 2015] RAID conf printout:
[Wed Feb 11 01:02:18 2015]  --- level:5 rd:5 wd:3
[Wed Feb 11 01:02:18 2015]  disk 1, o:1, dev:sdd1
[Wed Feb 11 01:02:18 2015]  disk 3, o:1, dev:sdf1
[Wed Feb 11 01:02:18 2015]  disk 4, o:1, dev:sdb1

Thanks again,

Kyle L

On Tue, Feb 10, 2015 at 9:14 PM, Phil Turmel <philip@turmel.org> wrote:
>
> Hi Kyle,
>
> { Convention on kernel.org lists is reply-to-all, trim replies, and
> either bottom post or interleave }
>
> On 02/10/2015 04:50 PM, Kyle Logue wrote:
> > Phil:
> >
> > Thanks for your detailed response. That link does seem to describe my
> > problem and I do understand that desktop grade drives are sub-optimal.
> > It was many years ago when I first set up this array on my home
> > theater pc.  Until now I had no idea about the cron job - I'll make
> > sure to implement that. I am preparing to move to 6 TB disks sometime
> > soon and I'll definitely go enterprise this time.
> >
> > Regarding the drive timeout: I understand that I need to increase it
> > from 30 seconds to something larger (2+ min) but am unaware how to do
> > this. Is it a kernel variable? I'll keep googling but this seems like
> > it's what's going to save me.
> >
> > tl;dr: How do I change the drive timeout?
>
> Put something like this in /etc/rc.local or wherever your distro suggests:
>
> for x in /sys/block/sd[a-f]/device/timeout ; do
>   echo 180 > $x
> done
>
> Where the [a-f] is adjusted to suit your needs, and only for non-raid
> non-scterc drives.
>
> Phil

* Wierd: Degrading while recovering raid5
@ 2015-02-10  4:20 Kyle Logue
  2015-02-10  7:35 ` Adam Goryachev
From: Kyle Logue @ 2015-02-10  4:20 UTC (permalink / raw)
  To: linux-raid

Hey all:

I have a 5-disk software RAID5 that was working fine until I decided
to swap out an old disk for a new one.

mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md0 --fail /dev/sde1

At this point it started automatically rebuilding the array.
About 60% of the way in it stopped, and I saw a lot of this repeated
in my dmesg:

[Mon Feb  9 18:06:48 2015] ata5.00: exception Emask 0x0 SAct 0x0 SErr
0x0 action 0x6 frozen
[Mon Feb  9 18:06:48 2015] ata5.00: failed command: SMART
[Mon Feb  9 18:06:48 2015] ata5.00: cmd
b0/da:00:00:4f:c2/00:00:00:00:00/00 tag 7
[Mon Feb  9 18:06:48 2015]          res
40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
[Mon Feb  9 18:06:48 2015] ata5.00: status: { DRDY }
[Mon Feb  9 18:06:48 2015] ata5: hard resetting link
[Mon Feb  9 18:06:58 2015] ata5: softreset failed (1st FIS failed)
[Mon Feb  9 18:06:58 2015] ata5: hard resetting link
[Mon Feb  9 18:07:08 2015] ata5: softreset failed (1st FIS failed)
[Mon Feb  9 18:07:08 2015] ata5: hard resetting link
[Mon Feb  9 18:07:12 2015] ata5: SATA link up 1.5 Gbps (SStatus 113
SControl 310)
[Mon Feb  9 18:07:12 2015] ata5.00: configured for UDMA/33
[Mon Feb  9 18:07:12 2015] ata5: EH complete

ata5 corresponds to my /dev/sdc drive.
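If anyone wants to double-check that mapping, I believe the sysfs
symlink for the disk spells out which ata port it hangs off (the
canonical path should contain "ata5" in this case):

readlink -f /sys/block/sdc
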
So I was worried, but it didn't look so terrible when I ran --examine:

sudo mdadm --examine /dev/sd[dabfec]1 | egrep 'dev|Update|Role|State|Events'
/dev/sda1:
          State : clean
    Update Time : Sun Feb  8 20:43:27 2015
   Device Role : spare
   Array State : .A.AA ('A' == active, '.' == missing)
         Events : 27009
/dev/sdb1:
          State : clean
    Update Time : Sun Feb  8 20:43:27 2015
   Device Role : Active device 4
   Array State : .A.AA ('A' == active, '.' == missing)
         Events : 27009
/dev/sdc1:
          State : clean
    Update Time : Sun Feb  8 20:21:13 2015
   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing)
         Events : 26995
/dev/sdd1:
          State : clean
    Update Time : Sun Feb  8 20:43:27 2015
   Device Role : Active device 1
   Array State : .A.AA ('A' == active, '.' == missing)
         Events : 27009
/dev/sde1:
          State : clean
    Update Time : Sun Feb  8 12:17:10 2015
   Device Role : Active device 2
   Array State : AAAAA ('A' == active, '.' == missing)
         Events : 21977
/dev/sdf1:
          State : clean
    Update Time : Sun Feb  8 20:43:27 2015
   Device Role : Active device 3
   Array State : .A.AA ('A' == active, '.' == missing)
         Events : 27009

The event counts looked pretty close on the drives I was updating, so I did:

mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[dabfec]1

But it stopped again during recovery at some point while I was at
work, with the same ATA errors in the dmesg.
Searching the web for these errors shows lots of people having this
issue with various Linux distros, laying the blame on everything
from faulty SATA cables to the BIOS to NVIDIA drivers - nothing
definitive. I powered off my box and reconnected all my SATA cables as
a sanity check.

I tried --assemble --force again and it got to 70%:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid5 sdc1[7] sda1[8] sdb1[6] sdf1[4] sdd1[5]
      7814047744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UU_UU]
      [=============>.......]  recovery = 68.9%
(1347855508/1953511936) finish=306.1min speed=32967K/sec

...but it died again. I was watching dmesg like a hawk this time and
saw those ata5 errors every 3-15 minutes, with different cmd and res
values. At the very end I got this:

[Mon Feb  9 23:11:01 2015] ata5.00: configured for UDMA/33
[Mon Feb  9 23:11:01 2015] sd 4:0:0:0: [sdc] Unhandled sense code
[Mon Feb  9 23:11:01 2015] sd 4:0:0:0: [sdc]
[Mon Feb  9 23:11:01 2015] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Mon Feb  9 23:11:01 2015] sd 4:0:0:0: [sdc]
[Mon Feb  9 23:11:01 2015] Sense Key : Medium Error [current] [descriptor]
[Mon Feb  9 23:11:01 2015] Descriptor sense data with sense
descriptors (in hex):
[Mon Feb  9 23:11:01 2015]         72 03 11 04 00 00 00 0c 00 0a 80 00
00 00 00 00
[Mon Feb  9 23:11:01 2015]         a4 1c 1d e8
[Mon Feb  9 23:11:01 2015] sd 4:0:0:0: [sdc]
[Mon Feb  9 23:11:01 2015] Add. Sense: Unrecovered read error - auto
reallocate failed
[Mon Feb  9 23:11:01 2015] sd 4:0:0:0: [sdc] CDB:
[Mon Feb  9 23:11:01 2015] Read(10): 28 00 a4 1c 1d e8 00 00 80 00
[Mon Feb  9 23:11:01 2015] end_request: I/O error, dev sdc, sector 2753306088
[Mon Feb  9 23:11:01 2015] md/raid:md0: Disk failure on sdc1, disabling device.
[Mon Feb  9 23:11:01 2015] md/raid:md0: Operation continuing on 3 devices.
[Mon Feb  9 23:11:01 2015] ata5: EH complete
[Mon Feb  9 23:11:01 2015] md: md0: recovery interrupted.
[Mon Feb  9 23:11:01 2015] RAID conf printout:
[Mon Feb  9 23:11:01 2015]  --- level:5 rd:5 wd:3
[Mon Feb  9 23:11:01 2015]  disk 0, o:0, dev:sdc1
[Mon Feb  9 23:11:01 2015]  disk 1, o:1, dev:sdd1
[Mon Feb  9 23:11:01 2015]  disk 2, o:1, dev:sda1
[Mon Feb  9 23:11:01 2015]  disk 3, o:1, dev:sdf1
[Mon Feb  9 23:11:01 2015]  disk 4, o:1, dev:sdb1
[Mon Feb  9 23:11:01 2015] RAID conf printout:
[Mon Feb  9 23:11:01 2015]  --- level:5 rd:5 wd:3
[Mon Feb  9 23:11:01 2015]  disk 1, o:1, dev:sdd1
[Mon Feb  9 23:11:01 2015]  disk 2, o:1, dev:sda1
[Mon Feb  9 23:11:01 2015]  disk 3, o:1, dev:sdf1
[Mon Feb  9 23:11:01 2015]  disk 4, o:1, dev:sdb1
[Mon Feb  9 23:11:01 2015] RAID conf printout:
[Mon Feb  9 23:11:01 2015]  --- level:5 rd:5 wd:3
[Mon Feb  9 23:11:01 2015]  disk 1, o:1, dev:sdd1
[Mon Feb  9 23:11:01 2015]  disk 2, o:1, dev:sda1
[Mon Feb  9 23:11:01 2015]  disk 3, o:1, dev:sdf1
[Mon Feb  9 23:11:01 2015]  disk 4, o:1, dev:sdb1
[Mon Feb  9 23:11:01 2015] RAID conf printout:
[Mon Feb  9 23:11:01 2015]  --- level:5 rd:5 wd:3
[Mon Feb  9 23:11:01 2015]  disk 1, o:1, dev:sdd1
[Mon Feb  9 23:11:01 2015]  disk 3, o:1, dev:sdf1
[Mon Feb  9 23:11:01 2015]  disk 4, o:1, dev:sdb1

and /proc/mdstat now shows:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid5 sdc1[7](F) sda1[8](S) sdb1[6] sdf1[4] sdd1[5]
      7814047744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/3] [_U_UU]

And now I am out of ideas. Any thoughts on correcting those ata5
errors? Or maybe skipping those sectors? sde1 is the disk I manually
failed, and it hasn't been touched yet. Its event count is way off
now, but maybe I can still use it somehow? Should I replace the SATA
cable for sdc and retry?
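
Meanwhile, before another attempt, I figure it can't hurt to look at
sdc's own error counters - assuming smartmontools reports them the
usual way for this drive:

smartctl -a /dev/sdc

If Current_Pending_Sector or Reallocated_Sector_Ct is climbing, I'll
stop blaming the cable.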

Anybody in DC want a beer on me for helping figure this out? I have
more log files stored, but was trying to keep it short.

Thanks for looking,

Kyle L

PS. mdadm v3.2.5 on Ubuntu 14.04, running Linux 3.13.0-45
PPS. Last full backup was six months ago. Hmm.

