* Awful Raid10,f2 performance
@ 2008-06-02 14:11 Jon Nelson
  2008-06-02 14:53 ` Tomasz Chmielewski
  0 siblings, 1 reply; 13+ messages in thread
From: Jon Nelson @ 2008-06-02 14:11 UTC (permalink / raw)
  To: Linux-Raid

I have set up a 3-disk raid10 with the f2 layout on 3x SATA disks,
each capable of 70+ MB/s (give or take).
The CPU is a dual-core 64-bit Athlon 3600+, and the SATA interface is
an NVidia MCP55+ (PCIe).
Previously this was a 3-disk raid5.
The problem: I'm getting really awful transfer rates. 7-9 MB/s per
drive, 21-30 MB/s combined, with the average hovering around 22-24 MB/s.
This, I feel, is really awful!

What parameters can I twiddle to improve the performance?
I am not using NCQ.
The drives individually are capable of 70+ MB/s.
I am using the deadline I/O scheduler but I have tried the others.
This is the openSUSE 2.6.22.17 kernel.
I am getting the I/O rates via dstat.
I am using the jfs filesystem primarily.
The operation I am performing varies, but the I/O rates don't (much).
In particular: moving one 17G file from one logical volume (I am using
LVM) to another; both filesystems are JFS.
The load is around 2.6, with these four processes being the top CPU
consumers (they wiggle around a bit):


 2034 root      10  -5     0    0    0 D   10  0.0 201:00.30 md0_raid10
10631 root      18   0  8436 1076  668 D    7  0.1   2:07.65 mv
  218 root      15   0     0    0    0 D    3  0.0   2:06.30 pdflush
 2182 root      10  -5     0    0    0 S    3  0.0   0:32.51 jfsCommit

Is there more information that I can provide that can help explain why
I'm getting such slow speeds?
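
For example, I could post output from commands along these lines
(array and device names as on my system):

cat /proc/mdstat
mdadm --detail /dev/md0
cat /sys/block/sdb/queue/scheduler
blockdev --getra /dev/sdb /dev/sdc /dev/sdd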


-- 
Jon


* Re: Awful Raid10,f2 performance
  2008-06-02 14:11 Awful Raid10,f2 performance Jon Nelson
@ 2008-06-02 14:53 ` Tomasz Chmielewski
  2008-06-02 15:09   ` Jon Nelson
  0 siblings, 1 reply; 13+ messages in thread
From: Tomasz Chmielewski @ 2008-06-02 14:53 UTC (permalink / raw)
  To: Jon Nelson; +Cc: Linux-Raid

Jon Nelson wrote:
> I have set up a 3-disk raid10 with the f2 layout on 3x SATA disks,
> each capable of 70+ MB/s (give or take).
> The CPU is a dual-core 64-bit Athlon 3600+, and the SATA interface is
> an NVidia MCP55+ (PCIe).
> Previously this was a 3-disk raid5.
> The problem: I'm getting really awful transfer rates. 7-9 MB/s per
> drive, 21-30 MB/s combined, with the average hovering around 22-24 MB/s.
> This, I feel, is really awful!
> 
> What parameters can I twiddle to improve the performance?
> I am not using NCQ.
> The drives individually are capable of 70+ MB/s.
> I am using the deadline I/O scheduler but I have tried the others.
> This is the openSUSE 2.6.22.17 kernel.
> I am getting the I/O rates via dstat.
> I am using the jfs filesystem primarily.
> The operation I am performing varies, but the I/O rates don't (much).
> In particular: moving one 17G file from one logical volume (I am using
> LVM) to another; both filesystems are JFS.
> The load is around 2.6, with these four processes being the top CPU
> consumers (they wiggle around a bit):
> 
> 
>  2034 root      10  -5     0    0    0 D   10  0.0 201:00.30 md0_raid10
> 10631 root      18   0  8436 1076  668 D    7  0.1   2:07.65 mv
>   218 root      15   0     0    0    0 D    3  0.0   2:06.30 pdflush
>  2182 root      10  -5     0    0    0 S    3  0.0   0:32.51 jfsCommit
> 
> Is there more information that I can provide that can help explain why
> I'm getting such slow speeds?

It would be interesting to know the raw throughput of your RAID-10
(without any filesystem).

I.e.

# dd if=/dev/zero of=/dev/md10 bs=64k count=10000
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/md10 of=/dev/null bs=64k
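
Note that the first dd overwrites the start of /dev/md10, so it should
only be run on an array whose contents you can afford to lose; the
drop_caches step is there so the read test is not served from the page
cache.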


-- 
Tomasz Chmielewski
http://wpkg.org


* Re: Awful Raid10,f2 performance
  2008-06-02 14:53 ` Tomasz Chmielewski
@ 2008-06-02 15:09   ` Jon Nelson
  2008-06-02 18:30     ` Keld Jørn Simonsen
  2008-12-15 13:33     ` Jon Nelson
  0 siblings, 2 replies; 13+ messages in thread
From: Jon Nelson @ 2008-06-02 15:09 UTC (permalink / raw)
  To: Tomasz Chmielewski; +Cc: Linux-Raid

I can't show you raw write values, but I can show you raw read values
(after dropping the caches):

From dstat, these are the values for disks sdb, sdc, and sdd, each in
"read write" format, with the total at the end.

Thus:

  70M    0 :  71M    0 :  72M    0 : 213M    0
  71M    0 :  70M    0 :  69M    0 : 210M    0
  71M    0 :  73M    0 :  74M    0 : 217M    0

shows that I'm getting roughly 70 MB/s from each disk, for a combined
210-217 MB/s. These are read values.

I created a logical volume (50G), dropped the caches again, and issued:

dd if=/dev/zero of=/dev/raid/test bs=64k

and got from 100 to 150MB/s sustained write speeds.
NOTE: this is with an internal bitmap.
Without a bitmap (removed for this test), I get a much more consistent
130-150 MB/s.
dd reports a mere 70MB/s when complete.
When using oflag=direct, I get about 100MB/s combined, with a
"reported" speed of 51.9MB/s.
Since I'm using 3x drives, my total I/O is going to be about 2x what dd "sees".
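
(As arithmetic: the reported 51.9 MB/s, written twice for the two
copies, comes to about 104 MB/s of device-level I/O, or roughly 35 MB/s
per spindle across the three drives - consistent with the ~100 MB/s
combined figure above.)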

Does that help?

-- 
Jon


* Re: Awful Raid10,f2 performance
  2008-06-02 15:09   ` Jon Nelson
@ 2008-06-02 18:30     ` Keld Jørn Simonsen
  2008-12-15 13:33     ` Jon Nelson
  1 sibling, 0 replies; 13+ messages in thread
From: Keld Jørn Simonsen @ 2008-06-02 18:30 UTC (permalink / raw)
  To: Jon Nelson; +Cc: Tomasz Chmielewski, Linux-Raid

On Mon, Jun 02, 2008 at 10:09:46AM -0500, Jon Nelson wrote:
> I can't show you raw write values, but I can show you raw read values
> (after dropping the caches):
> 
> From dstat, these are the values for disks sdb, sdc, and sdd, each in
> "read write" format, with the total at the end.
> 
> Thus:
> 
>   70M    0 :  71M    0 :  72M    0 : 213M    0
>   71M    0 :  70M    0 :  69M    0 : 210M    0
>   71M    0 :  73M    0 :  74M    0 : 217M    0
> 
> shows that I'm getting roughly 70 MB/s from each disk, for a combined
> 210-217 MB/s. These are read values.
> 
> I created a logical volume (50G), dropped the caches again, and issued:
> 
> dd if=/dev/zero of=/dev/raid/test bs=64k
> 
> and got from 100 to 150MB/s sustained write speeds.
> NOTE: this is with an internal bitmap.
> Without a bitmap (removed for this test), I get a much more consistent
> 130-150 MB/s.

70 MB/s per disk sounds about as expected.

> dd reports a mere 70MB/s when complete.
> When using oflag=direct, I get about 100MB/s combined, with a
> "reported" speed of 51.9MB/s.
> Since I'm using 3x drives, my total I/O is going to be about 2x what dd "sees".
> 
> Does that help?

It does not explain a 7 MB/s rate.

The 52 MB/s write speed seems about as expected - maybe a bit slow, but
not very slow. I do not know whether a 3-disk raid10,f2 has specific
performance problems.

You could try the following command (md3 being my raid):

blockdev --setra 65536 /dev/md3 

It solved some performance problems for me.
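
Note that blockdev --setra takes a count of 512-byte sectors, so 65536
means a 32 MiB readahead; blockdev --getra /dev/md3 shows the current
setting.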

Best regards
keld


* Re: Awful Raid10,f2 performance
  2008-06-02 15:09   ` Jon Nelson
  2008-06-02 18:30     ` Keld Jørn Simonsen
@ 2008-12-15 13:33     ` Jon Nelson
  2008-12-15 21:38       ` Neil Brown
  1 sibling, 1 reply; 13+ messages in thread
From: Jon Nelson @ 2008-12-15 13:33 UTC (permalink / raw)
  Cc: Linux-Raid

A follow-up to an earlier post about weird slowness with RAID10,f2 and
3 drives. This morning's "check" operation is proceeding very slowly,
for some reason.

dstat is showing 14-15MB/s worth of read I/O (0 or negligible write
I/O) on each of the 3 drives which comprise the raid10,f2.

According to /proc/mdstat this is the current rate:

      [================>....]  check = 82.9% (381575040/460057152)
finish=60.6min speed=21554K/sec

The sync_speed_min is 40000, sync_speed_max is 200000, and there is no
other I/O on the system to speak of.
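
For reference, those values live in sysfs, where they can be read and
set (md0 being my array):

cat /sys/block/md0/md/sync_speed_min
echo 40000 > /sys/block/md0/md/sync_speed_min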

blockdev shows:

blockdev --getra /dev/sdb /dev/sdc /dev/sde
256
256
256

I just now tried setting it to 65536 (64K sectors, i.e. 32 MiB of
readahead) on each device, and that did not seem to make much difference.

As you may recall from earlier posts, these drives are easily capable
of two or three times this rate, even at the inner tracks (70-80 MB/s
each on the outer tracks, 35-40 MB/s on the inner tracks).

What might be going on here?

kernel: 2.6.25.18-0.2-default x86_64
mdadm --detail

/dev/md0:
        Version : 0.90
  Creation Time : Fri May 23 23:24:20 2008
     Raid Level : raid10
     Array Size : 460057152 (438.74 GiB 471.10 GB)
  Used Dev Size : 306704768 (292.50 GiB 314.07 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Dec 15 07:31:11 2008
          State : active, recovering
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : far=2
     Chunk Size : 64K

 Rebuild Status : 84% complete

           UUID : ff4e969d:2f07be4e:8c61e068:8406cdc0
         Events : 0.7676

    Number   Major   Minor   RaidDevice State
       0       8       20        0      active sync   /dev/sdb4
       1       8       68        1      active sync   /dev/sde4
       2       8       36        2      active sync   /dev/sdc4

-- 
Jon


* Re: Awful Raid10,f2 performance
  2008-12-15 13:33     ` Jon Nelson
@ 2008-12-15 21:38       ` Neil Brown
  2008-12-16  2:47         ` Jon Nelson
  0 siblings, 1 reply; 13+ messages in thread
From: Neil Brown @ 2008-12-15 21:38 UTC (permalink / raw)
  To: Jon Nelson; +Cc: Linux-Raid

On Monday December 15, jnelson-linux-raid@jamponi.net wrote:
> A follow-up to an earlier post about weird slowness with RAID10,f2 and
> 3 drives. This morning's "check" operation is proceeding very slowly,
> for some reason.
> 
> dstat is showing 14-15MB/s worth of read I/O (0 or negligible write
> I/O) on each of the 3 drives which comprise the raid10,f2.
> 
> According to /proc/mdstat this is the current rate:
> 
>       [================>....]  check = 82.9% (381575040/460057152)
> finish=60.6min speed=21554K/sec
> 
> The sync_speed_min is 40000, sync_speed_max is 200000, and there is no
> other I/O on the system to speak of.
> 
> blockdev shows:
> 
> blockdev --getra /dev/sdb /dev/sdc /dev/sde
> 256
> 256
> 256
> 
> I just now tried setting it to 65536 (64K sectors, i.e. 32 MiB of
> readahead) on each device, and that did not seem to make much difference.
> 
> As you may recall from earlier posts, these drives are easily capable
> of two or three times this rate, even at the inner tracks (70-80 MB/s
> each on the outer tracks, 35-40 MB/s on the inner tracks).
> 
> What might be going on here?

If you think about exactly which blocks of which drives md will have
to read, and in which order, you will see that each drive is seeking
half the size of the disk very often.  Exactly how often would depend
on chunk size and the depth of the queue in the elevator, but it would
probably read several hundred K from early in the disk, then several
hundred from half-way in, then back to start etc.  This would be
expected to be slow.
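
(A rough worked example: each of these ~300 GB devices holds primary
chunks in its first half and far copies in its second half, so a check
that walks the array in logical order asks each drive for reads
alternately from near its start and from past its midpoint - a seek
across roughly 150 GB of platter, over and over.)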

NeilBrown


* Re: Awful Raid10,f2 performance
  2008-12-15 21:38       ` Neil Brown
@ 2008-12-16  2:47         ` Jon Nelson
  2008-12-16  4:03           ` Keld Jørn Simonsen
  0 siblings, 1 reply; 13+ messages in thread
From: Jon Nelson @ 2008-12-16  2:47 UTC (permalink / raw)
  To: Neil Brown; +Cc: Linux-Raid

On Mon, Dec 15, 2008 at 3:38 PM, Neil Brown <neilb@suse.de> wrote:
> On Monday December 15, jnelson-linux-raid@jamponi.net wrote:
>> A follow-up to an earlier post about weird slowness with RAID10,f2 and
>> 3 drives. This morning's "check" operation is proceeding very slowly,
>> for some reason.

...

>> What might be going on here?
>
> If you think about exactly which blocks of which drives md will have
> to read, and in which order, you will see that each drive is seeking
> half the size of the disk very often.  Exactly how often would depend
> on chunk size and the depth of the queue in the elevator, but it would
> probably read several hundred K from early in the disk, then several
> hundred from half-way in, then back to start etc.  This would be
> expected to be slow.

An excellent explanation, I think.

However, not to add fuel to the fire, but would an alternate 'check'
(and resync and recover) algorithm possibly work better?

Instead of reading each logical block from start to finish (and
comparing it against the N copies), one *could* start with device 0
and read all of the non-mirror chunks (in order) but only from that
device, comparing against other copies. Then md could proceed to the
next device and so on until all devices have been iterated through.
The advantage of this algorithm is that, unless you have > 1 copy of
the data on the *same* device, the seeking will be minimized and you
could get substantially higher sustained read rates (and less wear and
tear).
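
(Concretely, if I have the f2 layout right: the primary chunks in the
first half of disk 0 have their far copies laid out sequentially in the
second half of disk 1, so each pass could stream two disks in parallel
and do the comparison in memory, with essentially no long seeks.)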

-- 
Jon


* Re: Awful Raid10,f2 performance
  2008-12-16  2:47         ` Jon Nelson
@ 2008-12-16  4:03           ` Keld Jørn Simonsen
  2008-12-16  4:28             ` Jon Nelson
  0 siblings, 1 reply; 13+ messages in thread
From: Keld Jørn Simonsen @ 2008-12-16  4:03 UTC (permalink / raw)
  To: Jon Nelson; +Cc: Neil Brown, Linux-Raid

On Mon, Dec 15, 2008 at 08:47:24PM -0600, Jon Nelson wrote:
> On Mon, Dec 15, 2008 at 3:38 PM, Neil Brown <neilb@suse.de> wrote:
> > On Monday December 15, jnelson-linux-raid@jamponi.net wrote:
> >> A follow-up to an earlier post about weird slowness with RAID10,f2 and
> >> 3 drives. This morning's "check" operation is proceeding very slowly,
> >> for some reason.
> 
> ...
> 
> >> What might be going on here?
> >
> > If you think about exactly which blocks of which drives md will have
> > to read, and in which order, you will see that each drive is seeking
> > half the size of the disk very often.  Exactly how often would depend
> > on chunk size and the depth of the queue in the elevator, but it would
> > probably read several hundred K from early in the disk, then several
> > hundred from half-way in, then back to start etc.  This would be
> > expected to be slow.
> 
> An excellent explanation, I think.
> 
> However, not to add fuel to the fire, but would an alternate 'check'
> (and resync and recover) algorithm possibly work better?
> 
> Instead of reading each logical block from start to finish (and
> comparing it against the N copies), one *could* start with device 0
> and read all of the non-mirror chunks (in order) but only from that
> device, comparing against other copies. Then md could proceed to the
> next device and so on until all devices have been iterated through.
> The advantage of this algorithm is that, unless you have > 1 copy of
> the data on the *same* device, the seeking will be minimized and you
> could get substantially higher sustained read rates (and less wear and
> tear).

There was a patch to speed up the raid10,f2 check in a recent kernel,
something like 2.6.27. It improved throughput from something like 40%
to about 90%. What kernel are you using?

best regards
keld


* Re: Awful Raid10,f2 performance
  2008-12-16  4:03           ` Keld Jørn Simonsen
@ 2008-12-16  4:28             ` Jon Nelson
  2008-12-16 10:10               ` Keld Jørn Simonsen
  0 siblings, 1 reply; 13+ messages in thread
From: Jon Nelson @ 2008-12-16  4:28 UTC (permalink / raw)
  To: Keld Jørn Simonsen; +Cc: Neil Brown, Linux-Raid

On Mon, Dec 15, 2008 at 10:03 PM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:

> There was a patch to speed up the raid10,f2 check in a recent kernel,
> something like 2.6.27. It improved throughput from something like 40%
> to about 90%. What kernel are you using?

2.6.25.18-0.2-default

-- 
Jon


* Re: Awful Raid10,f2 performance
  2008-12-16  4:28             ` Jon Nelson
@ 2008-12-16 10:10               ` Keld Jørn Simonsen
  2008-12-16 15:26                 ` Jon Nelson
  0 siblings, 1 reply; 13+ messages in thread
From: Keld Jørn Simonsen @ 2008-12-16 10:10 UTC (permalink / raw)
  To: Jon Nelson; +Cc: Neil Brown, Linux-Raid

On Mon, Dec 15, 2008 at 10:28:30PM -0600, Jon Nelson wrote:
> On Mon, Dec 15, 2008 at 10:03 PM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
> 
> > There was a patch to speed up the raid10,f2 check in a recent kernel,
> > something like 2.6.27. It improved throughput from something like 40%
> > to about 90%. What kernel are you using?
> 
> 2.6.25.18-0.2-default

I believe the patch arrived in a later kernel. If you can try it out
with a vanilla kernel and report, I think it would be quite interesting.

best regards
keld


* Re: Awful Raid10,f2 performance
  2008-12-16 10:10               ` Keld Jørn Simonsen
@ 2008-12-16 15:26                 ` Jon Nelson
  2008-12-16 15:53                   ` Jon Nelson
  0 siblings, 1 reply; 13+ messages in thread
From: Jon Nelson @ 2008-12-16 15:26 UTC (permalink / raw)
  To: Keld Jørn Simonsen; +Cc: Neil Brown, Linux-Raid

On Tue, Dec 16, 2008 at 4:10 AM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
> On Mon, Dec 15, 2008 at 10:28:30PM -0600, Jon Nelson wrote:
>> On Mon, Dec 15, 2008 at 10:03 PM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
>>
>> > There was a patch to speed up the raid10,f2 check in a recent kernel,
>> > something like 2.6.27. It improved throughput from something like 40%
>> > to about 90%. What kernel are you using?
>>
>> 2.6.25.18-0.2-default
>
> I believe the patch arrived in a later kernel. If you can try it out
> with a vanilla kernel and report, I think it would be quite interesting.

I'm pretty sure I did try it out and did see an improvement.
However, thanks to the power of the openSUSE build service, I am going
to build a kernel right now, if I can.

-- 
Jon


* Re: Awful Raid10,f2 performance
  2008-12-16 15:26                 ` Jon Nelson
@ 2008-12-16 15:53                   ` Jon Nelson
  2008-12-16 22:01                     ` Keld Jørn Simonsen
  0 siblings, 1 reply; 13+ messages in thread
From: Jon Nelson @ 2008-12-16 15:53 UTC (permalink / raw)
  To: Keld Jørn Simonsen; +Cc: Neil Brown, Linux-Raid

On Tue, Dec 16, 2008 at 9:26 AM, Jon Nelson
<jnelson-linux-raid@jamponi.net> wrote:
> On Tue, Dec 16, 2008 at 4:10 AM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
>> On Mon, Dec 15, 2008 at 10:28:30PM -0600, Jon Nelson wrote:
>>> On Mon, Dec 15, 2008 at 10:03 PM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
>>>
>>> > There was a patch to speed up the raid10,f2 check in a recent kernel,
>>> > something like 2.6.27. It improved throughput from something like 40%
>>> > to about 90%. What kernel are you using?
>>>
>>> 2.6.25.18-0.2-default
>>
>> I believe the patch arrived in a later kernel. If you can try it out
>> with a vanilla kernel and report, I think it would be quite interesting.
>
> I'm pretty sure I did try it out and did see an improvement.
> However, thanks to the power of the openSUSE build service, I am going
> to build a kernel right now, if I can.

Scratch that. I decided to just do this at home. The SUSE kernel
2.6.25.18-0.2 appears to have this patch already; thus, I've been
running with it all along.

-- 
Jon


* Re: Awful Raid10,f2 performance
  2008-12-16 15:53                   ` Jon Nelson
@ 2008-12-16 22:01                     ` Keld Jørn Simonsen
  0 siblings, 0 replies; 13+ messages in thread
From: Keld Jørn Simonsen @ 2008-12-16 22:01 UTC (permalink / raw)
  To: Jon Nelson; +Cc: Neil Brown, Linux-Raid

On Tue, Dec 16, 2008 at 09:53:39AM -0600, Jon Nelson wrote:
> On Tue, Dec 16, 2008 at 9:26 AM, Jon Nelson
> <jnelson-linux-raid@jamponi.net> wrote:
> > On Tue, Dec 16, 2008 at 4:10 AM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
> >> On Mon, Dec 15, 2008 at 10:28:30PM -0600, Jon Nelson wrote:
> >>> On Mon, Dec 15, 2008 at 10:03 PM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
> >>>
> >>> > There was a patch to speed up the raid10,f2 check in a recent kernel,
> >>> > something like 2.6.27. It improved throughput from something like 40%
> >>> > to about 90%. What kernel are you using?
> >>>
> >>> 2.6.25.18-0.2-default
> >>
> >> I believe the patch arrived in a later kernel. If you can try it out
> >> with a vanilla kernel and report, I think it would be quite interesting.
> >
> > I'm pretty sure I did try it out and did see an improvement.
> > However, thanks to the power of the openSUSE build service, I am going
> > to build a kernel right now, if I can.
> 
> Scratch that. I decided to just do this at home. The SUSE kernel
> 2.6.25.18-0.2 appears to have this patch already; thus, I've been
> running with it all along.

OK, well, then the patch did not apply to check, but only to resync.
One could write a similar patch for check, like the one that was done
for resync. The idea of the previous patch was to read fairly large
amounts of data in a striped way, say 10-20 MB at a time.
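
I.e., something like: read a 10-20 MB window of primary chunks
sequentially from one disk, then the matching window of far copies
(which are likewise sequential on a neighbouring disk), and compare the
two in memory - one long seek per window instead of one per chunk.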

best regards
keld

