* --status-interval output race
@ 2014-10-21 23:54 Michael Mattsson
  2014-10-22 15:44 ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Mattsson @ 2014-10-21 23:54 UTC (permalink / raw)
  To: fio

[-- Attachment #1: Type: text/plain, Size: 1901 bytes --]

Hey,
I'm trying to get predictable output from --status-interval. I believe
there's some output racing going on, and I can't think of a way to
work around it at this point. Is there a way to redirect the
--status-interval dumps to a separate file and have only the final
summary go to --output? That would be useful.
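
For illustration, the kind of split I'm after, which today I can only
approximate with post-processing (untested sketch; it assumes every
interval dump is a single terse line and that the final summary is the
very last line, which is exactly what isn't reliable right now):

  # dump everything to one scratch file, then peel the summary off the end
  fio --minimal ... --status-interval=5 --runtime=60 > /tmp/fio_raw.dat
  head -n -1 /tmp/fio_raw.dat > fio_intervals.dat   # periodic dumps (GNU head)
  tail -n 1  /tmp/fio_raw.dat > fio_summary.dat     # final summary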

Anyhow.

Attached is a side-by-side diff of the output from a custom --minimal
parser; this is the command line used: fio --minimal --direct=1 --group_reporting
--filesize=100m --norandommap --blocksize=32k --time_based --iodepth=1
--ramp_time=0 --ioengine=libaio --status-interval=5
--percentage_random=100 --name=randrw-workload --rw=randrw
--rwmixread=0 --numjobs=2 --filename=/fut1/rs-dev2_2:/fut2/rs-dev2_2
--runtime=60 --output fio_w.dat

What happens is that intermittently I get a <1s splurge on the last
line, which is supposed to be the final output, and it screws up the
summary fields, more explicitly:

Bandwidth KB
IOPS
Runtime (ms)

Sometimes it happens with this workload as well, but more
infrequently: fio --minimal --direct=1 --group_reporting
--filesize=100m --norandommap --blocksize=32k --time_based --iodepth=1
--ramp_time=0 --ioengine=libaio --status-interval=5
--percentage_random=100 --name=rainscaler-randrw --rw=randrw
--rwmixread=100 --numjobs=2 --filename=/fut1/rs-dev2_1:/fut2/rs-dev2_1
--runtime=60 --output fio_r.dat

What is good to know here is that both of these run in parallel as two
independent fio jobs on the same host, in case there is some
communication between independently started fio processes on the same
host that promotes this behavior. I can't use /tmp/fio-dump-status as
there would be a race between the master processes over who unlinks the
file first.

fio-2.1.4 and fio-2.1.13 on CentOS 6.5.

Thanks!
Michael

PS. The other --status-interval issue I reported a few months back I
haven't gotten to yet, as it has only occurred on my "production" rig
and this is just my dev environment.

[-- Attachment #2: diff.txt --]
[-- Type: text/plain, Size: 5568 bytes --]

[admin@rsm-dev1 ~]$ diff --side-by-side 1 2
Overview							Overview
 1:3:terse version						 1:3:terse version
 2:fio-2.1.4:fio version					 2:fio-2.1.4:fio version
 3:rainscaler-randrw:name					 3:rainscaler-randrw:name
 4:0:group id							 4:0:group id
 5:0:error							 5:0:error
Read Status							Read Status
 6:0:Total I/O KB						 6:0:Total I/O KB
 7:0:Bandwidth KB						 7:0:Bandwidth KB
 8:0:IOPS							 8:0:IOPS
 9:0:Runtime (ms)						 9:0:Runtime (ms)
Submission latency						Submission latency
 10:0:min							 10:0:min
 11:0:max							 11:0:max
 12:0.000000:mean						 12:0.000000:mean
 13:0.000000:stdev						 13:0.000000:stdev
Completion latency						Completion latency
 14:0:min							 14:0:min
 15:0:max							 15:0:max
 16:0.000000:mean						 16:0.000000:mean
 17:0.000000:stdev						 17:0.000000:stdev
Completion latency percentiles (20 fields)			Completion latency percentiles (20 fields)
 18:1.000000%=0:						 18:1.000000%=0:
 19:5.000000%=0:						 19:5.000000%=0:
 20:10.000000%=0:						 20:10.000000%=0:
 21:20.000000%=0:						 21:20.000000%=0:
 22:30.000000%=0:						 22:30.000000%=0:
 23:40.000000%=0:						 23:40.000000%=0:
 24:50.000000%=0:						 24:50.000000%=0:
 25:60.000000%=0:						 25:60.000000%=0:
 26:70.000000%=0:						 26:70.000000%=0:
 27:80.000000%=0:						 27:80.000000%=0:
 28:90.000000%=0:						 28:90.000000%=0:
 29:95.000000%=0:						 29:95.000000%=0:
 30:99.000000%=0:						 30:99.000000%=0:
 31:99.500000%=0:						 31:99.500000%=0:
 32:99.900000%=0:						 32:99.900000%=0:
 33:99.950000%=0:						 33:99.950000%=0:
 34:99.990000%=0:						 34:99.990000%=0:
 35:0%=0:							 35:0%=0:
 36:0%=0:							 36:0%=0:
 37:0%=0:							 37:0%=0:
Total latency							Total latency
 38:0:min							 38:0:min
 39:0:max							 39:0:max
 40:0.000000:mean						 40:0.000000:mean
 41:0.000000:stdev						 41:0.000000:stdev
Bandwidth							Bandwidth
 42:0:min							 42:0:min
 43:0:max							 43:0:max
 44:0.000000%:aggregate % of total				 44:0.000000%:aggregate % of total
 45:0.000000:mean						 45:0.000000:mean
 46:0.000000:stdev						 46:0.000000:stdev
Write status							Write status
 47:313248:Total IO KB						 47:313248:Total IO KB
 48:5161:Bandwidth KB					      |	 48:435671:Bandwidth KB
 49:161:IOPS						      |	 49:13614:IOPS
 50:60693:runtime (ms)					      |	 50:719:runtime (ms)
Submission latency						Submission latency
 51:0:min							 51:0:min
 52:2637:max							 52:2637:max
 53:25.125549:mean						 53:25.125549:mean
 54:101.861732:stdev						 54:101.861732:stdev
Completion latency						Completion latency
 55:3792:min							 55:3792:min
 56:1168992:max							 56:1168992:max
 57:12352.673613:mean						 57:12352.673613:mean
 58:27804.289485:stdev						 58:27804.289485:stdev
Completion latency percentiles (20 fields)			Completion latency percentiles (20 fields)
 59:1.000000%=5472:						 59:1.000000%=5472:
 60:5.000000%=6368:						 60:5.000000%=6368:
 61:10.000000%=7136:						 61:10.000000%=7136:
 62:20.000000%=8256:						 62:20.000000%=8256:
 63:30.000000%=8640:						 63:30.000000%=8640:
 64:40.000000%=9024:						 64:40.000000%=9024:
 65:50.000000%=9408:						 65:50.000000%=9408:
 66:60.000000%=9920:						 66:60.000000%=9920:
 67:70.000000%=10688:						 67:70.000000%=10688:
 68:80.000000%=11712:						 68:80.000000%=11712:
 69:90.000000%=13760:						 69:90.000000%=13760:
 70:95.000000%=16320:						 70:95.000000%=16320:
 71:99.000000%=64256:						 71:99.000000%=64256:
 72:99.500000%=130560:						 72:99.500000%=130560:
 73:99.900000%=428032:						 73:99.900000%=428032:
 74:99.950000%=468992:						 74:99.950000%=468992:
 75:99.990000%=1171456:						 75:99.990000%=1171456:
 76:0%=0:							 76:0%=0:
 77:0%=0:							 77:0%=0:
 78:0%=0:							 78:0%=0:
Total latency							Total latency
 79:3798:min							 79:3798:min
 80:1169002:max							 80:1169002:max
 81:12378.944019:mean						 81:12378.944019:mean
 82:27805.729411:stdev						 82:27805.729411:stdev
Bandwidth							Bandwidth
 83:183:min							 83:183:min
 84:3583:max							 84:3583:max
 85:51.432252%:aggreate % of total			      |	 85:0.609271%:aggreate % of total
 86:2654.418502:mean						 86:2654.418502:mean
 87:901.038597:stdev						 87:901.038597:stdev
CPU Usage							CPU Usage
 88:0.023915%:user						 88:0.023915%:user
 89:0.396659%:system						 89:0.396659%:system
 90:9808:ctx switches						 90:9808:ctx switches
 91:0:major pf							 91:0:major pf
 92:54:minor pf							 92:54:minor pf
IO depth dist							IO depth dist
 93:100.0%:<=1							 93:100.0%:<=1
 94:0.0%:2							 94:0.0%:2
 95:0.0%:4							 95:0.0%:4
 96:0.0%:8							 96:0.0%:8
 97:0.0%:16							 97:0.0%:16
 98:0.0%:32							 98:0.0%:32
 99:0.0%:>=64							 99:0.0%:>=64
IO latency dist (microseconds)					IO latency dist (microseconds)
 100:0.00%:<=2							 100:0.00%:<=2
 101:0.00%:4							 101:0.00%:4
 102:0.00%:10							 102:0.00%:10
 103:0.00%:20							 103:0.00%:20
 104:0.00%:50							 104:0.00%:50
 105:0.00%:100							 105:0.00%:100
 106:0.00%:250							 106:0.00%:250
 107:0.00%:500							 107:0.00%:500
 108:0.00%:750							 108:0.00%:750
 109:0.00%:1000							 109:0.00%:1000
IO latency dist (ms)						IO latency dist (ms)
 110:0.00%:<=2							 110:0.00%:<=2
 111:0.01%:4							 111:0.01%:4
 112:61.71%:10							 112:61.71%:10
 113:35.08%:20							 113:35.08%:20
 114:1.73%:50							 114:1.73%:50
 115:0.81%:100							 115:0.81%:100
 116:0.40%:250							 116:0.40%:250
 117:0.25%:500							 117:0.25%:500
 118:0.00%:750							 118:0.00%:750
 119:0.00%:1000							 119:0.00%:1000
 120:0.02%:2000							 120:0.02%:2000
 121:0.00%:>=2000						 121:0.00%:>=2000



* Re: --status-interval output race
  2014-10-21 23:54 --status-interval output race Michael Mattsson
@ 2014-10-22 15:44 ` Jens Axboe
  2014-10-22 16:19   ` Michael Mattsson
  0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2014-10-22 15:44 UTC (permalink / raw)
  To: Michael Mattsson, fio

On 10/21/2014 05:54 PM, Michael Mattsson wrote:
> Hey,
> I'm trying to get predictable output from --status-interval. I believe
> there's some output racing going on, and I can't think of a way to
> work around it at this point. Is there a way to redirect the
> --status-interval dumps to a separate file and have only the final
> summary go to --output? That would be useful.
> 
> Anyhow.
> 
> Attached is a side-by-side diff of the output from a custom --minimal
> parser; this is the command line used: fio --minimal --direct=1 --group_reporting
> --filesize=100m --norandommap --blocksize=32k --time_based --iodepth=1
> --ramp_time=0 --ioengine=libaio --status-interval=5
> --percentage_random=100 --name=randrw-workload --rw=randrw
> --rwmixread=0 --numjobs=2 --filename=/fut1/rs-dev2_2:/fut2/rs-dev2_2
> --runtime=60 --output fio_w.dat
> 
> What happens is that intermittently I get a <1s splurge on the last
> line, which is supposed to be the final output, and it screws up the
> summary fields, more explicitly:
> 
> Bandwidth KB
> IOPS
> Runtime (ms)
> 
> Sometimes it happens with this workload as well, but more
> infrequently: fio --minimal --direct=1 --group_reporting
> --filesize=100m --norandommap --blocksize=32k --time_based --iodepth=1
> --ramp_time=0 --ioengine=libaio --status-interval=5
> --percentage_random=100 --name=rainscaler-randrw --rw=randrw
> --rwmixread=100 --numjobs=2 --filename=/fut1/rs-dev2_1:/fut2/rs-dev2_1
> --runtime=60 --output fio_r.dat
> 
> What is good to know here is that both of these run in parallel as two
> independent fio jobs on the same host, in case there is some
> communication between independently started fio processes on the same
> host that promotes this behavior. I can't use /tmp/fio-dump-status as
> there would be a race between the master processes over who unlinks the
> file first.
> 
> fio-2.1.4 and fio-2.1.13 on CentOS 6.5.

Can you see if it works better if you revert commit 83f7b64ea773?
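
Roughly something like this, from a fio git checkout (the revert may
need a trivial conflict fixup on top of current -git):

  git clone git://git.kernel.dk/fio.git
  cd fio
  git revert 83f7b64ea773
  ./configure && make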

-- 
Jens Axboe




* Re: --status-interval output race
  2014-10-22 15:44 ` Jens Axboe
@ 2014-10-22 16:19   ` Michael Mattsson
  2014-10-22 17:37     ` Sitsofe Wheeler
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Mattsson @ 2014-10-22 16:19 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio

Hey,
On a sample of ten runs I had one successful run with commit
83f7b64ea773. Successful here meaning that the last output was the
correct summary.
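
For reference, this is roughly how I'm judging a run (untested sketch;
it assumes the terse fields are semicolon-separated and that write
runtime is field 50, per the field numbering in my parser output above):

  for i in $(seq 1 10); do
      fio --minimal ... --runtime=60 --output fio_w.dat
      # a good run ends with the real summary, i.e. a runtime of ~60000 ms,
      # not one of the <1s interval splurges
      last_runtime=$(tail -n 1 fio_w.dat | awk -F';' '{print $50}')
      echo "run $i: last-line write runtime = ${last_runtime} ms"
  done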

Regards
Michael


On Wed, Oct 22, 2014 at 8:44 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On 10/21/2014 05:54 PM, Michael Mattsson wrote:
>> Hey,
>> I'm trying to get predictable output from --status-interval. I believe
>> there's some output racing going on, and I can't think of a way to
>> work around it at this point. Is there a way to redirect the
>> --status-interval dumps to a separate file and have only the final
>> summary go to --output? That would be useful.
>>
>> Anyhow.
>>
>> Attached is a side-by-side diff of the output from a custom --minimal
>> parser; this is the command line used: fio --minimal --direct=1 --group_reporting
>> --filesize=100m --norandommap --blocksize=32k --time_based --iodepth=1
>> --ramp_time=0 --ioengine=libaio --status-interval=5
>> --percentage_random=100 --name=randrw-workload --rw=randrw
>> --rwmixread=0 --numjobs=2 --filename=/fut1/rs-dev2_2:/fut2/rs-dev2_2
>> --runtime=60 --output fio_w.dat
>>
>> What happens is that intermittently I get a <1s splurge on the last
>> line, which is supposed to be the final output, and it screws up the
>> summary fields, more explicitly:
>>
>> Bandwidth KB
>> IOPS
>> Runtime (ms)
>>
>> Sometimes it happens with this workload as well, but more
>> infrequently: fio --minimal --direct=1 --group_reporting
>> --filesize=100m --norandommap --blocksize=32k --time_based --iodepth=1
>> --ramp_time=0 --ioengine=libaio --status-interval=5
>> --percentage_random=100 --name=rainscaler-randrw --rw=randrw
>> --rwmixread=100 --numjobs=2 --filename=/fut1/rs-dev2_1:/fut2/rs-dev2_1
>> --runtime=60 --output fio_r.dat
>>
>> What is good to know here is that both of these run in parallel as two
>> independent fio jobs on the same host, in case there is some
>> communication between independently started fio processes on the same
>> host that promotes this behavior. I can't use /tmp/fio-dump-status as
>> there would be a race between the master processes over who unlinks the
>> file first.
>>
>> fio-2.1.4 and fio-2.1.13 on CentOS 6.5.
>
> Can you see if it works better if you revert commit 83f7b64ea773?
>
> --
> Jens Axboe
>



* Re: --status-interval output race
  2014-10-22 16:19   ` Michael Mattsson
@ 2014-10-22 17:37     ` Sitsofe Wheeler
  2014-10-22 18:19       ` Michael Mattsson
  0 siblings, 1 reply; 11+ messages in thread
From: Sitsofe Wheeler @ 2014-10-22 17:37 UTC (permalink / raw)
  To: Michael Mattsson; +Cc: Jens Axboe, fio

On 22 October 2014 17:19, Michael Mattsson <michael.mattsson@gmail.com> wrote:
> On a sample of ten runs I had one successful run with commit
> 83f7b64ea773. Successful here meaning that the last output was the
> correct summary.

What were your results with the commit _before_ 83f7b64ea773?

-- 
Sitsofe | http://sucs.org/~sits/



* Re: --status-interval output race
  2014-10-22 17:37     ` Sitsofe Wheeler
@ 2014-10-22 18:19       ` Michael Mattsson
  2014-10-23 15:17         ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Mattsson @ 2014-10-22 18:19 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: Jens Axboe, fio

I reran with 2.1.4 and had the same results. 1 in 10 reported successfully.

Thanks!
Michael

On Wed, Oct 22, 2014 at 10:37 AM, Sitsofe Wheeler <sitsofe@gmail.com> wrote:
> On 22 October 2014 17:19, Michael Mattsson <michael.mattsson@gmail.com> wrote:
>> On a sample of ten runs I had one successful run with commit
>> 83f7b64ea773. Successful here meaning that the last output was the
>> correct summary.
>
> What were your results with the commit _before_ 83f7b64ea773?
>
> --
> Sitsofe | http://sucs.org/~sits/



* Re: --status-interval output race
  2014-10-22 18:19       ` Michael Mattsson
@ 2014-10-23 15:17         ` Jens Axboe
  2014-10-23 21:54           ` Michael Mattsson
  0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2014-10-23 15:17 UTC (permalink / raw)
  To: Michael Mattsson, Sitsofe Wheeler; +Cc: fio

On 10/22/2014 12:19 PM, Michael Mattsson wrote:
> I reran with 2.1.4 and had the same results. 1 in 10 reported successfully.

Can you try current -git? I added some code to properly shut down the
status interval dumps before we finalize everything. It's a cleanup in
any case, but it would be nice to know if it impacts your situation.

-- 
Jens Axboe




* Re: --status-interval output race
  2014-10-23 15:17         ` Jens Axboe
@ 2014-10-23 21:54           ` Michael Mattsson
  2014-10-24 15:44             ` Michael Mattsson
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Mattsson @ 2014-10-23 21:54 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Sitsofe Wheeler, fio

Tried the latest git. No change. However, offsetting the runtime so
that it is not an even multiple of the status interval alleviated the
symptoms somewhat, but not entirely. I got 7 successful runs out of 10
by offsetting (interval 5, runtime 62).

Regards
Michael

On Thu, Oct 23, 2014 at 8:17 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On 10/22/2014 12:19 PM, Michael Mattsson wrote:
>> I reran with 2.1.4 and had the same results. 1 in 10 reported successfully.
>
> Can you try current -git? I added some code to properly shut down the
> status interval dumps before we finalize everything. It's a cleanup in
> any case, but it would be nice to know if it impacts your situation.
>
> --
> Jens Axboe
>



* Re: --status-interval output race
  2014-10-23 21:54           ` Michael Mattsson
@ 2014-10-24 15:44             ` Michael Mattsson
  2014-10-24 15:50               ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Mattsson @ 2014-10-24 15:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Sitsofe Wheeler, fio

Hey,
Is there anything I can do to help you look at this issue?

Thanks!
Michael

On Thu, Oct 23, 2014 at 2:54 PM, Michael Mattsson
<michael.mattsson@gmail.com> wrote:
> Tried the latest git. No change. However, offsetting the runtime so
> that it is not an even multiple of the status interval alleviated the
> symptoms somewhat, but not entirely. I got 7 successful runs out of 10
> by offsetting (interval 5, runtime 62).
>
> Regards
> Michael
>
> On Thu, Oct 23, 2014 at 8:17 AM, Jens Axboe <axboe@kernel.dk> wrote:
>> On 10/22/2014 12:19 PM, Michael Mattsson wrote:
>>> I reran with 2.1.4 and had the same results. 1 in 10 reported successfully.
>>
>> Can you try current -git? I added some code to properly shut down the
>> status interval dumps before we finalize everything. It's a cleanup in
>> any case, but it would be nice to know if it impacts your situation.
>>
>> --
>> Jens Axboe
>>



* Re: --status-interval output race
  2014-10-24 15:44             ` Michael Mattsson
@ 2014-10-24 15:50               ` Jens Axboe
  2014-10-24 19:22                 ` Michael Mattsson
  0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2014-10-24 15:50 UTC (permalink / raw)
  To: Michael Mattsson; +Cc: Sitsofe Wheeler, fio

On 2014-10-24 09:44, Michael Mattsson wrote:
> Hey,
> Is there anything I can do to help you look at this issue?

Does it only happen for minimal output, or on "normal" or json output too?

-- 
Jens Axboe




* Re: --status-interval output race
  2014-10-24 15:50               ` Jens Axboe
@ 2014-10-24 19:22                 ` Michael Mattsson
  2014-10-25 16:24                   ` Michael Mattsson
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Mattsson @ 2014-10-24 19:22 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Sitsofe Wheeler, fio

Hey,
I took another stab at reshuffling output control and I've found a
combo that works very consistently. By dropping --output and piping to
tee instead, I get incredibly high accuracy in my 'runtime' output: it
only differs by 1-2 ms across a sample of ~500 runs. Before, this was
literally all over the place (+/- <1000 ms).
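
Concretely, the combo that behaves for me looks roughly like this (same
option list as before, abbreviated here; tee writes the same fio_w.dat):

  fio --minimal --direct=1 --group_reporting ... --status-interval=5 \
      --runtime=60 | tee fio_w.dat > /dev/null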

Could it be that output redirection is blocking when fio writes the
file itself, but asynchronous when piping to tee? The output is being
written to an NFS mount that sits behind several layers of
virtualization, which may cause intermittent latency spikes and explain
my very inconsistent results. Thoughts?

Anyhow, thank you for your work here and I owe you lunch!

Regards
Michael

On Fri, Oct 24, 2014 at 8:50 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On 2014-10-24 09:44, Michael Mattsson wrote:
>>
>> Hey,
>> Is there anything I can do to help you look at this issue?
>
>
> Does it only happen for minimal output, or on "normal" or json output too?
>
> --
> Jens Axboe
>



* Re: --status-interval output race
  2014-10-24 19:22                 ` Michael Mattsson
@ 2014-10-25 16:24                   ` Michael Mattsson
  0 siblings, 0 replies; 11+ messages in thread
From: Michael Mattsson @ 2014-10-25 16:24 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Sitsofe Wheeler, fio

It seems I jumped the gun here. I've been running tests for the last
24 hours and I just had one squirt across multiple clients, rendering
the last tick's data unusable.

Testing this at my scale with another output format requires me to
rewrite my parsers. I might end up doing that, but I'd hoped there was
a simpler workaround for this problem.
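
If I do switch, the parser rewrite would target something like this
(sketch only; it assumes --output-format=json is available in this fio
version, that jq is installed on the host, and that the write-side
fields are named as below):

  fio --output-format=json ... --runtime=60 > fio_w.json
  # pull out write bandwidth/IOPS/runtime per job from each JSON blob
  jq '.jobs[] | {job: .jobname, bw: .write.bw, iops: .write.iops, runtime: .write.runtime}' fio_w.json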

Thanks!
Michael

On Fri, Oct 24, 2014 at 12:22 PM, Michael Mattsson
<michael.mattsson@gmail.com> wrote:
> Hey,
> I took another stab at reshuffling output control and I've found a
> combo that works very consistently. By dropping --output and piping to
> tee instead, I get incredibly high accuracy in my 'runtime' output: it
> only differs by 1-2 ms across a sample of ~500 runs. Before, this was
> literally all over the place (+/- <1000 ms).
>
> Could it be that output redirection is blocking when fio writes the
> file itself, but asynchronous when piping to tee? The output is being
> written to an NFS mount that sits behind several layers of
> virtualization, which may cause intermittent latency spikes and explain
> my very inconsistent results. Thoughts?
>
> Anyhow, thank you for your work here and I owe you lunch!
>
> Regards
> Michael
>
> On Fri, Oct 24, 2014 at 8:50 AM, Jens Axboe <axboe@kernel.dk> wrote:
>> On 2014-10-24 09:44, Michael Mattsson wrote:
>>>
>>> Hey,
>>> Is there anything I can do to help you look at this issue?
>>
>>
>> Does it only happen for minimal output, or on "normal" or json output too?
>>
>> --
>> Jens Axboe
>>



