* device mapper increased latency on RAID array
@ 2016-01-14 16:21 Thanos Makatos
  2016-01-15 21:52 ` Mike Snitzer
  2016-01-15 22:04 ` Heinz Mauelshagen
  0 siblings, 2 replies; 5+ messages in thread
From: Thanos Makatos @ 2016-01-14 16:21 UTC (permalink / raw)
  To: device-mapper development

I noticed that when a linear device mapper target is used on top of a
RAID array (made of SSDs), latency is 3 times higher than accessing
the RAID array itself. Strangely, when I do the same test to an SSD on
the controller that is passed through (not configured in an array),
latency is unaffected.

/dev/sda is the SSD passed through from the RAID controller.
/dev/sdc is the block device in the RAID array.

[ ~]# dmsetup create sda --table "0 $((2**30/512)) linear /dev/sda 0"
[ ~]# dmsetup create sdc --table "0 $((2**30/512)) linear /dev/sdc 0"
[ ~]# echo noop > /sys/block/sda/queue/scheduler
[ ~]# echo noop > /sys/block/sdc/queue/scheduler

[ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sda

--- /dev/sda (block device 186.3 GiB) ioping statistics ---
10 k requests completed in 377.9 ms, 39.1 MiB read, 26.5 k iops, 103.4 MiB/s
min/avg/max/mdev = 31 us / 37 us / 140 us / 20 us

[ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sda

--- /dev/mapper/sda (block device 1 GiB) ioping statistics ---
10 k requests completed in 387.5 ms, 39.1 MiB read, 25.8 k iops, 100.8 MiB/s
min/avg/max/mdev = 36 us / 38 us / 134 us / 5 us

[root@192.168.1.130 ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc

--- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
min/avg/max/mdev = 112 us / 133 us / 226 us / 11 us
[ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sdc

--- /dev/sdc (block device 1.45 TiB) ioping statistics ---
10 k requests completed in 477.8 ms, 39.1 MiB read, 20.9 k iops, 81.7 MiB/s
min/avg/max/mdev = 36 us / 47 us / 158 us / 18 us

[ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc

--- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
min/avg/max/mdev = 111 us / 133 us / 181 us / 11 us

These results are reproduced consistently. I've tried this on kernels
2.6.32-431.29.2.el6.x86_64, 3.10.68-11.el6.centos.alt.x86_64 (CentOS
6), and 4.3 (Debian testing).

I really doubt that there is something wrong with device mapper here,
but I'd like to understand this weird interaction between device mapper
(or maybe the block I/O layer?) and the RAID controller. Any ideas on
how to investigate this?


-- 
Thanos Makatos


* Re: device mapper increased latency on RAID array
  2016-01-14 16:21 device mapper increased latency on RAID array Thanos Makatos
@ 2016-01-15 21:52 ` Mike Snitzer
  2016-01-15 22:04 ` Heinz Mauelshagen
  1 sibling, 0 replies; 5+ messages in thread
From: Mike Snitzer @ 2016-01-15 21:52 UTC (permalink / raw)
  To: Thanos Makatos; +Cc: device-mapper development

On Thu, Jan 14 2016 at 11:21am -0500,
Thanos Makatos <thanos.makatos@onapp.com> wrote:

> I noticed that when a linear device mapper target is used on top of a
> RAID array (made of SSDs), latency is 3 times higher than accessing
> the RAID array itself. Strangely, when I do the same test to an SSD on
> the controller that is passed through (not configured in an array),
> latency is unaffected.
> 
> /dev/sda is the SSD passed through from the RAID controller.
> /dev/sdc is the block device in the RAID array.
> 
> [ ~]# dmsetup create sda --table "0 $((2**30/512)) linear /dev/sda 0"
> [ ~]# dmsetup create sdc --table "0 $((2**30/512)) linear /dev/sdc 0"
> [ ~]# echo noop > /sys/block/sda/queue/scheduler
> [ ~]# echo noop > /sys/block/sdc/queue/scheduler
> 
> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sda
> 
> --- /dev/sda (block device 186.3 GiB) ioping statistics ---
> 10 k requests completed in 377.9 ms, 39.1 MiB read, 26.5 k iops, 103.4 MiB/s
> min/avg/max/mdev = 31 us / 37 us / 140 us / 20 us
> 
> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sda
> 
> --- /dev/mapper/sda (block device 1 GiB) ioping statistics ---
> 10 k requests completed in 387.5 ms, 39.1 MiB read, 25.8 k iops, 100.8 MiB/s
> min/avg/max/mdev = 36 us / 38 us / 134 us / 5 us
> 
> [root@192.168.1.130 ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc
> 
> --- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
> 10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
> min/avg/max/mdev = 112 us / 133 us / 226 us / 11 us
> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sdc
> 
> --- /dev/sdc (block device 1.45 TiB) ioping statistics ---
> 10 k requests completed in 477.8 ms, 39.1 MiB read, 20.9 k iops, 81.7 MiB/s
> min/avg/max/mdev = 36 us / 47 us / 158 us / 18 us
> 
> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc
> 
> --- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
> 10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
> min/avg/max/mdev = 111 us / 133 us / 181 us / 11 us
> 
> These results are reproduced consistently. I've tried this on kernels
> 2.6.32-431.29.2.el6.x86_64, 3.10.68-11.el6.centos.alt.x86_64 (CentOS
> 6), and 4.3 (Debian testing).
> 
> I really doubt that there is something wrong with device mapper here,
> but I'd like to understand this weird interaction between device mapper
> (or maybe the block I/O layer?) and the RAID controller. Any ideas on
> how to investigate this?

I just ran your ioping test against a relatively fast PCIe SSD on a
system with a 4.4.0-rc3 kernel:

# cat /sys/block/fioa/queue/scheduler
[noop] deadline cfq

# ioping -c 10000 -s 4k -i 0 -q -D /dev/fioa

--- /dev/fioa (block device 731.1 GiB) ioping statistics ---
10 k requests completed in 1.71 s, 5.98 k iops, 23.3 MiB/s
min/avg/max/mdev = 26 us / 167 us / 310 us / 53 us

# dmsetup create fioa --table "0 $((2**30/512)) linear /dev/fioa 0"

# ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/fioa

--- /dev/mapper/fioa (block device 1 GiB) ioping statistics ---
10 k requests completed in 1.81 s, 5.65 k iops, 22.1 MiB/s
min/avg/max/mdev = 74 us / 176 us / 321 us / 46 us

So I cannot replicate your high performance (or drop in performance).

I'm struggling to understand how you're seeing the performance you are
from your JBOD and RAID devices.  But then I've never used ioping before
either.

I think this issue is probably very HW-dependent and could be rooted in
some aspect of your controller's cache.  But your IOPS of > 20.9 k seem
_really_ high for this test.

It might be worth trying a more proven load-generator tool (e.g. fio) to
evaluate your hardware...
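
For example, a 4k direct-I/O random-read job at queue depth 1 should be
roughly comparable to your ioping run; something along these lines
(runtime and job name are arbitrary):

# fio --name=randread-lat --filename=/dev/sdc --direct=1 --rw=randread \
      --bs=4k --ioengine=psync --runtime=30 --time_based

fio's clat/lat statistics should then be directly comparable to the
ioping latency numbers.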


* Re: device mapper increased latency on RAID array
  2016-01-14 16:21 device mapper increased latency on RAID array Thanos Makatos
  2016-01-15 21:52 ` Mike Snitzer
@ 2016-01-15 22:04 ` Heinz Mauelshagen
  2016-01-18 10:52   ` Thanos Makatos
  1 sibling, 1 reply; 5+ messages in thread
From: Heinz Mauelshagen @ 2016-01-15 22:04 UTC (permalink / raw)
  To: dm-devel


You can use blktrace to figure out which block device introduces the latency.
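
For example, something along these lines (device path and run time are
just examples) prints every request with per-phase timestamps:

# blktrace -d /dev/sdc -o - | blkparse -i -

or record for a while and let btt summarize the Q2C/D2C latencies:

# blktrace -d /dev/sdc -w 30
# blkparse -i sdc -d sdc.bin
# btt -i sdc.bin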


On 01/14/2016 05:21 PM, Thanos Makatos wrote:
> I noticed that when a linear device mapper target is used on top of a
> RAID array (made of SSDs), latency is 3 times higher than accessing
> the RAID array itself. Strangely, when I do the same test to an SSD on
> the controller that is passed through (not configured in an array),
> latency is unaffected.
>
> /dev/sda is the SSD passed through from the RAID controller.
> /dev/sdc is the block device in the RAID array.
>
> [ ~]# dmsetup create sda --table "0 $((2**30/512)) linear /dev/sda 0"
> [ ~]# dmsetup create sdc --table "0 $((2**30/512)) linear /dev/sdc 0"
> [ ~]# echo noop > /sys/block/sda/queue/scheduler
> [ ~]# echo noop > /sys/block/sdc/queue/scheduler
>
> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sda
>
> --- /dev/sda (block device 186.3 GiB) ioping statistics ---
> 10 k requests completed in 377.9 ms, 39.1 MiB read, 26.5 k iops, 103.4 MiB/s
> min/avg/max/mdev = 31 us / 37 us / 140 us / 20 us
>
> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sda
>
> --- /dev/mapper/sda (block device 1 GiB) ioping statistics ---
> 10 k requests completed in 387.5 ms, 39.1 MiB read, 25.8 k iops, 100.8 MiB/s
> min/avg/max/mdev = 36 us / 38 us / 134 us / 5 us
>
> [root@192.168.1.130 ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc
>
> --- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
> 10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
> min/avg/max/mdev = 112 us / 133 us / 226 us / 11 us
> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sdc
>
> --- /dev/sdc (block device 1.45 TiB) ioping statistics ---
> 10 k requests completed in 477.8 ms, 39.1 MiB read, 20.9 k iops, 81.7 MiB/s
> min/avg/max/mdev = 36 us / 47 us / 158 us / 18 us
>
> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc
>
> --- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
> 10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
> min/avg/max/mdev = 111 us / 133 us / 181 us / 11 us
>
> These results are reproduced consistently. I've tried this on kernels
> 2.6.32-431.29.2.el6.x86_64, 3.10.68-11.el6.centos.alt.x86_64 (CentOS
> 6), and 4.3 (Debian testing).
>
> I really doubt that there is something wrong with device mapper here,
> but I'd like to understand this weird interaction between device mapper
> (or maybe the block I/O layer?) and the RAID controller. Any ideas on
> how to investigate this?
>
>


* Re: device mapper increased latency on RAID array
  2016-01-15 22:04 ` Heinz Mauelshagen
@ 2016-01-18 10:52   ` Thanos Makatos
  2016-01-18 17:58     ` Thanos Makatos
  0 siblings, 1 reply; 5+ messages in thread
From: Thanos Makatos @ 2016-01-18 10:52 UTC (permalink / raw)
  To: device-mapper development

I used fio and got the exact same results, so it doesn't seem to be
tool-related. Indeed, the RAID controller (PERC H730P Mini) and the SSDs
(INTEL SSDSC1BG200G4R) are quite fast. I cannot reproduce this on any
other configuration (e.g. another disk, ramdisk, etc.), so it definitely
has something to do with the controller.

I used blktrace and here's the output, on the SSD passed through by
the RAID controller:

ioping-8868  [002]  5666.780726:   8,32   Q   R 232955928 + 8 [ioping]
ioping-8868  [002]  5666.780731:   8,32   G   R 232955928 + 8 [ioping]
ioping-8868  [002]  5666.780732:   8,32   P   N [ioping]
ioping-8868  [002]  5666.780733:   8,32   I   R 232955928 + 8 [ioping]
ioping-8868  [002]  5666.780734:   8,32   U   N [ioping] 1
ioping-8868  [002]  5666.780735:   8,32   D   R 232955928 + 8 [ioping]
<idle>-0     [014]  5666.780803:   8,32   C   R 232955928 + 8 [0]

And on the RAID array:

ioping-8869  [003]  5673.729427:   8,32   A   R 1696736 + 8 <- (253,1) 1696736
ioping-8869  [003]  5673.729429:   8,32   Q   R 1696736 + 8 [ioping]
ioping-8869  [003]  5673.729431:   8,32   G   R 1696736 + 8 [ioping]
ioping-8869  [003]  5673.729432:   8,32   P   N [ioping]
ioping-8869  [003]  5673.729433:   8,32   I   R 1696736 + 8 [ioping]
ioping-8869  [003]  5673.729434:   8,32   U   N [ioping] 1
ioping-8869  [003]  5673.729435:   8,32   D   R 1696736 + 8 [ioping]
<idle>-0     [001]  5673.729615:   8,32   C   R 1696736 + 8 [0]

In both cases the vast majority of the time is spent doing the I/O
itself (77 us vs. 188 us), as expected, and DM's remap overhead is
negligible. The exact same sectors are accessed, so it doesn't seem
related to alignment etc.
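
For reference, the 77 us and 188 us are the Q-to-C deltas from the
timestamps above:

passthrough SSD: 5666.780803 - 5666.780726 = 77 us  (68 us of it D-to-C)
RAID array:      5673.729615 - 5673.729427 = 188 us (180 us of it D-to-C)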

On 15 January 2016 at 22:04, Heinz Mauelshagen <heinzm@redhat.com> wrote:
>
> You can use blktrace to figure which block device introduces the latencies.
>
>
>
> On 01/14/2016 05:21 PM, Thanos Makatos wrote:
>>
>> I noticed that when a linear device mapper target is used on top of a
>> RAID array (made of SSDs), latency is 3 times higher than accessing
>> the RAID array itself. Strangely, when I do the same test to an SSD on
>> the controller that is passed through (not configured in an array),
>> latency is unaffected.
>>
>> /dev/sda is the SSD passed through from the RAID controller.
>> /dev/sdc is the block device in the RAID array.
>>
>> [ ~]# dmsetup create sda --table "0 $((2**30/512)) linear /dev/sda 0"
>> [ ~]# dmsetup create sdc --table "0 $((2**30/512)) linear /dev/sdc 0"
>> [ ~]# echo noop > /sys/block/sda/queue/scheduler
>> [ ~]# echo noop > /sys/block/sdc/queue/scheduler
>>
>> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sda
>>
>> --- /dev/sda (block device 186.3 GiB) ioping statistics ---
>> 10 k requests completed in 377.9 ms, 39.1 MiB read, 26.5 k iops, 103.4
>> MiB/s
>> min/avg/max/mdev = 31 us / 37 us / 140 us / 20 us
>>
>> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sda
>>
>> --- /dev/mapper/sda (block device 1 GiB) ioping statistics ---
>> 10 k requests completed in 387.5 ms, 39.1 MiB read, 25.8 k iops, 100.8
>> MiB/s
>> min/avg/max/mdev = 36 us / 38 us / 134 us / 5 us
>>
>> [root@192.168.1.130 ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc
>>
>> --- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
>> 10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
>> min/avg/max/mdev = 112 us / 133 us / 226 us / 11 us
>> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sdc
>>
>> --- /dev/sdc (block device 1.45 TiB) ioping statistics ---
>> 10 k requests completed in 477.8 ms, 39.1 MiB read, 20.9 k iops, 81.7
>> MiB/s
>> min/avg/max/mdev = 36 us / 47 us / 158 us / 18 us
>>
>> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc
>>
>> --- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
>> 10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
>> min/avg/max/mdev = 111 us / 133 us / 181 us / 11 us
>>
>> These results are reproduced consistently. I've tried this on kernels
>> 2.6.32-431.29.2.el6.x86_64, 3.10.68-11.el6.centos.alt.x86_64 (CentOS
>> 6), and 4.3 (Debian testing).
>>
>> I really doubt that there is something wrong with device mapper here,
>> but I'd like to understand this weird interaction between device mapper
>> (or maybe the block I/O layer?) and the RAID controller. Any ideas on
>> how to investigate this?
>>
>>
>
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel



-- 
Thanos Makatos


* Re: device mapper increased latency on RAID array
  2016-01-18 10:52   ` Thanos Makatos
@ 2016-01-18 17:58     ` Thanos Makatos
  0 siblings, 0 replies; 5+ messages in thread
From: Thanos Makatos @ 2016-01-18 17:58 UTC (permalink / raw)
  To: device-mapper development

What am I saying, it's not the same sector at all; I need to double-check...

On 18 January 2016 at 10:52, Thanos Makatos <thanos.makatos@onapp.com> wrote:
> I used fio and got the exact same results, so it doesn't seem to be
> tool-related. Indeed the RAID controller (PERC H730P Mini) and SSDs
> (INTEL SSDSC1BG200G4R) are quite fast. I cannot reproduce this on any
> other configuration either (e.g. another disk, ramdisk, etc.) so it
> definitely has something to do with the controller.
>
> I used blktrace and here's the output, on the SSD passed through by
> the RAID controller:
>
> ioping-8868  [002]  5666.780726:   8,32   Q   R 232955928 + 8 [ioping]
> ioping-8868  [002]  5666.780731:   8,32   G   R 232955928 + 8 [ioping]
> ioping-8868  [002]  5666.780732:   8,32   P   N [ioping]
> ioping-8868  [002]  5666.780733:   8,32   I   R 232955928 + 8 [ioping]
> ioping-8868  [002]  5666.780734:   8,32   U   N [ioping] 1
> ioping-8868  [002]  5666.780735:   8,32   D   R 232955928 + 8 [ioping]
> <idle>-0     [014]  5666.780803:   8,32   C   R 232955928 + 8 [0]
>
> And on the RAID array:
>
> ioping-8869  [003]  5673.729427:   8,32   A   R 1696736 + 8 <- (253,1) 1696736
> ioping-8869  [003]  5673.729429:   8,32   Q   R 1696736 + 8 [ioping]
> ioping-8869  [003]  5673.729431:   8,32   G   R 1696736 + 8 [ioping]
> ioping-8869  [003]  5673.729432:   8,32   P   N [ioping]
> ioping-8869  [003]  5673.729433:   8,32   I   R 1696736 + 8 [ioping]
> ioping-8869  [003]  5673.729434:   8,32   U   N [ioping] 1
> ioping-8869  [003]  5673.729435:   8,32   D   R 1696736 + 8 [ioping]
> <idle>-0     [001]  5673.729615:   8,32   C   R 1696736 + 8 [0]
>
> In both cases the vast majority of the time is spent doing the I/O
> itself (77 us vs. 188 us), as expected, and DM's remap overhead is
> negligible. The exact same sectors are accessed, so it doesn't seem
> related to alignment etc.
>
> On 15 January 2016 at 22:04, Heinz Mauelshagen <heinzm@redhat.com> wrote:
>>
>> You can use blktrace to figure which block device introduces the latencies.
>>
>>
>>
>> On 01/14/2016 05:21 PM, Thanos Makatos wrote:
>>>
>>> I noticed that when a linear device mapper target is used on top of a
>>> RAID array (made of SSDs), latency is 3 times higher than accessing
>>> the RAID array itself. Strangely, when I do the same test to an SSD on
>>> the controller that is passed through (not configured in an array),
>>> latency is unaffected.
>>>
>>> /dev/sda is the SSD passed through from the RAID controller.
>>> /dev/sdc is the block device in the RAID array.
>>>
>>> [ ~]# dmsetup create sda --table "0 $((2**30/512)) linear /dev/sda 0"
>>> [ ~]# dmsetup create sdc --table "0 $((2**30/512)) linear /dev/sdc 0"
>>> [ ~]# echo noop > /sys/block/sda/queue/scheduler
>>> [ ~]# echo noop > /sys/block/sdc/queue/scheduler
>>>
>>> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sda
>>>
>>> --- /dev/sda (block device 186.3 GiB) ioping statistics ---
>>> 10 k requests completed in 377.9 ms, 39.1 MiB read, 26.5 k iops, 103.4
>>> MiB/s
>>> min/avg/max/mdev = 31 us / 37 us / 140 us / 20 us
>>>
>>> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sda
>>>
>>> --- /dev/mapper/sda (block device 1 GiB) ioping statistics ---
>>> 10 k requests completed in 387.5 ms, 39.1 MiB read, 25.8 k iops, 100.8
>>> MiB/s
>>> min/avg/max/mdev = 36 us / 38 us / 134 us / 5 us
>>>
>>> [root@192.168.1.130 ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc
>>>
>>> --- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
>>> 10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
>>> min/avg/max/mdev = 112 us / 133 us / 226 us / 11 us
>>> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/sdc
>>>
>>> --- /dev/sdc (block device 1.45 TiB) ioping statistics ---
>>> 10 k requests completed in 477.8 ms, 39.1 MiB read, 20.9 k iops, 81.7
>>> MiB/s
>>> min/avg/max/mdev = 36 us / 47 us / 158 us / 18 us
>>>
>>> [ ~]# ./ioping -c 10000 -s 4k -i 0 -q -D /dev/mapper/sdc
>>>
>>> --- /dev/mapper/sdc (block device 1 GiB) ioping statistics ---
>>> 10 k requests completed in 1.33 s, 39.1 MiB read, 7.50 k iops, 29.3 MiB/s
>>> min/avg/max/mdev = 111 us / 133 us / 181 us / 11 us
>>>
>>> These results are reproduced consistently. I've tried this on kernels
>>> 2.6.32-431.29.2.el6.x86_64, 3.10.68-11.el6.centos.alt.x86_64 (CentOS
>>> 6), and 4.3 (Debian testing).
>>>
>>> I really doubt that there is something wrong with device mapper here,
>>> but I'd like to understand this weird interaction between device mapper
>>> (or maybe the block I/O layer?) and the RAID controller. Any ideas on
>>> how to investigate this?
>>>
>>>
>>
>> --
>> dm-devel mailing list
>> dm-devel@redhat.com
>> https://www.redhat.com/mailman/listinfo/dm-devel
>
>
>
> --
> Thanos Makatos



-- 
Thanos Makatos

