* Re: DM-Cache Writeback & Direct IO?
[not found] ` <CAEZ+n-j_9QGWi7pdZY71VAmF-tnuRWb_8neWBd3iKMTm-yQU1A@mail.gmail.com>
@ 2015-07-08 17:55 ` Leonardo Santos
2015-07-09 15:51 ` Christoph Nelles
0 siblings, 1 reply; 6+ messages in thread
From: Leonardo Santos @ 2015-07-08 17:55 UTC (permalink / raw)
To: dm-devel, evilazrael
Are you using a sequential or a random workload?
It's important, since dm-cache bypasses sequential I/O based on a threshold.
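With the mq policy (smq ignores these tunables) the threshold can be adjusted on a live target; a sketch, assuming the cache target is named dmcache:

  # raise/lower the number of contiguous IOs before a stream counts as sequential
  dmsetup message dmcache 0 sequential_threshold 512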
* Re: DM-Cache Writeback & Direct IO?
2015-07-08 17:55 ` DM-Cache Writeback & Direct IO? Leonardo Santos
@ 2015-07-09 15:51 ` Christoph Nelles
0 siblings, 0 replies; 6+ messages in thread
From: Christoph Nelles @ 2015-07-09 15:51 UTC (permalink / raw)
To: device-mapper development
Hi Leonardo,
I am aware that sequential IO is routed directly to the backing device,
and I also played with parameters like sequential_threshold and the
write_promote_adjustment introduced in 3.14.
To test this I ran ./fio --name=test --filename=/dev/mapper/dmcache
--rw=randwrite --filesize=16G --bs=64k --ioengine=libaio --direct=1
--iodepth=1000 in an endless loop.
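In shell terms, the loop was essentially:

  # rerun the same random-write job until interrupted
  while true; do
    ./fio --name=test --filename=/dev/mapper/dmcache --rw=randwrite \
          --filesize=16G --bs=64k --ioengine=libaio --direct=1 --iodepth=1000
  done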
Regards
Christoph
On 08.07.2015 at 19:55, Leonardo Santos wrote:
> Are you using a sequential or a random workload?
> It's important, since dm-cache bypasses sequential I/O based on a threshold.
* Re: DM-Cache Writeback & Direct IO?
2015-07-08 14:48 ` Joe Thornber
2015-07-09 16:51 ` Christoph Nelles
@ 2015-07-21 13:51 ` Christoph Nelles
1 sibling, 0 replies; 6+ messages in thread
From: Christoph Nelles @ 2015-07-21 13:51 UTC (permalink / raw)
To: device-mapper development
Hello Joe,
On 08.07.2015 at 16:48, Joe Thornber wrote:
> Could you try with the latest kernels and the new smq policy please?
I changed my test a little: after warming up and waiting a couple
of minutes until there was no activity, I reran the same test. The result
is that the writes are split roughly 2/3 to the caching device and 1/3 to
the backing device. The backing device is an HDD RAID utilized at
100%, so it is throttling fio down.
In bcache I could force the full IO to be written to the cache by setting
the congested_*_threshold_us knobs to 0, and that worked quite well to
accelerate new writes without any warm-up. It looks like dm-cache can only
do hotspot acceleration, right?
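The bcache knobs in question, a sketch with <cache-set-uuid> standing in for the actual cache set directory:

  # 0 disables bcache's congestion bypass, so IO keeps going to the cache
  echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_read_threshold_us
  echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_write_threshold_us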
My test was dm-cache with smq, with sde (a fast SSD RAID0) used for
cache and metadata and sdb used as the backing device. This time I put XFS
on the device and ran fio repeatedly on the same file without deleting it,
so the same HDD blocks should be involved.
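A table along these lines would produce such a setup; a sketch, where the sde partitions for metadata (/dev/sde1) and cache (/dev/sde2) are assumed, and the length comes from the status output below:

  dmsetup create dmcache --table \
    "0 6442450943 cache /dev/sde1 /dev/sde2 /dev/sdb 512 1 writeback smq 0"
  # fields: start length cache <metadata dev> <cache dev> <origin dev>
  #         <block size in sectors> <#features> <features> <policy> <#policy args>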
The fio command is ./fio --name=test --filename=/mnt/somefile.dat
--rw=randwrite --filesize=16G --bs=64k --ioengine=libaio --direct=1
--iodepth=1000
The final statistics are:
WRITE: io=16384MB, aggrb=158933KB/s, minb=158933KB/s, maxb=158933KB/s,
mint=105561msec, maxt=105561msec
Disk stats (read/write):
dm-2: ios=0/262149, merge=0/0, ticks=0/103875140,
in_queue=104066916, util=100.00%, aggrios=175/131307, aggrmerge=232/243,
aggrticks=9982/7629026, aggrin_queue=7638856, aggrutil=100.00%
sdb: ios=72/88760, merge=132/344, ticks=19276/15245864,
in_queue=15265104, util=100.00%
sde: ios=279/173854, merge=333/143, ticks=688/12188, in_queue=12608,
util=10.03%
The dmcache status before the test was
dmcache: 0 6442450943 cache 8 8915/4161600 512 114081/2913051 1757 121
90280078 90791892 0 25373 24692 1 writeback 2 migration_threshold 2048
smq 0 rw
and after the test:
dmcache: 0 6442450943 cache 8 8915/4161600 512 114132/2913051 1757 121
90453546 90867517 0 25424 36538 1 writeback 2 migration_threshold 2048
smq 0 rw
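Per the kernel's Documentation/device-mapper/cache.txt, those fields decode roughly as:

  8                           metadata block size (sectors)
  8915/4161600                used/total metadata blocks
  512                         cache block size (sectors)
  114132/2913051              used/total cache blocks
  1757 121                    read hits / read misses
  90453546 90867517           write hits / write misses
  0 25424                     demotions / promotions
  36538                       dirty cache blocks
  1 writeback                 number of feature args, feature args
  2 migration_threshold 2048  number of core args, core args
  smq 0                       policy name, number of policy args
  rw                          metadata access mode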
Kind regards
Christoph
* Re: DM-Cache Writeback & Direct IO?
2015-07-08 14:48 ` Joe Thornber
@ 2015-07-09 16:51 ` Christoph Nelles
2015-07-21 13:51 ` Christoph Nelles
1 sibling, 0 replies; 6+ messages in thread
From: Christoph Nelles @ 2015-07-09 16:51 UTC (permalink / raw)
To: device-mapper development
Hello Joe,
I updated to the latest kernel from Ubuntu mainline (4.2.0-040200rc1)
and will rerun the same fio command overnight. Currently it shows
the same behaviour. The fio command I use is
fio --name=test --filename=/dev/mapper/dmcache --rw=randwrite
--filesize=16G --bs=64k --ioengine=libaio --direct=1 --iodepth=1000
After 4 runs, the status for the device mapper looks like this:
dmcache: 0 6442450943 cache 8 6029/4161600 512 2283/2913051 1517 119
25120 1294780 0 2283 1484 1 writeback 2 migration_threshold 2048 smq 0 rw
Regards
Christoph
On 08.07.2015 at 16:48, Joe Thornber wrote:
> On Wed, Jul 08, 2015 at 11:03:42AM +0200, Christoph Nelles wrote:
>> Hi,
>>
>> Can the dm-cache writeback cache buffer direct IO, or does direct IO
>> always go immediately to the origin device?
> Yes, the device mapper layer knows nothing about direct IO vs page
> cache IO.
>
>> I am currently performing tests and see that random direct write IO
>> never hits the cache, even after overwriting the same region over
>> and over.
> Could you try with the latest kernels and the new smq policy please?
>
> - Joe
* Re: DM-Cache Writeback & Direct IO?
2015-07-08 9:03 Christoph Nelles
@ 2015-07-08 14:48 ` Joe Thornber
2015-07-09 16:51 ` Christoph Nelles
2015-07-21 13:51 ` Christoph Nelles
0 siblings, 2 replies; 6+ messages in thread
From: Joe Thornber @ 2015-07-08 14:48 UTC (permalink / raw)
To: device-mapper development
On Wed, Jul 08, 2015 at 11:03:42AM +0200, Christoph Nelles wrote:
> Hi,
>
> Can the dm-cache writeback cache buffer direct IO, or does direct IO
> always go immediately to the origin device?
Yes, the device mapper layer knows nothing about direct IO vs page
cache IO.
> I am currently performing tests and see that random direct write IO
> never hits the cache, even after overwriting the same region over
> and over.
Could you try with the latest kernels and the new smq policy please?
- Joe
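(Switching an existing target to smq amounts to reloading its table with the new policy name; a sketch, with device names and length assumed from later in this thread:

  dmsetup suspend dmcache
  dmsetup reload dmcache --table \
    "0 6442450943 cache /dev/sde1 /dev/sde2 /dev/sdb 512 1 writeback smq 0"
  dmsetup resume dmcache   # resume swaps in the newly loaded table
)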
* DM-Cache Writeback & Direct IO?
@ 2015-07-08 9:03 Christoph Nelles
2015-07-08 14:48 ` Joe Thornber
0 siblings, 1 reply; 6+ messages in thread
From: Christoph Nelles @ 2015-07-08 9:03 UTC (permalink / raw)
To: dm-devel
Hi,
Can the dm-cache writeback cache buffer direct IO, or does direct IO
always go immediately to the origin device?
I am currently performing tests and see that random direct write IO
never hits the cache, even after overwriting the same region over and over.
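For example, repeatedly rewriting the first 64 MiB of the target with direct IO; a sketch assuming the cache target is /dev/mapper/dmcache:

  # each pass rewrites the same region starting at offset 0, bypassing the page cache
  while true; do
    dd if=/dev/zero of=/dev/mapper/dmcache bs=64k count=1024 oflag=direct
  done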
Kind Regards
Christoph Nelles