From: Christoph Nelles
Subject: Re: DM-Cache Writeback & Direct IO?
Date: Thu, 09 Jul 2015 17:51:50 +0200
Message-ID: <559E9896.3040305@evilazrael.de>
References: <9E0DEE10BA62134F89BB5A513CA84BC3A1A6B5CA@MAILSERVER01.trt9a.local>
To: device-mapper development <dm-devel@redhat.com>
List-Id: dm-devel.ids

Hi Leonardo,

I am aware that sequential IO is routed directly to the backing device, and I have also played with parameters such as sequential_threshold and the write_promote_adjustment introduced in 3.14.
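
For reference, setting those tunables looked roughly like this (a minimal sketch; the device name matches the fio target below, and the values are only examples, not what I claim is optimal):

    # count a stream as sequential only after this many contiguous I/Os
    dmsetup message /dev/mapper/dmcache 0 sequential_threshold 1024
    # lower the write promote adjustment so random writes are promoted sooner
    dmsetup message /dev/mapper/dmcache 0 write_promote_adjustment 0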
To test this I ran ./fio --name=test --filename=/dev/mapper/dmcache --rw=randwrite --filesize=16G --bs=64k --ioengine=libaio --direct=1 --iodepth=1000 in an endless loop.
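
That is, essentially:

    # hammer the cached device with random 64k direct writes, forever
    while true; do
        ./fio --name=test --filename=/dev/mapper/dmcache --rw=randwrite \
              --filesize=16G --bs=64k --ioengine=libaio --direct=1 --iodepth=1000
    done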

Regards

Christoph


On 08.07.2015 19:55, Leonardo Santos wrote:
> Are you using a sequential or random workload?
> It's important, since DMCache bypasses sequential I/O based on a threshold.


> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel
