From: Patrick Dung
Subject: Re: About dm-integrity layer and fsync
Date: Sun, 5 Jan 2020 20:20:53 +0800
To: Mikulas Patocka
Cc: dm-devel@redhat.com

OK, I see. Thanks, Mikulas, for the explanation.

On Sun, Jan 5, 2020 at 5:39 PM Mikulas Patocka wrote:
>
>
> On Sat, 4 Jan 2020, Patrick Dung wrote:
>
> > Thanks for the reply. After performing additional testing with an SSD, I
> > have more questions.
> >
> > Firstly, about the additional testing with the SSD:
> > I tested it with an SSD (in a Linux software RAID level 10 setup). The result
> > shows that using dm-integrity is faster than using XFS directly. With
> > dm-integrity, fio shows lots of I/O merges by the scheduler. Please find the
> > attachment for the result.
> >
> > Finally, please find the questions below:
> > 1) So only after the dm-integrity journal is written to the actual back-end
> > storage (hard drive) does fsync report completion?
>
> Yes.
>
> > 2) To my understanding, when using dm-integrity in journal mode, data
> > has to be written to the storage device twice (one part is the dm-integrity
> > journal, the other is the actual data). For the fio test, the writes were random
> > and sustained for 60 seconds. But using dm-integrity in journal mode is still
> > faster.
> >
> > Thanks,
> > Patrick
>
> With ioengine=sync, fio sends one I/O, waits for it to finish, sends
> another I/O, waits for it to finish, etc.
>
> With dm-integrity, I/Os are written to the journal (which is held in
> memory, so no disk I/O is done), and when fio issues the sync(), fsync() or
> fdatasync() syscall, the journal is written to the disk. After the journal
> is flushed, the blocks are written concurrently to their disk locations.
>
> The SSD has better performance for concurrent writes than for
> block-by-block writes, so that's why you see a performance improvement with
> dm-integrity.
>
> Mikulas
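[Editor's note: for readers wanting to reproduce the comparison, a fio job along these lines would exercise the sync-write pattern discussed above. The section name, size, and directory are illustrative, not Patrick's exact job file.]

```ini
; Random synchronous writes, sustained for 60 seconds, with an
; fsync after every write so each I/O waits for durability before
; the next one is issued (the behavior Mikulas describes).
[dm-integrity-sync-test]
ioengine=sync
rw=randwrite
bs=4k
size=1g
runtime=60
time_based=1
fsync=1
directory=/mnt/test   ; filesystem mounted on top of the dm-integrity device
```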