* About dm-integrity layer and fsync
@ 2020-01-03 15:51 Patrick Dung
  2020-01-03 17:14 ` Mikulas Patocka
  0 siblings, 1 reply; 5+ messages in thread
From: Patrick Dung @ 2020-01-03 15:51 UTC (permalink / raw)
  To: dm-devel


[-- Attachment #1.1: Type: text/plain, Size: 1574 bytes --]

Hello,

A quick question on dm-integrity: does the dm-integrity layer honor fsync?

I was testing dm-integrity performance and got a strange result: using
dm-integrity with a journal is faster than a plain file system or
dm-integrity with a bitmap (no journal). fio was used to test the storage
performance. The device is a SATA hard disk drive, on which I created a
100GB partition for testing.

Below are the test cases:

1) XFS created directly on the partition.

2) dm-integrity with crc32c on the partition, default settings (journal
commit interval of 10 seconds). Then XFS created on top.

3) dm-integrity with crc32c on the partition, default settings except the
journal commit interval set to 5 seconds. Then XFS created on top.

4) dm-integrity with crc32c on the partition, default settings but using
bitmap mode instead of the journal. Then XFS created on top. (A sketch of
the device setup follows this list.)
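
The dm-integrity devices for cases 2 to 4 could be set up roughly as follows.
This is only a sketch assuming integritysetup from cryptsetup 2.2 or later;
/dev/sdX1 and "integ" are placeholders, and the exact option names or their
placement between format and open may differ:

# Case 2: defaults (journal mode, 10 second commit interval)
integritysetup format /dev/sdX1 --integrity crc32c
integritysetup open /dev/sdX1 integ --integrity crc32c
mkfs.xfs /dev/mapper/integ

# Case 3: as above, but with a 5 second journal commit interval at open time
integritysetup open /dev/sdX1 integ --integrity crc32c --journal-commit-time 5000

# Case 4: bitmap mode instead of the journal
integritysetup format /dev/sdX1 --integrity crc32c --integrity-bitmap-mode
integritysetup open /dev/sdX1 integ --integrity crc32c --integrity-bitmap-mode
mkfs.xfs /dev/mapper/integ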

FIO command:

fio --filename=./t1 --direct=1 --rw=randrw --refill_buffers --norandommap
--randrepeat=0 --ioengine=sync --bs=4k --rwmixread=75 --iodepth=16
--numjobs=8 --runtime=60 --group_reporting --fsync=1 --name=4ktest
--size=4G
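
As a sanity check that --fsync=1 really issues one fsync per write, the same
command can be run under strace (a sketch; strace adds its own overhead, so
this only confirms the syscall pattern and is not a performance measurement):

strace -f -c -e trace=fsync,fdatasync \
    fio --filename=./t1 --direct=1 --rw=randrw --ioengine=sync --bs=4k \
        --rwmixread=75 --numjobs=8 --runtime=60 --fsync=1 --name=4ktest --size=4G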

Result:

   1. Read/Write IOPS: 117/41. Read/Write Speed: 481KB/s / 168KB/s
   2. Read/Write IOPS: 178/59. Read/Write Speed: 732KB/s / 244KB/s
   3. Read/Write IOPS: 169/57. Read/Write Speed: 695KB/s / 236KB/s
   4. Read/Write IOPS: 97/32. Read/Write Speed: 400KB/s / 131KB/s

The original discussion is at
https://gitlab.com/cryptsetup/cryptsetup/issues/513 . Milan Broz said the
dm-devel mailing list is a suitable place to discuss the problem.

Thanks in advance.

Patrick


* Re: About dm-integrity layer and fsync
  2020-01-03 15:51 About dm-integrity layer and fsync Patrick Dung
@ 2020-01-03 17:14 ` Mikulas Patocka
  2020-01-03 19:05   ` Patrick Dung
  0 siblings, 1 reply; 5+ messages in thread
From: Mikulas Patocka @ 2020-01-03 17:14 UTC (permalink / raw)
  To: Patrick Dung; +Cc: dm-devel

[-- Attachment #1: Type: TEXT/PLAIN, Size: 2018 bytes --]



On Fri, 3 Jan 2020, Patrick Dung wrote:

> Hello,
> 
> A quick question on dm-integrity: does the dm-integrity layer honor fsync?

Yes it does.

However, it writes data into the journal, and once the journal is flushed
it reports the fsync as finished.

On a mechanical disk, writes to contiguous space (i.e. the journal) are
faster than random writes scattered all over the disk, which is why you see
better performance with dm-integrity than without it.
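
To double-check which mode a mapping is running in and what commit interval
it uses, something like the following works (a sketch; the exact fields shown
depend on the kernel and cryptsetup versions, and "integ" is a placeholder
for the mapping name):

# The dm table line shows the integrity mode (J = journal, B = bitmap, D = direct)
# and optional arguments such as commit_time in milliseconds.
dmsetup table integ
integritysetup status integ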

Mikulas

> I was testing dm-integrity and performance. It had a strange result that using dm-integrity with journal is faster than a normal file system or dm-integrity with
> bitmap (no journal). fio is used for testing the storage performance. The device is a SATA hard disk drive. Then I created a 100GB partition for testing.
> 
> Below is the test cases:
> 
> 1) XFS on a partition directly test case
> 
> 2) dm-integrity: crc32c on a partition with default setting journal commit interval is 10 seconds. Then create XFS on it. test case
> 
> 3) dm-integrity: crc32c on a partition default setting journal commit interval set to 5 seconds. Then create XFS on it.
> 
> 4) dm-integrity:  crc32c on a partition default setting but using bitmap instead of journal. Then create XFS on it.
> 
> FIO command:
> 
> fio --filename=./t1 --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=sync --bs=4k --rwmixread=75 --iodepth=16 --numjobs=8 --runtime=60
> --group_reporting --fsync=1 --name=4ktest --size=4G
> 
> Result:
> 
>  1. Read/Write IOPS: 117/41. Read/Write Speed 481KB/s 168KB/s
>  2. Read/Write IOPS: 178/59. Read/Write Speed 732KB/s 244KB/s
>  3. Read/Write IOPS: 169/57. Read/Write Speed 695KB/s 236KB/s
>  4. Read/Write IOPS: 97/32. Read/Write Speed 400KB/s 131KB/s
> The original discussion is at https://gitlab.com/cryptsetup/cryptsetup/issues/513 . Milan Broz said the dm-devel mailing list is a suitable place to discuss the problem.
> 
> Thanks in advance.
> 
> Patrick
> 
> 


* Re: About dm-integrity layer and fsync
  2020-01-03 17:14 ` Mikulas Patocka
@ 2020-01-03 19:05   ` Patrick Dung
  2020-01-05  9:39     ` Mikulas Patocka
  0 siblings, 1 reply; 5+ messages in thread
From: Patrick Dung @ 2020-01-03 19:05 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: dm-devel


[-- Attachment #1.1: Type: text/plain, Size: 3131 bytes --]

Thanks for the reply. After performing additional testing with an SSD, I
have more questions.

First, about the additional testing with the SSD:
I tested with SSDs in a Linux software RAID level 10 setup. The results
show that using dm-integrity is faster than using XFS directly. With
dm-integrity, fio reports a lot of I/O merges by the scheduler. Please find
the results in the attachment.

Finally, please find the questions below:
1) So fsync reports completion only after the dm-integrity journal has been
written to the actual back-end storage (hard drive)?

2) To my understanding, when using dm-integrity in journal mode, data has
to be written to the storage device twice (once to the dm-integrity journal
and once to the actual data location). For the fio test, the writes should
be random and sustained for 60 seconds, yet dm-integrity in journal mode is
still faster (one way to observe the double write is sketched below).
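
One way to observe the double write in practice would be to watch the dm
device and the backing partition side by side while fio runs (a sketch; dm-N
and sdX1 are placeholders for the integrity mapping and the underlying
partition, and iostat comes from the sysstat package):

# Writes reported for dm-N are the logical writes; the backing partition
# should show roughly journal writes plus data writes on top of that.
iostat -dx 1 dm-N sdX1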

Thanks,
Patrick

On Sat, Jan 4, 2020 at 1:14 AM Mikulas Patocka <mpatocka@redhat.com> wrote:

>
>
> On Fri, 3 Jan 2020, Patrick Dung wrote:
>
> > Hello,
> >
> > A quick question on dm-integrity: does the dm-integrity layer honor fsync?
>
> Yes it does.
>
> However, it writes data into the journal and when the journal is flushed,
> it reports the fsync function as finished.
>
> On a mechanical disk, writes to contiguous space (i.e. the journal) are
> faster than random writes all over the disk, that's why you see better
> performance with dm-integrity than without it.
>
> Mikulas
>
> > I was testing dm-integrity and performance. It had a strange result that
> using dm-integrity with journal is faster than a normal file system or
> dm-integrity with
> > bitmap (no journal). fio is used for testing the storage performance.
> The device is a SATA hard disk drive. Then I created a 100GB partition for
> testing.
> >
> > Below is the test cases:
> >
> > 1) XFS on a partition directly test case
> >
> > 2) dm-integrity: crc32c on a partition with default setting journal
> commit interval is 10 seconds. Then create XFS on it. test case
> >
> > 3) dm-integrity: crc32c on a partition default setting journal commit
> interval set to 5 seconds. Then create XFS on it.
> >
> > 4) dm-integrity:  crc32c on a partition default setting but using bitmap
> instead of journal. Then create XFS on it.
> >
> > FIO command:
> >
> > fio --filename=./t1 --direct=1 --rw=randrw --refill_buffers
> --norandommap --randrepeat=0 --ioengine=sync --bs=4k --rwmixread=75
> --iodepth=16 --numjobs=8 --runtime=60
> > --group_reporting --fsync=1 --name=4ktest --size=4G
> >
> > Result:
> >
> >  1. Read/Write IOPS: 117/41. Read/Write Speed 481KB/s 168KB/s
> >  2. Read/Write IOPS: 178/59. Read/Write Speed 732KB/s 244KB/s
> >  3. Read/Write IOPS: 169/57. Read/Write Speed 695KB/s 236KB/s
> >  4. Read/Write IOPS: 97/32. Read/Write Speed 400KB/s 131KB/s
> > The original discussion is at
> https://gitlab.com/cryptsetup/cryptsetup/issues/513 . Milan Broz said the
> dm-devel mailing list is a suitable place to discuss the problem.
> >
> > Thanks in advance.
> >
> > Patrick
> >
> >


[-- Attachment #2: result.txt --]
[-- Type: text/plain, Size: 8479 bytes --]

Testing with SSD

1) Without dm-integrity
4ktest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=16
...
fio-3.14
Starting 8 processes
4ktest: Laying out IO file (1 file / 4096MiB)
Jobs: 8 (f=8): [m(8)][100.0%][r=3347KiB/s,w=1205KiB/s][r=836,w=301 IOPS][eta 00m:00s]
4ktest: (groupid=0, jobs=8): err= 0: pid=429966: Sat Jan  4 02:34:22 2020
  read: IOPS=806, BW=3225KiB/s (3302kB/s)(189MiB/60007msec)
    clat (usec): min=91, max=1333.0k, avg=1449.50, stdev=9061.33
     lat (usec): min=91, max=1333.0k, avg=1449.86, stdev=9061.33
    clat percentiles (usec):
     |  1.00th=[  106],  5.00th=[  116], 10.00th=[  125], 20.00th=[  145],
     | 30.00th=[  174], 40.00th=[  202], 50.00th=[  253], 60.00th=[  363],
     | 70.00th=[ 1418], 80.00th=[ 3032], 90.00th=[ 4359], 95.00th=[ 6128],
     | 99.00th=[ 9241], 99.50th=[10290], 99.90th=[13042], 99.95th=[14484],
     | 99.99th=[18744]
   bw (  KiB/s): min= 1846, max= 4150, per=100.00%, avg=3279.09, stdev=44.56, samples=944
   iops        : min=  460, max= 1037, avg=819.37, stdev=11.15, samples=944
  write: IOPS=270, BW=1082KiB/s (1108kB/s)(63.4MiB/60007msec); 0 zone resets
    clat (usec): min=39, max=1334.8k, avg=2561.63, stdev=15018.30
     lat (usec): min=40, max=1334.8k, avg=2562.07, stdev=15018.30
    clat percentiles (usec):
     |  1.00th=[     58],  5.00th=[     83], 10.00th=[     97],
     | 20.00th=[    153], 30.00th=[    249], 40.00th=[    461],
     | 50.00th=[   1500], 60.00th=[   2835], 70.00th=[   3130],
     | 80.00th=[   4146], 90.00th=[   6128], 95.00th=[   7570],
     | 99.00th=[  11076], 99.50th=[  11994], 99.90th=[  16581],
     | 99.95th=[  17957], 99.99th=[1333789]
   bw (  KiB/s): min=  455, max= 1728, per=100.00%, avg=1100.00, stdev=30.18, samples=944
   iops        : min=  113, max=  432, avg=274.59, stdev= 7.55, samples=944
  lat (usec)   : 50=0.11%, 100=2.76%, 250=41.86%, 500=13.99%, 750=3.52%
  lat (usec)   : 1000=0.20%
  lat (msec)   : 2=7.62%, 4=16.54%, 10=12.44%, 20=0.95%, 50=0.01%
  lat (msec)   : 500=0.01%, 2000=0.01%
  fsync/fdatasync/sync_file_range:
    sync (usec): min=194, max=1339.7k, avg=5697.28, stdev=10037.09
    sync percentiles (usec):
     |  1.00th=[  461],  5.00th=[ 1483], 10.00th=[ 2835], 20.00th=[ 3195],
     | 30.00th=[ 4228], 40.00th=[ 5145], 50.00th=[ 5735], 60.00th=[ 5997],
     | 70.00th=[ 6783], 80.00th=[ 7439], 90.00th=[ 8717], 95.00th=[ 9765],
     | 99.00th=[12387], 99.50th=[13829], 99.90th=[16581], 99.95th=[17695],
     | 99.99th=[20579]
  cpu          : usr=0.22%, sys=1.24%, ctx=308172, majf=0, minf=108
  IO depths    : 1=199.9%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=48377,16234,0,64566 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=3225KiB/s (3302kB/s), 3225KiB/s-3225KiB/s (3302kB/s-3302kB/s), io=189MiB (198MB), run=60007-60007msec
  WRITE: bw=1082KiB/s (1108kB/s), 1082KiB/s-1082KiB/s (1108kB/s-1108kB/s), io=63.4MiB (66.5MB), run=60007-60007msec

Disk stats (read/write):
    dm-15: ios=48215/96083, merge=0/0, ticks=13910/204053, in_queue=217963, util=52.52%, aggrios=48384/100562, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md63: ios=48384/100562, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=24194/46905, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md62: ios=24360/46880, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=12186/47157, aggrmerge=0/86, aggrticks=3328/25276, aggrin_queue=16938, aggrutil=52.70%
  sdd: ios=7665/47426, merge=0/162, ticks=4452/40035, in_queue=31377, util=50.48%
  sdg: ios=16707/46888, merge=0/10, ticks=2205/10517, in_queue=2500, util=52.70%
    md61: ios=24029/46931, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=12024/47174, aggrmerge=0/122, aggrticks=3502/25432, aggrin_queue=17289, aggrutil=52.36%
  sdf: ios=16251/46906, merge=0/46, ticks=2450/10909, in_queue=3265, util=52.36%
  sdb: ios=7798/47443, merge=0/199, ticks=4555/39956, in_queue=31313, util=50.55%


2) With dm-integrity
4ktest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=16
...
fio-3.14
Starting 8 processes
4ktest: Laying out IO file (1 file / 4096MiB)
Jobs: 8 (f=8): [m(8)][100.0%][r=3347KiB/s,w=1205KiB/s][r=836,w=301 IOPS][eta 00m:00s]
4ktest: (groupid=0, jobs=8): err= 0: pid=429966: Sat Jan  4 02:34:22 2020
  read: IOPS=806, BW=3225KiB/s (3302kB/s)(189MiB/60007msec)
    clat (usec): min=91, max=1333.0k, avg=1449.50, stdev=9061.33
     lat (usec): min=91, max=1333.0k, avg=1449.86, stdev=9061.33
    clat percentiles (usec):
     |  1.00th=[  106],  5.00th=[  116], 10.00th=[  125], 20.00th=[  145],
     | 30.00th=[  174], 40.00th=[  202], 50.00th=[  253], 60.00th=[  363],
     | 70.00th=[ 1418], 80.00th=[ 3032], 90.00th=[ 4359], 95.00th=[ 6128],
     | 99.00th=[ 9241], 99.50th=[10290], 99.90th=[13042], 99.95th=[14484],
     | 99.99th=[18744]
   bw (  KiB/s): min= 1846, max= 4150, per=100.00%, avg=3279.09, stdev=44.56, samples=944
   iops        : min=  460, max= 1037, avg=819.37, stdev=11.15, samples=944
  write: IOPS=270, BW=1082KiB/s (1108kB/s)(63.4MiB/60007msec); 0 zone resets
    clat (usec): min=39, max=1334.8k, avg=2561.63, stdev=15018.30
     lat (usec): min=40, max=1334.8k, avg=2562.07, stdev=15018.30
    clat percentiles (usec):
     |  1.00th=[     58],  5.00th=[     83], 10.00th=[     97],
     | 20.00th=[    153], 30.00th=[    249], 40.00th=[    461],
     | 50.00th=[   1500], 60.00th=[   2835], 70.00th=[   3130],
     | 80.00th=[   4146], 90.00th=[   6128], 95.00th=[   7570],
     | 99.00th=[  11076], 99.50th=[  11994], 99.90th=[  16581],
     | 99.95th=[  17957], 99.99th=[1333789]
   bw (  KiB/s): min=  455, max= 1728, per=100.00%, avg=1100.00, stdev=30.18, samples=944
   iops        : min=  113, max=  432, avg=274.59, stdev= 7.55, samples=944
  lat (usec)   : 50=0.11%, 100=2.76%, 250=41.86%, 500=13.99%, 750=3.52%
  lat (usec)   : 1000=0.20%
  lat (msec)   : 2=7.62%, 4=16.54%, 10=12.44%, 20=0.95%, 50=0.01%
  lat (msec)   : 500=0.01%, 2000=0.01%
  fsync/fdatasync/sync_file_range:
    sync (usec): min=194, max=1339.7k, avg=5697.28, stdev=10037.09
    sync percentiles (usec):
     |  1.00th=[  461],  5.00th=[ 1483], 10.00th=[ 2835], 20.00th=[ 3195],
     | 30.00th=[ 4228], 40.00th=[ 5145], 50.00th=[ 5735], 60.00th=[ 5997],
     | 70.00th=[ 6783], 80.00th=[ 7439], 90.00th=[ 8717], 95.00th=[ 9765],
     | 99.00th=[12387], 99.50th=[13829], 99.90th=[16581], 99.95th=[17695],
     | 99.99th=[20579]
  cpu          : usr=0.22%, sys=1.24%, ctx=308172, majf=0, minf=108
  IO depths    : 1=199.9%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=48377,16234,0,64566 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=3225KiB/s (3302kB/s), 3225KiB/s-3225KiB/s (3302kB/s-3302kB/s), io=189MiB (198MB), run=60007-60007msec
  WRITE: bw=1082KiB/s (1108kB/s), 1082KiB/s-1082KiB/s (1108kB/s-1108kB/s), io=63.4MiB (66.5MB), run=60007-60007msec

Disk stats (read/write):
    dm-15: ios=48215/96083, merge=0/0, ticks=13910/204053, in_queue=217963, util=52.52%, aggrios=48384/100562, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md63: ios=48384/100562, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=24194/46905, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md62: ios=24360/46880, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=12186/47157, aggrmerge=0/86, aggrticks=3328/25276, aggrin_queue=16938, aggrutil=52.70%
  sdd: ios=7665/47426, merge=0/162, ticks=4452/40035, in_queue=31377, util=50.48%
  sdg: ios=16707/46888, merge=0/10, ticks=2205/10517, in_queue=2500, util=52.70%
    md61: ios=24029/46931, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=12024/47174, aggrmerge=0/122, aggrticks=3502/25432, aggrin_queue=17289, aggrutil=52.36%
  sdf: ios=16251/46906, merge=0/46, ticks=2450/10909, in_queue=3265, util=52.36%
  sdb: ios=7798/47443, merge=0/199, ticks=4555/39956, in_queue=31313, util=50.55%


* Re: About dm-integrity layer and fsync
  2020-01-03 19:05   ` Patrick Dung
@ 2020-01-05  9:39     ` Mikulas Patocka
  2020-01-05 12:20       ` Patrick Dung
  0 siblings, 1 reply; 5+ messages in thread
From: Mikulas Patocka @ 2020-01-05  9:39 UTC (permalink / raw)
  To: Patrick Dung; +Cc: dm-devel



On Sat, 4 Jan 2020, Patrick Dung wrote:

> Thanks for reply. After performing an additional testing with SSD. I have more questions.
> 
> Firstly, about the additional testing with SSD:
> I tested it with SSD (in Linux software raid level 10 setup). The result shown using dm-integrity is faster than using XFS directly. For using dm-integrity, fio shows
> lots of I/O merges by the scheduler. Please find the attachment for the result.
> 
> Finally, please find the questions below:
> 1) So fsync reports completion only after the dm-integrity journal has been written to the actual back-end storage (hard drive)?

Yes.

> 2) To my understanding, when using dm-integrity in journal mode, data has to be written to the storage device twice (once to the dm-integrity journal and once to
> the actual data location). For the fio test, the writes should be random and sustained for 60 seconds, yet dm-integrity in journal mode is still faster.
> 
> Thanks,
> Patrick

With ioengine=sync, fio sends one I/O, waits for it to finish, sends
another I/O, waits for it to finish, and so on.

With dm-integrity, I/Os are first written to the journal (which is held in
memory, so no disk I/O is done yet), and when fio issues the sync(), fsync()
or fdatasync() syscall, the journal is written to the disk. After the
journal is flushed, the blocks are written concurrently to their final disk
locations.

The SSD has better performance for concurrent writes than for
block-by-block writes, which is why you see a performance improvement with
dm-integrity.
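
The difference is easy to reproduce outside dm-integrity by driving the same
file at queue depth 1 versus queue depth 16 (a sketch; ./t2 is a placeholder
test file and the runtimes are arbitrary):

# Block-by-block: one outstanding 4k random write at a time
fio --name=qd1 --filename=./t2 --size=1G --direct=1 --rw=randwrite --bs=4k \
    --ioengine=libaio --iodepth=1 --runtime=30 --time_based

# Concurrent: 16 outstanding 4k random writes, as after a journal flush
fio --name=qd16 --filename=./t2 --size=1G --direct=1 --rw=randwrite --bs=4k \
    --ioengine=libaio --iodepth=16 --runtime=30 --time_based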

Mikulas


* Re: About dm-integrity layer and fsync
  2020-01-05  9:39     ` Mikulas Patocka
@ 2020-01-05 12:20       ` Patrick Dung
  0 siblings, 0 replies; 5+ messages in thread
From: Patrick Dung @ 2020-01-05 12:20 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: dm-devel


[-- Attachment #1.1: Type: text/plain, Size: 1787 bytes --]

OK, I see. Thanks Mikulas for the explanation.

On Sun, Jan 5, 2020 at 5:39 PM Mikulas Patocka <mpatocka@redhat.com> wrote:

>
>
> On Sat, 4 Jan 2020, Patrick Dung wrote:
>
> > Thanks for reply. After performing an additional testing with SSD. I
> have more questions.
> >
> > Firstly, about the additional testing with SSD:
> > I tested it with SSD (in Linux software raid level 10 setup). The result
> shown using dm-integrity is faster than using XFS directly. For using
> dm-integrity, fio shows
> > lots of I/O merges by the scheduler. Please find the attachment for the
> result.
> >
> > Finally, please find the questions below:
> > 1) So fsync reports completion only after the dm-integrity journal has
> been written to the actual back-end storage (hard drive)?
>
> Yes.
>
> > 2) To my understanding, when using dm-integrity in journal mode, data
> has to be written to the storage device twice (once to the dm-integrity
> journal and once to
> > the actual data location). For the fio test, the writes should be random
> and sustained for 60 seconds, yet dm-integrity in journal mode is still
> faster.
> >
> > Thanks,
> > Patrick
>
> With ioengine=sync, fio sends one I/O, waits for it to finish, sends
> another I/O, waits for it to finish, and so on.
>
> With dm-integrity, I/Os will be written to the journal (that is held in
> memory, no disk I/O is done), and when fio does the sync(), fsync() or
> fdatasync() syscall, the journal is written to the disk. After the journal
> is flushed, the blocks are written concurrently to the disk locations.
>
> The SSD has better performance for concurrent writes than for
> block-by-block writes, which is why you see a performance improvement with
> dm-integrity.
>
> Mikulas
>
>
