linux-fsdevel.vger.kernel.org archive mirror
* [PATCH 0/8 RESEND] Cleanup and improve sync (v4)
@ 2012-01-05 23:46 Jan Kara
  2012-01-05 23:46 ` [PATCH 1/8] vfs: Move noop_backing_dev_info check from sync into writeback Jan Kara
                   ` (7 more replies)
  0 siblings, 8 replies; 15+ messages in thread
From: Jan Kara @ 2012-01-05 23:46 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Christoph Hellwig, Al Viro


  Hello,

  this is the fourth iteration of my series improving the handling of the sync
syscall. Since the previous submission I have slightly cleaned up the iteration
loops so that we don't have to pass void * around. Christoph also asked why
we do a non-blocking ->sync_fs() pass. My answer was:

I also did measurements with the non-blocking ->sync_fs removed and I didn't
see any regression with ext3, ext4, xfs, or btrfs. OTOH I can imagine *some*
filesystem doing an equivalent of filemap_fdatawrite() on some metadata for
the non-blocking ->sync_fs and filemap_fdatawrite_and_wait() for the blocking
one, and if there are several such filesystems on different backing storage
the performance difference can be noticeable (actually, checking the
filesystems, JFS and Ceph seem to be doing something like this). So that's
why I didn't include the change in the end...
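
To make the pattern concrete, here is a minimal sketch of such a two-pass
->sync_fs() (the filesystem and its fields are made up; only the filemap
helpers are real, and the blocking one is spelled filemap_write_and_wait()
in the tree):

struct example_sb_info {
	struct address_space *meta_mapping;	/* imaginary metadata mapping */
};

static int example_sync_fs(struct super_block *sb, int wait)
{
	struct example_sb_info *sbi = sb->s_fs_info;

	if (!wait)
		/* Non-blocking pass: just start metadata writeback. */
		return filemap_fdatawrite(sbi->meta_mapping);
	/* Blocking pass: write out metadata and wait for completion. */
	return filemap_write_and_wait(sbi->meta_mapping);
}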

So Christoph, if you think we should get rid of the non-blocking ->sync_fs, I
can include the patch, but personally I think it has some use. Arguably a
cleaner interface for the users would be something like two methods,
->sync_fs_begin and ->sync_fs_end. Filesystems that don't have much to
optimize in ->sync_fs() would just use one of these functions.
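
The calling side could then look something like this -- again only a sketch;
iterate_supers() is a real VFS helper, while the two methods and the
example_*/..._one functions are hypothetical:

static void sync_fs_begin_one(struct super_block *sb, void *arg)
{
	if (sb->s_root && sb->s_op->sync_fs_begin)
		sb->s_op->sync_fs_begin(sb);
}

static void sync_fs_end_one(struct super_block *sb, void *arg)
{
	if (sb->s_root && sb->s_op->sync_fs_end)
		sb->s_op->sync_fs_end(sb);
}

static void example_sync_filesystems(void)
{
	/* Kick off flushing on all filesystems first, then wait on each,
	 * so filesystems on different disks make progress in parallel. */
	iterate_supers(sync_fs_begin_one, NULL);
	iterate_supers(sync_fs_end_one, NULL);
}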

I have run the three tests below to verify the performance impact of the patch
series. Each test has been run with 1, 2, and 4 filesystems mounted; the test
with 2 filesystems was run with each filesystem on a different disk, and the
test with 4 filesystems had 2 filesystems on the first disk and 2 filesystems
on the second disk.

Test 1: Run sync 200 times with the filesystems mounted to verify the
  overhead of sync when there is no data to write.
Test 2: For each filesystem run a process creating 40 KB files, sleep
  for 3 seconds, run sync.
Test 3: For each filesystem run a process creating a 20 GB file, sleep for
  5 seconds, run sync.
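
For reference, a minimal userspace sketch of the Test 1 timing loop (the
actual harness behind the numbers below was not posted, so treat this as an
approximation):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	struct timespec a, b;
	int i;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (i = 0; i < 200; i++)
		sync();	/* on Linux, sync(2) waits for the writeout */
	clock_gettime(CLOCK_MONOTONIC, &b);
	printf("%.6f\n", (b.tv_sec - a.tv_sec) +
			 (b.tv_nsec - a.tv_nsec) / 1e9);
	return 0;
}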

I have performed 10 runs of each test for xfs, ext3, ext4, and btrfs
filesystems.

Results of test 1
-----------------
Numbers are the time (in seconds) it took 200 syncs to complete.
The character in parentheses is '+' if the time increased with 2*STDDEV
reliability (i.e. the averages differ by more than twice the standard
deviation), '-' if it decreased with the same reliability, and '0' otherwise.
For example, for xfs on 1 disk the average dropped from 4.19 to 2.14 with
STDDEV around 0.06, so the decrease is flagged '-'.
		      BASE		      PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	4.189300 0.051525	2.141300 0.063389 (-)
xfs, 2 disks	4.820600 0.019096	4.611400 0.066322 (-)
xfs, 4 disks	6.518300 1.440362	6.435700 0.510641 (0)
ext4, 1 disks	4.085000 0.011375	1.689500 0.001360 (-)
ext4, 2 disks	4.088100 0.006488	1.705000 0.026359 (-)
ext4, 4 disks	4.107300 0.011934	1.702900 0.001814 (-)
ext3, 1 disks	4.080200 0.009527	1.703400 0.030559 (-)
ext3, 2 disks	4.138300 0.143909	1.694000 0.001414 (-)
ext3, 4 disks	4.107200 0.002482	1.702900 0.007778 (-)
btrfs, 1 disks	11.214600 0.086619	8.737200 0.081076 (-)
btrfs, 2 disks	32.910000 0.162089	30.673400 0.538820 (-)
btrfs, 4 disks	67.987700 1.655654	67.247100 1.971887 (0)

So we see nice improvements almost across the board.

Results of test 2
-----------------
Numbers are the time (in seconds) it took sync to complete.

		    BASE		    PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	0.436000 0.012000	0.506000 0.014283 (+)
xfs, 2 disks	1.105000 0.055543	1.274000 0.244426 (0)
xfs, 4 disks	5.880000 2.997135	4.837000 3.875448 (0)
ext4, 1 disks	0.791000 0.055579	0.853000 0.042438 (0)
ext4, 2 disks	18.232000 13.505638	17.254000 2.000506 (0)
ext4, 4 disks	491.790000 218.565229	696.783000 234.933562 (0)
ext3, 1 disks	15.315000 2.065465	1.900000 0.184662 (-)
ext3, 2 disks	128.524000 18.090519	55.278000 1.530554 (-)
ext3, 4 disks	221.202000 30.090432	232.849000 68.745423 (0)
btrfs, 1 disks	0.452000 0.026000	0.494000 0.023749 (0)
btrfs, 2 disks	5.156000 4.530852	4.083000 1.560519 (0)
btrfs, 4 disks	31.154000 11.220828	36.987000 17.334126 (0)

Except for ext3, which got a nice boost here, and XFS, which seems to be a tad
slower, there are no changes that stand out of the noise.

Results of test 3
-----------------
Numbers are the time (in seconds) it took sync to complete.

		    BASE		    PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	12.083000 0.058660	10.898000 0.285475 (-)
xfs, 2 disks	20.182000 0.549614	14.977000 0.351114 (-)
xfs, 4 disks	35.814000 5.318310	28.452000 3.332281 (0)
ext4, 1 disks	32.956000 5.753789	20.865000 3.892098 (0)
ext4, 2 disks	34.922000 3.051966	27.411000 2.752978 (0)
ext4, 4 disks	44.508000 6.829004	28.360000 2.561437 (0)
ext3, 1 disks	23.475000 1.288885	17.116000 0.319631 (-)
ext3, 2 disks	43.508000 4.998647	41.547000 2.597976 (0)
ext3, 4 disks	92.130000 11.344117	79.362000 9.891208 (0)
btrfs, 1 disks	12.478000 0.394304	12.847000 0.171117 (0)
btrfs, 2 disks	15.030000 0.777817	18.014000 2.011418 (0)
btrfs, 4 disks	32.395000 4.248859	38.411000 3.179939 (0)

Here we see XFS and ext3 had some improvements, and likely ext4 as well,
although the results are relatively noisy.

								Honza

* [PATCH 0/8 v4] Flush all block devices on sync(2) and cleanup the code
@ 2012-07-03 14:45 Jan Kara
  2012-07-03 14:45 ` [PATCH 5/8] vfs: Create function for iterating over block devices Jan Kara
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kara @ 2012-07-03 14:45 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-fsdevel, LKML, Curt Wohlgemuth, Christoph Hellwig, Jan Kara


  Hello,

  this is the fourth iteration of my series improving the handling of the sync
syscall. Since the previous submission I have slightly cleaned up the iteration
loops so that we don't have to pass void * around. Christoph also asked why
we do a non-blocking ->sync_fs() pass. My answer was:

I also did measurements with the non-blocking ->sync_fs removed and I didn't
see any regression with ext3, ext4, xfs, or btrfs. OTOH I can imagine *some*
filesystem doing an equivalent of filemap_fdatawrite() on some metadata for
the non-blocking ->sync_fs and filemap_fdatawrite_and_wait() for the blocking
one, and if there are several such filesystems on different backing storage
the performance difference can be noticeable (actually, checking the
filesystems, JFS and Ceph seem to be doing something like this). So that's
why I didn't include the change in the end...

So Christoph, if you think we should get rid of the non-blocking ->sync_fs, I
can include the patch, but personally I think it has some use. Arguably a
cleaner interface for the users would be something like two methods,
->sync_fs_begin and ->sync_fs_end. Filesystems that don't have much to
optimize in ->sync_fs() would just use one of these functions.

I have run the three tests below to verify the performance impact of the patch
series. Each test has been run with 1, 2, and 4 filesystems mounted; the test
with 2 filesystems was run with each filesystem on a different disk, and the
test with 4 filesystems had 2 filesystems on the first disk and 2 filesystems
on the second disk.

Test 1: Run sync 200 times with the filesystems mounted to verify the
  overhead of sync when there is no data to write.
Test 2: For each filesystem run a process creating 40 KB files, sleep
  for 3 seconds, run sync.
Test 3: For each filesystem run a process creating a 20 GB file, sleep for
  5 seconds, run sync.

I have performed 10 runs of each test for xfs, ext3, ext4, and btrfs
filesystems.

Results of test 1
-----------------
Numbers are the time (in seconds) it took 200 syncs to complete.
The character in parentheses is '+' if the time increased with 2*STDDEV
reliability, '-' if it decreased with the same reliability, and '0' otherwise.
		      BASE		      PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	0.783000 0.012689	1.628000 0.120316 (+)
xfs, 2 disks	0.742000 0.011662	1.774000 0.135144 (+)
xfs, 4 disks	0.823000 0.057280	1.034000 0.083690 (0)
ext4, 1 disks	0.620000 0.000000	0.678000 0.004000 (+)
ext4, 2 disks	0.629000 0.003000	0.672000 0.004000 (+)
ext4, 4 disks	0.642000 0.004000	0.670000 0.004472 (+)
ext3, 1 disks	0.625000 0.005000	0.662000 0.009798 (+)
ext3, 2 disks	0.622000 0.004000	0.662000 0.004000 (+)
ext3, 4 disks	0.639000 0.003000	0.661000 0.005385 (+)
btrfs, 1 disks	7.901000 0.173807	7.635000 0.171712 (0)
btrfs, 2 disks	19.690000 0.357379	18.630000 0.260000 (0)
btrfs, 4 disks	42.113000 0.725438	41.440000 0.492016 (0)

We see small increases in runtime, likely due to us now having to process all
block devices in the system. XFS actually suffers a bit more, which is caused
by the last patch dropping writeback_inodes_sb() and reordering the ->sync_fs
calls. But it still seems to be OK for this not-so-important workload.
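
For reference, the per-bdev pass implied above looks conceptually like this
(a sketch only: __sync_blockdev() is a real helper, the iterator is what
patch 5/8 adds, and the names may differ from the actual patch):

static void sync_one_bdev(struct block_device *bdev, void *arg)
{
	__sync_blockdev(bdev, *(int *)arg);
}

static void example_sync_all_bdevs(void)
{
	int wait = 0;

	/* First pass: start writeback on every block device... */
	iterate_bdevs(sync_one_bdev, &wait);
	/* ...second pass: wait for it to complete. */
	wait = 1;
	iterate_bdevs(sync_one_bdev, &wait);
}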

Results of test 2
-----------------
Numbers are the time (in seconds) it took sync to complete.

		    BASE		    PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	0.391000 0.010440	0.408000 0.011662 (0)
xfs, 2 disks	0.670000 0.014832	0.707000 0.038223 (0)
xfs, 4 disks	2.800000 1.722202	1.818000 0.144900 (0)
ext4, 1 disks	1.531000 0.778247	0.852000 0.109252 (0)
ext4, 2 disks	9.313000 1.857375	10.671000 2.624806 (0)
ext4, 4 disks	254.982000 88.016783	312.003000 30.387435 (0)
ext3, 1 disks	11.751000 0.924472	1.855000 0.179736 (-)
ext3, 2 disks	82.625000 12.903233	43.483000 0.493438 (-)
ext3, 4 disks	79.826000 21.118762	91.593000 31.763338 (0)
btrfs, 1 disks	0.407000 0.012689	0.423000 0.011874 (0)
btrfs, 2 disks	0.790000 0.404252	1.387000 0.606829 (0)
btrfs, 4 disks	2.069000 0.635460	2.273000 1.617641 (0)

Changes are mostly in the (sometimes heavy) noise; only ext3 stands out with
some noticeable improvements.

Results of test 3
-----------------
Numbers are the time (in seconds) it took sync to complete.

		    BASE		    PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	12.541000 1.875209	11.351000 0.824724 (0)
xfs, 2 disks	14.858000 0.866162	12.114000 0.632743 (0)
xfs, 4 disks	23.825000 2.020224	17.388000 1.641809 (0)
ext4, 1 disks	39.697000 2.151465	14.987000 2.611670 (-)
ext4, 2 disks	36.148000 1.231104	20.030000 0.656171 (-)
ext4, 4 disks	33.326000 2.116559	19.864000 1.171829 (-)
ext3, 1 disks	21.509000 1.944307	15.166000 0.115603 (-)
ext3, 2 disks	26.694000 1.989750	21.465000 2.187219 (0)
ext3, 4 disks	42.809000 5.220120	34.878000 5.011055 (0)
btrfs, 1 disks	7.339000 2.299637	9.386000 0.631493 (0)
btrfs, 2 disks	7.945000 3.100275	10.554000 0.073919 (0)
btrfs, 4 disks	18.271000 2.669938	25.275000 2.682839 (0)

Here we see ext3 & ext4 improved somewhat, and likely XFS as well, although
it's still in the noise. OTOH btrfs likely got slower, although that is also
in the noise. I didn't drill down into what caused this; I just know that
it's not the last patch.

								Honza

* [PATCH 0/8] Cleanup and improve sync (v4)
@ 2011-11-09 17:44 Jan Kara
  2011-11-09 17:45 ` [PATCH 5/8] vfs: Create function for iterating over block devices Jan Kara
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kara @ 2011-11-09 17:44 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Christoph Hellwig, Al Viro


  Hello,

  this is the fourth iteration of my series improving the handling of the sync
syscall. Since the previous submission I have slightly cleaned up the iteration
loops so that we don't have to pass void * around. Christoph also asked why
we do a non-blocking ->sync_fs() pass. My answer was:

I also did measurements with the non-blocking ->sync_fs removed and I didn't
see any regression with ext3, ext4, xfs, or btrfs. OTOH I can imagine *some*
filesystem doing an equivalent of filemap_fdatawrite() on some metadata for
the non-blocking ->sync_fs and filemap_fdatawrite_and_wait() for the blocking
one, and if there are several such filesystems on different backing storage
the performance difference can be noticeable (actually, checking the
filesystems, JFS and Ceph seem to be doing something like this). So that's
why I didn't include the change in the end...

So Christoph, if you think we should get rid of the non-blocking ->sync_fs, I
can include the patch, but personally I think it has some use. Arguably a
cleaner interface for the users would be something like two methods,
->sync_fs_begin and ->sync_fs_end. Filesystems that don't have much to
optimize in ->sync_fs() would just use one of these functions.

I have run the three tests below to verify the performance impact of the patch
series. Each test has been run with 1, 2, and 4 filesystems mounted; the test
with 2 filesystems was run with each filesystem on a different disk, and the
test with 4 filesystems had 2 filesystems on the first disk and 2 filesystems
on the second disk.

Test 1: Run sync 200 times with the filesystems mounted to verify the
  overhead of sync when there is no data to write.
Test 2: For each filesystem run a process creating 40 KB files, sleep
  for 3 seconds, run sync.
Test 3: For each filesystem run a process creating a 20 GB file, sleep for
  5 seconds, run sync.

I have performed 10 runs of each test for xfs, ext3, ext4, and btrfs
filesystems.

Results of test 1
-----------------
Numbers are the time (in seconds) it took 200 syncs to complete.
The character in parentheses is '+' if the time increased with 2*STDDEV
reliability, '-' if it decreased with the same reliability, and '0' otherwise.
		      BASE		      PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	4.189300 0.051525	2.141300 0.063389 (-)
xfs, 2 disks	4.820600 0.019096	4.611400 0.066322 (-)
xfs, 4 disks	6.518300 1.440362	6.435700 0.510641 (0)
ext4, 1 disks	4.085000 0.011375	1.689500 0.001360 (-)
ext4, 2 disks	4.088100 0.006488	1.705000 0.026359 (-)
ext4, 4 disks	4.107300 0.011934	1.702900 0.001814 (-)
ext3, 1 disks	4.080200 0.009527	1.703400 0.030559 (-)
ext3, 2 disks	4.138300 0.143909	1.694000 0.001414 (-)
ext3, 4 disks	4.107200 0.002482	1.702900 0.007778 (-)
btrfs, 1 disks	11.214600 0.086619	8.737200 0.081076 (-)
btrfs, 2 disks	32.910000 0.162089	30.673400 0.538820 (-)
btrfs, 4 disks	67.987700 1.655654	67.247100 1.971887 (0)

So we see nice improvements almost across the board.

Results of test 2
-----------------
Numbers are the time (in seconds) it took sync to complete.

		    BASE		    PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	0.436000 0.012000	0.506000 0.014283 (+)
xfs, 2 disks	1.105000 0.055543	1.274000 0.244426 (0)
xfs, 4 disks	5.880000 2.997135	4.837000 3.875448 (0)
ext4, 1 disks	0.791000 0.055579	0.853000 0.042438 (0)
ext4, 2 disks	18.232000 13.505638	17.254000 2.000506 (0)
ext4, 4 disks	491.790000 218.565229	696.783000 234.933562 (0)
ext3, 1 disks	15.315000 2.065465	1.900000 0.184662 (-)
ext3, 2 disks	128.524000 18.090519	55.278000 1.530554 (-)
ext3, 4 disks	221.202000 30.090432	232.849000 68.745423 (0)
btrfs, 1 disks	0.452000 0.026000	0.494000 0.023749 (0)
btrfs, 2 disks	5.156000 4.530852	4.083000 1.560519 (0)
btrfs, 4 disks	31.154000 11.220828	36.987000 17.334126 (0)

Except for ext3, which got a nice boost here, and XFS, which seems to be a tad
slower, there are no changes that stand out of the noise.

Results of test 3
-----------------
Numbers are the time (in seconds) it took sync to complete.

		    BASE		    PATCHED
FS		AVG      STDDEV         AVG      STDDEV
xfs, 1 disks	12.083000 0.058660	10.898000 0.285475 (-)
xfs, 2 disks	20.182000 0.549614	14.977000 0.351114 (-)
xfs, 4 disks	35.814000 5.318310	28.452000 3.332281 (0)
ext4, 1 disks	32.956000 5.753789	20.865000 3.892098 (0)
ext4, 2 disks	34.922000 3.051966	27.411000 2.752978 (0)
ext4, 4 disks	44.508000 6.829004	28.360000 2.561437 (0)
ext3, 1 disks	23.475000 1.288885	17.116000 0.319631 (-)
ext3, 2 disks	43.508000 4.998647	41.547000 2.597976 (0)
ext3, 4 disks	92.130000 11.344117	79.362000 9.891208 (0)
btrfs, 1 disks	12.478000 0.394304	12.847000 0.171117 (0)
btrfs, 2 disks	15.030000 0.777817	18.014000 2.011418 (0)
btrfs, 4 disks	32.395000 4.248859	38.411000 3.179939 (0)

Here we see XFS and ext3 had some improvements, and likely ext4 as well,
although the results are relatively noisy.

								Honza


Thread overview: 15+ messages
2012-01-05 23:46 [PATCH 0/8 RESEND] Cleanup and improve sync (v4) Jan Kara
2012-01-05 23:46 ` [PATCH 1/8] vfs: Move noop_backing_dev_info check from sync into writeback Jan Kara
2012-01-05 23:46 ` [PATCH 2/8] quota: Split dquot_quota_sync() to writeback and cache flushing part Jan Kara
2012-01-05 23:46 ` [PATCH 3/8] quota: Move quota syncing to ->sync_fs method Jan Kara
2012-01-05 23:46 ` [PATCH 4/8] vfs: Reorder operations during sys_sync Jan Kara
2012-01-05 23:46 ` [PATCH 5/8] vfs: Create function for iterating over block devices Jan Kara
2012-01-05 23:46 ` [PATCH 6/8] vfs: Make sys_sync writeout also block device inodes Jan Kara
2012-06-20 14:23   ` Curt Wohlgemuth
2012-06-20 20:03     ` Jan Kara
2012-06-22 10:30       ` Al Viro
2012-07-03 14:47         ` Jan Kara
2012-01-05 23:46 ` [PATCH 7/8] vfs: Remove unnecessary flushing of block devices Jan Kara
2012-01-05 23:46 ` [PATCH 8/8] vfs: Avoid unnecessary WB_SYNC_NONE writeback during sys_sync and reorder sync passes Jan Kara
  -- strict thread matches above, loose matches on Subject: below --
2012-07-03 14:45 [PATCH 0/8 v4] Flush all block devices on sync(2) and cleanup the code Jan Kara
2012-07-03 14:45 ` [PATCH 5/8] vfs: Create function for iterating over block devices Jan Kara
2011-11-09 17:44 [PATCH 0/8] Cleanup and improve sync (v4) Jan Kara
2011-11-09 17:45 ` [PATCH 5/8] vfs: Create function for iterating over block devices Jan Kara
