linux-xfs.vger.kernel.org archive mirror
* dbench throughput(sync, reflink=0|1) on xfs over hardware throughput
@ 2020-10-13 15:56 Wang Yugui
  2020-10-14  4:41 ` Wang Yugui
  0 siblings, 1 reply; 3+ messages in thread
From: Wang Yugui @ 2020-10-13 15:56 UTC (permalink / raw)
  To: linux-xfs; +Cc: wangyugui

Hi,

# For any reply, please Cc: wangyugui@e16-tech.com

The dbench throughput (sync, reflink=0|1) on xfs exceeds the hardware
throughput (6Gb/s = 750MB/s).
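
(The 750MB/s figure is the raw line rate: 6Gb/s / 8 bits per byte = 750MB/s.
6Gb/s SAS actually uses 8b/10b encoding on the wire, so the usable payload
rate is closer to 600MB/s, which makes the gap even larger.)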

Is this a bug in xfs sync, or some performance optimization feature?

We tested mkfs.xfs with -m reflink=0|1 and crc=0|1; the result is still over
the hardware throughput (6Gb/s = 750MB/s).

Disk: TOSHIBA PX05SMQ040
This is a 12Gb/s SAS SSD, but it is connected to a 6Gb/s SAS HBA,
so it runs at 6Gb/s.
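
(The negotiated link rate can be confirmed from the SAS transport class in
sysfs; the exact phy path varies by HBA, so this is only a sketch:

  cat /sys/class/sas_phy/phy-*/negotiated_linkrate

It should report "6.0 Gbit" for a 12Gb/s drive behind a 6Gb/s HBA.)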

dbench -s -t 60 -D /xfs 32
#Throughput 884.406 MB/sec (sync open)


dbench -s -t 60 -D /xfs 1
#Throughput 149.172 MB/sec (sync open)
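
For readers unfamiliar with dbench: -s opens files with O_SYNC (hence the
"sync open" in the output), -t 60 runs for 60 seconds, -D sets the target
directory, and the trailing number is the client count. A minimal sketch of
the whole test, with /dev/sdX standing in for the SAS SSD:

  mkfs.xfs -f -m reflink=0 /dev/sdX   # also tested with reflink=1, crc=0|1
  mount /dev/sdX /xfs
  dbench -s -t 60 -D /xfs 32
  dbench -s -t 60 -D /xfs 1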

We tested the same disk with an ext4 filesystem;
its throughput is very close to, but below, the hardware limit.

dbench -s -t 60 -D /ext4 32
#Throughput 740.95 MB/sec (sync open)

dbench -s -t 60 -D /ext4 1
#Throughput 124.67 MB/sec (sync open)
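
To put the four runs side by side (dbench -s -t 60, MB/sec):

  clients   xfs       ext4      6Gb/s raw limit
  1         149.172   124.67    750
  32        884.406   740.95    750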

Linux kernel: 5.4.70, 5.9.0

Best Regards
Wang Yugui (wangyugui@e16-tech.com)
2020/10/13




* Re: dbench throughput(sync, reflink=0|1) on xfs over hardware throughput
  2020-10-13 15:56 dbench throughput(sync, reflink=0|1) on xfs over hardware throughput Wang Yugui
@ 2020-10-14  4:41 ` Wang Yugui
  2020-10-14 22:45   ` Wang Yugui
  0 siblings, 1 reply; 3+ messages in thread
From: Wang Yugui @ 2020-10-14  4:41 UTC (permalink / raw)
  To: linux-xfs; +Cc: wangyugui

Hi,

For xfs sync performance optimization there was an option, 'osyncisdsync',
which was removed in 4.0 (man xfs).

Does the xfs sync performance optimization in Linux 5.4.70/5.9.0 go beyond
'osyncisdsync'? When multiple sync writes happen at the same time, are only
some of them guaranteed?

Or does deduplication (based on reflink=1) help the sync writes,
and did 'mkfs.xfs -m reflink=0' fail to disable it?

iotop shows that 'Actual DISK WRITE:' is NOT over the hardware throughput.
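
(As a second cross-check, device-level throughput can be sampled once per
second, in MB, with iostat from sysstat; /dev/sdX is again a placeholder:

  iostat -xm 1 /dev/sdX
)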

Best Regards
Wang Yugui (wangyugui@e16-tech.com)
2020/10/14


* Re: dbench throughput(sync, reflink=0|1) on xfs over hardware throughput
  2020-10-14  4:41 ` Wang Yugui
@ 2020-10-14 22:45   ` Wang Yugui
  0 siblings, 0 replies; 3+ messages in thread
From: Wang Yugui @ 2020-10-14 22:45 UTC (permalink / raw)
  To: Wang Yugui; +Cc: linux-xfs

Hi, 

Thanks a lot to Dave Chinner.

The 'Throughput' reported by dbench includes not only writes but also the
reads dbench performs; reads served from the page cache never touch the
disk, so the reported figure can exceed the link rate.

Likewise, 'max_latency' covers not only writes but the other operations too.

1) dbench result example 1 (columns: operation, count, avg latency in ms,
max latency in ms)
WriteX        365460     4.474  2279.808
...
Throughput 385.697 MB/sec (sync open)  32 clients  32 procs  max_latency=2279.818 ms

2) dbench result example 2
WriteX        741543     3.521    16.380
...
Throughput 779.972 MB/sec (sync open)  48 clients  48 procs  max_latency=11.246 ms


For ext4, with more clients (32 -> 80), the throughput also goes over
6Gb/s.


Best Regards
Wang Yugui (wangyugui@e16-tech.com)
2020/10/15

