Linux-XFS Archive on lore.kernel.org
* dbench throughput on xfs over hardware limit(6Gb/s)
@ 2020-10-13 14:11 Wang Yugui
  2020-10-13 23:05 ` Dave Chinner
  0 siblings, 1 reply; 5+ messages in thread
From: Wang Yugui @ 2020-10-13 14:11 UTC (permalink / raw)
  To: linux-xfs

Hi,

dbench throughput on xfs exceeds the hardware limit (6Gb/s = 750MB/s).

Is this a bug, or some performance optimization feature?

Disk: TOSHIBA PX05SMQ040
This is a 12Gb/s SAS SSD, but it is connected to a 6Gb/s SAS HBA,
so it runs at 6Gb/s.

dbench -s -t 60 -D /xfs 32
#Throughput 884.406 MB/sec (sync open)

dbench -s -t 60 -D /xfs 1
#Throughput 149.172 MB/sec (sync open)

We tested the same disk with an ext4 filesystem;
the throughput is very close to, but less than, the hardware limit.

dbench -s -t 60 -D /ext4 32
#Throughput 740.95 MB/sec (sync open)

dbench -s -t 60 -D /ext4 1
#Throughput 124.67 MB/sec (sync open)

Linux kernel: 5.4.70, 5.9.0

Best Regards
Wang Yugui
2020/10/13

--------------------------------------
北京京垓科技有限公司
Wang Yugui	wangyugui@e16-tech.com
Tel: +86-136-71123776


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: dbench throughput on xfs over hardware limit(6Gb/s)
  2020-10-13 14:11 dbench throughput on xfs over hardware limit(6Gb/s) Wang Yugui
@ 2020-10-13 23:05 ` Dave Chinner
  2020-10-14  3:32   ` Wang Yugui
  0 siblings, 1 reply; 5+ messages in thread
From: Dave Chinner @ 2020-10-13 23:05 UTC (permalink / raw)
  To: Wang Yugui; +Cc: linux-xfs

On Tue, Oct 13, 2020 at 10:11:13PM +0800, Wang Yugui wrote:
> Hi,
> 
> dbench throughput on xfs over hardware limit(6Gb/s=750MB/s).
> 
> Is this a bug or some feature of performance optimization?

dbench measures page cache throughput, not physical IO throughput.
This sort of result is expected.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: dbench throughput on xfs over hardware limit(6Gb/s)
  2020-10-13 23:05 ` Dave Chinner
@ 2020-10-14  3:32   ` Wang Yugui
  2020-10-14 21:24     ` Dave Chinner
  0 siblings, 1 reply; 5+ messages in thread
From: Wang Yugui @ 2020-10-14  3:32 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs


> On Tue, Oct 13, 2020 at 10:11:13PM +0800, Wang Yugui wrote:
> > Hi,
> > 
> > dbench throughput on xfs over hardware limit(6Gb/s=750MB/s).
> > 
> > Is this a bug or some feature of performance optimization?
> 
> dbench measures page cache throughput, not physical IO throughput.
> This sort of result is expected.

We use 'dbench -s', so it should be physical IO.
   -s     Use synchronous file IO on all file operations.

We checked 'dbench -s' with 'strace -ff -o s.log',
and we can see 'O_SYNC' in openat().
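For reference, the effect of that flag can be reproduced outside dbench. A minimal Python sketch (file path is illustrative): the O_SYNC open makes each write durable before returning, while a subsequent read may still be satisfied from the page cache. Running 'strace -f' on this script shows the same O_SYNC flag in openat() that we saw for dbench.

```python
import os
import tempfile

# Open with O_SYNC, as 'dbench -s' does for its file operations: every
# write(2) then returns only after the data has reached stable storage.
path = os.path.join(tempfile.mkdtemp(), "sync-demo")
fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_SYNC, 0o600)
os.write(fd, b"payload")        # synchronous: blocks until data is durable
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 16)          # a read like this may still come from page cache
os.close(fd)
print(data)
```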


Best Regards
Wang Yugui (wangyugui@e16-tech.com)
2020/10/14





* Re: dbench throughput on xfs over hardware limit(6Gb/s)
  2020-10-14  3:32   ` Wang Yugui
@ 2020-10-14 21:24     ` Dave Chinner
  2020-10-14 22:43       ` Wang Yugui
  0 siblings, 1 reply; 5+ messages in thread
From: Dave Chinner @ 2020-10-14 21:24 UTC (permalink / raw)
  To: Wang Yugui; +Cc: linux-xfs

On Wed, Oct 14, 2020 at 11:32:11AM +0800, Wang Yugui wrote:
> 
> > On Tue, Oct 13, 2020 at 10:11:13PM +0800, Wang Yugui wrote:
> > > Hi,
> > > 
> > > dbench throughput on xfs over hardware limit(6Gb/s=750MB/s).
> > > 
> > > Is this a bug or some feature of performance optimization?
> > 
> > dbench measures page cache throughput, not physical IO throughput.
> > This sort of result is expected.
> 
> We use 'dbench -s', so it should be physical IO.
>    -s     Use synchronous file IO on all file operations.

Reads can still be served from the page cache without doing physical
IO.
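This is easy to demonstrate outside dbench. A minimal Python sketch (timings are illustrative only, and os.posix_fadvise is assumed available, i.e. Linux): read a file once while its pages are cached, then again after hinting the kernel to drop them.

```python
import os
import tempfile
import time

# Write a 16MB file, then read it back twice: once warm (pages cached)
# and once after asking the kernel to drop the cached pages with
# POSIX_FADV_DONTNEED. The hint is advisory, so the "cold" read is only
# likely, not guaranteed, to hit the disk.
SIZE = 16 * 1024 * 1024
path = os.path.join(tempfile.mkdtemp(), "cache-demo")
with open(path, "wb") as f:
    f.write(os.urandom(SIZE))
    f.flush()
    os.fsync(f.fileno())        # pages are clean, so DONTNEED can evict them

def timed_read(drop_cache):
    fd = os.open(path, os.O_RDONLY)
    if drop_cache and hasattr(os, "posix_fadvise"):
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    t0 = time.monotonic()
    while os.read(fd, 1 << 20):   # sequential 1MB reads until EOF
        pass
    elapsed = time.monotonic() - t0
    os.close(fd)
    return elapsed

warm = timed_read(drop_cache=False)
cold = timed_read(drop_cache=True)
print(f"warm {warm:.4f}s  cold {cold:.4f}s")
```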

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: dbench throughput on xfs over hardware limit(6Gb/s)
  2020-10-14 21:24     ` Dave Chinner
@ 2020-10-14 22:43       ` Wang Yugui
  0 siblings, 0 replies; 5+ messages in thread
From: Wang Yugui @ 2020-10-14 22:43 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

Hi,

> Reads can still be served from the page cache without doing physical
> IO.

You are right.

For ext4, with more clients (32 -> 80), the throughput is over
6Gb/s too.

The 'Throughput' figure reported by dbench includes not only 'write'
operations but 'read' operations as well.

And 'max_latency' likewise covers not only 'write' but other operations too.

1) dbench result example 1
WriteX        365460     4.474  2279.808
...
Throughput 385.697 MB/sec (sync open)  32 clients  32 procs  max_latency=2279.818 ms

2) dbench result example 2
WriteX        741543     3.521    16.380
...
Throughput 779.972 MB/sec (sync open)  48 clients  48 procs  max_latency=11.246 ms
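As a side note, the summary line can be picked apart mechanically. A small sketch (the helper name and regex are my own, based only on the output format shown above):

```python
import re

# Hypothetical helper: extract throughput (MB/sec) and max latency (ms)
# from the summary line that dbench prints at the end of a run.
def parse_dbench_summary(line):
    m = re.search(r"Throughput ([\d.]+) MB/sec .*max_latency=([\d.]+) ms", line)
    if m is None:
        raise ValueError("not a dbench summary line")
    return float(m.group(1)), float(m.group(2))

line = ("Throughput 779.972 MB/sec (sync open)  48 clients  48 procs  "
        "max_latency=11.246 ms")
throughput, max_latency = parse_dbench_summary(line)
print(throughput, max_latency)   # 779.972 11.246
```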

Thanks a lot.

Best Regards
Wang Yugui (wangyugui@e16-tech.com)
2020/10/15




