linux-xfs.vger.kernel.org archive mirror
* Re: Bad Metadata performances for XFS?
       [not found] <3ED34739A4E85E4F894367D57617CDEF9ED9518B@LAX-EX-MB2.datadirect.datadirectnet.com>
@ 2016-07-04 22:52 ` Dave Chinner
  2016-07-05  0:18   ` Dave Chinner
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Chinner @ 2016-07-04 22:52 UTC (permalink / raw)
  To: Wang Shilong; +Cc: linux-xfs, xfs

[xfs@oss.sgi.com is where you'll find the XFS developers]

On Mon, Jul 04, 2016 at 05:32:40AM +0000, Wang Shilong wrote:
> Hello Guys,
> 
>       I happened to run some benchmarks on XFS and found something interesting to share here:
> Kernel version:
> [root@localhost shm]# uname -r
> 4.7.0-rc5+
> 
> [root@localhost shm]# cat /proc/cpuinfo  | grep Intel
> vendor_id	: GenuineIntel
> model name	: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
> 
> dd 16GB to /dev/shm/data to use memory-backed storage to benchmark metadata performance.
> 
> The benchmark tool is mdtest; you can download it from
> https://sourceforge.net/projects/mdtest/
> 
> Steps to run benchmark
> #mkfs.xfs /dev/shm/data
> #mount /dev/shm/data /mnt/test
> #mdtest -d /mnt/test -n 2000000
> 
> 1 tasks, 2000000 files/directories
> 
> SUMMARY: (of 1 iterations)
>    Operation                      Max            Min           Mean        Std Dev
>    ---------                      ---            ---           ----        -------
>    Directory creation:      24724.717      24724.717      24724.717          0.000
>    Directory stat    :    1156009.290    1156009.290    1156009.290          0.000
>    Directory removal :     103496.353     103496.353     103496.353          0.000
>    File creation     :      23094.444      23094.444      23094.444          0.000
>    File stat         :    1158704.969    1158704.969    1158704.969          0.000
>    File read         :     752731.595     752731.595     752731.595          0.000
>    File removal      :     105481.766     105481.766     105481.766          0.000
>    Tree creation     :       2229.827       2229.827       2229.827          0.000
>    Tree removal      :          1.275          1.275          1.275          0.000
> 
> -- finished at 07/04/2016 12:54:26 --
> 
> IOPS for file creation is only 2.3W; however, compare to Ext4 with the same testing.
>   Operation                      Max            Min           Mean        Std Dev
>    ---------                      ---            ---           ----        -------
>    Directory creation:      65875.462      65875.462      65875.462          0.000
>    Directory stat    :    1060115.356    1060115.356    1060115.356          0.000
>    Directory removal :     109955.606     109955.606     109955.606          0.000
>    File creation     :     114898.425     114898.425     114898.425          0.000
>    File stat         :    1046223.044    1046223.044    1046223.044          0.000
>    File read         :     699663.339     699663.339     699663.339          0.000
>    File removal      :     152320.304     152320.304     152320.304          0.000
>    Tree creation     :      19065.018      19065.018      19065.018          0.000
>    Tree removal      :          1.269          1.269          1.269          0.000
> 
> IOPS for file creation is more than 11W!!!
> 
> I understand Ext4 uses a hashed index tree and XFS uses a B+ tree for directories, so I tested
> Btrfs for this case.
>   ---------                      ---            ---           ----        -------
>    Directory creation:      99312.866      99312.866      99312.866          0.000
>    Directory stat    :    1116205.199    1116205.199    1116205.199          0.000
>    Directory removal :      66441.011      66441.011      66441.011          0.000
>    File creation     :      91282.930      91282.930      91282.930          0.000
>    File stat         :    1117636.580    1117636.580    1117636.580          0.000
>    File read         :     754964.332     754964.332     754964.332          0.000
>    File removal      :      69708.657      69708.657      69708.657          0.000
>    Tree creation     :      29746.837      29746.837      29746.837          0.000
>    Tree removal      :          1.289          1.289          1.289          0.000
> 
> Even Btrfs got about 9W....
> 
> Hmm... maybe this is because there are too many files under a single directory? (200W shouldn't be too many, I guess)
> I reduced the number of files for XFS to 50W.
> 1 tasks, 500000 files/directories
> 
> SUMMARY: (of 1 iterations)
>    Operation                      Max            Min           Mean        Std Dev
>    ---------                      ---            ---           ----        -------
>    Directory creation:      53021.632      53021.632      53021.632          0.000
>    Directory stat    :    1187581.191    1187581.191    1187581.191          0.000
>    Directory removal :     108112.695     108112.695     108112.695          0.000
>    File creation     :      51654.911      51654.911      51654.911          0.000
>    File stat         :    1180447.310    1180447.310    1180447.310          0.000
>    File read         :     755391.001     755391.001     755391.001          0.000
>    File removal      :     108415.353     108415.353     108415.353          0.000
>    Tree creation     :       2129.088       2129.088       2129.088          0.000
>    Tree removal      :          5.272          5.272          5.272          0.000
> 
> -- finished at 07/04/2016 12:59:17 --
> 
> So performance goes up from 2.3W to 5.1W, but it is still very slow compared to the others...
> 
> 
> PS: mkfs options for Btrfs: mkfs.btrfs -m single -d single -f
>        mkfs options for XFS: mkfs.xfs -f
>        mkfs options for Ext4: mkfs.ext4 -i 2048 (to generate enough inodes for testing)
> 
> Best Regards,
> Shilong

-- 
Dave Chinner
david@fromorbit.com


* Re: Bad Metadata performances for XFS?
  2016-07-04 22:52 ` Bad Metadata performances for XFS? Dave Chinner
@ 2016-07-05  0:18   ` Dave Chinner
  2016-07-05  1:43     ` Wang Shilong
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Chinner @ 2016-07-05  0:18 UTC (permalink / raw)
  To: Wang Shilong; +Cc: linux-xfs, xfs

On Tue, Jul 05, 2016 at 08:52:26AM +1000, Dave Chinner wrote:
> [xfs@oss.sgi.com is where you'll find the XFS developers]
> 
> On Mon, Jul 04, 2016 at 05:32:40AM +0000, Wang Shilong wrote:
> > Hello Guys,
> > 
> >       I happened to run some benchmarks on XFS and found something interesting to share here:
> > Kernel version:
> > [root@localhost shm]# uname -r
> > 4.7.0-rc5+
> > 
> > [root@localhost shm]# cat /proc/cpuinfo  | grep Intel
> > vendor_id	: GenuineIntel
> > model name	: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz

What's the rest of the hardware in the machine?

> > dd 16GB to /dev/shm/data to use memory-backed storage to benchmark metadata performance.

I've never seen anyone create a ramdisk like that before.
What's the backing device type? i.e. what block device driver does
this use?

> > The benchmark tool is mdtest; you can download it from
> > https://sourceforge.net/projects/mdtest/

What version? The sourceforge version, or the github fork that the
sourceforge page points to? Or the forked branch of recent
development in the github fork?

> > Steps to run benchmark
> > #mkfs.xfs /dev/shm/data

Output of this command so we can recreate the same filesystem
structure?

> > #mount /dev/shm/data /mnt/test
> > #mdtest -d /mnt/test -n 2000000
> > 
> > 1 tasks, 2000000 files/directories
> > 
> > SUMMARY: (of 1 iterations)
> >    Operation                      Max            Min           Mean        Std Dev
> >    ---------                      ---            ---           ----        -------
> >    Directory creation:      24724.717      24724.717      24724.717          0.000
> >    Directory stat    :    1156009.290    1156009.290    1156009.290          0.000
> >    Directory removal :     103496.353     103496.353     103496.353          0.000
> >    File creation     :      23094.444      23094.444      23094.444          0.000
> >    File stat         :    1158704.969    1158704.969    1158704.969          0.000
> >    File read         :     752731.595     752731.595     752731.595          0.000
> >    File removal      :     105481.766     105481.766     105481.766          0.000
> >    Tree creation     :       2229.827       2229.827       2229.827          0.000
> >    Tree removal      :          1.275          1.275          1.275          0.000
> > 
> > -- finished at 07/04/2016 12:54:26 --

A table of numbers with no units or explanation as to what they
mean. Let me guess - I have to read the benchmark source code to
understand what the numbers mean?

> > IOPS for file creation is only 2.3W; however, compare to Ext4 with the same testing.

Ummm - what unit of measurement is "W"? Watts?

Please, when presenting benchmark results to ask for help with
analysis, be *extremely specific* about what you are running and what
the results mean. It's no different from reporting a bug from this
perspective:

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

That said, this is a single threaded benchmark. It's well
known that XFS uses more CPU per metadata operation than either ext4
or btrfs, so it won't be any surprise that they are faster than XFS
on this particular test. We've known this for many years now -
perhaps you should watch/read this presentation I did more than 4
years ago now:

http://xfs.org/index.php/File:Xfs-scalability-lca2012.pdf
http://www.youtube.com/watch?v=FegjLbCnoBw

IOWs: Being CPU bound at 25,000 file creates/s is in line with
what I'd expect on XFS for a single threaded, single directory
create over 2 million directory entries with the default 4k
directory block size....
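
For what it's worth, the directory block size is a mkfs-time option.
An illustrative sketch (device path assumed, not taken from your
setup):

# mkfs.xfs -f -n size=65536 /dev/shm/data

Larger directory blocks pack more entries per block, trading more CPU
per block modification for a shallower directory structure.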

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* RE: Bad Metadata performances for XFS?
  2016-07-05  0:18   ` Dave Chinner
@ 2016-07-05  1:43     ` Wang Shilong
  2016-07-05  7:29       ` Dave Chinner
  2016-07-05 20:34       ` Chris Murphy
  0 siblings, 2 replies; 7+ messages in thread
From: Wang Shilong @ 2016-07-05  1:43 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs, xfs

Hello Dave Chinner,

________________________________________
From: Dave Chinner [david@fromorbit.com]
Sent: Tuesday, July 05, 2016 8:18
To: Wang Shilong
Cc: linux-xfs@vger.kernel.org; xfs@oss.sgi.com
Subject: Re: Bad Metadata performances for XFS?

On Tue, Jul 05, 2016 at 08:52:26AM +1000, Dave Chinner wrote:
> [xfs@oss.sgi.com is where you'll find the XFS developers]
>
> On Mon, Jul 04, 2016 at 05:32:40AM +0000, Wang Shilong wrote:
> > Hello Guys,
> >
> >       I happened to run some benchmarks on XFS and found something interesting to share here:
> > Kernel version:
> > [root@localhost shm]# uname -r
> > 4.7.0-rc5+
> >
> > [root@localhost shm]# cat /proc/cpuinfo  | grep Intel
> > vendor_id   : GenuineIntel
> > model name  : Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz

What's the rest of the hardware in the machine?
[root@localhost ~]# cat /proc/meminfo 
MemTotal:       32823104 kB
MemFree:        29981320 kB
MemAvailable:   31672712 kB
Buffers:            6176 kB
Cached:          1241192 kB
SwapCached:            0 kB
Active:           938332 kB
Inactive:         692420 kB
Active(anon):     384576 kB
Inactive(anon):   111320 kB
Active(file):     553756 kB
Inactive(file):   581100 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:             11324 kB
Writeback:             0 kB
AnonPages:        383496 kB
Mapped:           186516 kB
Shmem:            112496 kB
Slab:            1059544 kB
SReclaimable:    1020764 kB
SUnreclaim:        38780 kB
KernelStack:        6896 kB
PageTables:        22160 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16411552 kB
Committed_AS:    2382640 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      194868 kB
DirectMap2M:     3874816 kB
DirectMap1G:    30408704 kB

[root@localhost ~]# dmidecode -t memory
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.

Handle 0x0041, DMI type 16, 23 bytes
Physical Memory Array
	Location: System Board Or Motherboard
	Use: System Memory
	Error Correction Type: None
	Maximum Capacity: 32 GB
	Error Information Handle: Not Provided
	Number Of Devices: 4

Handle 0x0042, DMI type 17, 40 bytes
Memory Device
	Array Handle: 0x0041
	Error Information Handle: Not Provided
	Total Width: 64 bits
	Data Width: 64 bits
	Size: 8192 MB
	Form Factor: DIMM
	Set: None
	Locator: ChannelA-DIMM0
	Bank Locator: BANK 0
	Type: DDR3
	Type Detail: Synchronous
	Speed: 1600 MHz
	Manufacturer: Kingston
	Serial Number: 692A784E
	Asset Tag: 9876543210
	Part Number: KHX1600C10D3/8GX  
	Rank: 2
	Configured Clock Speed: 1333 MHz
	Minimum Voltage: 1.5 V
	Maximum Voltage: 1.5 V
	Configured Voltage: 1.5 V

Handle 0x0044, DMI type 17, 40 bytes
Memory Device
	Array Handle: 0x0041
	Error Information Handle: Not Provided
	Total Width: 64 bits
	Data Width: 64 bits
	Size: 8192 MB
	Form Factor: DIMM
	Set: None
	Locator: ChannelA-DIMM1
	Bank Locator: BANK 1
	Type: DDR3
	Type Detail: Synchronous
	Speed: 1600 MHz
	Manufacturer: Kingston
	Serial Number: 672A954E
	Asset Tag: 9876543210
	Part Number: KHX1600C10D3/8GX  
	Rank: 2
	Configured Clock Speed: 1333 MHz
	Minimum Voltage: 1.5 V
	Maximum Voltage: 1.5 V
	Configured Voltage: 1.5 V

Handle 0x0046, DMI type 17, 40 bytes
Memory Device
	Array Handle: 0x0041
	Error Information Handle: Not Provided
	Total Width: 64 bits
	Data Width: 64 bits
	Size: 8192 MB
	Form Factor: DIMM
	Set: None
	Locator: ChannelB-DIMM0
	Bank Locator: BANK 2
	Type: DDR3
	Type Detail: Synchronous
	Speed: 1600 MHz
	Manufacturer: Kingston
	Serial Number: 712AE08D
	Asset Tag: 9876543210
	Part Number: KHX1600C10D3/8GX  
	Rank: 2
	Configured Clock Speed: 1333 MHz
	Minimum Voltage: 1.5 V
	Maximum Voltage: 1.5 V
	Configured Voltage: 1.5 V

Handle 0x0048, DMI type 17, 40 bytes
Memory Device
	Array Handle: 0x0041
	Error Information Handle: Not Provided
	Total Width: 64 bits
	Data Width: 64 bits
	Size: 8192 MB
	Form Factor: DIMM
	Set: None
	Locator: ChannelB-DIMM1
	Bank Locator: BANK 3
	Type: DDR3
	Type Detail: Synchronous
	Speed: 1600 MHz
	Manufacturer: Kingston
	Serial Number: 6A2A144E
	Asset Tag: 9876543210
	Part Number: KHX1600C10D3/8GX  
	Rank: 2
	Configured Clock Speed: 1333 MHz
	Minimum Voltage: 1.5 V
	Maximum Voltage: 1.5 V
	Configured Voltage: 1.5 V



> > dd 16GB to /dev/shm/data to use memory-backed storage to benchmark metadata performance.

I've never seen anyone create a ramdisk like that before.
What's the backing device type? i.e. what block device driver does
this use?

I guess you mean the loop device here? It is a regular file, set up
as the loop0 device.
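
(The backing file was created along these lines - the exact dd
invocation is a reconstruction, not copied from my shell history:

# dd if=/dev/zero of=/dev/shm/data bs=1M count=16384

and mount then sets it up on /dev/loop0.)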

> > The benchmark tool is mdtest; you can download it from
> > https://sourceforge.net/projects/mdtest/

What version? The sourceforge version, or the github fork that the
sourceforge page points to? Or the forked branch of recent
development in the github fork?

I don't think the sourceforge version or the github version makes any
difference here; you could use either of them. (I used the Sourceforge version)


> > Steps to run benchmark
> > #mkfs.xfs /dev/shm/data

Output of this command so we can recreate the same filesystem
structure?

[root@localhost shm]# mkfs.xfs data
meta-data=data                   isize=512    agcount=4, agsize=1025710 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=4102840, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


> > #mount /dev/shm/data /mnt/test
> > #mdtest -d /mnt/test -n 2000000
> >
> > 1 tasks, 2000000 files/directories
> >
> > SUMMARY: (of 1 iterations)
> >    Operation                      Max            Min           Mean        Std Dev
> >    ---------                      ---            ---           ----        -------
> >    Directory creation:      24724.717      24724.717      24724.717          0.000
> >    Directory stat    :    1156009.290    1156009.290    1156009.290          0.000
> >    Directory removal :     103496.353     103496.353     103496.353          0.000
> >    File creation     :      23094.444      23094.444      23094.444          0.000
> >    File stat         :    1158704.969    1158704.969    1158704.969          0.000
> >    File read         :     752731.595     752731.595     752731.595          0.000
> >    File removal      :     105481.766     105481.766     105481.766          0.000
> >    Tree creation     :       2229.827       2229.827       2229.827          0.000
> >    Tree removal      :          1.275          1.275          1.275          0.000
> >
> > -- finished at 07/04/2016 12:54:26 --

A table of numbers with no units or explanation as to what they
mean. Let me guess - I have to read the benchmark source code to
understand what the numbers mean?

You could look at File creation; the unit is the number of files created per second.
(Here it is 23094.444)


> > IOPS for file creation is only 2.3W; however, compare to Ext4 with the same testing.

Ummm - what unit of measurement is "W"? Watts?

Sorry, same as above.


Please, when presenting benchmark results to ask for help with
analysis, be *extremely specific* about what you are running and what
the results mean. It's no different from reporting a bug from this
perspective:

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

That said, this is a single threaded benchmark. It's well
known that XFS uses more CPU per metadata operation than either ext4
or btrfs, so it won't be any surprise that they are faster than XFS
on this particular test. We've known this for many years now -
perhaps you should watch/read this presentation I did more than 4
years ago now:

http://xfs.org/index.php/File:Xfs-scalability-lca2012.pdf
http://www.youtube.com/watch?v=FegjLbCnoBw

IOWs: Being CPU bound at 25,000 file creates/s is in line with
what I'd expect on XFS for a single threaded, single directory
create over 2 million directory entries with the default 4k
directory block size....
----------

I understand that this is a single-thread limit, but I guess there is some
other limit here, because even with a single thread, creating 50W files
is twice as fast as creating 200W files.

Thanks,
Shilong

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com

* Re: Bad Metadata performances for XFS?
  2016-07-05  1:43     ` Wang Shilong
@ 2016-07-05  7:29       ` Dave Chinner
  2016-07-05 20:34       ` Chris Murphy
  1 sibling, 0 replies; 7+ messages in thread
From: Dave Chinner @ 2016-07-05  7:29 UTC (permalink / raw)
  To: Wang Shilong; +Cc: linux-xfs, xfs

[Please fix your mail program to correctly quote replies - I've done
it manually here so I could work out what you wrote]

On Tue, Jul 05, 2016 at 01:43:33AM +0000, Wang Shilong wrote:
> From: Dave Chinner [david@fromorbit.com]
> On Tue, Jul 05, 2016 at 08:52:26AM +1000, Dave Chinner wrote:
> > On Mon, Jul 04, 2016 at 05:32:40AM +0000, Wang Shilong wrote:
> > > dd 16GB to /dev/shm/data to use memory-backed storage to benchmark metadata performance.
> 
> > I've never seen anyone create a ramdisk like that before.
> > What's the backing device type? i.e. what block device driver does
> > this use?
> 
> I guess you mean the loop device here? It is a regular file, set up
> as the loop0 device.

For me, the "common" way to test a
filesystem with RAM backing it is to use the brd driver because it
can do DAX, is lightweight and scalable, and doesn't have any of
the quirks that the loop device has.
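
A brd-backed setup looks something like this (a sketch - rd_size is
in KiB, so this gives one 16GiB ramdisk at /dev/ram0):

# modprobe brd rd_nr=1 rd_size=16777216
# mkfs.xfs -f /dev/ram0
# mount /dev/ram0 /mnt/test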

This is why I ask people to fully describe their hardware, software and
config - assumptions only lead to misunderstandings.

> > > The benchmark tool is mdtest; you can download it from
> > > https://sourceforge.net/projects/mdtest/
> >
> > What version? The sourceforge version, or the github fork that the
> > sourceforge page points to? Or the forked branch of recent
> > development in the github fork?
> 
> I don't think the sourceforge version or the github version makes any
> difference here; you could use either of them. (I used the Sourceforge version)

They are different, and there's evidence of many nasty hacks in the
github version. It appears that some of them come from the sourceforge
version. Not particularly confidence-inspiring.

> > > Steps to run benchmark
> > > #mkfs.xfs /dev/shm/data
> 
> > Output of this command so we can recreate the same filesystem
> > structure?
> 
> [root@localhost shm]# mkfs.xfs data
> meta-data=data                   isize=512    agcount=4, agsize=1025710 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=1, sparse=0
> data     =                       bsize=4096   blocks=4102840, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> log      =internal log           bsize=4096   blocks=2560, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

As I suspected, mkfs optimised the layout for the small size,
not performance. Performance will likely improve if you increase the
log size to something more reasonably sized for heavy metadata
workloads.
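
Something like the following - the 512MB log is a ballpark figure to
experiment with, not a tuned recommendation:

# mkfs.xfs -f -l size=512m /dev/shm/data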

> > > #mount /dev/shm/data /mnt/test
> > > #mdtest -d /mnt/test -n 2000000
> > >
> > > 1 tasks, 2000000 files/directories
> > >
> > > SUMMARY: (of 1 iterations)
> > >    Operation                      Max            Min           Mean        Std Dev
> > >    ---------                      ---            ---           ----        -------
> > >    Directory creation:      24724.717      24724.717      24724.717          0.000
> > >    Directory stat    :    1156009.290    1156009.290    1156009.290          0.000
> > >    Directory removal :     103496.353     103496.353     103496.353          0.000
> > >    File creation     :      23094.444      23094.444      23094.444          0.000
> > >    File stat         :    1158704.969    1158704.969    1158704.969          0.000
> > >    File read         :     752731.595     752731.595     752731.595          0.000
> > >    File removal      :     105481.766     105481.766     105481.766          0.000
> > >    Tree creation     :       2229.827       2229.827       2229.827          0.000
> > >    Tree removal      :          1.275          1.275          1.275          0.000
> > >
> > > -- finished at 07/04/2016 12:54:26 --
> 
> > A table of numbers with no units or explanation as to what they
> > mean. Let me guess - I have to read the benchmark source code to
> > understand what the numbers mean?
> 
> You could look at File creation; the unit is the number of files created per second.
> (Here it is 23094.444)

Great. What about all the others? How is the directory creation
number different to file creation? What about "tree creation"? What
is the difference between them - a tree implies multiple things are
being indexed, so that's got to be different in some way from file
and directory creation?

Indeed, if these are all measuring operations per second, then why
is tree creation 2000x faster than tree removal when file and
directory removal are 4x faster than creation? They can't all be
measuring single operations, and so the numbers are essentially
meaningless without being able to understand how they are different.

> > > IOPS for file creation is only 2.3W; however, compare to Ext4 with the same testing.
> 
> > Ummm - what unit of measurement is "W"? Watts?
> 
> Sorry, same as above..

So you made it up?

> > IOWs: Being CPU bound at 25,000 file creates/s is in line with
> > what I'd expect on XFS for a single threaded, single directory
> > create over 2 million directory entries with the default 4k
> > directory block size....
> ----------
> 
> I understand that this is a single-thread limit, but I guess there is some
> other limit here, because even with a single thread, creating 50W files
> is twice as fast as creating 200W files.

What does this W unit mean now? It's not 10,000 ops/s, like above,
because that just makes no sense at all. Again: please stop
using shorthand or abbreviations that other people will not
understand. If you meant "the file create speed is different when
creating 50,000 files versus creating 200,000 files", then write it
out in full because then everyone understands exactly what you mean.

/Assuming/ this is what you meant, then it's pretty obvious why they
are different - it's basic CS algorithms and math. Answer these two
questions, and you have your answer as to what is going on:

	1. How does the CPU overhead of btree operation scale with
	increasing numbers of items in the btree?

	2. What does that do to the *average* insert rate for N
	insertions into an empty tree for increasing values of N?
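
(As a rough back-of-envelope sketch - my numbers, not a measurement:
if one insert into a tree of i items costs ~c * log2(i), then N
inserts into an initially empty tree cost ~sum(i=1..N) c * log2(i)
~= c * N * (log2(N) - 1.44), so the *average* insert rate keeps
falling as N grows, even though each individual insert is still
O(log N).)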

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Bad Metadata performances for XFS?
  2016-07-05  1:43     ` Wang Shilong
  2016-07-05  7:29       ` Dave Chinner
@ 2016-07-05 20:34       ` Chris Murphy
  2016-07-06 11:49         ` Roger Willcocks
  1 sibling, 1 reply; 7+ messages in thread
From: Chris Murphy @ 2016-07-05 20:34 UTC (permalink / raw)
  To: Wang Shilong; +Cc: linux-xfs, xfs

On Mon, Jul 4, 2016 at 7:43 PM, Wang Shilong <wshilong@ddn.com> wrote:
>
>
> I understand that this is a single-thread limit, but I guess there is some
> other limit here, because even with a single thread, creating 50W files
> is twice as fast as creating 200W files.

Watts or Wolframs (tungsten)?

50W != 50000. You could write it as 50k and 200k. It's unlikely to be
confused with 50K and 200K, which are temperatures, because of
context. But W makes no sense.



-- 
Chris Murphy


* Re: Bad Metadata performances for XFS?
  2016-07-05 20:34       ` Chris Murphy
@ 2016-07-06 11:49         ` Roger Willcocks
  2016-07-06 23:05           ` Dave Chinner
  0 siblings, 1 reply; 7+ messages in thread
From: Roger Willcocks @ 2016-07-06 11:49 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-xfs, Wang Shilong, xfs

On Tue, 2016-07-05 at 14:34 -0600, Chris Murphy wrote:
> On Mon, Jul 4, 2016 at 7:43 PM, Wang Shilong <wshilong@ddn.com> wrote:
> >
> >
> > I understand that this is a single-thread limit, but I guess there is some
> > other limit here, because even with a single thread, creating 50W files
> > is twice as fast as creating 200W files.
> 
> Watts or Wolframs (tungsten)?
> 
> 50W != 50000. You could write it as 50k and 200k. It's unlikely to be
> confused with 50K and 200K, which are temperatures, because of
> context. But W makes no sense.
> 
> 
> 

I suspect it's an abbreviation for the (Chinese) unit 'wan'

https://www.quora.com/Why-do-Chinese-people-count-in-units-of-10-000

so it makes perfect sense, but it's not an SI unit.
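
(On that reading, 50W = 50 x 10,000 = 500,000 files and
200W = 200 x 10,000 = 2,000,000 files, which matches the 500000- and
2000000-file mdtest runs earlier in the thread.)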

--
Roger



* Re: Bad Metadata performances for XFS?
  2016-07-06 11:49         ` Roger Willcocks
@ 2016-07-06 23:05           ` Dave Chinner
  0 siblings, 0 replies; 7+ messages in thread
From: Dave Chinner @ 2016-07-06 23:05 UTC (permalink / raw)
  To: Roger Willcocks; +Cc: linux-xfs, Chris Murphy, Wang Shilong, xfs

On Wed, Jul 06, 2016 at 12:49:29PM +0100, Roger Willcocks wrote:
> On Tue, 2016-07-05 at 14:34 -0600, Chris Murphy wrote:
> > On Mon, Jul 4, 2016 at 7:43 PM, Wang Shilong <wshilong@ddn.com> wrote:
> > >
> > >
> > > I understand that this is a single-thread limit, but I guess there is some
> > > other limit here, because even with a single thread, creating 50W files
> > > is twice as fast as creating 200W files.
> > 
> > Watts or Wolframs (tungsten)?
> > 
> > 50W != 50000. You could write it as 50k and 200k. It's unlikely to be
> > confused with 50K and 200K, which are temperatures, because of
> > context. But W makes no sense.
> > 
> > 
> > 
> 
> I suspect it's an abbreviation for the (Chinese) unit 'wan'
> 
> https://www.quora.com/Why-do-Chinese-people-count-in-units-of-10-000
> 
> so it makes perfect sense but it's not an SI unit.

Thanks, Roger - it does make sense now the unit being used has been
explained. This is a canonical example of how being explicit about
units being used and what they mean is of prime importance to
understanding what each other are saying.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
