linux-kernel.vger.kernel.org archive mirror
* re: Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this)
@ 2001-08-13 16:40 HABBINGA,ERIK (HP-Loveland,ex1)
  2001-08-13 21:12 ` Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) Hans Reiser
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: HABBINGA,ERIK (HP-Loveland,ex1) @ 2001-08-13 16:40 UTC (permalink / raw)
  To: 'linux-kernel@vger.kernel.org'

Here are some SPEC SFS NFS testing (http://www.spec.org/osg/sfs97) results
I've been doing over the past few weeks that show NFS performance degrading
since the 2.4.5pre1 kernel.  I've kept the hardware constant, only changing
the kernel.  I'm prevented by management from releasing our top numbers, but
have given our results normalized to the 2.4.5pre1 kernel.  I've also shown
the results from the first three SPEC runs to show the response time trend.
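
(Concretely, each "peak IOPS" figure below is that kernel's peak divided by
the 2.4.5pre1 peak:

    normalized peak = 100% * peak(kernel) / peak(2.4.5pre1)

so 2.4.5pre1 itself reads 100%.)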

Normally, response time should start out very low, increasing slowly until
the maximum load of the system under test is reached.  Starting with
2.4.8pre8, the response time starts very high, and then decreases.  Very
bizarre behaviour.
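
(To make "normal" concrete: a healthy curve is non-decreasing in load.  A
throwaway Python check in the spirit of that observation -- a hypothetical
helper of mine, not part of SPEC SFS:

    def response_curve_shape(response_ms):
        """Classify response times measured at increasing load levels."""
        rising = all(a <= b for a, b in zip(response_ms, response_ms[1:]))
        falling = all(a >= b for a, b in zip(response_ms, response_ms[1:]))
        if rising:
            return "normal (non-decreasing)"
        if falling:
            return "inverted (starts high, decreases)"
        return "mixed"

    print(response_curve_shape([0.8, 1.0, 1.0]))  # 2.4.5pre1 -> normal
    print(response_curve_shape([8.3, 6.5, 4.5]))  # 2.4.8pre8 -> inverted
)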

The SPEC results consist of the following data (only the first three numbers
are significant for this discussion; a parsing sketch follows the list):
- load.  The load the SPEC prime client tries to generate against the system
under test.  Measured in I/Os per second (IOPS).
- throughput.  The load actually delivered by the system under test.
Measured in IOPS.
- response time.  Measured in milliseconds
- total operations
- elapsed time.  Measured in seconds
- NFS version. 2 or 3
- Protocol. UDP (U) or TCP (T)
- file set size in megabytes
- number of clients
- number of SPEC SFS processes
- biod reads
- biod writes
- SPEC SFS version
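
For anyone scripting against these rows, a minimal parsing sketch in Python
(the field names are my own labels, not anything SPEC defines):

    FIELDS = ["load_iops", "throughput_iops", "response_ms", "total_ops",
              "elapsed_s", "nfs_version", "protocol", "fileset_mb",
              "clients", "processes", "biod_reads", "biod_writes",
              "sfs_version"]

    def parse_result_row(line):
        """Split one whitespace-separated SPEC SFS result row into a dict."""
        values = line.split()
        if len(values) != len(FIELDS):
            raise ValueError("expected 13 fields, got %d" % len(values))
        return dict(zip(FIELDS, values))

    # First 2.4.5pre1 row from the results below:
    row = parse_result_row("500 497 0.8 149116 300 3 U 5070624 1 48 2 2 2.0")
    print(row["throughput_iops"], row["response_ms"])   # 497 0.8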

The 2.4.8pre4 and 2.4.8 tests were invalid.  Too many (> 1%) of the RPC
calls between the SPEC prime client and the system under test failed.  This
is not a good thing.
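
(As I understand it, the validity rule boils down to a failure-rate
threshold -- sketch only, with a hypothetical helper name:

    def run_is_valid(total_rpc_calls, failed_rpc_calls):
        """A run is invalid when more than 1% of RPC calls fail."""
        return failed_rpc_calls / float(total_rpc_calls) <= 0.01
)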

I'm willing to try out any ideas on this system to help find and fix the
performance degradation.

Erik Habbinga
Hewlett Packard

Hardware:
4 processors, 4GB ram
45 fibre channel drives, set up in hardware RAID 0/1
2 direct Gigabit Ethernet connections between SPEC SFS prime client and
system under test
reiserfs
all NFS filesystems exported with sync,no_wdelay to ensure O_SYNC writes to
storage
NFS v3 UDP
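
As an illustration, those export options look something like this in
/etc/exports (the path and client subnet here are placeholders, not our
actual configuration):

    /export/sfs   192.0.2.0/24(rw,sync,no_wdelay)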

Results (columns: load IOPS, throughput IOPS, response ms, total ops,
elapsed s, NFS version, protocol, fileset MB, clients, processes, biod
reads, biod writes, SFS version):
2.4.5pre1
            500     497     0.8   149116  300 3 U    5070624   1 48  2  2  2.0
           1000    1004     1.0   300240  299 3 U   10141248   1 48  2  2  2.0
           1500    1501     1.0   448807  299 3 U   15210624   1 48  2  2  2.0
peak IOPS: 100% of 2.4.5pre1

2.4.5pre2
            500     497     1.0   149195  300 3 U    5070624   1 48  2  2  2.0
           1000    1005     1.2   300449  299 3 U   10141248   1 48  2  2  2.0
           1500    1502     1.2   449057  299 3 U   15210624   1 48  2  2  2.0
peak IOPS: 91% of 2.4.5pre1

2.4.5pre3
            500     497     1.0   149095  300 3 U    5070624   1 48  2  2  2.0
           1000    1004     1.1   300135  299 3 U   10141248   1 48  2  2  2.0
           1500    1502     1.2   449069  299 3 U   15210624   1 48  2  2  2.0
peak IOPS: 91% of 2.4.5pre1

2.4.5pre4
   wouldn't run (stale NFS file handle error)

2.4.5pre5
   wouldn't run (stale NFS file handle error)

2.4.5pre6
   wouldn't run (stale NFS file handle error)

2.4.7
            500     497     1.2   149206  300 3 U    5070624   1 48  2  2  2.0
           1000    1005     1.5   300503  299 3 U   10141248   1 48  2  2  2.0
           1500    1502     1.3   449232  299 3 U   15210624   1 48  2  2  2.0
peak IOPS: 65% of 2.4.5pre1

2.4.8pre1
   wouldn't run

2.4.8pre4
            500     497     1.1   149180  300 3 U    5070624   1 48  2  2  2.0
           1000    1002     1.2   299465  299 3 U   10141248   1 48  2  2  2.0
           1500    1502     1.3   449190  299 3 U   15210624   1 48  2  2  2.0
INVALID
peak IOPS: 54% of 2.4.5pre1

2.4.8pre6
            500     497     1.1   149168  300 3 U    5070624   1 48  2  2  2.0
           1000    1004     1.3   300246  299 3 U   10141248   1 48  2  2  2.0
           1500    1502     1.3   449135  299 3 U   15210624   1 48  2  2  2.0
peak IOPS: 55% of 2.4.5pre1

2.4.8pre7
            500     498     1.5   149367  300 3 U    5070624   1 48  2  2  2.0
           1000    1006     2.2   301829  300 3 U   10141248   1 48  2  2  2.0
           1500    1502     2.2   449244  299 3 U   15210624   1 48  2  2  2.0
peak IOPS: 58% of 2.4.5pre1

2.4.8pre8
            500     597     8.3   179030  300 3 U    5070624   1 48  2  2  2.0
           1000    1019     6.5   304614  299 3 U   10141248   1 48  2  2  2.0
           1500    1538     4.5   461335  300 3 U   15210624   1 48  2  2  2.0
peak IOPS: 48% of 2.4.5pre1

2.4.8
            500     607     7.1   181981  300 3 U    5070624   1 48  2  2  2.0
           1000     997     7.0   299243  300 3 U   10141248   1 48  2  2  2.0
           1500    1497     2.9   447475  299 3 U   15210624   1 48  2  2  2.0
INVALID
peak IOPS: 45% of 2.4.5pre1

2.4.9pre2
   wouldn't run (NFS readdir errors)
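
Summarizing the normalized peaks from the runs above:

    kernel       peak IOPS vs 2.4.5pre1
    2.4.5pre1    100%
    2.4.5pre2     91%
    2.4.5pre3     91%
    2.4.7         65%
    2.4.8pre4     54%  (invalid run)
    2.4.8pre6     55%
    2.4.8pre7     58%
    2.4.8pre8     48%
    2.4.8         45%  (invalid run)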


* Re: Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this)
  2001-08-13 16:40 Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) HABBINGA,ERIK (HP-Loveland,ex1)
@ 2001-08-13 21:12 ` Hans Reiser
  2001-08-14  7:57 ` Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) Henning P. Schmiedehausen
  2001-08-14 14:24 ` Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) Chris Mason
  2 siblings, 0 replies; 4+ messages in thread
From: Hans Reiser @ 2001-08-13 21:12 UTC (permalink / raw)
  To: HABBINGA,ERIK (HP-Loveland,ex1)
  Cc: 'linux-kernel@vger.kernel.org',
	reiserfs-list, Gryaznova E.,
	Chris Mason

We are looking into this.  Elena and Chris, please advise as to whether the
slowdown comes from recently added ReiserFS code or from layers outside
ReiserFS.

Hans


"HABBINGA,ERIK (HP-Loveland,ex1)" wrote:
> 
> Here are some SPEC SFS NFS testing (http://www.spec.org/osg/sfs97) results
> I've been doing over the past few weeks that show NFS performance degrading
> since the 2.4.5pre1 kernel.  I've kept the hardware constant, only changing
> the kernel.
> 
> [rest of the report and results snipped -- verbatim duplicate of the
> parent message above]


* Re: Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this)
  2001-08-13 16:40 Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) HABBINGA,ERIK (HP-Loveland,ex1)
  2001-08-13 21:12 ` Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) Hans Reiser
@ 2001-08-14  7:57 ` Henning P. Schmiedehausen
  2001-08-14 14:24 ` Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) Chris Mason
  2 siblings, 0 replies; 4+ messages in thread
From: Henning P. Schmiedehausen @ 2001-08-14  7:57 UTC (permalink / raw)
  To: linux-kernel

"HABBINGA,ERIK (HP-Loveland,ex1)" <erik_habbinga@hp.com> writes:

>reiserfs

Would you mind rerunning your tests with ext2?

	Regards
		Henning


-- 
Dipl.-Inf. (Univ.) Henning P. Schmiedehausen       -- Geschaeftsfuehrer
INTERMETA - Gesellschaft fuer Mehrwertdienste mbH     hps@intermeta.de

Am Schwabachgrund 22  Fon.: 09131 / 50654-0   info@intermeta.de
D-91054 Buckenhof     Fax.: 09131 / 50654-20   


* re: Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this)
  2001-08-13 16:40 Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) HABBINGA,ERIK (HP-Loveland,ex1)
  2001-08-13 21:12 ` Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) Hans Reiser
  2001-08-14  7:57 ` Performance 2.4.8 is worse than 2.4.x<8 (SPEC NFS results show this) Henning P. Schmiedehausen
@ 2001-08-14 14:24 ` Chris Mason
  2 siblings, 0 replies; 4+ messages in thread
From: Chris Mason @ 2001-08-14 14:24 UTC (permalink / raw)
  To: HABBINGA,ERIK (HP-Loveland,ex1), 'linux-kernel@vger.kernel.org'



On Monday, August 13, 2001 09:40:59 AM -0700 "HABBINGA,ERIK
(HP-Loveland,ex1)" <erik_habbinga@hp.com> wrote:

> Here are some SPEC SFS NFS testing (http://www.spec.org/osg/sfs97) results
> I've been doing over the past few weeks that show NFS performance
> degrading since the 2.4.5pre1 kernel.  I've kept the hardware constant,
> only changing the kernel. 

Did the 2.4.5pre1 have the transaction tracking patch?

-chris

