* NFS access slow
@ 2012-12-18 15:52 Keith Edmunds
  2012-12-18 18:50 ` J. Bruce Fields
  0 siblings, 1 reply; 5+ messages in thread
From: Keith Edmunds @ 2012-12-18 15:52 UTC (permalink / raw)
  To: linux-nfs

Accessing disks locally on a server gives read speeds around 100MB/s,
write speeds around 267MB/s.

Mounting the same disks on the same server via NFS (ie, not using the
network at all) gives read speeds around 30MB/s, write speeds around
80MB/s.

That's about 30% of the local access speed.

Is that to be expected? I'd expect a 10-15% slowdown, but not this much.

This is using NFSv4; using NFSv3 improves the speeds slightly (36MB/s
read, 95MB/s write).

Other parameters we've changed, none of which have a significant impact:

 - UDP/TCP
 - rsize and wsize
 - noatime
 - noacl
 - nocto
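
For reference, a mount line combining that kind of tuning (the hostname,
export path and exact rsize/wsize values here are placeholders rather
than our real settings) looks something like:

mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576,noatime,noacl,nocto \
    server:/export /mnt/tmp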

If that's an unexpected slowdown, where should we be looking?

Thanks.

* Re: NFS access slow
  2012-12-18 15:52 NFS access slow Keith Edmunds
@ 2012-12-18 18:50 ` J. Bruce Fields
  2012-12-18 19:42   ` Keith Edmunds
  0 siblings, 1 reply; 5+ messages in thread
From: J. Bruce Fields @ 2012-12-18 18:50 UTC (permalink / raw)
  To: Keith Edmunds; +Cc: linux-nfs

On Tue, Dec 18, 2012 at 03:52:48PM +0000, Keith Edmunds wrote:
> Accessing disks locally on a server gives read speeds around 100MB/s,
> write speeds around 267MB/s.
> 
> Mounting the same disks on the same server via NFS (ie, not using the
> network at all) gives read speeds around 30MB/s, write speeds around
> 80MB/s.
> 
> That's about 30% of the local access speed.
> 
> Is that to be expected? I'd expect a 10-15% slowdown, but not this much.
> 
> This is using NFSv4; using NFSv3 improves the speeds slightly (36MB/s
> read, 95MB/s write).

What are your disks?  How exactly are you getting those numbers?
(Literally, step-by-step, what commands are you running?)

What kernel version?

Note loopback-mounts (client and server on same machine) aren't really
fully supported.

--b.

> 
> Other parameters we've changed, none of which have a significant impact:
> 
>  - UDP/TCP
>  - rsize and wsize
>  - noatime
>  - noacl
>  - nocto
> 
> If that's an unexpected slowdown, where should we be looking?
> 
> Thanks.

* Re: NFS access slow
  2012-12-18 18:50 ` J. Bruce Fields
@ 2012-12-18 19:42   ` Keith Edmunds
  2012-12-18 20:34     ` Jim Rees
  2012-12-18 20:37     ` J. Bruce Fields
  0 siblings, 2 replies; 5+ messages in thread
From: Keith Edmunds @ 2012-12-18 19:42 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs

> What are your disks?

They are Enterprise Nearline 6Gb/s SAS drives in an Infortrend disk array.

>  How exactly are you getting those numbers?
> (Literally, step-by-step, what commands are you running?)

Using postmark:

pm> set location /mnt/tmp
pm> set size 10000 10000000
pm> run

The only difference is the 'set location' line, which points to either the
NFS mountpoint or the local mountpoint.

A test using dd ("dd if=/dev/zero of=/mnt/tmp bs=1M count=8192") showed
direct access to be about five times faster than access via NFS.

> What kernel version?

3.2

> Note loopback-mounts (client and server on same machine) aren't really
> fully supported.

OK, I wasn't aware of that. We were only testing that way to try to
eliminate switches, cables, etc. I've just run a test from another server,
both connected via 10G links, and I'm getting a read speed of just under
20MB/s and a write speed of 52MB/s.

* Re: NFS access slow
  2012-12-18 19:42   ` Keith Edmunds
@ 2012-12-18 20:34     ` Jim Rees
  2012-12-18 20:37     ` J. Bruce Fields
  1 sibling, 0 replies; 5+ messages in thread
From: Jim Rees @ 2012-12-18 20:34 UTC (permalink / raw)
  To: Keith Edmunds; +Cc: J. Bruce Fields, linux-nfs

Keith Edmunds wrote:

  OK, I wasn't aware of that. We were only testing that way to try to
  eliminate switches, cables, etc. I've just run a test from another server,
  both connected via 10G links, and I'm getting a read speed of just under
  20MB/s and a write speed of 52MB/s.

Something's wrong. What numbers do you get from iperf, or even something
like wget? Are you setting anything unusual with sysctl?
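
Something like this would do (assuming iperf is installed on both ends;
the hostname here is just a placeholder):

on the server:  iperf -s
on the client:  iperf -c nfs-server -t 30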

* Re: NFS access slow
  2012-12-18 19:42   ` Keith Edmunds
  2012-12-18 20:34     ` Jim Rees
@ 2012-12-18 20:37     ` J. Bruce Fields
  1 sibling, 0 replies; 5+ messages in thread
From: J. Bruce Fields @ 2012-12-18 20:37 UTC (permalink / raw)
  To: Keith Edmunds; +Cc: linux-nfs

On Tue, Dec 18, 2012 at 07:42:51PM +0000, Keith Edmunds wrote:
> > What are your disks?
> 
> They are Enterprise Nearline 6Gb/s SAS drives in an Infortrend disk array.
> 
> >  How exactly are you getting those numbers?
> > (Literally, step-by-step, what commands are you running?)
> 
> Using postmark:
> 
> pm> set location /mnt/tmp
> pm> set size 10000 10000000
> pm> run
> 
> The only difference is the 'set location' line, which points to either the
> NFS mountpoint or the local mountpoint.

Note that NFS requires operations such as file creation and removal to
be synchronous (for reboot/crash-recovery reasons).  So e.g. if postmark
is single threaded (I think it is), then the client has to wait for the
server to respond to a file create before proceeding, and the server has
to wait for the create to hit disk before responding.

Depending on exactly how postmark calculates those bandwidth numbers,
that could have a big effect.

If your array has a battery-backed cache that should help.
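
A rough way to see that per-operation cost (just an illustration; the
path is a placeholder) is to time a batch of small file creates on the
local filesystem and then on the NFS mount:

time sh -c 'for i in $(seq 1 1000); do touch /mnt/tmp/f$i; done'

Over NFS each of those creates is a round trip that the server has to
commit before replying, so the gap can be large even on fast disks.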

> A test using dd ("dd if=/dev/zero of=/mnt/tmp bs=1M count=8192") showed
> direct access to be about five times faster than access via NFS.

To make that an apples-to-apples comparison you should include the
time to sync after the dd in both cases.  (Though if your server doesn't
have much memory that might not make a big difference.)
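
E.g. something along these lines (the output filename is only an
example):

time sh -c 'dd if=/dev/zero of=/mnt/tmp/ddtest bs=1M count=8192; sync'

or let dd flush for itself by adding conv=fsync to the command line.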

> > What kernel version?
> 
> 3.2
> 
> > Note loopback-mounts (client and server on same machine) aren't really
> > fully supported.
> 
> OK, I wasn't aware of that. We were only testing that way to try to
> eliminate switches, cables, etc. I've just run a test from another server,
> both connected via 10G links, and I'm getting a read speed of just under
> 20MB/s and a write speed of 52MB/s.

Have you tested the network speed?  (E.g. with iperf.)

--b.
