* Read speed
@ 2013-02-05  1:23 Jeff Blaine
  2013-02-05  2:03 ` Steve French
  0 siblings, 1 reply; 9+ messages in thread
From: Jeff Blaine @ 2013-02-05  1:23 UTC (permalink / raw)
  To: linux-cifs-u79uwXL29TY76Z2rM5mHXA

Hi,

On a RHEL 6.3 box talking to a Windows 7 Enterprise box,
I am seeing approximately 1/4th the speed with mount.cifs
as I am with smbclient 'get'. RHEL 6.3 currently has
CIFS 1.68.

After about a half hour of reading forum threads for the
last few years, it seems this is very well known and has
been the case for a long time.

I have tried using CIFSMaxBufSize=61440 with rsize=61440
at mount-time and it doesn't really buy me much.
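For reference, that tuning amounts to something like the following (the
server, share, mount point, and user below are placeholders, not the real
ones; on these older kernels rsize is capped by CIFSMaxBufSize):

```shell
# Raise the cifs module's maximum buffer size; takes effect when the
# module is (re)loaded.
echo 'options cifs CIFSMaxBufSize=61440' > /etc/modprobe.d/cifs.conf
modprobe -r cifs && modprobe cifs

# Mount with a matching read size (placeholder server/share/user).
mount -t cifs //winbox/share /mnt/share -o user=someuser,rsize=61440
```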

Is there any sort of public-facing summary of the state of
the read performance issues? I saw no mention of it in the
BUGS section of the mount.cifs man page or in the README for
the kernel module.

Is the cause known?

Has this already been fixed since 1.68, by chance? If so,
what assembly of pieces will overcome the issue? Should
I just open a RHEL bug through our support channel and
get them involved in this effort somehow?

Any guidance would be welcome at this point.

Jeff

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Read speed
  2013-02-05  1:23 Read speed Jeff Blaine
@ 2013-02-05  2:03 ` Steve French
       [not found]   ` <CAH2r5mvbbuo9e30kuRFp9eegJqq_9HLR9=oaGvFqyMi-aBKXTQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 9+ messages in thread
From: Steve French @ 2013-02-05  2:03 UTC (permalink / raw)
  To: Jeff Blaine; +Cc: linux-cifs, LKML, linux-fsdevel

You will need a more recent kernel (probably based on the 3.2 kernel
or later; 3.2 was released a year or so ago) to see the dramatic
improvements in cifs read speeds that came with the redesign of the
read code to add more parallelism for i/o to the same file, although
Red Hat may have backported some of Jeff's excellent performance
improvements to some of the older distros.  See slides 21 through 26
of my presentation at

 http://www.snia.org/sites/default/files2/SDC2012/presentations/Revisions/SteveFrench_Linux_CIFS-SMB2-year-in-review-revision.pdf

Slides 23 and 24 list the cifs performance and functional enhancements
by kernel release.  Buffered, sequential read (e.g. a file copy from a
server) got much faster in the 3.2 kernel, especially to Samba and
other servers which support the Unix extensions (due to support for
i/o sizes larger than 64K).

Similarly, note that cifs write speed was dramatically improved
starting at kernel version 3.0 (1.5 to 2 years ago) due to the
addition of more async parallelism in the design of the cifs write
code (writing to the server from the cifs client): the i/o sizes were
made larger and more async dispatch of writes was allowed (previously,
to use a network interface fully you would need to be reading and/or
writing to multiple different files simultaneously).
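The win from that parallelism can be sketched with a toy latency model
(all numbers below are illustrative assumptions, not measurements from
this thread):

```python
def sync_read_time(n_chunks, chunk_bytes, rtt_s, bw_bytes_s):
    """One request in flight at a time: every chunk pays a full round trip."""
    service = chunk_bytes / bw_bytes_s
    return n_chunks * (rtt_s + service)

def pipelined_read_time(n_chunks, chunk_bytes, rtt_s, bw_bytes_s, window):
    """Up to `window` requests in flight: round trips overlap with transfer."""
    service = chunk_bytes / bw_bytes_s
    # Idealized: one RTT up front, then limited either by link bandwidth
    # or by how often a whole window must wait out a round trip.
    return rtt_s + max(n_chunks * service, (n_chunks / window) * rtt_s)

# 10 MB in 60 KB chunks over a 1.5 Mbps link with 100 ms RTT (assumed values)
n = (10 * 1024 * 1024) // (60 * 1024)
t_sync = sync_read_time(n, 60 * 1024, 0.100, 1.5e6 / 8)
t_async = pipelined_read_time(n, 60 * 1024, 0.100, 1.5e6 / 8, window=16)
```

On a slow link the transfer time itself dominates, so the gain is modest;
on a fast, high-latency link the synchronous round-trip penalty is far larger.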

On Mon, Feb 4, 2013 at 7:23 PM, Jeff Blaine <jblaine@kickflop.net> wrote:
> Hi,
>
> On a RHEL 6.3 box talking to a Windows 7 Enterprise box,
> I am seeing approximately 1/4th the speed with mount.cifs
> as I am with smbclient 'get'. RHEL 6.3 currently has
> CIFS 1.68.
>
> After about a half hour of reading forum threads for the
> last few years, it seems this is very well known and has
> been the case for a long time.
>
> I have tried using CIFSMaxBufSize=61440 with rsize=61440
> at mount-time and it doesn't really buy me much.
>
> Is there any sort of public-facing summary of the state of
> the read performance issues? I saw no mention of it in the
> BUGS section of the mount.cifs man page or in the README for
> the kernel module.
>
> Is the cause known?
>
> Has this already been fixed since 1.68, by chance? If so,
> what assembly of pieces will overcome the issue? Should
> I just open a RHEL bug through our support channel and
> get them involved in this effort somehow?
>
> Any guidance would be welcome at this point.
>
> Jeff



-- 
Thanks,

Steve


* Re: Read speed
       [not found]   ` <CAH2r5mvbbuo9e30kuRFp9eegJqq_9HLR9=oaGvFqyMi-aBKXTQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-02-05 17:12     ` Jeff Blaine
       [not found]       ` <51113D76.80609-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org>
  0 siblings, 1 reply; 9+ messages in thread
From: Jeff Blaine @ 2013-02-05 17:12 UTC (permalink / raw)
  To: Steve French; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

Thank you for the thorough reply, Steve. It's nice to
read of the progress in 2012, but RHEL 6.3 and what is
supported there is what we have to work with. It is the
latest supported version, as you surely know, so it
seems like we'll have to wait quite a while before
getting the module's read performance increases.

We're dealing with 9 US sites and a nightly time window
(which we're already exceeding) for transferring large
amounts of data over 1.5 Mbps links.

Back to the drawing board.

Thanks again.

On 2/4/2013 9:03 PM, Steve French wrote:
> You will need a more recent kernel (probably based on the 3.2 kernel
> or later; 3.2 was released a year or so ago) to see the dramatic
> improvements in cifs read speeds that came with the redesign of the
> read code to add more parallelism for i/o to the same file, although
> Red Hat may have backported some of Jeff's excellent performance
> improvements to some of the older distros.  See slides 21 through 26
> of my presentation at
>
>   http://www.snia.org/sites/default/files2/SDC2012/presentations/Revisions/SteveFrench_Linux_CIFS-SMB2-year-in-review-revision.pdf
>
> Slides 23 and 24 list the cifs performance and functional enhancements
> by kernel release.  Buffered, sequential read (e.g. a file copy from a
> server) got much faster in the 3.2 kernel, especially to Samba and
> other servers which support the Unix extensions (due to support for
> i/o sizes larger than 64K).
>
> Similarly, note that cifs write speed was dramatically improved
> starting at kernel version 3.0 (1.5 to 2 years ago) due to the
> addition of more async parallelism in the design of the cifs write
> code (writing to the server from the cifs client): the i/o sizes were
> made larger and more async dispatch of writes was allowed (previously,
> to use a network interface fully you would need to be reading and/or
> writing to multiple different files simultaneously).
>
> On Mon, Feb 4, 2013 at 7:23 PM, Jeff Blaine <jblaine-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org> wrote:
>> Hi,
>>
>> On a RHEL 6.3 box talking to a Windows 7 Enterprise box,
>> I am seeing approximately 1/4th the speed with mount.cifs
>> as I am with smbclient 'get'. RHEL 6.3 currently has
>> CIFS 1.68.
>>
>> After about a half hour of reading forum threads for the
>> last few years, it seems this is very well known and has
>> been the case for a long time.
>>
>> I have tried using CIFSMaxBufSize=61440 with rsize=61440
>> at mount-time and it doesn't really buy me much.
>>
>> Is there any sort of public-facing summary of the state of
>> the read performance issues? I saw no mention of it in the
>> BUGS section of the mount.cifs man page or in the README for
>> the kernel module.
>>
>> Is the cause known?
>>
>> Has this already been fixed since 1.68, by chance? If so,
>> what assembly of pieces will overcome the issue? Should
>> I just open a RHEL bug through our support channel and
>> get them involved in this effort somehow?
>>
>> Any guidance would be welcome at this point.
>>
>> Jeff
>
>
>


* Re: Read speed
       [not found]       ` <51113D76.80609-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org>
@ 2013-02-05 19:20         ` Jeff Layton
       [not found]           ` <20130205142058.7618dfcf-4QP7MXygkU+dMjc06nkz3ljfA9RmPOcC@public.gmane.org>
  0 siblings, 1 reply; 9+ messages in thread
From: Jeff Layton @ 2013-02-05 19:20 UTC (permalink / raw)
  To: Jeff Blaine
  Cc: Steve French, linux-cifs-u79uwXL29TY76Z2rM5mHXA,
	sprabhu-H+wXaHxf7aLQT0dZR+AlfA

On Tue, 05 Feb 2013 12:12:22 -0500
Jeff Blaine <jblaine-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org> wrote:

> Thank you for the thorough reply, Steve. It's nice to
> read of the progress in 2012, but RHEL 6.3 and what is
> supported there is what we have to work with. It is the
> latest supported version, as you surely know, so it
> seems like we'll have to wait quite a while before
> getting the module's read performance increases.
> 
> We're dealing with 9 US sites and a nightly time window
> (which we're already exceeding) for transferring large
> amounts of data over 1.5 Mbps links.
> 
> Back to the drawing board.
> 
> Thanks again.
> 

I think you're basically out of luck for now...

Long haul links are really a pessimal case for the older code that did
synchronous reads and writes. You spend most of your time waiting
around for the calls to go back and forth.
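Some hedged back-of-the-envelope arithmetic makes the point (both the
read size and the RTT below are assumptions; the thread doesn't state
either):

```python
# Throughput ceiling for strictly synchronous reads: each read costs one
# round trip plus its transfer time, so the ceiling is size / (rtt + size/bw).
read_bytes = 16 * 1024   # small per-call read size (assumption)
rtt = 0.100              # 100 ms WAN round trip (assumption)
bw = 1.5e6 / 8           # 1.5 Mbps link, in bytes per second

ceiling = read_bytes / (rtt + read_bytes / bw)
# ceiling comes out well under bw: the shortfall is pure round-trip wait.
```

With async dispatch, that per-call RTT is overlapped instead of paid
serially, which is exactly where the long-haul gains come from.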

I believe Sachin is looking at backporting the async read patches for
6.5. The async write code is already in 6.3 (I think). You should
definitely open a support case since documented customer demand helps
us make the case for including these sorts of changes in an update.

-- 
Jeff Layton <jlayton-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>


* Re: Read speed
       [not found]           ` <20130205142058.7618dfcf-4QP7MXygkU+dMjc06nkz3ljfA9RmPOcC@public.gmane.org>
@ 2013-02-05 19:23             ` Jeff Layton
       [not found]               ` <20130205142307.58d87a21-4QP7MXygkU+dMjc06nkz3ljfA9RmPOcC@public.gmane.org>
  0 siblings, 1 reply; 9+ messages in thread
From: Jeff Layton @ 2013-02-05 19:23 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Jeff Blaine, Steve French, linux-cifs-u79uwXL29TY76Z2rM5mHXA,
	sprabhu-H+wXaHxf7aLQT0dZR+AlfA

On Tue, 5 Feb 2013 14:20:58 -0500
Jeff Layton <jlayton-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> On Tue, 05 Feb 2013 12:12:22 -0500
> Jeff Blaine <jblaine-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org> wrote:
> 
> > Thank you for the thorough reply, Steve. It's nice to
> > read of the progress in 2012, but RHEL 6.3 and what is
> > supported there is what we have to work with. It is the
> > latest supported version, as you surely know, so it
> > seems like we'll have to wait quite a while before
> > getting the module's read performance increases.
> > 
> > We're dealing with 9 US sites and a nightly time window
> > (which we're already exceeding) for transferring large
> > amounts of data over 1.5 Mbps links.
> > 
> > Back to the drawing board.
> > 
> > Thanks again.
> > 
> 
> I think you're basically out of luck for now...
> 
> Long haul links are really a pessimal case for the older code that did
> synchronous reads and writes. You spend most of your time waiting
> around for the calls to go back and forth.
> 
> I believe Sachin is looking at backporting the async read patches for
> 6.5. The async write code is already in 6.3 (I think). You should
> definitely open a support case since documented customer demand helps
> us make the case for including these sorts of changes in an update.
> 

...oh, and if you haven't already, you should try to make some time for
testing a more recent kernel to ensure that this has been addressed
there. Consider doing something like installing Fedora and testing the
read speeds over your long haul link.

It wouldn't do you any good for us to backport those changes if they
don't actually help your use-case.

-- 
Jeff Layton <jlayton-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>


* Re: Read speed
       [not found]               ` <20130205142307.58d87a21-4QP7MXygkU+dMjc06nkz3ljfA9RmPOcC@public.gmane.org>
@ 2013-02-05 20:02                 ` Jeff Blaine
       [not found]                   ` <5111656C.90109-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org>
  0 siblings, 1 reply; 9+ messages in thread
From: Jeff Blaine @ 2013-02-05 20:02 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Steve French, linux-cifs-u79uwXL29TY76Z2rM5mHXA,
	sprabhu-H+wXaHxf7aLQT0dZR+AlfA

>> I believe Sachin is looking at backporting the async read patches for
>> 6.5. The async write code is already in 6.3 (I think). You should
>> definitely open a support case since documented customer demand helps
>> us make the case for including these sorts of changes in an update.

I opened one an hour or so ago, so that's done.

Should I test with Fedora 17 or Fedora 18 for the sake of this?
Just because 18 is newer doesn't mean it's the most appropriate
for this test need, eh?


* Re: Read speed
       [not found]                   ` <5111656C.90109-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org>
@ 2013-02-05 20:17                     ` Jeff Layton
  2013-03-11 17:54                     ` Jeff Blaine
  1 sibling, 0 replies; 9+ messages in thread
From: Jeff Layton @ 2013-02-05 20:17 UTC (permalink / raw)
  To: Jeff Blaine
  Cc: Steve French, linux-cifs-u79uwXL29TY76Z2rM5mHXA,
	sprabhu-H+wXaHxf7aLQT0dZR+AlfA

On Tue, 05 Feb 2013 15:02:52 -0500
Jeff Blaine <jblaine-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org> wrote:

> >> I believe Sachin is looking at backporting the async read patches for
> >> 6.5. The async write code is already in 6.3 (I think). You should
> >> definitely open a support case since documented customer demand helps
> >> us make the case for including these sorts of changes in an update.
> 
> I opened one an hour or so ago, so that's done.
> 
> Should I test with Fedora 17 or Fedora 18 for the sake of this?
> Just because 18 is newer doesn't mean it's the most appropriate
> for this test need, eh?

In general, later is better, but either should be fine. The async read
code went into mainline about a year or so ago, so both should have it.
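If it helps, a quick way to confirm what a test box is actually running
(the modinfo field name is the usual one, though distros vary):

```shell
# Kernel release -- the async read rework landed in mainline around 3.2.
uname -r

# Version string of the cifs module, where the distro provides one.
modinfo cifs 2>/dev/null | grep -i '^version' || true
```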

-- 
Jeff Layton <jlayton-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>


* Re: Read speed
       [not found]                   ` <5111656C.90109-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org>
  2013-02-05 20:17                     ` Jeff Layton
@ 2013-03-11 17:54                     ` Jeff Blaine
       [not found]                       ` <513E1A3E.1080404-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org>
  1 sibling, 1 reply; 9+ messages in thread
From: Jeff Blaine @ 2013-03-11 17:54 UTC (permalink / raw)
  Cc: Jeff Layton, Steve French, linux-cifs-u79uwXL29TY76Z2rM5mHXA,
	sprabhu-H+wXaHxf7aLQT0dZR+AlfA

[ suggestion by Jeff Layton to try to ensure that a modern  ]
[ kernel addresses the issues we were seeing over WAN links ]

Just following up.

* Fedora 18 CIFS-over-WAN read performance, as recorded with the same
   testing that was done previously, is ~3.4x that of RHEL 6.3
   (or any common production distro running a pre-3.2 kernel).

So yes, the work done to CIFS addresses our situation directly.


* Re: Read speed
       [not found]                       ` <513E1A3E.1080404-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org>
@ 2013-03-11 17:57                         ` Steve French
  0 siblings, 0 replies; 9+ messages in thread
From: Steve French @ 2013-03-11 17:57 UTC (permalink / raw)
  To: Jeff Blaine
  Cc: Jeff Layton, linux-cifs-u79uwXL29TY76Z2rM5mHXA,
	sprabhu-H+wXaHxf7aLQT0dZR+AlfA

On Mon, Mar 11, 2013 at 12:54 PM, Jeff Blaine <jblaine-GbE5gUWZ6k7k1uMJSBkQmQ@public.gmane.org> wrote:
> [ suggestion by Jeff Layton to try to ensure that a modern  ]
> [ kernel addresses the issues we were seeing over WAN links ]
>
> Just following up.
>
> * Fedora 18 CIFS-over-WAN read performance, as recorded with the same
>   testing that was done previously, is ~3.4x that of RHEL 6.3
>   (or any common production distro running a pre-3.2 kernel).
>
> So yes, the work done to CIFS addresses our situation directly.

That is great - I hope the SMB2.1 leases (and SMB3 directory leases)
will help even more over the WAN as we look to the future.


-- 
Thanks,

Steve

