All of lore.kernel.org
* Trouble accessing Buffalo NAS with CIFSFS
@ 2012-01-19  8:07 ralda-Mmb7MZpHnFY
       [not found] ` <20120119090752.9b7aea6c.ralda-Mmb7MZpHnFY@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: ralda-Mmb7MZpHnFY @ 2012-01-19  8:07 UTC (permalink / raw)
  To: linux-cifs-u79uwXL29TY76Z2rM5mHXA

Hi!

Some weeks ago I upgraded my Linux system to a recent kernel (currently
3.2.1). Since this update I have trouble accessing my NAS station
(Buffalo DriveStation 2Share) with CIFSFS.

Formerly I used an older 2.6 kernel with SMBFS to mount and access
the NAS station. The speed was not that high, but the connection worked
without failures. As SMBFS has been removed, I had to switch to CIFS in
the new installation. With CIFS I can mount the NAS station and I can
see the directory listings, but whenever I try to copy a file to the
NAS station the copy hangs and apparently never returns (leaving the
process in D state in the process list).

The NAS is mounted like this (output from /proc/mounts):

//archiv/share /nas cifs rw,mand,nosuid,nodev,noexec,relatime,sec=ntlm,
  unc=\\archiv\share,username=root,uid=1002,forceuid,gid=65534,forcegid,
  addr=192.168.178.3,file_mode=0600,dir_mode=0700,nounix,rsize=16384,
  wsize=65216,actimeo=1
  0 0

mount.cifs --version: 1.10
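For anyone trying to reproduce this, a mount invocation along the following lines should yield roughly the /proc/mounts options shown above (a sketch only; the share name, UID/GID and credentials are taken from the report, and the exact option set depends on the mount.cifs version):

```shell
# Hypothetical reconstruction of the original mount command (needs root
# and a reachable server); sec=, rsize/wsize and actimeo match the
# /proc/mounts line above.
mount -t cifs //archiv/share /nas \
  -o username=root,sec=ntlm,uid=1002,gid=65534,file_mode=0600,\
dir_mode=0700,nounix,rsize=16384,wsize=65216,actimeo=1
```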

dmesg output gives info like this:

CIFS VFS: Autodisabling the use of server inode numbers on \\archiv
\share. This server doesn't seem to support them properly. Hardlinks
will not be recognized on this mount. Consider mounting with the
"noserverino" option to silence this message.
CIFS VFS: Server archiv has not responded in 300 seconds.
Reconnecting...
CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
CIFS VFS: Error -11 sending data on socket to server
CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
CIFS VFS: Error -11 sending data on socket to server
CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
CIFS VFS: Error -11 sending data on socket to server
CIFS VFS: Error -32 sending data on socket to server
CIFS VFS: Error -32 sending data on socket to server
CIFS VFS: Error -32 sending data on socket to server
...

Adding "noserverino" to the mount options silences one message in the
above list but changes the other behavior.

CIFS is compiled statically into the kernel. This is a custom-configured
static kernel, not a standard modular distro kernel.

I really need help to get the NAS back to a working state. Feel free to
ask for any information you need for diagnosis. If required, I'm able
to apply patches and recompile the kernel.

Thx ahead
Harald

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found] ` <20120119090752.9b7aea6c.ralda-Mmb7MZpHnFY@public.gmane.org>
@ 2012-01-19 12:05   ` Jeff Layton
       [not found]     ` <20120120113042.0fdfea5f.ralda@gmx.de>
  2012-01-19 17:38   ` Suresh Jayaraman
  1 sibling, 1 reply; 12+ messages in thread
From: Jeff Layton @ 2012-01-19 12:05 UTC (permalink / raw)
  To: ralda-Mmb7MZpHnFY; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

On Thu, 19 Jan 2012 09:07:52 +0100
"ralda-Mmb7MZpHnFY@public.gmane.org" <ralda-Mmb7MZpHnFY@public.gmane.org> wrote:

> Hi!
> 
> Some weeks ago I upgraded my Linux system to a recent kernel (currently
> 3.2.1). Since this update I have trouble accessing my NAS station
> (Buffalo DriveStation 2Share) with CIFSFS.
> 
> Formerly I used an older 2.6 kernel with SMBFS to mount and access
> the NAS station. The speed was not that high, but the connection worked
> without failures. As SMBFS has been removed, I had to switch to CIFS in
> the new installation. With CIFS I can mount the NAS station and I can
> see the directory listings, but whenever I try to copy a file to the
> NAS station the copy hangs and apparently never returns (leaving the
> process in D state in the process list).
> 
> The NAS is mounted like this (output from /proc/mounts):
> 
> //archiv/share /nas cifs rw,mand,nosuid,nodev,noexec,relatime,sec=ntlm,
>   unc=\\archiv\share,username=root,uid=1002,forceuid,gid=65534,forcegid,
>   addr=192.168.178.3,file_mode=0600,dir_mode=0700,nounix,rsize=16384,
>   wsize=65216,actimeo=1
>   0 0
> 
> mount.cifs --version: 1.10
> 
> dmesg output gives info like this:
> 
> CIFS VFS: Autodisabling the use of server inode numbers on \\archiv
> \share. This server doesn't seem to support them properly. Hardlinks
> will not be recognized on this mount. Consider mounting with the
> "noserverino" option to silence this message.
> CIFS VFS: Server archiv has not responded in 300 seconds.
> Reconnecting...
> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
> CIFS VFS: Error -11 sending data on socket to server
> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
> CIFS VFS: Error -11 sending data on socket to server
> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
> CIFS VFS: Error -11 sending data on socket to server
> CIFS VFS: Error -32 sending data on socket to server
> CIFS VFS: Error -32 sending data on socket to server
> CIFS VFS: Error -32 sending data on socket to server
> ...
> 
> adding "noserverino" to mount options silences one message in the above
> list but changes the other behavior.
> 
> CIFS is compiled statically into the kernel. This is a custom-configured
> static kernel, not a standard modular distro kernel.
> 
> I really need help to get the NAS back to a working state. Feel free to
> ask for any information you need for diagnosis. If required, I'm able
> to apply patches and recompile the kernel.
> 
> Thx ahead
> Harald

It sounds like the server is not responding for some reason. We'll
probably need to see a network capture to understand what's going on:

    http://wiki.samba.org/index.php/LinuxCIFS_troubleshooting#Wire_Captures

If you can open a bug at bugzilla.samba.org, you can upload the
captures there and won't need to send them to the mailing list. If you
do that, please cc me on it.
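Following the wiki page above, a capture for this setup could be taken roughly like this (a sketch; the server address is the one from the /proc/mounts output, the interface name is a guess, and port 445 assumes raw SMB over TCP rather than NetBIOS-only on 139):

```shell
# Capture full frames to a file while reproducing the hang, stop with
# Ctrl-C, then attach the pcap to the bug report.
tcpdump -i eth0 -s 0 -w /tmp/cifs-hang.pcap \
    host 192.168.178.3 and \( port 445 or port 139 \)
```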

Thanks,
-- 
Jeff Layton <jlayton-eUNUBHrolfbYtjvyW6yDsg@public.gmane.org>


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found] ` <20120119090752.9b7aea6c.ralda-Mmb7MZpHnFY@public.gmane.org>
  2012-01-19 12:05   ` Jeff Layton
@ 2012-01-19 17:38   ` Suresh Jayaraman
       [not found]     ` <4F18552A.3030804-IBi9RG/b67k@public.gmane.org>
  1 sibling, 1 reply; 12+ messages in thread
From: Suresh Jayaraman @ 2012-01-19 17:38 UTC (permalink / raw)
  To: ralda-Mmb7MZpHnFY; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

On 01/19/2012 01:37 PM, ralda-Mmb7MZpHnFY@public.gmane.org wrote:
> new installation. With CIFS I can mount the NAS station and I can see
> the directory listings, but whenever I try to copy a file to the NAS
> station the copy hangs and apparently never returns (leaving the
> process in D state in the process list).

> The NAS is mounted like this (output from /proc/mounts):
> 
> //archiv/share /nas cifs rw,mand,nosuid,nodev,noexec,relatime,sec=ntlm,
>   unc=\\archiv\share,username=root,uid=1002,forceuid,gid=65534,forcegid,
>   addr=192.168.178.3,file_mode=0600,dir_mode=0700,nounix,rsize=16384,
>   wsize=65216,actimeo=1
>   0 0
> 
> mount.cifs --version: 1.10
> 
> dmesg output gives info like this:
> 
> CIFS VFS: Autodisabling the use of server inode numbers on \\archiv
> \share. This server doesn't seem to support them properly. Hardlinks
> will not be recognized on this mount. Consider mounting with the
> "noserverino" option to silence this message.
> CIFS VFS: Server archiv has not responded in 300 seconds.
> Reconnecting...
> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
> CIFS VFS: Error -11 sending data on socket to server
> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
> CIFS VFS: Error -11 sending data on socket to server
> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
> CIFS VFS: Error -11 sending data on socket to server
> CIFS VFS: Error -32 sending data on socket to server
> CIFS VFS: Error -32 sending data on socket to server
> CIFS VFS: Error -32 sending data on socket to server
> ...
> 

Error -32 is -EPIPE, and it is rather unusual from kernel_sendmsg(). I
suspect a problem with the CIFS socket/TCP state, but it is hard to
tell without more information. Perhaps CIFS should also handle -EPIPE
the way sunrpc does and gracefully shut down the socket on -EPIPE so
that reconnection works without trouble...
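For reference, the two error numbers from the dmesg output decode as follows (a quick check via the Python errno table, whose values match the kernel's on the usual architectures):

```shell
# -11 is EAGAIN (the "stuck" sends that would block), -32 is EPIPE
# (the peer has closed the connection under us).
python3 -c 'import errno, os
for n in (11, 32):
    print(n, errno.errorcode[n], os.strerror(n))'
```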


Suresh


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found]     ` <4F18552A.3030804-IBi9RG/b67k@public.gmane.org>
@ 2012-01-19 18:39       ` Steve French
  0 siblings, 0 replies; 12+ messages in thread
From: Steve French @ 2012-01-19 18:39 UTC (permalink / raw)
  To: Suresh Jayaraman; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

I would feel much more comfortable if at least one of us had a network
trace of the failure to look at.

On Thu, Jan 19, 2012 at 11:38 AM, Suresh Jayaraman <sjayaraman-IBi9RG/b67k@public.gmane.org> wrote:
> On 01/19/2012 01:37 PM, ralda-Mmb7MZpHnFY@public.gmane.org wrote:
>> new installation. With CIFS I can mount the NAS station and I can see
>> the directory listings, but whenever I try to copy a file to the NAS
>> station the copy hangs and apparently never returns (leaving the
>> process in D state in the process list).
>
>> The NAS is mounted like this (output from /proc/mounts):
>>
>> //archiv/share /nas cifs rw,mand,nosuid,nodev,noexec,relatime,sec=ntlm,
>>   unc=\\archiv\share,username=root,uid=1002,forceuid,gid=65534,forcegid,
>>   addr=192.168.178.3,file_mode=0600,dir_mode=0700,nounix,rsize=16384,
>>   wsize=65216,actimeo=1
>>   0 0
>>
>> mount.cifs --version: 1.10
>>
>> dmesg output gives info like this:
>>
>> CIFS VFS: Autodisabling the use of server inode numbers on \\archiv
>> \share. This server doesn't seem to support them properly. Hardlinks
>> will not be recognized on this mount. Consider mounting with the
>> "noserverino" option to silence this message.
>> CIFS VFS: Server archiv has not responded in 300 seconds.
>> Reconnecting...
>> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
>> CIFS VFS: Error -11 sending data on socket to server
>> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
>> CIFS VFS: Error -11 sending data on socket to server
>> CIFS VFS: sends on sock ae3b9b40 stuck for 15 seconds
>> CIFS VFS: Error -11 sending data on socket to server
>> CIFS VFS: Error -32 sending data on socket to server
>> CIFS VFS: Error -32 sending data on socket to server
>> CIFS VFS: Error -32 sending data on socket to server
>> ...
>>
>
> Error -32 is -EPIPE, and it is rather unusual from kernel_sendmsg(). I
> suspect a problem with the CIFS socket/TCP state, but it is hard to
> tell without more information. Perhaps CIFS should also handle -EPIPE
> the way sunrpc does and gracefully shut down the socket on -EPIPE so
> that reconnection works without trouble...
>
>
> Suresh



-- 
Thanks,

Steve


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found]           ` <20120120165657.00042c72.ralda-Mmb7MZpHnFY@public.gmane.org>
@ 2012-01-20 16:19             ` Jeff Layton
       [not found]               ` <20120120111936.5329cbb4-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Jeff Layton @ 2012-01-20 16:19 UTC (permalink / raw)
  To: ralda-Mmb7MZpHnFY, linux-cifs-u79uwXL29TY76Z2rM5mHXA

On Fri, 20 Jan 2012 16:56:57 +0100
"ralda-Mmb7MZpHnFY@public.gmane.org" <ralda-Mmb7MZpHnFY@public.gmane.org> wrote:

> Hallo Jeff!
> 
> > Unfortunately, this server seems to only be able to handle one request
> > at a time per socket.
> 
> Ack. I know that the Buffalo DriveStation can handle only one request
> at a time. It fails to work properly if several stations access (write
> to) the drive simultaneously.
> 
> 
> > It sets this value in the NEGOTIATE reply: Max Mpx Count: 1
> > 
> > CIFS ignores this value currently, which is a (rather bad) bug. Steve
> > is apparently working on fixing this, so he might have a patch that you
> > can help test.
> 
> Sure. Let me know what I shall test. I even freed another (smaller)
> hard disk that can be inserted into the Buffalo station. That disk
> allows tests with the NAS without risk of losing data. So I may be
> able to run different tests with the device and collect data for your
> analysis, if that can help your development.
> 
> 
> > In the meantime, you can try setting the cifs_max_pending module
> > parm to a low value (I think 2 is the minimum). I suspect that will
> > prevent this problem.
> 
> If you would like me to run a test with this, I can do so; otherwise
> I'll wait for a patch.
> 
> As the Buffalo DriveStation 2Share allows access via the network and
> as a USB drive (alternate function), I still have access to my data
> using the drive as an external USB device. That works fine but needs a
> separate computer for forwarding the files. So it is not required to
> have a fix or workaround within a few days. But I like the simplicity
> of accessing the NAS station via the network from every computer, so I
> would like to see a solution for the problem eventually. That's all.
> 
> And there are probably more people out there using Buffalo
> DriveStations, so they have probably run into the same trouble and may
> not be able to collect data for diagnosis.
> 
> And thanks for your help so far.
> 

(re-cc'ing linux-cifs)

At this point, I'd suggest just setting the cifs_max_pending module
parm low (1 or 2) and seeing if that helps. If it does, then that
should basically emulate the effect of Steve's eventual patch. If it
doesn't, then we'll probably need to take a closer look at why not.
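Since CIFS is built into Harald's kernel rather than loaded as a module, the parameter would be passed on the kernel command line instead of to modprobe; the effective value can then be read back via sysfs (a sketch; whether the parameter is also writable at runtime depends on the kernel version):

```shell
# For a built-in cifs, module parameters take the module-name prefix on
# the kernel command line (set in the bootloader configuration):
#     cifs.cifs_max_pending=2
# After boot, inspect the effective value:
cat /sys/module/cifs/parameters/cifs_max_pending
```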

-- 
Jeff Layton <jlayton-eUNUBHrolfbYtjvyW6yDsg@public.gmane.org>


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found]               ` <20120120111936.5329cbb4-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
@ 2012-01-20 19:06                 ` ralda-Mmb7MZpHnFY
       [not found]                   ` <20120120200626.b04b1d40.ralda-Mmb7MZpHnFY@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: ralda-Mmb7MZpHnFY @ 2012-01-20 19:06 UTC (permalink / raw)
  To: Jeff Layton; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

Hallo Jeff!

> At this point, I'd suggest just setting the cifs_max_pending module
> parm low  (1 or 2) and seeing if that helps. If it does, then that
> should basically emulate the effect of steve's eventual patch. If it
> doesn't then we'll probably need to have a closer look at why it isn't.

I have set cifs_max_pending to 2 (setting it to 1 results in the value
2), giving interesting results:

A normal mount/cp of a single file works (same mount parameters as
before).

Copying several files with Midnight Commander showed me a copy speed of
4.5 to 5.6 MByte per second (starting at 4.5 and rising quickly to
5.6). This speed is higher than ever achieved copying to the same NAS
station from the same hardware installation. Formerly (kernel 2.6.18,
SMBFS) copies started below 3.5 MBps and rose slowly to about 4 MBps
(or slightly above, only with very big files). So reducing
cifs_max_pending not only makes the drive work, it additionally gave me
a big step up in speed.

... but: switching virtual consoles during the copy makes the copy stop
with a stuck mc as before. The only difference: after killing the
process and a delay of about 2 minutes (felt, not measured), the
process disappeared and the NAS drive returned to a working state
without a system reboot.

If you like, I can collect more data for diagnosis tomorrow. Just let
me know what you need and how I shall test. I'm willing to give you all
the diagnostic data I'm able to collect.

--
Harald


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found]                   ` <20120120200626.b04b1d40.ralda-Mmb7MZpHnFY@public.gmane.org>
@ 2012-01-20 19:32                     ` Jeff Layton
       [not found]                       ` <20120120143235.1d4d811e-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Jeff Layton @ 2012-01-20 19:32 UTC (permalink / raw)
  To: ralda-Mmb7MZpHnFY; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

On Fri, 20 Jan 2012 20:06:26 +0100
"ralda@gmx.de" <ralda@gmx.de> wrote:

> Hallo Jeff!
> 
> > At this point, I'd suggest just setting the cifs_max_pending module
> > parm low  (1 or 2) and seeing if that helps. If it does, then that
> > should basically emulate the effect of steve's eventual patch. If it
> > doesn't then we'll probably need to have a closer look at why it isn't.
> 
> I have set cifs_max_pending to 2 (setting it to 1 results in the value
> 2), giving interesting results:
> 
> A normal mount/cp of a single file works (same mount parameters as
> before).
> 
> Copying several files with Midnight Commander showed me a copy speed of
> 4.5 to 5.6 MByte per second (starting at 4.5 and rising quickly to
> 5.6). This speed is higher than ever achieved copying to the same NAS
> station from the same hardware installation. Formerly (kernel 2.6.18,
> SMBFS) copies started below 3.5 MBps and rose slowly to about 4 MBps
> (or slightly above, only with very big files). So reducing
> cifs_max_pending not only makes the drive work, it additionally gave me
> a big step up in speed.
> 

Not sure why that would be unless requests were sometimes getting
dropped on the floor and the client was occasionally able to recover.

> ... but: switching virtual consoles during the copy makes the copy stop
> with a stuck mc as before. The only difference: after killing the
> process and a delay of about 2 minutes (felt, not measured), the
> process disappeared and the NAS drive returned to a working state
> without a system reboot.
> 
> If you like, I can collect more data for diagnosis tomorrow. Just let
> me know what you need and how I shall test. I'm willing to give you all
> the diagnostic data I'm able to collect.
> 

This is certainly a very troublesome device :)

I think the existing code is wrong in that it sets the floor value too
high (2 when it should be 1). The problem we get into however is that
when the server can't handle concurrent requests, we must disable
certain features.

We've discussed this on the list before, but we probably ought to do
something like this depending on what the server sends for the maxmpx
value:

maxmpx >= 3: normal operation

maxmpx = 2: disable oplocks, since doing writes while there's still an
outstanding open is problematic

maxmpx = 1: disable sending smb echoes. This does mean that we can't do
detection of unresponsive servers correctly, but there's not much else
we can do at that point.
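As a sketch, the policy above could be expressed like this (illustrative shell only; the real logic would live in fs/cifs/, the helper name is hypothetical, and I'm assuming the maxmpx = 1 case also keeps the maxmpx = 2 restriction):

```shell
# Map the server's advertised maxmpx to the client features that would
# have to be disabled (hypothetical helper, not actual cifs.ko code).
maxmpx_policy() {
    case "$1" in
        1) echo "disable-oplocks disable-echoes" ;;  # no room for echoes
        2) echo "disable-oplocks" ;;                 # writes vs. open conflict
        *) echo "normal" ;;                          # 3 or more: full operation
    esac
}

maxmpx_policy 1   # prints "disable-oplocks disable-echoes"
```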

Steve, what's going on with that patch? It's been many months since we
discussed it last. Do you have anything that Harald can test?

-- 
Jeff Layton <jlayton@samba.org>


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found]                       ` <20120120143235.1d4d811e-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
@ 2012-01-20 19:42                         ` Steve French
       [not found]                           ` <CAH2r5mt5KPtK_5Abz8nz_sKH3vT2h=1d7DHHBvX4BSRd37auug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2012-01-20 21:07                         ` ralda-Mmb7MZpHnFY
  1 sibling, 1 reply; 12+ messages in thread
From: Steve French @ 2012-01-20 19:42 UTC (permalink / raw)
  To: Jeff Layton; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

On Fri, Jan 20, 2012 at 1:32 PM, Jeff Layton <jlayton-eUNUBHrolfbYtjvyW6yDsg@public.gmane.org> wrote:
> On Fri, 20 Jan 2012 20:06:26 +0100
> "ralda-Mmb7MZpHnFY@public.gmane.org" <ralda-Mmb7MZpHnFY@public.gmane.org> wrote:
>
>> Hallo Jeff!
>>
>> > At this point, I'd suggest just setting the cifs_max_pending module
>> > parm low  (1 or 2) and seeing if that helps. If it does, then that
>> > should basically emulate the effect of steve's eventual patch. If it
>> > doesn't then we'll probably need to have a closer look at why it isn't.
>>
>> I have set cifs_max_pending to 2 (setting it to 1 results in the value
>> 2), giving interesting results:
>>
>> A normal mount/cp of a single file works (same mount parameters as
>> before).
>>
>> Copying several files with Midnight Commander showed me a copy speed of
>> 4.5 to 5.6 MByte per second (starting at 4.5 and rising quickly to
>> 5.6). This speed is higher than ever achieved copying to the same NAS
>> station from the same hardware installation. Formerly (kernel 2.6.18,
>> SMBFS) copies started below 3.5 MBps and rose slowly to about 4 MBps
>> (or slightly above, only with very big files). So reducing
>> cifs_max_pending not only makes the drive work, it additionally gave me
>> a big step up in speed.
>>
>
> Not sure why that would be unless requests were sometimes getting
> dropped on the floor and the client was occasionally able to recover.
>
>> ... but: switching virtual consoles during the copy makes the copy stop
>> with a stuck mc as before. The only difference: after killing the
>> process and a delay of about 2 minutes (felt, not measured), the
>> process disappeared and the NAS drive returned to a working state
>> without a system reboot.
>>
>> If you like, I can collect more data for diagnosis tomorrow. Just let
>> me know what you need and how I shall test. I'm willing to give you all
>> the diagnostic data I'm able to collect.
>>
>
> This is certainly a very troublesome device :)
>
> I think the existing code is wrong in that it sets the floor value too
> high (2 when it should be 1). The problem we get into however is that
> when the server can't handle concurrent requests, we must disable
> certain features.
>
> We've discussed this on the list before, but we probably ought to do
> something like this depending on what the server sends for the maxmpx
> value:
>
> maxmpx >= 3: normal operation
>
> maxmpx = 2: disable oplocks, since doing writes while there's still an
> outstanding open is problematic
>
> maxmpx = 1: disable sending smb echoes. This does mean that we can't do
> detection of unresponsive servers correctly, but there's not much else
> we can do at that point.
>
> Steve, what's going on with that patch? It's been many months since we
> discussed it last. Do you have anything that Harald can test?

I thought I put an earlier version of this on the list, but I ran into
problems with it and async writes. I do want to combine this with
bumping the global limit on the maximum number of requests, so that we
use the minimum of the server's value and what we set, but servers
which can handle more simultaneous requests still benefit.

As you note, a server device which can't handle 3 simultaneous requests
is basically broken, but if it lowers support cost for us on the client
to stumble through with oplocks off and echo off (to those ancient
servers), it may be better than failing the mount or setting the value
to a "reasonable minimum" (2 or 3). There may be servers which just
have a bug and set it to 1, when in fact they would allow the case you
describe (pending open, write and echo), but we may be able to get away
with an implied minimum value of 2 (where we always assume the server
can handle an echo plus one other request). In any case we need to warn
on this. I will code up something this weekend. IIRC Samba and Windows
don't enforce maxmpx (and it gets really complicated for Windows as a
server due to their queuing problems, as we saw from the delayed
response on the dochelp request). By far the biggest issue is
throttling our requests back to some versions of Windows (XP or
Vista?), which set it to 10; although more than 10 works, they have
strange problems when it hits 25 or so simultaneous requests (and these
servers are far more common).

Any ideas how to test this (since I don't have one of the ancient NAS
devices that have this 1-request limit)?
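The combining rule Steve describes, taking the minimum of the server's advertised value and the client's configured maximum, can be sketched as follows (illustrative shell only; the function and variable names are hypothetical):

```shell
# Effective in-flight request limit: whichever is smaller, the server's
# advertised maxmpx or our own configured cifs_max_pending.
effective_limit() {
    server_maxmpx=$1
    client_max=$2
    if [ "$server_maxmpx" -lt "$client_max" ]; then
        echo "$server_maxmpx"
    else
        echo "$client_max"
    fi
}

effective_limit 10 50   # a Windows-style maxmpx of 10 caps a larger client setting
```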


-- 
Thanks,

Steve


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found]                           ` <CAH2r5mt5KPtK_5Abz8nz_sKH3vT2h=1d7DHHBvX4BSRd37auug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2012-01-20 20:54                             ` Harald Becker
  2012-01-20 21:24                             ` Łukasz Maśko
  1 sibling, 0 replies; 12+ messages in thread
From: Harald Becker @ 2012-01-20 20:54 UTC (permalink / raw)
  To: Steve French; +Cc: Jeff Layton, linux-cifs-u79uwXL29TY76Z2rM5mHXA

Hallo Steve!

> Any ideas how to test this (since I don't have one of the ancient NAS
> devices that have this 1-request limit)?

I'm willing to help, run the tests, and collect the diagnostic data for
you, if that fits your needs.

Solving the problem is not time-critical for me, so we do not need to
hurry.

--
Harald


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found]                       ` <20120120143235.1d4d811e-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
  2012-01-20 19:42                         ` Steve French
@ 2012-01-20 21:07                         ` ralda-Mmb7MZpHnFY
  1 sibling, 0 replies; 12+ messages in thread
From: ralda-Mmb7MZpHnFY @ 2012-01-20 21:07 UTC (permalink / raw)
  To: Jeff Layton; +Cc: linux-cifs-u79uwXL29TY76Z2rM5mHXA

Hallo Jeff!

> This is certainly a very troublesome device :)

Absolutely. I wouldn't recommend that anyone buy such a drive. I didn't
want to buy that type of drive, but due to a communication problem
someone else bought me that one. As I didn't have to pay for it, there
was no reason to ask for an exchange, especially as the drive does what
I need .. oops, did ... until I updated my Linux boxes to recent kernel
versions.

The drive itself isn't bad. It's fanless, silent, and doesn't get hot
even during lengthy copy sessions (Samsung 1TB 3.5-inch SATA drive).
Accessing the drive via USB gives speeds above 20 MByte per second,
which is not bad for a USB 2.0 drive. The NAS feature is handy but
troublesome, however.

--
Harald


* Re: Trouble accessing Buffalo NAS with CIFSFS
       [not found]                           ` <CAH2r5mt5KPtK_5Abz8nz_sKH3vT2h=1d7DHHBvX4BSRd37auug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2012-01-20 20:54                             ` Harald Becker
@ 2012-01-20 21:24                             ` Łukasz Maśko
  1 sibling, 0 replies; 12+ messages in thread
From: Łukasz Maśko @ 2012-01-20 21:24 UTC (permalink / raw)
  To: linux-cifs-u79uwXL29TY76Z2rM5mHXA

On Friday, 20 January 2012, you wrote:
[...]
> This is certainly a very troublesome device :)

It seems that it behaves like my Welland ME-752GNS, which is the reason
why I've subscribed to this list. I have exactly the same problems with
processes that hang, and sometimes I cannot even kill them unless I
restart my NAS. I'm getting somewhat higher transfer rates (about
11 MB/s on kernels 3.2.x; earlier it was at least 2 times lower), but
maybe that's thanks to a gigabit connection. I have found that
cifs_max_pending=2 helps a lot, but still not completely.

-- 
Łukasz Maśko                                           GG:   2441498    _o)
Lukasz.Masko(at)ipipan.waw.pl                                           /\\
Registered Linux User #61028                                           _\_V
Ubuntu: an old African word meaning "I can't install Debian"


* Trouble accessing Buffalo NAS with CIFSFS
@ 2012-01-19  8:19 ralda-Mmb7MZpHnFY
  0 siblings, 0 replies; 12+ messages in thread
From: ralda-Mmb7MZpHnFY @ 2012-01-19  8:19 UTC (permalink / raw)
  To: linux-cifs-u79uwXL29TY76Z2rM5mHXA

Sorry, I got interrupted while writing the message, so I made a
mistake:

Adding "noserverino" to the mount options silences one message in the
dmesg list but DOES NOT CHANGE the other behavior.


end of thread, other threads:[~2012-01-20 21:24 UTC | newest]

Thread overview: 12+ messages
2012-01-19  8:07 Trouble accessing Buffalo NAS with CIFSFS ralda-Mmb7MZpHnFY
     [not found] ` <20120119090752.9b7aea6c.ralda-Mmb7MZpHnFY@public.gmane.org>
2012-01-19 12:05   ` Jeff Layton
     [not found]     ` <20120120113042.0fdfea5f.ralda@gmx.de>
     [not found]       ` <20120120101938.7ca7464d@tlielax.poochiereds.net>
     [not found]         ` <20120120165657.00042c72.ralda@gmx.de>
     [not found]           ` <20120120165657.00042c72.ralda-Mmb7MZpHnFY@public.gmane.org>
2012-01-20 16:19             ` Jeff Layton
     [not found]               ` <20120120111936.5329cbb4-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
2012-01-20 19:06                 ` ralda-Mmb7MZpHnFY
     [not found]                   ` <20120120200626.b04b1d40.ralda-Mmb7MZpHnFY@public.gmane.org>
2012-01-20 19:32                     ` Jeff Layton
     [not found]                       ` <20120120143235.1d4d811e-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
2012-01-20 19:42                         ` Steve French
     [not found]                           ` <CAH2r5mt5KPtK_5Abz8nz_sKH3vT2h=1d7DHHBvX4BSRd37auug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2012-01-20 20:54                             ` Harald Becker
2012-01-20 21:24                             ` Łukasz Maśko
2012-01-20 21:07                         ` ralda-Mmb7MZpHnFY
2012-01-19 17:38   ` Suresh Jayaraman
     [not found]     ` <4F18552A.3030804-IBi9RG/b67k@public.gmane.org>
2012-01-19 18:39       ` Steve French
2012-01-19  8:19 ralda-Mmb7MZpHnFY
