* cifs causes high system load avg, oopses when unloaded on 2.6.0-test11
From: Darren Dupre @ 2003-12-11  6:42 UTC
  To: linux-kernel

Using CIFS causes a very high load average (approx. 12 according to uptime).
After I unmount all CIFS filesystems and then unload the module, it
oopses (below).

CC me on replies if more information is needed.

Dec 11 00:33:09 dmdtech kernel: slab error in kmem_cache_destroy(): cache `cifs_request': Can't free all objects
Dec 11 00:33:09 dmdtech kernel: Call Trace:
Dec 11 00:33:09 dmdtech kernel:  [<c013a955>] kmem_cache_destroy+0x85/0x100
Dec 11 00:33:09 dmdtech kernel:  [<e08f96e0>] cifs_destroy_request_bufs+0x10/0x30 [cifs]
Dec 11 00:33:09 dmdtech kernel:  [<e0910823>] exit_cifs+0x23/0x9d [cifs]
Dec 11 00:33:09 dmdtech kernel:  [<c0130d08>] sys_delete_module+0x138/0x1b0
Dec 11 00:33:09 dmdtech kernel:  [<c014392c>] do_munmap+0x14c/0x190
Dec 11 00:33:09 dmdtech kernel:  [<c0109165>] sysenter_past_esp+0x52/0x71
Dec 11 00:33:09 dmdtech kernel:
Dec 11 00:33:09 dmdtech kernel: cifs_destroy_request_cache: error not all structures were freed
Dec 11 00:33:09 dmdtech kernel: Unable to handle kernel paging request at virtual address e0900b92
Dec 11 00:33:09 dmdtech kernel:  printing eip:
Dec 11 00:33:09 dmdtech kernel: e0900b92
Dec 11 00:33:09 dmdtech kernel: *pde = 1ff6d067
Dec 11 00:33:09 dmdtech kernel: *pte = 00000000
Dec 11 00:33:09 dmdtech kernel: Oops: 0000 [#1]
Dec 11 00:33:09 dmdtech kernel: CPU:    0
Dec 11 00:33:09 dmdtech kernel: EIP:    0060:[<e0900b92>]    Not tainted
Dec 11 00:33:09 dmdtech kernel: EFLAGS: 00010292
Dec 11 00:33:09 dmdtech kernel: EIP is at 0xe0900b92
Dec 11 00:33:09 dmdtech kernel: eax: 00000000   ebx: 00000000   ecx: d5dc8644   edx: 00000000
Dec 11 00:33:09 dmdtech kernel: esi: c92fbf34   edi: c92fbf30   ebp: cfe50000   esp: cfe51f48
Dec 11 00:33:09 dmdtech kernel: ds: 007b   es: 007b   ss: 0068
Dec 11 00:33:09 dmdtech kernel: Process cifsd (pid: 27573, threadinfo=cfe50000 task=c8a9e0c0)
Dec 11 00:33:09 dmdtech kernel: Stack: 00000002 00000001 00000006 c92fbf30 c92fbf00 c92fbf34 c92fbf30 e08ff46f
Dec 11 00:33:09 dmdtech kernel:        c92fbf34 c92fbf30 00000000 fffffe00 c94c0200 c92fbf00 fffffffc ce0fbac0
Dec 11 00:33:09 dmdtech kernel:        e08ff6a8 c92fbf00 cfe51fc0 00000024 00000002 c92fbf4c cfe50000 00000027
Dec 11 00:33:09 dmdtech kernel: Call Trace:
Dec 11 00:33:09 dmdtech kernel:  [<c01070c9>] kernel_thread_helper+0x5/0xc
Dec 11 00:33:09 dmdtech kernel:
Dec 11 00:33:09 dmdtech kernel: Code:  Bad EIP value.
Dec 11 00:33:09 dmdtech kernel:  <1>Unable to handle kernel paging request at virtual address e0900b1d
Dec 11 00:33:09 dmdtech kernel:  printing eip:
Dec 11 00:33:09 dmdtech kernel: e0900b1d
Dec 11 00:33:09 dmdtech kernel: *pde = 1ff6d067
Dec 11 00:33:09 dmdtech kernel: *pte = 00000000
Dec 11 00:33:09 dmdtech kernel: Oops: 0000 [#2]
Dec 11 00:33:09 dmdtech kernel: CPU:    0
Dec 11 00:33:09 dmdtech kernel: EIP:    0060:[<e0900b1d>]    Not tainted
Dec 11 00:33:09 dmdtech kernel: EFLAGS: 00010246
Dec 11 00:33:09 dmdtech kernel: EIP is at 0xe0900b1d
Dec 11 00:33:09 dmdtech kernel: eax: cd308cc0   ebx: fffffe00   ecx: 00000287   edx: cd308cd8
Dec 11 00:33:09 dmdtech kernel: esi: c92fbab4   edi: c92fbab0   ebp: df812000   esp: df813f48
Dec 11 00:33:09 dmdtech kernel: ds: 007b   es: 007b   ss: 0068
Dec 11 00:33:09 dmdtech kernel: Process cifsd (pid: 18107, threadinfo=df812000 task=cfae0080)
Dec 11 00:33:09 dmdtech kernel: Stack: cd308cc0 c92fbab4 00000010 00000000 c92fba80 c92fbab4 c92fbab0 e08ff46f
Dec 11 00:33:09 dmdtech kernel:        c92fbab4 c92fbab0 0000007b fffffe00 d2dc8140 c92fba80 fffffffc d2fe3040
Dec 11 00:33:09 dmdtech kernel:        e08ff6a8 c92fba80 df813fc0 00000024 00000002 c92fbacc df812000 00000027
Dec 11 00:33:09 dmdtech kernel: Call Trace:
Dec 11 00:33:09 dmdtech kernel:  [<c01070c9>] kernel_thread_helper+0x5/0xc
Dec 11 00:33:09 dmdtech kernel:
Dec 11 00:33:09 dmdtech kernel: Code:  Bad EIP value.
Dec 11 00:33:09 dmdtech kernel:  <1>Unable to handle kernel paging request at virtual address e0900b5f
Dec 11 00:33:09 dmdtech kernel:  printing eip:
Dec 11 00:33:09 dmdtech kernel: e0900b5f
Dec 11 00:33:09 dmdtech kernel: *pde = 1ff6d067
Dec 11 00:33:09 dmdtech kernel: *pte = 00000000
Dec 11 00:33:09 dmdtech kernel: Oops: 0000 [#3]
Dec 11 00:33:09 dmdtech kernel: CPU:    0
Dec 11 00:33:09 dmdtech kernel: EIP:    0060:[<e0900b5f>]    Not tainted
Dec 11 00:33:09 dmdtech kernel: EFLAGS: 00010292
Dec 11 00:33:09 dmdtech kernel: EIP is at 0xe0900b5f
Dec 11 00:33:09 dmdtech kernel: eax: fffffe00   ebx: 00000000   ecx: 00000202   edx: db955258
Dec 11 00:33:09 dmdtech kernel: esi: c92fbb74   edi: c92fbb70   ebp: c38e8000   esp: c38e9f48
Dec 11 00:33:09 dmdtech kernel: ds: 007b   es: 007b   ss: 0068
Dec 11 00:33:09 dmdtech kernel: Process cifsd (pid: 11564, threadinfo=c38e8000 task=dd85a100)
Dec 11 00:33:09 dmdtech kernel: Stack: db955240 c92fbb74 00000010 00000000 c92fbb40 c92fbb74 c92fbb70 e08ff46f
Dec 11 00:33:09 dmdtech kernel:        c92fbb74 c92fbb70 c38e9fc0 fffffe00 c20a0040 c92fbb40 fffffffc db955240
Dec 11 00:33:09 dmdtech kernel:        e08ff6a8 c92fbb40 c38e9fc0 00000024 00000002 c92fbb8c c38e8000 00000027
Dec 11 00:33:09 dmdtech kernel: Call Trace:
Dec 11 00:33:09 dmdtech kernel:  [<c01070c9>] kernel_thread_helper+0x5/0xc
Dec 11 00:33:09 dmdtech kernel:
Dec 11 00:33:09 dmdtech kernel: Code:  Bad EIP value.
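
For what it's worth, the slab error at the top is the generic symptom of
destroying a cache that still has live objects.  A minimal sketch of the
pattern, not the actual fs/cifs code (the cache name matches the log
above, but the function and variable names and the buffer size are
illustrative):

#include <linux/kernel.h>
#include <linux/slab.h>

static kmem_cache_t *req_cachep;	/* illustrative name */

int demo_init_request_bufs(void)
{
	/* same cache name as in the log; the size here is made up */
	req_cachep = kmem_cache_create("cifs_request", 16384, 0,
				       SLAB_HWCACHE_ALIGN, NULL, NULL);
	if (req_cachep == NULL)
		return -ENOMEM;
	return 0;
}

void demo_destroy_request_bufs(void)
{
	/*
	 * In 2.6.0 kmem_cache_destroy() returns nonzero when objects
	 * from the cache are still allocated -- here, request buffers
	 * still held by running cifsd threads.  That is the "Can't
	 * free all objects" complaint above.  The unload proceeds
	 * anyway, so the threads later fault on unmapped module text
	 * ("Bad EIP value").
	 */
	if (kmem_cache_destroy(req_cachep))
		printk(KERN_WARNING
		       "demo: not all structures were freed\n");
}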



* Re: cifs causes high system load avg, oopses when unloaded on 2.6.0-test11
From: William Lee Irwin III @ 2003-12-11  6:55 UTC
  To: Darren Dupre; +Cc: linux-kernel

On Thu, Dec 11, 2003 at 12:42:10AM -0600, Darren Dupre wrote:
> Using CIFS causes a very high load average (approx. 12 according to uptime).
> After I unmount all CIFS filesystems and then unload the module, it
> oopses (below).
> CC me on replies if more information is needed.

Hmm, this unload needs to hand failure back to the module unload path
when it can't nuke inodes etc. I'd suggest not using it as a module for
the time being.


-- wli


* Re: cifs causes high system load avg, oopses when unloaded on 2.6.0-test11
From: William Lee Irwin III @ 2003-12-17  1:02 UTC
  To: Steve French; +Cc: linux-kernel, linux-cifs-client, darren

At some point in the past, I wrote:
>> Hmm, this unload needs to hand failure back to the module unload path
>> when it can't nuke inodes etc. I'd suggest not using it as a module for
>> the time being.

On Tue, Dec 16, 2003 at 03:26:17PM -0600, Steve French wrote:
> I don't see how I could pass failure back on module unload even if I
> could detect problems freeing the memory associated with cifs's inode
> cache; there is no place for return code info. See the caller, i.e. the
> call to mod->exit() in sys_delete_module (about line 735 of
> kernel/module.c).

Right, the facility isn't there. Maybe making the thing not-unloadable
would be the best option for the time being.
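
Something along these lines, as a rough sketch under the 2.6 loader
semantics (the names are made up; the point is just the mechanism):

#include <linux/init.h>
#include <linux/module.h>

static int __init demo_init(void)
{
	/*
	 * Option A: register module_init() but no module_exit().  The
	 * 2.6 loader treats such a module as permanent and refuses to
	 * unload it (unless forced).
	 *
	 * Option B: pin the module with an extra reference so the use
	 * count can never reach zero and rmmod always fails.
	 */
	if (!try_module_get(THIS_MODULE))
		return -ENODEV;
	return 0;
}

module_init(demo_init);
/* note: deliberately no module_exit() registered */
MODULE_LICENSE("GPL");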


-- wli


* cifs causes high system load avg, oopses when unloaded on 2.6.0-test11
From: Steve French @ 2003-12-16 21:26 UTC
  To: linux-kernel; +Cc: linux-cifs-client, wli, darren

>> Using CIFS causes a very high load average (approx. 12 according to uptime).
>> After I unmount all CIFS filesystems and then unload the module,
>> it oopses (below).
>> CC me on replies if more information is needed.

I don't know if this will fail with the more current version (0.9.9 of
the 2.6 cifs filesystem), which is at
http://us1.samba.org/samba/ftp/cifs-cvs/cifs-0.9.9-2.6kern.tar.gz
but I am trying some experiments today to see if I can reproduce
something similar artificially.  I was concerned about some other oopses
and problems in the tcp reconnection logic that are now fixed but are
still present in the much older version 0.9.4 of the cifs vfs in the
linux.bkbits.net/linux-2.5 tree.  Since 2.6 has been mostly locked down
for weeks, test11 is missing at least a dozen key cifs fixes (including
stress-test fixes and fixes for a few oopses reported by 2.6 users
testing more actively over the past couple of months), so the more
recent fs/cifs files (version 0.9.9) are likely to be much better than
what is in 2.6-test11.

(The gz simply contains the contents of the 0.9.9 version of the fs/cifs
directory, rather than a patch; it is about 15 changesets ahead of
2.6-test9, 10, or 11.  There are no corequisite fixes outside that
directory, so it can be dropped into any of the recent 2.6-test* trees.)

> Hmm, this unload needs to hand failure back to the module unload path
> when it can't nuke inodes etc. I'd suggest not using it as a module for
> the time being.

I don't see how I could pass failure back on module unload even if I
could detect problems freeing the memory associated with cifs's inode
cache; there is no place for return code info. See the caller, i.e. the
call to mod->exit() in sys_delete_module (about line 735 of
kernel/module.c).
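
Roughly the shape of that caller, paraphrased from memory rather than
quoted verbatim:

/* the exit hook in struct module is declared void */
struct module {
	/* ... */
	void (*exit)(void);		/* no return value */
	/* ... */
};

asmlinkage long sys_delete_module(const char *name_user,
				  unsigned int flags)
{
	/* ... name lookup, refcount and state checks ... */

	if (mod->exit != NULL)
		mod->exit();	/* a failure inside exit_cifs() is
				 * invisible to the unload path */

	free_module(mod);
	return 0;
}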


