linux-kernel.vger.kernel.org archive mirror
* 2.4.4: thread dumping core
@ 2001-08-09 14:55 Ulrich Windl
  2001-08-09 15:08 ` Alan Cox
  0 siblings, 1 reply; 3+ messages in thread
From: Ulrich Windl @ 2001-08-09 14:55 UTC (permalink / raw)
  To: linux-kernel

Hi,

I wonder whether the kernel does the right thing when a thread causes a
segmentation violation: currently it seems the other LWPs just
continue. In practice this means that the application does not
work reliably once one thread has gone away.

I suggest terminating all LWPs if one of them receives a fatal signal.

I wasn't successful in debugging the beast:

Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/libpthread.so.0...done.
rw_common (): write: Success.

warning: unable to set global thread event mask
[New Thread 1024 (LWP 10566)]
Error while reading shared library symbols:
attach_thread: No such process.
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
#0  0x4005e0a6 in sigsuspend () from /lib/libc.so.6
(gdb) bt
#0  0x4005e0a6 in sigsuspend () from /lib/libc.so.6
#1  0x4002496c in sigwait () from /lib/libpthread.so.0
#2  0x804da47 in mi_signal_thread ()
#3  0x40021ba3 in pthread_start_thread () from /lib/libpthread.so.0
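
(For what it's worth, frame #2 shows mi_signal_thread parked in
sigwait(), i.e. a dedicated signal thread. A typical sigwait()-based
signal thread looks roughly like the sketch below; this is
illustrative only, not libmilter's actual code:)

#include <pthread.h>
#include <signal.h>
#include <stdlib.h>

/* Illustrative sketch: all other threads block these signals, and
 * this one thread receives them synchronously via sigwait() instead
 * of through asynchronous handlers. */
static void *signal_thread(void *arg)
{
    sigset_t set;
    int sig;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    sigaddset(&set, SIGTERM);
    for (;;) {
        if (sigwait(&set, &sig) != 0)
            continue;           /* interrupted; wait again */
        exit(1);                /* shut the whole application down */
    }
    return arg;                 /* not reached */
}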


Opinions?

Ulrich



* Re: 2.4.4: thread dumping core
  2001-08-09 14:55 2.4.4: thread dumping core Ulrich Windl
@ 2001-08-09 15:08 ` Alan Cox
  2001-08-10  6:17   ` Ulrich Windl
  0 siblings, 1 reply; 3+ messages in thread
From: Alan Cox @ 2001-08-09 15:08 UTC (permalink / raw)
  To: Ulrich Windl; +Cc: linux-kernel

> I wonder whether the kernel does the right thing when a thread causes a
> segmentation violation: currently it seems the other LWPs just
> continue. In practice this means that the application does not

This is a feature in most cases.

> I suggest terminating all LWPs if one of them receives a fatal signal.

So write some signal handlers. 
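
A minimal sketch of such a handler (assuming 2.4-era LinuxThreads,
where every thread is its own LWP but all LWPs share one process
group and one set of signal dispositions; untested, illustrative
only):

#include <signal.h>

/* Fatal-signal handler that takes every LWP down together: reset
 * the disposition to the default (fatal) action, then signal the
 * whole process group so each LWP dies of it. Caveat: kill(0, sig)
 * also reaches any other processes in the same process group,
 * e.g. forked children. */
static void die_together(int sig)
{
    signal(sig, SIG_DFL);       /* make the re-delivered signal fatal */
    kill(0, sig);               /* deliver it to every LWP in the group */
}

int main(void)
{
    signal(SIGSEGV, die_together);
    signal(SIGBUS, die_together);
    /* ... pthread_create() the workers and run as usual ... */
    return 0;
}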

In all cases the other threads will continue for some time, so you gain
nothing by pretending they don't.

Alan


* Re: 2.4.4: thread dumping core
  2001-08-09 15:08 ` Alan Cox
@ 2001-08-10  6:17   ` Ulrich Windl
  0 siblings, 0 replies; 3+ messages in thread
From: Ulrich Windl @ 2001-08-10  6:17 UTC (permalink / raw)
  To: Alan Cox; +Cc: linux-kernel

On 9 Aug 2001, at 16:08, Alan Cox wrote:

> > I wonder whether the kernel does the right thing when a thread causes a
> > segmentation violation: currently it seems the other LWPs just
> > continue. In practice this means that the application does not
> 
> This is a feature in most cases.
> 
> > I suggest terminating all LWPs if one of them receives a fatal signal.
> 
> So write some signal handlers. 

Actually I'm using a wrapper library that is supposed to do that stuff 
for me (libmilter from sendmail-8.12.0.Beta16).

> 
> In all cases the other threads will continue for some time, so you gain
> nothing by pretending they don't.

Imagine you acquire a lock in one thread and then that thread gets a
SIGSEGV. There are still a lot of threads around, possibly consuming a
lot of CPU without getting any real work done.
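
A hypothetical illustration of that failure mode (made-up code, not
from libmilter; under 2.4 LinuxThreads only the faulting LWP dies, so
the second thread blocks forever):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *crasher(void *arg)
{
    pthread_mutex_lock(&lock);
    *(volatile int *)0 = 1;      /* SIGSEGV while holding the lock */
    return arg;                  /* never reached */
}

static void *victim(void *arg)
{
    pthread_mutex_lock(&lock);   /* blocks forever: the owner is gone
                                    and nothing will ever unlock it */
    puts("never printed");
    pthread_mutex_unlock(&lock);
    return arg;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, crasher, NULL);
    pthread_create(&b, NULL, victim, NULL);
    pthread_join(b, NULL);       /* hangs if only the faulting LWP died */
    return 0;
}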

Maybe the real problem is simply a binary incompatibility between
libpthread from SuSE 7.1 and SuSE 7.2 (which would be a very bad
case). As with any real bug, the application works most of the time.

Thanks for the statement.

Ulrich


