* Soft lock issue with 2.6.33.7-rt29
@ 2010-11-17 19:11 Nathan Grennan
2010-11-18 1:26 ` Darren Hart
0 siblings, 1 reply; 10+ messages in thread
From: Nathan Grennan @ 2010-11-17 19:11 UTC (permalink / raw)
To: linux-rt-users
I have been working for weeks to get a stable rt kernel. I had been
focusing on 2.6.31.6-rt19. It is stable for about four days under stress
testing before it soft locks. I am using rt19 instead of rt21, because
rt19 seems to be more stable. The rtmutex issue that still seems to be
present in rt29 is also in rt21. I also had to backport the iptables fix
to rt19.
I just started looking at 2.6.33.7-rt29 again, since I can reproduce
a soft lock with it in 10-15 minutes. I have yet to get sysrq output for
rt19, since it takes four days. The soft lock with rt29, as far as I can
tell, seems to relate to disk I/O.
There are links to two logs of rt29 from a serial console below.
They include sysrq output like "Show Blocked State" and "Show State".
The level7 file is with nfsd enabled, and level9 is with it disabled. So
nfsd doesn't seem to be the issue.
If any other debugging information is useful or needed, just say the
word.
http://proton.cygnusx-1.org/~edgan/kernel-logs/kernel-2.6.33-rt29.level7.log
http://proton.cygnusx-1.org/~edgan/kernel-logs/kernel-2.6.33-rt29.level9.log
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Soft lock issue with 2.6.33.7-rt29
2010-11-17 19:11 Soft lock issue with 2.6.33.7-rt29 Nathan Grennan
@ 2010-11-18 1:26 ` Darren Hart
2010-11-18 11:35 ` Luis Claudio R. Goncalves
2010-11-18 22:11 ` Nathan Grennan
0 siblings, 2 replies; 10+ messages in thread
From: Darren Hart @ 2010-11-18 1:26 UTC (permalink / raw)
To: Nathan Grennan; +Cc: linux-rt-users
On 11/17/2010 11:11 AM, Nathan Grennan wrote:
> I have been working for weeks to get a stable rt kernel. I had been
> focusing on 2.6.31.6-rt19. It is stable for about four days under stress
> testing before it soft locks. I am using rt19 instead of rt21, because
> rt19 seems to be more stable. The rtmutex issue that still seems to be
> present in rt29 is also in rt21. I also had to backport the iptables fix to rt19.
>
> I just started looking at 2.6.33.7-rt29 again, since I can reproduce a
> soft lock with it in 10-15 minutes. I have yet to get sysrq output for
> rt19, since it takes four days. The soft lock with rt29 as far as I can
> tell seems to relate to disk i/o.
>
> There are links to two logs of rt29 from a serial console below. They
> include sysrq output like "Show Blocked State" and "Show State". The
> level7 file is with nfsd enabled, and level9 is with it disabled. So nfsd
> doesn't seem to be the issue.
>
> If any other debugging information is useful or needed, just say the word.
A reproducible test-case is always the first thing we ask for :-) What
is your stress test?
What policy and priority are you running your load at? Are you providing
enough cycles for the system threads to run?
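For anyone following along, the policy and priority of a running task can be inspected with chrt from util-linux, or for the whole system with ps (the PID below is a placeholder):

```shell
# Show the scheduling policy and rt priority of a single task...
chrt -p 1234
# ...or of everything: CLS is TS for SCHED_OTHER, FF/RR for realtime.
ps -eo pid,cls,rtprio,ni,comm
```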
--
Darren Hart
Yocto Linux Kernel
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Soft lock issue with 2.6.33.7-rt29
2010-11-18 1:26 ` Darren Hart
@ 2010-11-18 11:35 ` Luis Claudio R. Goncalves
2010-11-18 17:48 ` Nathan Grennan
2010-11-18 22:11 ` Nathan Grennan
1 sibling, 1 reply; 10+ messages in thread
From: Luis Claudio R. Goncalves @ 2010-11-18 11:35 UTC (permalink / raw)
To: Darren Hart; +Cc: Nathan Grennan, linux-rt-users
On Wed, Nov 17, 2010 at 05:26:23PM -0800, Darren Hart wrote:
| On 11/17/2010 11:11 AM, Nathan Grennan wrote:
| >I have been working for weeks to get a stable rt kernel. I had been
| >focusing on 2.6.31.6-rt19. It is stable for about four days under stress
| >testing before it soft locks. I am using rt19 instead of rt21, because
| >rt19 seems to be more stable. The rtmutex issue that still seems to be
| >present in rt29 is also in rt21. I also had to backport the iptables fix to rt19.
| >
| >I just started looking at 2.6.33.7-rt29 again, since I can reproduce a
| >soft lock with it in 10-15 minutes. I have yet to get sysrq output for
| >rt19, since it takes four days. The soft lock with rt29 as far as I can
| >tell seems to relate to disk i/o.
| >
| >There are links to two logs of rt29 from a serial console below. They
| >include sysrq output like "Show Blocked State" and "Show State". The
| >level7 file is with nfsd enabled, and level9 is with it disabled. So nfsd
| >doesn't seem to be the issue.
| >
| >If any other debugging information is useful or needed, just say the word.
|
| A reproducible test-case is always the first thing we ask for :-)
| What is your stress test?
|
| What policy and priority are you running your load at? Are you
| providing enough cycles for the system threads to run?
I noticed an e1000e warning in the first log. Those are usually harmful.
You may also want to boot your kernel with
"ignore_loglevel debug initcall_debug"
appended to your kernel command line. The real issue or the important
warning may happen during the boot process.
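On a 2010-era distribution that usually means editing the bootloader entry; with GRUB legacy the kernel line in /boot/grub/menu.lst would end up looking something like the commented excerpt below (root device and console settings are just examples), and the grep confirms the options after reboot:

```shell
# /boot/grub/menu.lst (excerpt, illustrative):
#   title 2.6.33.7-rt29 (debug)
#   kernel /vmlinuz-2.6.33.7-rt29 root=/dev/sda1 console=ttyS0,115200 \
#          ignore_loglevel debug initcall_debug
# After rebooting, confirm the flags took effect:
grep -o 'initcall_debug' /proc/cmdline
```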
Luis
--
[ Luis Claudio R. Goncalves Bass - Gospel - RT ]
[ Fingerprint: 4FDD B8C4 3C59 34BD 8BE9 2696 7203 D980 A448 C8F8 ]
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Soft lock issue with 2.6.33.7-rt29
2010-11-18 11:35 ` Luis Claudio R. Goncalves
@ 2010-11-18 17:48 ` Nathan Grennan
0 siblings, 0 replies; 10+ messages in thread
From: Nathan Grennan @ 2010-11-18 17:48 UTC (permalink / raw)
To: Luis Claudio R. Goncalves; +Cc: Darren Hart, Nathan Grennan, linux-rt-users
On 11/18/2010 03:35 AM, Luis Claudio R. Goncalves wrote:
> I noticed an e1000e warning in the first log. Those are usually harmful.
That is only in the level 7 log, and isn't in the level 9 log. So as
much as I wish it weren't there, I don't think it is the problem.
>
> You may also want to boot your kernel with
>
> "ignore_loglevel debug initcall_debug"
>
> appended to your kernel command line. The real issue or the important
> warning may happen during the boot process.
I will try these.
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Soft lock issue with 2.6.33.7-rt29
2010-11-18 1:26 ` Darren Hart
2010-11-18 11:35 ` Luis Claudio R. Goncalves
@ 2010-11-18 22:11 ` Nathan Grennan
2010-11-19 12:05 ` Locating processes impacting my rt application Leggo, Adam (UK)
2010-11-19 18:46 ` Soft lock issue with 2.6.33.7-rt29 Darren Hart
1 sibling, 2 replies; 10+ messages in thread
From: Nathan Grennan @ 2010-11-18 22:11 UTC (permalink / raw)
To: Darren Hart; +Cc: linux-rt-users
On 11/17/2010 05:26 PM, Darren Hart wrote:
> On 11/17/2010 11:11 AM, Nathan Grennan wrote:
>> I have been working for weeks to get a stable rt kernel. I had been
>> focusing on 2.6.31.6-rt19. It is stable for about four days under stress
>> testing before it soft locks. I am using rt19 instead of rt21, because
>> rt19 seems to be more stable. The rtmutex issue that still seems to be
>> present in rt29 is also in rt21. I also had to backport the iptables fix to rt19.
>>
>> I just started looking at 2.6.33.7-rt29 again, since I can reproduce a
>> soft lock with it in 10-15 minutes. I have yet to get sysrq output for
>> rt19, since it takes four days. The soft lock with rt29 as far as I can
>> tell seems to relate to disk i/o.
>>
>> There are links to two logs of rt29 from a serial console below. They
>> include sysrq output like "Show Blocked State" and "Show State". The
>> level7 file is with nfsd enabled, and level9 is with it disabled. So nfsd
>> doesn't seem to be the issue.
>>
>> If any other debugging information is useful or needed, just say the
>> word.
>
> A reproducible test-case is always the first thing we ask for :-) What
> is your stress test?
I have been able to boil it down to the script below. If I just run yes,
it is fine; if I just run dd, it is fine; if I just run octave, it is
fine. Running yes+dd gets it most of the way there, but it will wake up
sometimes, off and on. Run all three together and it soft locks. It
takes 5-15 minutes. I did it on our main example hardware, which is a
server. I have also reproduced it on a desktop. Sometimes sysrq-n, to
renice realtime processes, brings it out of it enough that you can kill
processes off.
Run with:
./stress_test
#!/bin/bash

TIMEOUT=600
MAXTEMP=75

args=`getopt qt:m: $*`
set -- $args
for i
do
    case "$i" in
        -q) shift; QUIET=1;;
        -t) shift; TIMEOUT=$1; shift;;
        -m) shift; MAXTEMP=$1; shift;;
    esac
done

PROCLOOP=`mktemp`
CHECKLOOP=`mktemp`
echo 1 > ${PROCLOOP}
echo 1 > ${CHECKLOOP}

# The worker loops below run while $PROCLOOP is non-empty, so truncate
# it on exit to stop them.
trap 'cat /dev/null > $PROCLOOP' SIGHUP SIGINT SIGTERM

if [[ ! -e `which octave` ]]; then
    echo "Octave not installed. Please apt-get install octave." >&2
    exit 1
fi

[[ $QUIET ]] || echo "Starting Octave processes..."
for i in {1..8}; do
    (while [ -s $PROCLOOP ]; do nice -n 20 octave --eval "a=rand(2000);det(a);a=inv(a);"; done) > /dev/null 2>&1 &
done

[[ $QUIET ]] || echo "Starting yes processes..."
for i in {1..8}; do
    nice -n 20 yes > /dev/null 2>&1 &
done

[[ $QUIET ]] || echo "Starting dd in 5 seconds so that other processes can finish loading..."
sleep 5
for d in /dev/sd? /dev/hd?; do
    if [[ -b $d ]]; then
        [[ $QUIET ]] || echo Starting dd on $d now...
        (while [ -s $PROCLOOP ]; do test -e $d && nice -n 20 dd if=$d of=/dev/null; sleep 10; done) > /dev/null 2>&1 &
        (while [ -s $PROCLOOP ]; do test -e $d && nice -n 20 dd if=$d of=/dev/null skip=20000 bs=1000000; sleep 10; done) > /dev/null 2>&1 &
    fi
done
Here is a cut and paste from top right before the server soft locks.
top - 13:42:25 up 6 min, 3 users, load average: 28.52, 18.06, 7.90
Tasks: 371 total, 23 running, 348 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 1.6%sy, 98.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 24734280k total, 24600312k used, 133968k free, 21564200k buffers
Swap: 0k total, 0k used, 0k free, 37292k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3440 root 39 19 5484 728 604 R 100 0.0 4:39.56 yes
3432 root 39 19 5484 732 604 R 100 0.0 5:01.25 yes
3436 root 39 19 5484 732 604 R 100 0.0 4:53.26 yes
3437 root 39 19 5484 732 604 R 100 0.0 3:47.83 yes
3441 root 39 19 5484 732 604 R 100 0.0 4:34.36 yes
3439 root 39 19 5484 728 604 R 100 0.0 4:46.99 yes
6030 root 39 19 243m 137m 11m R 61 0.6 0:04.96 octave
6032 root 39 19 211m 107m 11m R 30 0.4 0:00.90 octave
5997 root 39 19 211m 107m 11m R 19 0.4 0:00.56 octave
6031 root 39 19 211m 107m 11m R 16 0.4 0:00.79 octave
6029 root 39 19 211m 107m 11m R 14 0.4 0:00.66 octave
6012 root 39 19 216m 111m 11m R 13 0.5 0:01.33 octave
3606 root 39 19 10736 1840 704 D 4 0.0 0:05.63 dd
3608 root 39 19 9748 856 696 D 2 0.0 0:06.61 dd
1310 root 20 0 254m 15m 3288 S 2 0.1 0:04.95 python
159 root 20 0 0 0 0 S 1 0.0 0:00.29 kswapd0
61 root -50 0 0 0 0 S 1 0.0 0:02.70 sirq-block/4
45 root -50 0 0 0 0 S 0 0.0 0:00.28 sirq-timer/3
84 root -50 0 0 0 0 S 0 0.0 0:00.56 sirq-timer/6
97 root -50 0 0 0 0 S 0 0.0 0:00.36 sirq-timer/7
373 root -51 0 0 0 0 S 0 0.0 0:01.05 irq/61-ahci
3434 root 39 19 5484 732 604 R 0 0.0 3:53.88 yes
3438 root 39 19 5484 732 604 R 0 0.0 1:00.93 yes
3513 root 20 0 77060 3480 2688 S 0 0.0 0:00.09 sshd
6007 root 39 19 243m 137m 11m R 0 0.6 0:05.06 octave
1 root 20 0 23792 1952 1268 S 0 0.0 0:01.19 init
>
> What policy and priority are you running your load at? Are you
> providing enough cycles for the system threads to run?
>
With the script above, the processes actually run at nice 19.
^ permalink raw reply [flat|nested] 10+ messages in thread
* Locating processes impacting my rt application
2010-11-18 22:11 ` Nathan Grennan
@ 2010-11-19 12:05 ` Leggo, Adam (UK)
2010-11-19 14:53 ` Uwe Kleine-König
2010-11-19 15:11 ` Husak, Jan
2010-11-19 18:46 ` Soft lock issue with 2.6.33.7-rt29 Darren Hart
1 sibling, 2 replies; 10+ messages in thread
From: Leggo, Adam (UK) @ 2010-11-19 12:05 UTC (permalink / raw)
To: linux-rt-users
Hello,
I have been working on a simulator that sends a large amount of data to
another system. I am having a problem where something in the system is
interrupting my processing loops for an extended period. I would like
some suggestions on how to find the offending process and prevent it
from impacting my applications while it is running.
The purpose of the simulator is to receive an instruction every 4ms and
then respond by sending 400KB chunks of data on each of 8 sFPDP fibre
channels to the system under test before the next instruction. The
current simulator can match the 4ms frequency most of the time, but
every 300 or so instructions a delay of up to 15ms occurs that impacts
the system under test.
The simulator is running on an 8-core HP server running openSUSE 11.3
with real time kernel 2.6.33.1-rt11. The instruction processing thread
is locked to CPU 4 and the four data sending threads are locked to CPUs
5-8.
How do you either find the offending processes or prevent other
processes using the real time CPU cores? I have tried oprofile to
profile the system and haven't found anything. I may not be interpreting
the results correctly.
I tried the suggestions in the following website, but I may not have set
it up correctly.
https://rt.wiki.kernel.org/index.php/CPU_shielding_using_/proc_and_/dev/cpuset
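For the shielding side, a minimal sketch along the lines of that wiki page might look like the following (run as root; the file names are from the 2.6-era cpuset interface and may differ on your kernel, and the CPU numbers are illustrative for an 8-core box where cores 4-7 run the rt threads):

```shell
# Mount the cpuset pseudo-filesystem and carve out two sets.
mkdir -p /dev/cpuset
mount -t cpuset none /dev/cpuset
mkdir /dev/cpuset/system /dev/cpuset/rt
echo 0-3 > /dev/cpuset/system/cpus   # housekeeping CPUs
echo 0   > /dev/cpuset/system/mems
echo 4-7 > /dev/cpuset/rt/cpus       # shielded CPUs for the rt threads
echo 0   > /dev/cpuset/rt/mems
echo 1   > /dev/cpuset/rt/cpu_exclusive
# Migrate every existing task into the system set; the rt threads are
# then started in, or moved into, /dev/cpuset/rt/tasks.
for pid in $(cat /dev/cpuset/tasks); do
    echo $pid > /dev/cpuset/system/tasks 2>/dev/null
done
```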
Any suggestion would be helpful.
Regards
Adam Leggo
********************************************************************
This email and any attachments are confidential to the intended
recipient and may also be privileged. If you are not the intended
recipient please delete it from your system and notify the sender.
You should not copy it or use it for any purpose nor disclose or
distribute its contents to any other person.
********************************************************************
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Locating processes impacting my rt application
2010-11-19 12:05 ` Locating processes impacting my rt application Leggo, Adam (UK)
@ 2010-11-19 14:53 ` Uwe Kleine-König
2010-11-19 15:11 ` Husak, Jan
1 sibling, 0 replies; 10+ messages in thread
From: Uwe Kleine-König @ 2010-11-19 14:53 UTC (permalink / raw)
To: Leggo, Adam (UK); +Cc: linux-rt-users
Hello Adam,
On Fri, Nov 19, 2010 at 12:05:11PM -0000, Leggo, Adam (UK) wrote:
> Hello,
>
> I have been working on a simulator that sends large amount of data to
> another system. I am having a problem where something in the system is
> interupting my processing loops for an extended period. I would like
> some suggestions on how to find the offending process and prevent it
> from impacting my applications while it is running.
Maybe an SMI is the problem here. Did you try hwlatdetect?
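For reference, hwlatdetect ships with the rt-tests suite; a run along these lines (as root) would flag hardware- or firmware-induced gaps such as SMIs (the duration and threshold here are just examples):

```shell
# Sample for two minutes and report any gap longer than 10 microseconds.
hwlatdetect --duration=120 --threshold=10us
```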
Best regards
Uwe
--
Pengutronix e.K. | Uwe Kleine-König |
Industrial Linux Solutions | http://www.pengutronix.de/ |
--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 10+ messages in thread
* RE: Locating processes impacting my rt application
2010-11-19 12:05 ` Locating processes impacting my rt application Leggo, Adam (UK)
2010-11-19 14:53 ` Uwe Kleine-König
@ 2010-11-19 15:11 ` Husak, Jan
1 sibling, 0 replies; 10+ messages in thread
From: Husak, Jan @ 2010-11-19 15:11 UTC (permalink / raw)
To: linux-rt-users
Try ftrace with the sched_switch tracer to see the threads switching.
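On a 2.6.33 kernel that would look roughly like this (requires CONFIG_SCHED_TRACER and debugfs, run as root; the file names below are the tracing interface of that era):

```shell
# Enable the sched_switch tracer, capture a short window of context
# switches, then read the trace buffer.
mount -t debugfs nodev /sys/kernel/debug 2>/dev/null
cd /sys/kernel/debug/tracing
echo sched_switch > current_tracer
echo 1 > tracing_enabled    # 2.6.33 still used tracing_enabled
sleep 1
echo 0 > tracing_enabled
head -n 30 trace            # lines show prev_comm:pid ==> next_comm:pid
```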
Jan
-----Original Message-----
From: linux-rt-users-owner@vger.kernel.org
[mailto:linux-rt-users-owner@vger.kernel.org] On Behalf Of Leggo, Adam
(UK)
Sent: 19 November 2010 13:05
To: linux-rt-users@vger.kernel.org
Subject: Locating processes impacting my rt application
Hello,
I have been working on a simulator that sends a large amount of data to
another system. I am having a problem where something in the system is
interrupting my processing loops for an extended period. I would like
some suggestions on how to find the offending process and prevent it
from impacting my applications while it is running.
The purpose of the simulator is to receive an instruction every 4ms and
then respond by sending 400KB chunks of data on each of 8 sFPDP fibre
channels to the system under test before the next instruction. The
current simulator can match the 4ms frequency most of the time, but
every 300 or so instructions a delay of up to 15ms occurs that impacts
the system under test.
The simulator is running on an 8-core HP server running openSUSE 11.3
with real time kernel 2.6.33.1-rt11. The instruction processing thread
is locked to CPU 4 and the four data sending threads are locked to CPUs
5-8.
How do you either find the offending processes or prevent other
processes using the real time CPU cores? I have tried oprofile to
profile the system and haven't found anything. I may not be interpreting
the results correctly.
I tried the suggestions in the following website, but I may not have set
it up correctly.
https://rt.wiki.kernel.org/index.php/CPU_shielding_using_/proc_and_/dev/cpuset
Any suggestion would be helpful.
Regards
Adam Leggo
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Soft lock issue with 2.6.33.7-rt29
2010-11-18 22:11 ` Nathan Grennan
2010-11-19 12:05 ` Locating processes impacting my rt application Leggo, Adam (UK)
@ 2010-11-19 18:46 ` Darren Hart
2010-11-19 19:21 ` Nathan Grennan
1 sibling, 1 reply; 10+ messages in thread
From: Darren Hart @ 2010-11-19 18:46 UTC (permalink / raw)
To: Nathan Grennan; +Cc: linux-rt-users
On 11/18/2010 02:11 PM, Nathan Grennan wrote:
> On 11/17/2010 05:26 PM, Darren Hart wrote:
>> On 11/17/2010 11:11 AM, Nathan Grennan wrote:
>>> I have been working for weeks to get a stable rt kernel. I had been
>>> focusing on 2.6.31.6-rt19. It is stable for about four days under stress
>>> testing before it soft locks. I am using rt19 instead of rt21, because
>>> rt19 seems to be more stable. The rtmutex issue that still seems to be
>>> present in rt29 is also in rt21. I also had to backport the iptables fix to rt19.
>>>
>>> I just started looking at 2.6.33.7-rt29 again, since I can reproduce a
>>> soft lock with it in 10-15 minutes. I have yet to get sysrq output for
>>> rt19, since it takes four days. The soft lock with rt29 as far as I can
>>> tell seems to relate to disk i/o.
>>>
>>> There are links to two logs of rt29 from a serial console below. They
>>> include sysrq output like "Show Blocked State" and "Show State". The
>>> level7 file is with nfsd enabled, and level9 is with it disabled. So nfsd
>>> doesn't seem to be the issue.
>>>
>>> If any other debugging information is useful or needed, just say the
>>> word.
>>
>> A reproducible test-case is always the first thing we ask for :-) What
>> is your stress test?
>
> I have been able to boil it down to the script below. If I just run yes,
> it is fine; if I just run dd, it is fine; if I just run octave, it is
> fine. Running yes+dd gets it most of the way there, but it will wake up
> sometimes, off and on. Run all three together and it soft locks. It takes
> 5-15 minutes. I did it on our main example hardware, which is a server.
> I have also reproduced it on a desktop. Sometimes sysrq-n, to renice
> realtime processes, brings it out of it enough that you can kill processes off.
Interesting, so you're locking up a preempt-rt kernel with SCHED_OTHER
tasks running at the least favorable priority.
Note: nice -n 19 is actually the maximum valid nice value (20 and higher
seem to be accepted, but have the same effect as 19). See NICE(1).
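That clamping is easy to check from a shell: with GNU coreutils, nice run without a command prints the current niceness, so an over-large request shows the clamped value.

```shell
# Ask for an adjustment of 30; setpriority() clamps the result to the
# maximum niceness of 19, which the inner `nice` then reports.
nice -n 30 nice
```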
How many CPUs on your test machine?
--
Darren Hart
Yocto Linux Kernel
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Soft lock issue with 2.6.33.7-rt29
2010-11-19 18:46 ` Soft lock issue with 2.6.33.7-rt29 Darren Hart
@ 2010-11-19 19:21 ` Nathan Grennan
0 siblings, 0 replies; 10+ messages in thread
From: Nathan Grennan @ 2010-11-19 19:21 UTC (permalink / raw)
To: Darren Hart; +Cc: linux-rt-users
On 11/19/2010 10:46 AM, Darren Hart wrote:
> On 11/18/2010 02:11 PM, Nathan Grennan wrote:
>> On 11/17/2010 05:26 PM, Darren Hart wrote:
>>> On 11/17/2010 11:11 AM, Nathan Grennan wrote:
>>>> I have been working for weeks to get a stable rt kernel. I had been
>>>> focusing on 2.6.31.6-rt19. It is stable for about four days under
>>>> stress
>>>> testing before it soft locks. I am using rt19 instead of rt21, because
>>>> rt19 seems to be more stable. The rtmutex issue that still seems to be
>>>> present in rt29 is also in rt21. I also had to backport the iptables fix to rt19.
>>>>
>>>> I just started looking at 2.6.33.7-rt29 again, since I can reproduce a
>>>> soft lock with it in 10-15 minutes. I have yet to get sysrq output for
>>>> rt19, since it takes four days. The soft lock with rt29 as far as I
>>>> can
>>>> tell seems to relate to disk i/o.
>>>>
>>>> There are links to two logs of rt29 from a serial console below. They
>>>> include sysrq output like "Show Blocked State" and "Show State". The
>>>> level7 file is with nfsd enabled, and level9 is with it disabled. So
>>>> nfsd
>>>> doesn't seem to be the issue.
>>>>
>>>> If any other debugging information is useful or needed, just say the
>>>> word.
>>>
>>> A reproducible test-case is always the first thing we ask for :-) What
>>> is your stress test?
>>
>> I have been able to boil it down to the script below. If I just run yes,
>> it is fine; if I just run dd, it is fine; if I just run octave, it is
>> fine. Running yes+dd gets it most of the way there, but it will wake up
>> sometimes, off and on. Run all three together and it soft locks. It takes
>> 5-15 minutes. I did it on our main example hardware, which is a server.
>> I have also reproduced it on a desktop. Sometimes sysrq-n, to renice
>> realtime processes, brings it out of it enough that you can kill
>> processes off.
>
>
> Interesting, so you're locking up a preempt-rt kernel with SCHED_OTHER
> tasks running at the least favorable priority.
>
> Note: nice -n 19 is actually the maximum valid nice value (20 and higher
> seem to be accepted, but have the same effect as 19). See NICE(1).
>
> How many CPUs on your test machine?
>
The server is dual quad-core. The desktop is a quad-core with
hyperthreading. Both are i7-based.
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2010-11-19 19:21 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-11-17 19:11 Soft lock issue with 2.6.33.7-rt29 Nathan Grennan
2010-11-18 1:26 ` Darren Hart
2010-11-18 11:35 ` Luis Claudio R. Goncalves
2010-11-18 17:48 ` Nathan Grennan
2010-11-18 22:11 ` Nathan Grennan
2010-11-19 12:05 ` Locating processes impacting my rt application Leggo, Adam (UK)
2010-11-19 14:53 ` Uwe Kleine-König
2010-11-19 15:11 ` Husak, Jan
2010-11-19 18:46 ` Soft lock issue with 2.6.33.7-rt29 Darren Hart
2010-11-19 19:21 ` Nathan Grennan