* [Xenomai-core] [bug] broken PI-chain
@ 2006-08-25 16:56 Jan Kiszka
  2006-08-25 18:33 ` Philippe Gerum
  0 siblings, 1 reply; 6+ messages in thread
From: Jan Kiszka @ 2006-08-25 16:56 UTC (permalink / raw)
  To: xenomai-core


[-- Attachment #1.1: Type: text/plain, Size: 1054 bytes --]

Hi,

the following (academic?) scenario is not handled as expected by the
nucleus:

 Prio	^
	|                  -- T2
	|                 /
	|  T0 <----- T1 <- M1
	| (M0)  M0  (M1)
	+------------------------

That means: Task T0 at, e.g., prio 1 holds mutex M0. T1, also at prio 1,
happens to interrupt T0 (round-robin?). T1 already holds M1 and now
tries to acquire M0. As both are at the same prio level, no boosting
takes place, all fine. Then T2 comes into play. It's at prio 2 and tries
to gain access to M1. T1 therefore gets boosted to prio 2 as well, but
T0 will stay at prio 1 under Xenomai.

I hacked this scenario into the attached demo. Set MAX to 2 to make it
work; leave it at 3 and it breaks.

At first sight, the problem looks like this test [1]: the claiming
relation is only established if there is a priority delta on entry, but
that breaks when the claiming thread's prio changes later. Can we simply
remove this test?
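
Roughly, the code path in question looks like this (paraphrased from
memory, not the literal source, see [1] for the real thing):

	/* Sketch of the claim path around synch.c:193 (paraphrase). The
	 * claim and the owner boost are only set up when the waiter
	 * outranks the owner at the time it goes to sleep: */
	if (xnpod_compare_prio(thread->cprio, owner->cprio) > 0) {
		__setbits(synch->status, XNSYNCH_CLAIMED);
		insertpqf(&owner->claimq, &synch->link, thread->cprio);
		insertpqf(&synch->pendq, &thread->plink, thread->cprio);
		xnsynch_renice_thread(owner, thread->cprio);
	} else
		/* Equal or lower prio: just queue up, no claim is recorded,
		 * so a later boost of this waiter never reaches the owner. */
		insertpqf(&synch->pendq, &thread->plink, thread->cprio);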

Jan

[1]http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/synch.c?v=SVN-trunk#L193

[-- Attachment #1.2: pip-chain.c --]
[-- Type: text/plain, Size: 903 bytes --]

#include <stdio.h>
#include <sys/mman.h>
#include <native/task.h>
#include <native/mutex.h>

#define MAX	3

RT_MUTEX mutex[3];

void task_func(void *arg)
{
	int index = (int)arg;

	/* Ti: grab Mi, then block on M(i-1), which is held by T(i-1). */
	rt_mutex_lock(&mutex[index], TM_INFINITE);
	rt_mutex_lock(&mutex[index-1], TM_INFINITE);
	rt_mutex_unlock(&mutex[index]);
}


int main(int argc, char **argv)
{
	RT_TASK task[3];
	RT_TASK_INFO info;
	int i;

	mlockall(MCL_CURRENT|MCL_FUTURE);

	for (i = 0; i < 3; i++)
		rt_mutex_create(&mutex[i], NULL);

	/* T0: shadow the main thread at prio 1, then grab M0. */
	rt_task_shadow(&task[0], NULL, 1, 0);

	rt_mutex_lock(&mutex[0], TM_INFINITE);

	/* Spawn T1..T(MAX-1), all at prio 1 except the last one at prio 2. */
	for (i = 1; i < MAX; i++)
		rt_task_spawn(&task[i], NULL, 0, (i == MAX-1) ? 2 : 1, 0, task_func, (void *)i);

	/* With working PI propagation and MAX=3, T0 should report cprio 2 here. */
	rt_task_inquire(NULL, &info);
	printf("MAX=%d, prio=%d\n", MAX, info.cprio);

	rt_mutex_unlock(&mutex[0]);

	for (i = 0; i < 3; i++)
		rt_mutex_delete(&mutex[i]);

	return 0;
}

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 250 bytes --]


* Re: [Xenomai-core] [bug] broken PI-chain
  2006-08-25 16:56 [Xenomai-core] [bug] broken PI-chain Jan Kiszka
@ 2006-08-25 18:33 ` Philippe Gerum
  2006-08-25 20:11   ` Philippe Gerum
  2006-08-26  0:03   ` Philippe Gerum
  0 siblings, 2 replies; 6+ messages in thread
From: Philippe Gerum @ 2006-08-25 18:33 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai-core

[-- Attachment #1: Type: text/plain, Size: 1748 bytes --]

On Fri, 2006-08-25 at 18:56 +0200, Jan Kiszka wrote:
> Hi,
> 
> the following (academic?) scenario is not handled as expected by the
> nucleus:
> 
>  Prio	^
> 	|                  -- T2
> 	|                 /
> 	|  T0 <----- T1 <- M1
> 	| (M0)  M0  (M1)
> 	+------------------------
> 
> That means: Task T0 at, e.g., prio 1 holds mutex M0. T1, also at prio 1,
> happens to interrupt T0 (round-robin?). T1 already holds M1 and now
> tries to acquire M0. As both are at the same prio level, no boosting
> takes place, all fine. Then T2 comes into play. It's at prio 2 and tries
> to gain access to M1. T1 therefore gets boosted to prio 2 as well, but
> T0 will stay at prio 1 under Xenomai.
> 

Ok, confirmed. The way the PI chain is handled is very academically
broken.

> I hacked this scenario into the attached demo. Set MAX to 2 to make it
> work; leave it at 3 and it breaks.

Ouch. It does indeed...

> 
> At first sight, the problem looks like this test [1]: the claiming
> relation is only established if there is a priority delta on entry, but
> that breaks when the claiming thread's prio changes later. Can we simply
> remove this test?

It seems that there is even more to this, and I'm now wondering whether
something fishy is also going on in the CLAIMED bit handling, when
an attempt is made to clear the priority boost.
http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/synch.c?v=SVN-trunk#L265

I've slightly adapted your test code so that my friend the simulator
helps us, and a segfault is raised upon exit of task_func(), which goes
south, likely trying to flush the pend queue of some synch object it
should not still be waiting for.

FWIW, I've attached the code and the Makefile adapted to the simulator.

-- 
Philippe.


[-- Attachment #2: Makefile --]
[-- Type: text/x-makefile, Size: 502 bytes --]

# Allow overriding xeno-config on make command line
XENO_CONFIG=xeno-config
prefix := $(shell $(XENO_CONFIG) --prefix)

ifeq ($(prefix),)
$(error Please add <xenomai-install-path>/bin to your PATH variable)
endif

GCIC := $(prefix)/bin/gcic
SIM_CFLAGS  := -g
SIM_LDFLAGS := -lnative_sim
SIM_TARGETS := pip-chain

sim: $(SIM_TARGETS)

all: sim

$(SIM_TARGETS): $(SIM_TARGETS:%=%.o)
	$(GCIC) -o $@ $^ $(SIM_LDFLAGS)

%.o: %.c
	$(GCIC) -o $@ -c $< $(SIM_CFLAGS)

clean:
	$(RM) -f *.o *_s.o $(SIM_TARGETS)

[-- Attachment #3: pip-chain.c --]
[-- Type: text/x-csrc, Size: 918 bytes --]

#include <stdio.h>
#include <sys/mman.h>
#include <native/task.h>
#include <native/mutex.h>

#define MAX	3

RT_MUTEX mutex[3];

RT_TASK task[3], main_tcb;

void task_func(void *arg)
{
	int index = (int)arg;

	rt_mutex_lock(&mutex[index], TM_INFINITE);
	rt_mutex_lock(&mutex[index-1], TM_INFINITE);
	rt_mutex_unlock(&mutex[index]);
}

void task_init(void *arg)
{
	RT_TASK_INFO info;
	int i;

	for (i = 0; i < 3; i++)
		rt_mutex_create(&mutex[i], NULL);

	rt_mutex_lock(&mutex[0], TM_INFINITE);

	for (i = 1; i < MAX; i++)
		rt_task_spawn(&task[i], NULL, 0, (i == MAX-1) ? 2 : 1, 0, task_func, (void *)i);

	rt_task_inquire(NULL, &info);
	printf("MAX=%d, prio=%d\n", MAX, info.cprio);

	rt_mutex_unlock(&mutex[0]);

	for (i = 0; i < 3; i++)
		rt_mutex_delete(&mutex[i]);
}

int root_thread_init(void)
{
    return rt_task_spawn(&main_tcb, "task_init", 0, 1, T_JOINABLE, task_init, 0);
}

void root_thread_exit(void)
{
}


* Re: [Xenomai-core] [bug] broken PI-chain
  2006-08-25 18:33 ` Philippe Gerum
@ 2006-08-25 20:11   ` Philippe Gerum
  2006-08-26  0:03   ` Philippe Gerum
  1 sibling, 0 replies; 6+ messages in thread
From: Philippe Gerum @ 2006-08-25 20:11 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai-core

On Fri, 2006-08-25 at 20:33 +0200, Philippe Gerum wrote:
> On Fri, 2006-08-25 at 18:56 +0200, Jan Kiszka wrote:
> > Hi,
> > 
> > the following (academic?) scenario is not handled as expected by the
> > nucleus:
> > 
> >  Prio	^
> > 	|                  -- T2
> > 	|                 /
> > 	|  T0 <----- T1 <- M1
> > 	| (M0)  M0  (M1)
> > 	+------------------------
> > 
> > That means: Task T0 at, e.g., prio 1 holds mutex M0. T1, also at prio 1,
> > happens to interrupt T0 (round-robin?). T1 already holds M1 and now
> > tries to acquire M0. As both are at the same prio level, no boosting
> > takes place, all fine. Then T2 comes into play. It's at prio 2 and tries
> > to gain access to M1. T1 therefore gets boosted to prio 2 as well, but
> > T0 will stay at prio 1 under Xenomai.
> > 
> 
> Ok, confirmed. The way the PI chain is handled is very academically
> broken.
> 
> > I hacked this scenario into the attached demo. Set MAX to 2 to make it
> > work; leave it at 3 and it breaks.
> 
> Ouch. It does indeed...
> 
> > 
> > At first sight, the problem looks like this test [1]: the claiming
> > relation is only established if there is a priority delta on entry, but
> > that breaks when the claiming thread's prio changes later. Can we simply
> > remove this test?
> 
> It seems that there is even more to this, and I'm now wondering whether
> something fishy is also going on in the CLAIMED bit handling, when
> an attempt is made to clear the priority boost.
> http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/synch.c?v=SVN-trunk#L265
> 

This one is a false alarm, due to outdated simulation libraries. But the
PIP chain issue still needs a fix.

> I've slightly adapted your test code so that my friend the simulator
> helps us, and a segfault is raised upon exit of task_func(), which goes
> south, likely trying to flush the pend queue of some synch object it
> should not still be waiting for.
> 
> FWIW, I've attached the code and the Makefile adapted to the simulator.
> 
-- 
Philippe.





* Re: [Xenomai-core] [bug] broken PI-chain
  2006-08-25 18:33 ` Philippe Gerum
  2006-08-25 20:11   ` Philippe Gerum
@ 2006-08-26  0:03   ` Philippe Gerum
  2006-08-26  8:07     ` Jan Kiszka
  1 sibling, 1 reply; 6+ messages in thread
From: Philippe Gerum @ 2006-08-26  0:03 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai-core

On Fri, 2006-08-25 at 20:33 +0200, Philippe Gerum wrote:
> On Fri, 2006-08-25 at 18:56 +0200, Jan Kiszka wrote:
> > Hi,
> > 
> > the following (academic?) scenario is not handled as expected by the
> > nucleus:
> > 
> >  Prio	^
> > 	|                  -- T2
> > 	|                 /
> > 	|  T0 <----- T1 <- M1
> > 	| (M0)  M0  (M1)
> > 	+------------------------
> > 
> > That means: Task T0 at, e.g., prio 1 holds mutex M0. T1, also at prio 1,
> > happens to interrupt T0 (round-robin?). T1 already holds M1 and now
> > tries to acquire M0. As both are at the same prio level, no boosting
> > takes place, all fine. Then T2 comes into play. It's at prio 2 and tries
> > to gain access to M1. T1 therefore gets boosted to prio 2 as well, but
> > T0 will stay at prio 1 under Xenomai.
> > 
> 
> Ok, confirmed. The way the PI chain is handled is very academically
> broken.
> 

xnsynch_renice_sleeper() is the culprit. It does not boost the current
owner of an unclaimed resource when moving down the PI chain, although it
should. The patch below fixes the issue here.

Btw, the test code likely needs fixing too:

-	for (i = 1; i < MAX; i++)
+	for (i = 1; i < MAX; i++) {
 		rt_task_spawn(&task[i], NULL, 0, (i == MAX-1) ? 2 : 1, 0, task_func, (void *)i);
+		rt_task_yield();
+	}

AFAICS, since the main thread (T0) is shadowed with priority 1,
the sequence is going to be:

T0, T2, T0 ... rt_task_inquire()

With T1 remaining in limbo, since there is no reason to preempt T0.
In that case T0's priority should indeed remain 1: there is no reason
to boost it in the first place (T1 never ran, so it never blocked on M0).
With the added yield, the test works as expected, and T0 properly
undergoes a priority boost (=2).
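
I.e., with the yield folded in, the spawn loop of the demo reads:

	for (i = 1; i < MAX; i++) {
		rt_task_spawn(&task[i], NULL, 0, (i == MAX-1) ? 2 : 1, 0,
			      task_func, (void *)i);
		rt_task_yield();	/* let the just-spawned task run and block */
	}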

> > I hacked this scenario into the attached demo. Set MAX to 2 to make it
> > work; leave it at 3 and it breaks.
> 
> Ouch. It does indeed...
> 
> > 
> > At first sight, the problem looks like this test [1]: the claiming
> > relation is only established if there is a priority delta on entry, but
> > that breaks when the claiming thread's prio changes later. Can we simply
> > remove this test?
> 
> It seems that there is even more to this, and I'm now wondering whether
> something fishy is also going on in the CLAIMED bit handling, when
> an attempt is made to clear the priority boost.
> http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/synch.c?v=SVN-trunk#L265
> 

Ok, this one was a red herring. The CLAIMED bit is properly handled.

> I've slightly adapted your test code so that my friend the simulator
> helps us, and a segfault is raised upon exit of task_func(), which goes
> south, likely trying to flush the pend queue of some synch object it
> should not still be waiting for.

Red herring again. Wrong simulation libraries. Sorry for the noise.

Index: ksrc/nucleus/synch.c
===================================================================
--- ksrc/nucleus/synch.c	(revision 1509)
+++ ksrc/nucleus/synch.c	(working copy)
@@ -116,8 +116,8 @@
 		xnsynch_renice_sleeper(thread);
 	else if (thread != xnpod_current_thread() &&
 		 testbits(thread->status, XNREADY))
-		/* xnpod_resume_thread() must be called for runnable threads
-		   but the running one. */
+		/* xnpod_resume_thread() must be called for runnable
+		   threads but the running one. */
 		xnpod_resume_thread(thread, 0);
 
 #if defined(__KERNEL__) && defined(CONFIG_XENO_OPT_PERVASIVE)
@@ -210,8 +210,8 @@
 		else
 			__setbits(synch->status, XNSYNCH_CLAIMED);
 
+		insertpqf(&owner->claimq, &synch->link, thread->cprio);
 		insertpqf(&synch->pendq, &thread->plink, thread->cprio);
-		insertpqf(&owner->claimq, &synch->link, thread->cprio);
 		xnsynch_renice_thread(owner, thread->cprio);
 	} else
 		insertpqf(&synch->pendq, &thread->plink, thread->cprio);
@@ -262,8 +262,8 @@
 	int downprio;
 
 	removepq(&lastowner->claimq, &synch->link);
+	downprio = lastowner->bprio;
 	__clrbits(synch->status, XNSYNCH_CLAIMED);
-	downprio = lastowner->bprio;
 
 	if (emptypq_p(&lastowner->claimq))
 		__clrbits(lastowner->status, XNBOOST);
@@ -295,19 +295,35 @@
 void xnsynch_renice_sleeper(xnthread_t *thread)
 {
 	xnsynch_t *synch = thread->wchan;
+	xnthread_t *owner;
 
-	if (testbits(synch->status, XNSYNCH_PRIO)) {
-		xnthread_t *owner = synch->owner;
+	if (!testbits(synch->status, XNSYNCH_PRIO))
+		return;
 
-		removepq(&synch->pendq, &thread->plink);
-		insertpqf(&synch->pendq, &thread->plink, thread->cprio);
+	owner = synch->owner;
+	removepq(&synch->pendq, &thread->plink);
+	insertpqf(&synch->pendq, &thread->plink, thread->cprio);
 
-		if (testbits(synch->status, XNSYNCH_CLAIMED) &&
-		    xnpod_compare_prio(thread->cprio, owner->cprio) > 0) {
+	if (xnpod_compare_prio(thread->cprio, owner->cprio) > 0) {
+		/* The new priority of the sleeping thread is higher
+		 * than the priority of the current owner of the
+		 * resource: we need to update the PI state. */
+		if (testbits(synch->status, XNSYNCH_CLAIMED)) {
+			/* The resource is already claimed, just
+			   reorder the claim queue. */
 			removepq(&owner->claimq, &synch->link);
 			insertpqf(&owner->claimq, &synch->link, thread->cprio);
-			xnsynch_renice_thread(owner, thread->cprio);
+		} else {
+			/* The resource was NOT claimed, claim it now
+			 * and boost the owner. */
+			__setbits(synch->status, XNSYNCH_CLAIMED);
+			insertpqf(&owner->claimq, &synch->link, thread->cprio);
+			owner->bprio = owner->cprio;
+			__setbits(owner->status, XNBOOST);
 		}
+		/* Renice the owner thread, progressing in the PI
+		   chain as needed. */
+		xnsynch_renice_thread(owner, thread->cprio);
 	}
 }
 
@@ -605,8 +621,9 @@
 
 	for (holder = getheadpq(&thread->claimq); holder != NULL;
 	     holder = nholder) {
-		/* Since xnsynch_wakeup_one_sleeper() alters the claim queue,
-		   we need to be conservative while scanning it. */
+		/* Since xnsynch_wakeup_one_sleeper() alters the claim
+		   queue, we need to be conservative while scanning
+		   it. */
 		xnsynch_t *synch = link2synch(holder);
 		nholder = nextpq(&thread->claimq, holder);
 		xnsynch_wakeup_one_sleeper(synch);


-- 
Philippe.





* Re: [Xenomai-core] [bug] broken PI-chain
  2006-08-26  0:03   ` Philippe Gerum
@ 2006-08-26  8:07     ` Jan Kiszka
  2006-08-26  8:44       ` Philippe Gerum
  0 siblings, 1 reply; 6+ messages in thread
From: Jan Kiszka @ 2006-08-26  8:07 UTC (permalink / raw)
  To: rpm; +Cc: xenomai-core

[-- Attachment #1: Type: text/plain, Size: 6946 bytes --]

Philippe Gerum wrote:
> On Fri, 2006-08-25 at 20:33 +0200, Philippe Gerum wrote:
>> On Fri, 2006-08-25 at 18:56 +0200, Jan Kiszka wrote:
>>> Hi,
>>>
>>> the following (academic?) scenario is not handled as expected by the
>>> nucleus:
>>>
>>>  Prio	^
>>> 	|                  -- T2
>>> 	|                 /
>>> 	|  T0 <----- T1 <- M1
>>> 	| (M0)  M0  (M1)
>>> 	+------------------------
>>>
>>> That means: Task T0 at, e.g., prio 1 holds mutex M0. T1, also at prio 1,
>>> happens to interrupt T0 (round-robin?). T1 already holds M1 and now
>>> tries to acquire M0. As both are at the same prio level, no boosting
>>> takes place, all fine. Then T2 comes into play. It's at prio 2 and tries
>>> to gain access to M1. T1 therefore gets boosted to prio 2 as well, but
>>> T0 will stay at prio 1 under Xenomai.
>>>
>> Ok, confirmed. The way the PI chain is handled is very academically
>> broken.
>>
> 
> xnsynch_renice_sleeper() is the culprit. It does not boost the current
> owner of an unclaimed resource when moving down the PI chain, although it
> should. The patch below fixes the issue here.
> 
> Btw, the test code likely needs fixing too:
> 
> -	for (i = 1; i < MAX; i++)
> +	for (i = 1; i < MAX; i++) {
>  		rt_task_spawn(&task[i], NULL, 0, (i == MAX-1) ? 2 : 1, 0, task_func, (void *)i);
> +		rt_task_yield();
> +	}

True for kernel-space and simulator. It's in my model as well but I
forgot to port it to Real Life. User-space does this yield implicitly
due to the Linux thread creation underneath.

> 
> AFAICS, since the main thread (T0) is shadowed with priority 1,
> the sequence is going to be:
> 
> T0, T2, T0 ... rt_task_inquire()
> 
> With T1 remaining in limbo, since there is no reason to preempt T0.
> In that case T0's priority should indeed remain 1: there is no reason
> to boost it in the first place (T1 never ran, so it never blocked on M0).
> With the added yield, the test works as expected, and T0 properly
> undergoes a priority boost (=2).
> 
>>> I hacked this scenario into the attached demo. Set MAX to 2 to make it
>>> work; leave it at 3 and it breaks.
>> Ouch. It does indeed...
>>
>>> At first sight, the problem looks like this test [1]: the claiming
>>> relation is only established if there is a priority delta on entry, but
>>> that breaks when the claiming thread's prio changes later. Can we simply
>>> remove this test?
>> It seems that there is even more to this, and I'm now wondering whether
>> something fishy is also going on in the CLAIMED bit handling, when
>> an attempt is made to clear the priority boost.
>> http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/synch.c?v=SVN-trunk#L265
>>
> 
> Ok, this one was a red herring. The CLAIMED bit is properly handled.
> 
>> I've slightly adapted your test code so that my friend the simulator
>> helps us, and a segfault is raised upon exit of task_func(), which goes
>> south, likely trying to flush the pend queue of some synch object it
>> should not still be waiting for.
> 
> Red herring again. Wrong simulation libraries. Sorry for the noise.
> 
> Index: ksrc/nucleus/synch.c
> ===================================================================
> --- ksrc/nucleus/synch.c	(revision 1509)
> +++ ksrc/nucleus/synch.c	(working copy)
> @@ -116,8 +116,8 @@
>  		xnsynch_renice_sleeper(thread);
>  	else if (thread != xnpod_current_thread() &&
>  		 testbits(thread->status, XNREADY))
> -		/* xnpod_resume_thread() must be called for runnable threads
> -		   but the running one. */
> +		/* xnpod_resume_thread() must be called for runnable
> +		   threads but the running one. */
>  		xnpod_resume_thread(thread, 0);
>  
>  #if defined(__KERNEL__) && defined(CONFIG_XENO_OPT_PERVASIVE)
> @@ -210,8 +210,8 @@
>  		else
>  			__setbits(synch->status, XNSYNCH_CLAIMED);
>  
> +		insertpqf(&owner->claimq, &synch->link, thread->cprio);
>  		insertpqf(&synch->pendq, &thread->plink, thread->cprio);
> -		insertpqf(&owner->claimq, &synch->link, thread->cprio);
>  		xnsynch_renice_thread(owner, thread->cprio);
>  	} else
>  		insertpqf(&synch->pendq, &thread->plink, thread->cprio);
> @@ -262,8 +262,8 @@
>  	int downprio;
>  
>  	removepq(&lastowner->claimq, &synch->link);
> +	downprio = lastowner->bprio;
>  	__clrbits(synch->status, XNSYNCH_CLAIMED);
> -	downprio = lastowner->bprio;
>  
>  	if (emptypq_p(&lastowner->claimq))
>  		__clrbits(lastowner->status, XNBOOST);
> @@ -295,19 +295,35 @@
>  void xnsynch_renice_sleeper(xnthread_t *thread)
>  {
>  	xnsynch_t *synch = thread->wchan;
> +	xnthread_t *owner;
>  
> -	if (testbits(synch->status, XNSYNCH_PRIO)) {
> -		xnthread_t *owner = synch->owner;
> +	if (!testbits(synch->status, XNSYNCH_PRIO))
> +		return;
>  
> -		removepq(&synch->pendq, &thread->plink);
> -		insertpqf(&synch->pendq, &thread->plink, thread->cprio);
> +	owner = synch->owner;
> +	removepq(&synch->pendq, &thread->plink);
> +	insertpqf(&synch->pendq, &thread->plink, thread->cprio);
>  
> -		if (testbits(synch->status, XNSYNCH_CLAIMED) &&
> -		    xnpod_compare_prio(thread->cprio, owner->cprio) > 0) {
> +	if (xnpod_compare_prio(thread->cprio, owner->cprio) > 0) {
> +		/* The new priority of the sleeping thread is higher
> +		 * than the priority of the current owner of the
> +		 * resource: we need to update the PI state. */
> +		if (testbits(synch->status, XNSYNCH_CLAIMED)) {
> +			/* The resource is already claimed, just
> +			   reorder the claim queue. */
>  			removepq(&owner->claimq, &synch->link);
>  			insertpqf(&owner->claimq, &synch->link, thread->cprio);
> -			xnsynch_renice_thread(owner, thread->cprio);
> +		} else {
> +			/* The resource was NOT claimed, claim it now
> +			 * and boost the owner. */
> +			__setbits(synch->status, XNSYNCH_CLAIMED);
> +			insertpqf(&owner->claimq, &synch->link, thread->cprio);
> +			owner->bprio = owner->cprio;
> +			__setbits(owner->status, XNBOOST);
>  		}
> +		/* Renice the owner thread, progressing in the PI
> +		   chain as needed. */
> +		xnsynch_renice_thread(owner, thread->cprio);
>  	}
>  }
>  
> @@ -605,8 +621,9 @@
>  
>  	for (holder = getheadpq(&thread->claimq); holder != NULL;
>  	     holder = nholder) {
> -		/* Since xnsynch_wakeup_one_sleeper() alters the claim queue,
> -		   we need to be conservative while scanning it. */
> +		/* Since xnsynch_wakeup_one_sleeper() alters the claim
> +		   queue, we need to be conservative while scanning
> +		   it. */
>  		xnsynch_t *synch = link2synch(holder);
>  		nholder = nextpq(&thread->claimq, holder);
>  		xnsynch_wakeup_one_sleeper(synch);
> 
> 

Patch works fine for me.

But I still wonder about the idea of officially claiming a resource (i.e.
adding yourself to the claim queue) only when you boost. That doesn't seem
intuitive. Isn't a thread always claiming the resource, independently of
its prio delta?

Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 250 bytes --]


* Re: [Xenomai-core] [bug] broken PI-chain
  2006-08-26  8:07     ` Jan Kiszka
@ 2006-08-26  8:44       ` Philippe Gerum
  0 siblings, 0 replies; 6+ messages in thread
From: Philippe Gerum @ 2006-08-26  8:44 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai-core

On Sat, 2006-08-26 at 10:07 +0200, Jan Kiszka wrote:

[...]

> Patch works fine for me.
> 
> But I still wonder what the idea is to officially claim a resource (i.e.
> add yourself to the claim queue) only when you boost. That doesn't seem
> to be intuitive. Isn't a thread always claiming the resource,
> independent of its prio delta?

Well, in this context, "claimed" is supposed to mean "actively required",
so that the current owner pays attention to the waiter(s) needing the
resource by undergoing a priority boost. When there is no risk of
priority inversion, the waiter is more "passive": it just sleeps without
changing anything other than its own state. In the logic I've used, what
you refer to is the "pendq" that any waiter is linked to, regardless of
its priority delta with the current owner.
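
If it helps, here is a toy model of the two relationships (a simplified
sketch only, not the actual nucleus declarations; the real code uses its
own priority queues and the fields shown in the patch):

struct synch;

struct thread {
	int cprio;                 /* current, possibly boosted, priority */
	int bprio;                 /* base priority to drop back to */
	struct thread *pend_next;  /* linkage in some synch's pendq */
	struct synch *claimq;      /* resources whose waiters boost this thread */
};

struct synch {
	struct thread *owner;      /* current holder of the resource */
	struct thread *pendq;      /* all waiters, prio-ordered, boosting or not */
	struct synch *claim_next;  /* linkage in owner->claimq, when claimed */
	int claimed;               /* set once a waiter actually boosts the owner */
};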

-- 
Philippe.




