* [Xenomai-help] Possible memory leak in psos skin message queue handling
@ 2010-10-19  9:28 ronny meeus
  2010-10-26  9:58 ` ronny meeus
  0 siblings, 1 reply; 7+ messages in thread
From: ronny meeus @ 2010-10-19  9:28 UTC (permalink / raw)
  To: Xenomai-help

[-- Attachment #1: Type: text/plain, Size: 2709 bytes --]

Hello

We have a configuration based on QEMU and Xenomai 2.5.4, and we use the pSOS
skin.
[    0.623238] Xenomai: hal/i386 started.
[    0.633758] Xenomai: scheduling class idle registered.
[    0.636172] Xenomai: scheduling class rt registered.
[    0.693178] Xenomai: real-time nucleus v2.5.4 (Sleep Walk) loaded.
[    0.723234] Xenomai: starting native API services.
[    0.728107] Xenomai: starting pSOS+ services.

I'm currently writing a test application in pSOS to check whether the
run-time behavior matches that of the real pSOS implementation.
The check function I use is based on the one found in
"/src/testsuite/unit/rtdm.c" of the Xenomai distribution.

This is a piece of the test code I use:

static void test_queue(void)
{
    unsigned long qid;
    unsigned long mesg[4] = {0,1,2,3};
    unsigned long recv_msg[4];
    char testCaseName[32];
    int i;

    printf("test_queue\n");
    check("q_create",q_create("TEST",0,Q_NOLIMIT|Q_PRIOR,&qid),0);

    check("q_send",q_send(qid,mesg),0);
    check("q_receive",q_receive(qid,Q_NOWAIT,0,recv_msg),0);
    check("q_receive TO",q_receive(qid,Q_WAIT,50,recv_msg),ERR_TIMEOUT);

    for (i=0;i<10000;i++)
    {
        sprintf(testCaseName,"q_send LOOP %d",i);
        mesg[3] = (unsigned long)i;
        check(testCaseName,q_send(qid,mesg),0);
    }

    while (q_receive(qid,Q_NOWAIT,0,recv_msg) == 0);

    check("q_delete",q_delete(qid),0);
}

This function is called in a loop:
  i = 0;
  while (1)
  {
    test_queue();
    printf("LoopCount = %d\n",i++);
  }

If I run the test code, I get:

test_queue
FAILED test_queue:216: q_send LOOP 7808 returned 52 instead of 0 - Unknown
error -52

If I change the number of messages I send to the queue to, for example, 100,
I observe more or less the same behavior.
After a number of invocations of the "test_queue" function, the same error
is reported.

test_queue
LoopCount = 117
test_queue
LoopCount = 118
test_queue
LoopCount = 119
test_queue
LoopCount = 120
test_queue
FAILED test_queue:216: q_send LOOP 64 returned 52 instead of 0 - Unknown
error -52

Once the test has failed and I restart the application, I immediately get a
failure:

test_queue
FAILED test_queue:216: q_send LOOP 64 returned 52 instead of 0 - Unknown
error -52

So it looks to me like there is a memory leak in the message handling
mechanism inside the kernel module.
It looks like up to 64 messages can be sent to the queue, but I get a failure
as soon as I try to send more.

As a last test I changed the number of loop iterations to 64.
After doing this the test keeps on running forever.

Another question I have: is there any test code available for the pSOS skin?
If not, I'm willing to share my code once it is finalized.

Thanks
Ronny


* Re: [Xenomai-help] Possible memory leak in psos skin message queue handling
  2010-10-19  9:28 [Xenomai-help] Possible memory leak in psos skin message queue handling ronny meeus
@ 2010-10-26  9:58 ` ronny meeus
  2010-10-26 11:34   ` Gilles Chanteperdrix
  0 siblings, 1 reply; 7+ messages in thread
From: ronny meeus @ 2010-10-26  9:58 UTC (permalink / raw)
  To: Xenomai-help

[-- Attachment #1: Type: text/plain, Size: 5222 bytes --]

Hello

I did some further debugging on the problem described below and made some
progress.
At creation time of the message queue, using:
q_create("TEST",0,Q_NOLIMIT|Q_PRIOR,&qid);
a chunk of 64 message buffers is allocated and added to the free message
list of the queue (queue->freeq).
Once the message queue is deleted, the messages are added to the global
psosmbufq.
During the q_create/q_delete loop, the memory pool gets depleted, since the
number of messages in the psosmbufq keeps increasing all the time.

In my opinion, if Q_PRIBUF is not set during queue creation, the
"psosmbufq" has to be used for both allocating and releasing message buffers.
This also implies that the local freeq in the queue object is no longer used
in this mode.
Each time a message needs to be sent, get_mbuf simply takes a buffer from
the psosmbufq. If that queue is empty, a feed_pool operation is called to
refill it.

static psosmbuf_t *get_mbuf(psosqueue_t *queue, u_long msglen)
{
    psosmbuf_t *mbuf = NULL;

    if (testbits(queue->synchbase.status, Q_NOCACHE)) {
        mbuf = (psosmbuf_t *)xnmalloc(sizeof(*mbuf) + msglen -
                                      sizeof(mbuf->data));
        if (mbuf)
            inith(&mbuf->link);
    } else {
        xnholder_t *holder = NULL;

        if (testbits(queue->synchbase.status, Q_SHAREDINIT)) {
            holder = getq(&psosmbufq);
            if (!holder) {
                feed_pool(&psoschunkq, &psosmbufq,
                          PSOS_QUEUE_MIN_ALLOC, queue->maxlen);
                holder = getq(&psosmbufq);
            }
        } else {
            holder = getq(&queue->freeq);
            if (!holder && testbits(queue->synchbase.status, Q_INFINITE)) {
                feed_pool(&queue->chunkq, &queue->freeq,
                          PSOS_QUEUE_MIN_ALLOC, queue->maxlen);
                holder = getq(&queue->freeq);
            }
        }
        if (holder)
            mbuf = link2psosmbuf(holder);
    }
    return mbuf;
}

I have adapted the code accordingly and reran my tests. Now it runs forever.
(Of course I also made changes in the code that creates and deletes a queue.)

Now the question is: is my understanding correct? If it is, the flag
Q_SHAREDINIT would be better renamed Q_SHAREDMSGS.

Please share your thoughts.

Best regards,
Ronny


On Tue, Oct 19, 2010 at 11:28 AM, ronny meeus <ronny.meeus@domain.hid> wrote:

> [...]


* Re: [Xenomai-help] Possible memory leak in psos skin message queue handling
  2010-10-26  9:58 ` ronny meeus
@ 2010-10-26 11:34   ` Gilles Chanteperdrix
  2010-10-26 13:13     ` ronny meeus
  0 siblings, 1 reply; 7+ messages in thread
From: Gilles Chanteperdrix @ 2010-10-26 11:34 UTC (permalink / raw)
  To: ronny meeus; +Cc: Xenomai-help

ronny meeus wrote:
> [...]
> Now the question is: Is my understanding correct? If it is, the flag
> Q_SHAREDINIT would be better renamed to Q_SHAREDMSGS.
> 
> Please share your thoughts.

A patch would be better than some fancy HTML colouring, if you intend
your fix to be reviewed/integrated.

-- 
                                                                Gilles.



* Re: [Xenomai-help] Possible memory leak in psos skin message queue handling
  2010-10-26 11:34   ` Gilles Chanteperdrix
@ 2010-10-26 13:13     ` ronny meeus
  2010-11-02 20:05       ` ronny meeus
  0 siblings, 1 reply; 7+ messages in thread
From: ronny meeus @ 2010-10-26 13:13 UTC (permalink / raw)
  To: Gilles Chanteperdrix; +Cc: Xenomai-help


[-- Attachment #1.1: Type: text/plain, Size: 2919 bytes --]

Gilles,

The patch is attached, no problem.
I wanted to check first whether my reasoning was correct.

Best regards,
Ronny

On Tue, Oct 26, 2010 at 1:34 PM, Gilles Chanteperdrix <
gilles.chanteperdrix@xenomai.org> wrote:

> ronny meeus wrote:
> > [...]
> A patch would be better than some fancy HTML colouring, if you intend
> your fix to be reviewed/integrated.
>
> --
>                                                                 Gilles.
>


[-- Attachment #2: psos-queue.patch --]
[-- Type: text/x-patch, Size: 2989 bytes --]

diff -r f8bbb3e78c40 xenomai-2.5.4/ksrc/skins/psos+/queue.c
--- a/xenomai-2.5.4/ksrc/skins/psos+/queue.c	Mon Sep 27 20:26:32 2010 +0200
+++ b/xenomai-2.5.4/ksrc/skins/psos+/queue.c	Tue Oct 26 15:12:06 2010 +0200
@@ -40,6 +40,7 @@
 	int len;
 	spl_t s;
 
+	p += sprintf(p, "psosmbufq #bufs=%d\n",countq(&psosmbufq));
 	p += sprintf(p, "maxnum=%lu:maxlen=%lu:mcount=%d\n",
 		     queue->maxnum, queue->maxlen, countq(&queue->inq));
 
@@ -47,7 +48,7 @@
 
 	if (xnsynch_nsleepers(&queue->synchbase) > 0) {
 		xnpholder_t *holder;
-
+	    
 		/* Pended queue -- dump waiters. */
 
 		holder = getheadpq(xnsynch_wait_queue(&queue->synchbase));
@@ -164,15 +165,20 @@
 		if (mbuf)
 			inith(&mbuf->link);
 	} else {
-		xnholder_t *holder = getq(&queue->freeq);
-
-		if (!holder &&
-		    testbits(queue->synchbase.status, Q_INFINITE) &&
-		    feed_pool(&queue->chunkq,
-			      &queue->freeq, PSOS_QUEUE_MIN_ALLOC,
-			      queue->maxlen) != 0)
-			holder = getq(&queue->freeq);
-
+	    xnholder_t *holder = NULL;
+	    if (testbits(queue->synchbase.status, Q_SHAREDINIT)) {
+	        holder = getq(&psosmbufq);
+	        if (!holder) {
+	            feed_pool(&psoschunkq, &psosmbufq,PSOS_QUEUE_MIN_ALLOC,queue->maxlen);
+	            holder = getq(&psosmbufq);
+            }
+        } else {        
+		    holder = getq(&queue->freeq);
+		    if (!holder && testbits(queue->synchbase.status, Q_INFINITE)) { 
+                feed_pool(&queue->chunkq,&queue->freeq, PSOS_QUEUE_MIN_ALLOC,queue->maxlen);
+                holder = getq(&queue->freeq);
+            }    
+        }
 		if (holder)
 			mbuf = link2psosmbuf(holder);
 	}
@@ -187,7 +193,6 @@
 	static unsigned long msgq_ids;
 	psosqueue_t *queue;
 	int bflags, ret;
-	u_long rc;
 	spl_t s;
 
 	bflags = (flags & Q_VARIABLE);
@@ -236,27 +241,13 @@
 	initq(&queue->chunkq);
 
 	if (bflags & Q_PRIVCACHE) {
-		if (bflags & Q_SHAREDINIT) {
-			xnlock_get_irqsave(&nklock, s);
-			rc = feed_pool(&psoschunkq, &psosmbufq, maxnum, maxlen);
-			xnlock_put_irqrestore(&nklock, s);
-		} else
-			rc = feed_pool(&queue->chunkq, &queue->freeq, maxnum,
-				       maxlen);
-
-		if (!rc) {
-			/* Can't preallocate msg buffers. */
-			xnfree(queue);
-			return ERR_NOMGB;
-		}
-
-		if (bflags & Q_SHAREDINIT) {
-			xnlock_get_irqsave(&nklock, s);
-
-			while (countq(&queue->freeq) < maxnum)
-				appendq(&queue->freeq, getq(&psosmbufq));
-
-			xnlock_put_irqrestore(&nklock, s);
+		if ((bflags & Q_SHAREDINIT) == 0) {
+			u_long rc = feed_pool(&queue->chunkq, &queue->freeq, maxnum, maxlen);
+			if (!rc) {
+				/* Can't preallocate msg buffers. */
+				xnfree(queue);   
+				return ERR_NOMGB;
+			}
 		}
 	}
 
@@ -477,7 +468,10 @@
 
 	if (testbits(queue->synchbase.status, Q_NOCACHE))
 		xnfree(mbuf);
-	else
+	else if (testbits(queue->synchbase.status, Q_SHAREDINIT)) {
+		/* Message buffer should go to the psosmbufq */
+		appendq(&psosmbufq, &mbuf->link);
+	} else    
 		appendq(&queue->freeq, &mbuf->link);
 
       unlock_and_exit:


* Re: [Xenomai-help] Possible memory leak in psos skin message queue handling
  2010-10-26 13:13     ` ronny meeus
@ 2010-11-02 20:05       ` ronny meeus
  2010-11-03 20:49         ` Gilles Chanteperdrix
  0 siblings, 1 reply; 7+ messages in thread
From: ronny meeus @ 2010-11-02 20:05 UTC (permalink / raw)
  To: Gilles Chanteperdrix; +Cc: Xenomai-help

[-- Attachment #1: Type: text/plain, Size: 3170 bytes --]

Hello

Any feedback on the patch I posted last week?

Thanks,
Ronny

On Tue, Oct 26, 2010 at 3:13 PM, ronny meeus <ronny.meeus@domain.hid> wrote:

> Gilles,
>
> patch is in attachment, no problem.
> I wanted to check first whether my reasoning is correct.
>
> Best regards,
> Ronny
>
>
> On Tue, Oct 26, 2010 at 1:34 PM, Gilles Chanteperdrix <
> gilles.chanteperdrix@xenomai.org> wrote:
>
>>  ronny meeus wrote:
>> > [...]
>> A patch would be better than some fancy HTML colouring, if you intend
>> your fix to be reviewed/integrated.
>>
>> --
>>                                                                Gilles.
>>
>
>


* Re: [Xenomai-help] Possible memory leak in psos skin message queue handling
  2010-11-02 20:05       ` ronny meeus
@ 2010-11-03 20:49         ` Gilles Chanteperdrix
  2010-11-03 21:15           ` ronny meeus
  0 siblings, 1 reply; 7+ messages in thread
From: Gilles Chanteperdrix @ 2010-11-03 20:49 UTC (permalink / raw)
  To: ronny meeus; +Cc: Xenomai-help

ronny meeus wrote:
> Hello
> 
> any feedback on the patch I have posted last week?

The patch is in Philippe's queue, so, on its way to be merged.

-- 
                                                                Gilles.



* Re: [Xenomai-help] Possible memory leak in psos skin message queue handling
  2010-11-03 20:49         ` Gilles Chanteperdrix
@ 2010-11-03 21:15           ` ronny meeus
  0 siblings, 0 replies; 7+ messages in thread
From: ronny meeus @ 2010-11-03 21:15 UTC (permalink / raw)
  To: Gilles Chanteperdrix; +Cc: Xenomai-help

[-- Attachment #1: Type: text/plain, Size: 353 bytes --]

Thanks !

Ronny

On Wed, Nov 3, 2010 at 9:49 PM, Gilles Chanteperdrix <
gilles.chanteperdrix@xenomai.org> wrote:

> ronny meeus wrote:
> > Hello
> >
> > any feedback on the patch I have posted last week?
>
> The patch is in Philippe's queue, so, on its way to be merged.
>
> --
>                                                                Gilles.
>

