* [Xenomai] Question about config option --mem-pool-size
@ 2016-12-02 13:03 Ronny Meeus
  2016-12-02 14:01 ` Philippe Gerum
  0 siblings, 1 reply; 6+ messages in thread
From: Ronny Meeus @ 2016-12-02 13:03 UTC (permalink / raw)
  To: xenomai

Hello

Context: I'm using the pSOS interface over the Mercury core (version 3.0.3).

I have a question about the option --mem-pool-size described on page
http://xenomai.org/2015/05/application-setup-and-init/

Does the parameter specify the maximum size that will be used by Xenomai,
or is it the initial value used when creating the main memory pool?

The explanation makes me think it is the former, while tests show that it
is the latter, i.e. the memory pool simply extends when it is depleted.

To me it would make more sense for the parameter to specify the maximum
memory that will be used, since as it stands the application's memory
usage can grow until all system memory is consumed.

Ronny


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Xenomai] Question about config option --mem-pool-size
  2016-12-02 13:03 [Xenomai] Question about config option --mem-pool-size Ronny Meeus
@ 2016-12-02 14:01 ` Philippe Gerum
  2016-12-02 14:29   ` Ronny Meeus
  0 siblings, 1 reply; 6+ messages in thread
From: Philippe Gerum @ 2016-12-02 14:01 UTC (permalink / raw)
  To: Ronny Meeus, xenomai

On 12/02/2016 02:03 PM, Ronny Meeus wrote:
> Hello
> 
> Context: I'm using the pSOS interface over the Mercury core (version 3.0.3).
> 
> I have a question about the option --mem-pool-size described on page
> http://xenomai.org/2015/05/application-setup-and-init/
> 
> Does the parameter specify the maximum size that will be used by Xenomai,
> or is it the initial value used when creating the main memory pool?
> 
> The explanation makes me think it is the former, while tests show that it
> is the latter, i.e. the memory pool simply extends when it is depleted.
>

How do you check this?

> To me it would make more sense for the parameter to specify the maximum
> memory that will be used, since as it stands the application's memory
> usage can grow until all system memory is consumed.
> 
> 

As stated in the documentation, this parameter configures the size of the
memory area underlying the main/session heap, which is meant to be a
memory limit.

-- 
Philippe.



* Re: [Xenomai] Question about config option --mem-pool-size
  2016-12-02 14:01 ` Philippe Gerum
@ 2016-12-02 14:29   ` Ronny Meeus
  2016-12-02 18:33     ` Philippe Gerum
  0 siblings, 1 reply; 6+ messages in thread
From: Ronny Meeus @ 2016-12-02 14:29 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

On Fri, Dec 2, 2016 at 3:01 PM, Philippe Gerum <rpm@xenomai.org> wrote:
> On 12/02/2016 02:03 PM, Ronny Meeus wrote:
>> Hello
>>
>> Context: I'm using the pSOS interface over the Mercury core (version 3.0.3).
>>
>> I have a question about the option --mem-pool-size described on page
>> http://xenomai.org/2015/05/application-setup-and-init/
>>
>> Does the parameter specify the maximum size that will be used by Xenomai,
>> or is it the initial value used when creating the main memory pool?
>>
>> The explanation makes me think it is the former, while tests show that it
>> is the latter, i.e. the memory pool simply extends when it is depleted.
>>
>
> How do you check this?

I check it by creating a pSOS message queue into which I send 100k messages.
I see that this succeeds without errors.
I have also put a trace in the TLSF heap code and I see the pool constantly
extending.

This is the test application:

#include <psos.h>
#include <stdio.h>

static void main_test_task(u_long a,u_long b,u_long c,u_long d)
{
        unsigned long qid, err;
        int i;
        unsigned long message_count = 100000;
        unsigned long mesg[4];

        q_create("LINE",0,Q_NOLIMIT|Q_PRIOR,&qid);
        for (i=0;i<message_count;i++)
        {
                err = q_send(qid, mesg);
                if (err != 0)
                    printf("Error: q_send err=%ld count=%d\n", err, i);
        }
        printf("SUCCESS: Test passed (nr messages sent=%ld)\n",message_count);

        while (1)
                tm_wkafter(1000);
}


int main(int argc, char * const *argv)
{
        unsigned long tid;
        unsigned long args[4] = {0,0,0,0};

        t_create("MAIN",25,16000,16000,0,&tid);
        t_start(tid,T_PREEMPT|T_TSLICE,main_test_task,args);
        while (1)
                tm_wkafter(1000);
}


The rough patch below works around the issue (included here to help
understand it); a proper patch can be provided later.

It introduces a config parameter that specifies whether the pool is
actually allowed to grow. For backward compatibility the default could be
to grow.


diff --git a/lib/boilerplate/tlsf/tlsf.c b/lib/boilerplate/tlsf/tlsf.c
--- a/lib/boilerplate/tlsf/tlsf.c
+++ b/lib/boilerplate/tlsf/tlsf.c
@@ -452,6 +452,12 @@ static __inline__ bhdr_t *process_area(v
 /******************************************************************/

 static char *mp;         /* Default memory pool. */
+static int grow_mp = 0;
+
+void tlsf_grow_common_pool(int grow)
+{
+       grow_mp = grow;
+}

 /******************************************************************/
 size_t init_memory_pool(size_t mem_pool_size, void *mem_pool)
@@ -625,6 +631,7 @@ void *tlsf_malloc(size_t size)
        void *area;

        area_size = sizeof(tlsf_t) + BHDR_OVERHEAD * 8; /* Just a safety constant */
+       area_size += size;
        area_size = (area_size > DEFAULT_AREA_SIZE) ? area_size : DEFAULT_AREA_SIZE;
        area = get_new_area(&area_size);
        if (area == ((void *) ~0))
@@ -710,7 +717,7 @@ void *malloc_ex(size_t size, void *mem_p
        so they are not longer valid when the function fails */
     b = FIND_SUITABLE_BLOCK(tlsf, &fl, &sl);
 #if USE_MMAP || USE_SBRK
-    if (!b && mem_pool == mp) {        /* Don't grow private pools */
+    if (!b && (mem_pool == mp) && grow_mp) {   /* Don't grow private pools */
        size_t area_size;
        void *area;
        /* Growing the pool size when needed */


Ronny

>
>> To me it would make more sense for the parameter to specify the maximum
>> memory that will be used, since as it stands the application's memory
>> usage can grow until all system memory is consumed.
>>
>
> As stated in the documentation, this parameter configures the size of the
> memory area underlying the main/session heap, which is meant to be a
> memory limit.
>
> --
> Philippe.



* Re: [Xenomai] Question about config option --mem-pool-size
  2016-12-02 14:29   ` Ronny Meeus
@ 2016-12-02 18:33     ` Philippe Gerum
  2016-12-02 21:49       ` Jorge Ramirez
  0 siblings, 1 reply; 6+ messages in thread
From: Philippe Gerum @ 2016-12-02 18:33 UTC (permalink / raw)
  To: Ronny Meeus; +Cc: xenomai

On 12/02/2016 03:29 PM, Ronny Meeus wrote:
> On Fri, Dec 2, 2016 at 3:01 PM, Philippe Gerum <rpm@xenomai.org> wrote:
>> On 12/02/2016 02:03 PM, Ronny Meeus wrote:
>>> Hello
>>>
>>> Context: I'm using the pSOS interface over the Mercury core (version 3.0.3).
>>>
>>> I have a question about the option --mem-pool-size described on page
>>> http://xenomai.org/2015/05/application-setup-and-init/
>>>
>>> Does the parameter specify the maximum size that will be used by Xenomai,
>>> or is it the initial value used when creating the main memory pool?
>>>
>>> The explanation makes me think it is the former, while tests show that it
>>> is the latter, i.e. the memory pool simply extends when it is depleted.
>>>
>>
>> How do you check this?
> 
> I check it by creating a pSOS message queue into which I send 100k messages.
> I see that this succeeds without errors.
> I have also put a trace in the TLSF heap code and I see the pool constantly
> extending.
> 
> This is the test application:
> 
> #include <psos.h>
> #include <stdio.h>
> 
> static void main_test_task(u_long a,u_long b,u_long c,u_long d)
> {
>         unsigned long qid, err;
>         int i;
>         unsigned long message_count = 100000;
>         unsigned long mesg[4];
> 
>         q_create("LINE",0,Q_NOLIMIT|Q_PRIOR,&qid);
>         for (i=0;i<message_count;i++)
>         {
>                 err = q_send(qid, mesg);
>                 if (err != 0)
>                     printf("Error: q_send err=%ld count=%d\n", err, i);
>         }
>         printf("SUCCESS: Test passed (nr messages sent=%ld)\n",message_count);
> 
>         while (1)
>                 tm_wkafter(1000);
> }
> 
> 
> int main(int argc, char * const *argv)
> {
>         unsigned long tid;
>         unsigned long args[4] = {0,0,0,0};
> 
>         t_create("MAIN",25,16000,16000,0,&tid);
>         t_start(tid,T_PREEMPT|T_TSLICE,main_test_task,args);
>         while (1)
>                 tm_wkafter(1000);
> }
> 
> 
> The rough patch below works around the issue (included here to help
> understand it); a proper patch can be provided later.
> 
> It introduces a config parameter that specifies whether the pool is
> actually allowed to grow. For backward compatibility the default could be
> to grow.

The intent is to enforce a limit as specified by --mem-pool-size, just
like the malloc and pshared allocators do (see heapobj-malloc.c), since
there is no way to extend the heap without switching to secondary mode
with Cobalt. The fact that the tlsf_malloc() interface allocates the main
pool implicitly does not help here.

I plan to phase out TLSF entirely, as it performs poorly with respect to
memory fragmentation (the numbers are actually pretty ugly). So either we
live with the current state of affairs until TLSF is replaced, or a
trivial patch preventing the extension will do.

-- 
Philippe.



* Re: [Xenomai] Question about config option --mem-pool-size
  2016-12-02 18:33     ` Philippe Gerum
@ 2016-12-02 21:49       ` Jorge Ramirez
  2016-12-03 12:15         ` Philippe Gerum
  0 siblings, 1 reply; 6+ messages in thread
From: Jorge Ramirez @ 2016-12-02 21:49 UTC (permalink / raw)
  To: Philippe Gerum, Ronny Meeus; +Cc: xenomai

On 12/02/2016 01:33 PM, Philippe Gerum wrote:
> I plan to phase out TLSF entirely, as it performs poorly with respect to
> memory fragmentation (the numbers are actually pretty ugly). So either we
> live with the current state of affairs until TLSF is replaced, or a
> trivial patch preventing the extension will do.

The TLSF website [1] claims an average fragmentation lower than 15% and
a maximum of 25%.
What do you hope the replacement algorithm will provide?

[1] http://www.gii.upv.es/tlsf/



* Re: [Xenomai] Question about config option --mem-pool-size
  2016-12-02 21:49       ` Jorge Ramirez
@ 2016-12-03 12:15         ` Philippe Gerum
  0 siblings, 0 replies; 6+ messages in thread
From: Philippe Gerum @ 2016-12-03 12:15 UTC (permalink / raw)
  To: Jorge Ramirez, Ronny Meeus; +Cc: xenomai

On 12/02/2016 10:49 PM, Jorge Ramirez wrote:
> On 12/02/2016 01:33 PM, Philippe Gerum wrote:
>> I plan to phase out TLSF entirely, as it performs poorly with respect to
>> memory fragmentation (the numbers are actually pretty ugly). So either we
>> live with the current state of affairs until TLSF is replaced, or a
>> trivial patch preventing the extension will do.
> 
> The TLSF website [1] claims an average fragmentation lower than 15% and
> a maximum of 25%.

Which is not that great. Besides, I suspect those figures were obtained
on a 32-bit CPU architecture. Trying on a 64-bit one, e.g. allocating
blocks of 24 bytes or less from an 8 MB pool, I can see the worst case
doubling. The Xenomai core typically allocates small blocks.

Some attempts to fix this issue with the original TLSF exist:
https://github.com/mattconte/tlsf. However, I don't think TLSF best
matches the allocation pattern we have in kernel space.

> What do you hope the replacement algorithm will provide?
> 
> [1] http://www.gii.upv.es/tlsf/
> 

Less than 10%. In addition, the Cobalt core allocator direly needs
improvements on the block release path, either by adding a free-page
bitmap to it like the pshared allocator now has, or by going for the
drop-in replacement Gilles wrote. There is also the option of unifying
the implementations on the latter, replacing both the current Cobalt
allocator and TLSF in the same move.

-- 
Philippe.




Thread overview: 6+ messages
2016-12-02 13:03 [Xenomai] Question about config option --mem-pool-size Ronny Meeus
2016-12-02 14:01 ` Philippe Gerum
2016-12-02 14:29   ` Ronny Meeus
2016-12-02 18:33     ` Philippe Gerum
2016-12-02 21:49       ` Jorge Ramirez
2016-12-03 12:15         ` Philippe Gerum
