* (rt_)printf and virtual memory
@ 2021-09-07 9:21 Mauro S.
2021-09-07 9:23 ` Mauro S.
2021-09-09 18:44 ` Jan Kiszka
0 siblings, 2 replies; 4+ messages in thread
From: Mauro S. @ 2021-09-07 9:21 UTC (permalink / raw)
To: xenomai
Hi all,
consider the simple code attached.
I'm using Xenomai 3.1 on an x86_64 CPU with 2GB RAM.
I compile and link the code using "xeno-config --skin=alchemy --cflags"
and "xeno-config --skin=alchemy --ldflags"
* Scenario 1)
#define printf rt_printf commented out (prints use the standard printf)
Changing the NUM_TASKS value, I see these results in top:
- NUM_TASKS 2
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
496 480 root S 80068 4% 0% ./test
- NUM_TASKS 4
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
496 480 root S 80204 4% 0% ./test
- NUM_TASKS 5
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
496 480 root S 80272 4% 0% ./test
- NUM_TASKS 6
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
496 480 root S 80340 4% 0% ./test
The virtual memory size increases linearly.
* Scenario 2)
#define printf rt_printf not commented out (prints use rt_printf)
Changing the NUM_TASKS value, I see these results in top:
- NUM_TASKS 2
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
496 480 root S 80068 4% 0% ./test
- NUM_TASKS 4
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
496 480 root S 80204 4% 0% ./test
- NUM_TASKS 5
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
496 480 root S 142M 4% 0% ./test
- NUM_TASKS 6
PID PPID USER STAT VSZ %VSZ %CPU COMMAND
496 480 root S 206M 4% 0% ./test
With more than 4 tasks, the virtual memory size jumps and then grows
very rapidly.
Is this normal, or am I misusing rt_printf?
Thanks in advance, regards
---
Mauro S.
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: (rt_)printf and virtual memory
2021-09-07 9:21 (rt_)printf and virtual memory Mauro S.
@ 2021-09-07 9:23 ` Mauro S.
2021-09-09 18:44 ` Jan Kiszka
1 sibling, 0 replies; 4+ messages in thread
From: Mauro S. @ 2021-09-07 9:23 UTC (permalink / raw)
To: xenomai
On 07/09/21 11:21, Mauro S. via Xenomai wrote:
..snip..
Damn, I forgot the attachment, sorry.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: main.c
Type: text/x-csrc
Size: 1229 bytes
Desc: not available
URL: <http://xenomai.org/pipermail/xenomai/attachments/20210907/0e3c2f5b/attachment.c>
* Re: (rt_)printf and virtual memory
2021-09-07 9:21 (rt_)printf and virtual memory Mauro S.
2021-09-07 9:23 ` Mauro S.
@ 2021-09-09 18:44 ` Jan Kiszka
2021-09-14 6:07 ` Mauro S.
1 sibling, 1 reply; 4+ messages in thread
From: Jan Kiszka @ 2021-09-09 18:44 UTC (permalink / raw)
To: Mauro S., xenomai
On 07.09.21 11:21, Mauro S. via Xenomai wrote:
..snip..
> With more than 4 tasks, the virtual memory size jumps and then grows
> very rapidly.
>
> Is this normal, or am I misusing rt_printf?
No, that's related to glibc's internal memory allocation strategy. I
didn't dig into details, but you can easily trigger enormous virtual
memory reservations (which are not real memory, as we know) by adding a
per-thread malloc(1) to your program - without using rt_printf.
Jan
--
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux
* Re: (rt_)printf and virtual memory
2021-09-09 18:44 ` Jan Kiszka
@ 2021-09-14 6:07 ` Mauro S.
0 siblings, 0 replies; 4+ messages in thread
From: Mauro S. @ 2021-09-14 6:07 UTC (permalink / raw)
To: xenomai
On 09/09/21 20:44, Jan Kiszka wrote:
> On 07.09.21 11:21, Mauro S. via Xenomai wrote:
..snip..
>>
>> With more than 4 tasks, the virtual memory size jumps and then grows
>> very rapidly.
>>
>> Is this normal, or am I misusing rt_printf?
>
> No, that's related to glibc's internal memory allocation strategy. I
> didn't dig into details, but you can easily trigger enormous virtual
> memory reservations (which are not real memory, as we know) by adding a
> per-thread malloc(1) to your program - without using rt_printf.
>
> Jan
>
Hi Jan,
OK, thank you.
Regards
--
Mauro
end of thread, other threads:[~2021-09-14 6:07 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-07 9:21 (rt_)printf and virtual memory Mauro S.
2021-09-07 9:23 ` Mauro S.
2021-09-09 18:44 ` Jan Kiszka
2021-09-14 6:07 ` Mauro S.