* [Xenomai] Heap binding error (EWOULDBLOCK)
@ 2017-10-17  9:20 Roberto Finazzi
  2017-10-17 10:04 ` Philippe Gerum
  0 siblings, 1 reply; 6+ messages in thread
From: Roberto Finazzi @ 2017-10-17  9:20 UTC (permalink / raw)
  To: xenomai

Hi,
I'm porting an old Xenomai 2.5.6 application to Cobalt 3.0.5.
The original application had several heaps shared between many
processes.
I kept the same structure but, when I tried to bind a heap already
created by another process, I always got an EWOULDBLOCK error.
The names used in create and bind were the same. In the previous
application the heap was created with H_SHARED, but that flag is now
obsolete.

Just to test, I created two simple programs sharing a semaphore, and the
behaviour is the same: the second program always blocks in rt_sem_bind
even though the semaphore has already been created.
It looks as if the name of the shared resource is not registered
properly.

Have you got some suggestions for me? 

Thanks
Roberto




* Re: [Xenomai] Heap binding error (EWOULDBLOCK)
  2017-10-17  9:20 [Xenomai] Heap binding error (EWOULDBLOCK) Roberto Finazzi
@ 2017-10-17 10:04 ` Philippe Gerum
  2017-10-18  6:03   ` Roberto Finazzi
  0 siblings, 1 reply; 6+ messages in thread
From: Philippe Gerum @ 2017-10-17 10:04 UTC (permalink / raw)
  To: Roberto Finazzi, xenomai

On 10/17/2017 11:20 AM, Roberto Finazzi wrote:
> Hi,
> I'm porting an old Xenomai 2.5.6 application to Cobalt 3.0.5.
> The original application had several heaps shared between many
> processes.
> I kept the same structure but, when I tried to bind a heap already
> created by another process, I always got an EWOULDBLOCK error.
> The names used in create and bind were the same. In the previous
> application the heap was created with H_SHARED, but that flag is now
> obsolete.
> 
> Just to test, I created two simple programs sharing a semaphore, and the
> behaviour is the same: the second program always blocks in rt_sem_bind
> even though the semaphore has already been created.
> It looks as if the name of the shared resource is not registered
> properly.
> 
> Have you got some suggestions for me?
> 

--enable-pshared is required for sharing common resources between processes.
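
For reference, this is a switch passed when configuring the user-space
support before building it, e.g. (the other flags are only an
illustration of a common Cobalt setup):

    ./configure --with-core=cobalt --enable-smp --enable-pshared --enable-registry
    make && make install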


-- 
Philippe.



* Re: [Xenomai] Heap binding error (EWOULDBLOCK)
  2017-10-17 10:04 ` Philippe Gerum
@ 2017-10-18  6:03   ` Roberto Finazzi
  2017-10-18  9:02     ` Philippe Gerum
  0 siblings, 1 reply; 6+ messages in thread
From: Roberto Finazzi @ 2017-10-18  6:03 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Hi,
thank you for your answer, but both --enable-registry and --enable-pshared
are already enabled, as I can see with /usr/xenomai/sbin/version -a.

Just to be sure about the code, this is the first program I used.

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/mman.h>
#include <alchemy/task.h>
#include <alchemy/sem.h>

RT_TASK task;
RT_SEM semA;

void end(int sig)
{
    rt_sem_delete(&semA);
    exit(0);
}

int main(void)
{
    int err;

    signal(SIGINT, end);

    mlockall(MCL_CURRENT | MCL_FUTURE);
    err = rt_task_shadow(&task, "writetest", 10, 0);

    err = rt_sem_create(&semA, "semA", 0, S_FIFO);
    printf("After create= %d\n", err);

    while (1)
        ;
}

And this is the second one.
 
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/mman.h>
#include <alchemy/task.h>
#include <alchemy/sem.h>

RT_TASK task;
RT_SEM semA;

void end(int sig)
{
    rt_sem_unbind(&semA);
    exit(0);
}

int main(void)
{
    int err;

    signal(SIGINT, end);

    mlockall(MCL_CURRENT | MCL_FUTURE);
    err = rt_task_shadow(&task, "readtest", 10, 0);

    err = rt_sem_bind(&semA, "semA", TM_INFINITE);
    printf("After bind= %d\n", err);

    while (1)
        ;
}
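
Both are built against the Alchemy skin with something like (the file
names are just an example, assuming xeno-config from this installation
is in the PATH):

    gcc -o writetest writetest.c $(xeno-config --skin=alchemy --cflags --ldflags)
    gcc -o readtest readtest.c $(xeno-config --skin=alchemy --cflags --ldflags)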

I started the first one and the semaphore was created without problems.
When I started the second, it remained blocked on the rt_sem_bind.

Regards
Roberto

On Tue, 17/10/2017 at 12.04 +0200, Philippe Gerum wrote:
> On 10/17/2017 11:20 AM, Roberto Finazzi wrote:
> > Hi,
> > I'm porting an old Xenomai 2.5.6 application to Cobalt 3.0.5.
> > The original application had several heaps shared between many
> > processes.
> > I kept the same structure but, when I tried to bind a heap already
> > created by another process, I always got an EWOULDBLOCK error.
> > The names used in create and bind were the same. In the previous
> > application the heap was created with H_SHARED, but that flag is now
> > obsolete.
> > 
> > Just to test, I created two simple programs sharing a semaphore, and the
> > behaviour is the same: the second program always blocks in rt_sem_bind
> > even though the semaphore has already been created.
> > It looks as if the name of the shared resource is not registered
> > properly.
> > 
> > Have you got some suggestions for me?
> > 
> 
> --enable-pshared is required for sharing common resources between processes.
> 
> 





* Re: [Xenomai] Heap binding error (EWOULDBLOCK)
  2017-10-18  6:03   ` Roberto Finazzi
@ 2017-10-18  9:02     ` Philippe Gerum
  2017-10-18  9:55       ` Roberto Finazzi
  0 siblings, 1 reply; 6+ messages in thread
From: Philippe Gerum @ 2017-10-18  9:02 UTC (permalink / raw)
  To: Roberto Finazzi; +Cc: xenomai

On 10/18/2017 08:03 AM, Roberto Finazzi wrote:
> Hi,
> thank you for your answer, but both --enable-registry and --enable-pshared
> are already enabled, as I can see with /usr/xenomai/sbin/version -a.
> 

Please paste the output of:
# <your-test-app> --dump-config

> Just to be sure for the code, this is the first program I used.
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <signal.h>
> #include <sys/mman.h>
> #include <alchemy/task.h>
> #include <alchemy/sem.h>
> 
> RT_TASK task;
> RT_SEM semA;
> 
> void end(int sig) {
>   rt_sem_delete(&semA);

Calling rt_sem_delete() over a signal handler is unsafe.
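
A safer pattern (only a sketch, the polling period is arbitrary) is to
merely set a flag from the handler and do the cleanup from the regular
task context, e.g.:

    static volatile sig_atomic_t stop;

    static void end(int sig)
    {
        stop = 1;    /* async-signal-safe: just record the request */
    }

    /* ... then in main(), instead of the busy loop: */
    while (!stop)
        rt_task_sleep(100000000);    /* poll the flag every 100 ms */

    rt_sem_delete(&semA);    /* deletion now runs in normal task context */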

>   exit(0);
> }
> 
> int main() {
> int err;
> 
> signal (SIGINT, end);
> 
> mlockall(MCL_CURRENT|MCL_FUTURE);

Explicit mlock is redundant with Xenomai 3.x (libcobalt does this for
you during early init).

> err=rt_task_shadow(&task, "writetest", 10, 0);
> 
> err= rt_sem_create(&semA, "semA", 0, S_FIFO);
> printf("After create= %d\n", err);
> 
> while(1) ;
>

That infinite CPU-bound loop should rapidly cause a hard lockup, even on
a multi-core system.
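
If the process only needs to stay alive, a blocking wait avoids pinning
a CPU; a minimal alternative (assuming <unistd.h> is added to the
includes) would be:

    for (;;)
        pause();    /* block until a signal such as SIGINT arrives */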

> }
> 
> And this is the second one.
>  
> #include <stdio.h>
> #include <stdlib.h>
> #include <signal.h>
> #include <sys/mman.h>
> #include <alchemy/task.h>
> #include <alchemy/sem.h>
> 
> RT_TASK task;
> RT_SEM semA;
> 
> void end(int sig) {
> 
>   rt_sem_unbind(&semA);
>   exit(0);
> }
> 
> int main() {
> int err;
> 
> signal (SIGINT, end);
> 
> mlockall(MCL_CURRENT|MCL_FUTURE);
> err=rt_task_shadow(&task, "readtest", 10, 0);
> 
> err= rt_sem_bind(&semA, "semA", TM_INFINITE);
> printf("After bind= %d\n", err);
> 
> while(1) ;
> 
> }
> 
> I started the first one and the semaphore was created without problems.
> When I started the second, it remained blocked on the rt_sem_bind.
> 

Are you starting both apps in sequence on a single shell command line,
like "./sem_create_app; ./sem_bind_app"?

-- 
Philippe.



* Re: [Xenomai] Heap binding error (EWOULDBLOCK)
  2017-10-18  9:02     ` Philippe Gerum
@ 2017-10-18  9:55       ` Roberto Finazzi
  2017-10-18 16:40         ` Philippe Gerum
  0 siblings, 1 reply; 6+ messages in thread
From: Roberto Finazzi @ 2017-10-18  9:55 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

On Wed, 18/10/2017 at 11.02 +0200, Philippe Gerum wrote:
> On 10/18/2017 08:03 AM, Roberto Finazzi wrote:
> > Hi,
> > thank you for your answer, but both --enable-registry and --enable-pshared
> > are already enabled, as I can see with /usr/xenomai/sbin/version -a.
> > 
> 
> Please paste the output of:
> # <your-test-app> --dump-config.
> 


based on Xenomai/cobalt v3.0.5
CONFIG_MMU=1
CONFIG_SMP=1
CONFIG_XENO_BUILD_ARGS=" '--with-core=cobalt' '--disable-debug'
'--enable-pshared' '--enable-smp' '--enable-registry'"
CONFIG_XENO_BUILD_STRING="x86_64-unknown-linux-gnu"
CONFIG_XENO_COBALT=1
CONFIG_XENO_COMPILER="gcc version 6.3.0 20170516 (Debian 6.3.0-18) "
CONFIG_XENO_DEFAULT_PERIOD=100000
CONFIG_XENO_FORTIFY=1
CONFIG_XENO_HOST_STRING="x86_64-unknown-linux-gnu"
CONFIG_XENO_LORES_CLOCK_DISABLED=1
CONFIG_XENO_PREFIX="/usr/xenomai"
CONFIG_XENO_PSHARED=1
CONFIG_XENO_RAW_CLOCK_ENABLED=1
CONFIG_XENO_REGISTRY=1
CONFIG_XENO_REGISTRY_ROOT="/var/run/xenomai"
CONFIG_XENO_REVISION_LEVEL=5
CONFIG_XENO_SANITY=1
CONFIG_XENO_TLSF=1
CONFIG_XENO_TLS_MODEL="initial-exec"
CONFIG_XENO_UAPI_LEVEL=14
CONFIG_XENO_VERSION_MAJOR=3
CONFIG_XENO_VERSION_MINOR=0
CONFIG_XENO_VERSION_NAME="Sisyphus's Boulder"
CONFIG_XENO_VERSION_STRING="3.0.5"
CONFIG_XENO_X86_VSYSCALL=1
---
CONFIG_XENO_ASYNC_CANCEL is OFF
CONFIG_XENO_COPPERPLATE_CLOCK_RESTRICTED is OFF
CONFIG_XENO_DEBUG is OFF
CONFIG_XENO_DEBUG_FULL is OFF
CONFIG_XENO_LIBS_DLOPEN is OFF
CONFIG_XENO_MERCURY is OFF
CONFIG_XENO_VALGRIND_API is OFF
CONFIG_XENO_WORKAROUND_CONDVAR_PI is OFF
---
PTHREAD_STACK_DEFAULT=65536
AUTOMATIC_BOOTSTRAP=1

> > Just to be sure for the code, this is the first program I used.
> > 
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <signal.h>
> > #include <sys/mman.h>
> > #include <alchemy/task.h>
> > #include <alchemy/sem.h>
> > 
> > RT_TASK task;
> > RT_SEM semA;
> > 
> > void end(int sig) {
> >   rt_sem_delete(&semA);
> 
> Calling rt_sem_delete() over a signal handler is unsafe.
> 
> >   exit(0);
> > }
> > 
> > int main() {
> > int err;
> > 
> > signal (SIGINT, end);
> > 
> > mlockall(MCL_CURRENT|MCL_FUTURE);
> 
> Explicit mlock is redundant with Xenomai 3.x (libcobalt does this for
> you during early init).
> 

Ok, thank you.

> > err=rt_task_shadow(&task, "writetest", 10, 0);
> > 
> > err= rt_sem_create(&semA, "semA", 0, S_FIFO);
> > printf("After create= %d\n", err);
> > 
> > while(1) ;
> >
> 
> That infinite CPU-bound loop should rapidly cause a hard lockup, even on
> a multi-core system.
> 

To simplify, I removed the parts of the source code that are not needed
to understand the problem...

> > }
> > 
> > And this is the second one.
> >  
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <signal.h>
> > #include <sys/mman.h>
> > #include <alchemy/task.h>
> > #include <alchemy/sem.h>
> > 
> > RT_TASK task;
> > RT_SEM semA;
> > 
> > void end(int sig) {
> > 
> >   rt_sem_unbind(&semA);
> >   exit(0);
> > }
> > 
> > int main() {
> > int err;
> > 
> > signal (SIGINT, end);
> > 
> > mlockall(MCL_CURRENT|MCL_FUTURE);
> > err=rt_task_shadow(&task, "readtest", 10, 0);
> > 
> > err= rt_sem_bind(&semA, "semA", TM_INFINITE);
> > printf("After bind= %d\n", err);
> > 
> > while(1) ;
> > 
> > }
> > 
> > I started the first one and the semaphore was created without problems.
> > When I started the second, it remained blocked on the rt_sem_bind.
> > 
> 
> Are you starting both apps in sequence on a single shell command line,
> like "./sem_create_app; ./sem_bind_app"?
> 

I started them in two different xterms. I also tried a single shell
command line, with the same result.

Regards
Roberto




* Re: [Xenomai] Heap binding error (EWOULDBLOCK)
  2017-10-18  9:55       ` Roberto Finazzi
@ 2017-10-18 16:40         ` Philippe Gerum
  0 siblings, 0 replies; 6+ messages in thread
From: Philippe Gerum @ 2017-10-18 16:40 UTC (permalink / raw)
  To: Roberto Finazzi; +Cc: xenomai

On 10/18/2017 11:55 AM, Roberto Finazzi wrote:
> Il giorno mer, 18/10/2017 alle 11.02 +0200, Philippe Gerum ha scritto:
>> On 10/18/2017 08:03 AM, Roberto Finazzi wrote:
>>> Hi,
>>> thank you for your answer, but both --enable-registry and --enable-pshared
>>> are already enabled, as I can see with /usr/xenomai/sbin/version -a.
>>>
>>
>> Please paste the output of:
>> # <your-test-app> --dump-config.
>>
> 
> 
> based on Xenomai/cobalt v3.0.5
> CONFIG_MMU=1
> CONFIG_SMP=1
> CONFIG_XENO_BUILD_ARGS=" '--with-core=cobalt' '--disable-debug'
> '--enable-pshared' '--enable-smp' '--enable-registry'"
> CONFIG_XENO_BUILD_STRING="x86_64-unknown-linux-gnu"
> CONFIG_XENO_COBALT=1
> CONFIG_XENO_COMPILER="gcc version 6.3.0 20170516 (Debian 6.3.0-18) "
> CONFIG_XENO_DEFAULT_PERIOD=100000
> CONFIG_XENO_FORTIFY=1
> CONFIG_XENO_HOST_STRING="x86_64-unknown-linux-gnu"
> CONFIG_XENO_LORES_CLOCK_DISABLED=1
> CONFIG_XENO_PREFIX="/usr/xenomai"
> CONFIG_XENO_PSHARED=1
> CONFIG_XENO_RAW_CLOCK_ENABLED=1
> CONFIG_XENO_REGISTRY=1
> CONFIG_XENO_REGISTRY_ROOT="/var/run/xenomai"
> CONFIG_XENO_REVISION_LEVEL=5
> CONFIG_XENO_SANITY=1
> CONFIG_XENO_TLSF=1
> CONFIG_XENO_TLS_MODEL="initial-exec"
> CONFIG_XENO_UAPI_LEVEL=14
> CONFIG_XENO_VERSION_MAJOR=3
> CONFIG_XENO_VERSION_MINOR=0
> CONFIG_XENO_VERSION_NAME="Sisyphus's Boulder"
> CONFIG_XENO_VERSION_STRING="3.0.5"
> CONFIG_XENO_X86_VSYSCALL=1
> ---
> CONFIG_XENO_ASYNC_CANCEL is OFF
> CONFIG_XENO_COPPERPLATE_CLOCK_RESTRICTED is OFF
> CONFIG_XENO_DEBUG is OFF
> CONFIG_XENO_DEBUG_FULL is OFF
> CONFIG_XENO_LIBS_DLOPEN is OFF
> CONFIG_XENO_MERCURY is OFF
> CONFIG_XENO_VALGRIND_API is OFF
> CONFIG_XENO_WORKAROUND_CONDVAR_PI is OFF
> ---
> PTHREAD_STACK_DEFAULT=65536
> AUTOMATIC_BOOTSTRAP=1
> 
>>> Just to be sure for the code, this is the first program I used.
>>>
>>> #include <stdio.h>
>>> #include <stdlib.h>
>>> #include <signal.h>
>>> #include <sys/mman.h>
>>> #include <alchemy/task.h>
>>> #include <alchemy/sem.h>
>>>
>>> RT_TASK task;
>>> RT_SEM semA;
>>>
>>> void end(int sig) {
>>>   rt_sem_delete(&semA);
>>
>> Calling rt_sem_delete() over a signal handler is unsafe.
>>
>>>   exit(0);
>>> }
>>>
>>> int main() {
>>> int err;
>>>
>>> signal (SIGINT, end);
>>>
>>> mlockall(MCL_CURRENT|MCL_FUTURE);
>>
>> Explicit mlock is redundant with Xenomai 3.x (libcobalt does this for
>> you during early init).
>>
> 
> Ok, thank you.
> 
>>> err=rt_task_shadow(&task, "writetest", 10, 0);
>>>
>>> err= rt_sem_create(&semA, "semA", 0, S_FIFO);
>>> printf("After create= %d\n", err);
>>>
>>> while(1) ;
>>>
>>
>> That infinite CPU-bound loop should rapidly cause a hard lockup, even on
>> a multi-core system.
>>
> 
> To simplify, I removed the parts of the source code that are not needed
> to understand the problem...
> 
>>> }
>>>
>>> And this is the second one.
>>>  
>>> #include <stdio.h>
>>> #include <stdlib.h>
>>> #include <signal.h>
>>> #include <sys/mman.h>
>>> #include <alchemy/task.h>
>>> #include <alchemy/sem.h>
>>>
>>> RT_TASK task;
>>> RT_SEM semA;
>>>
>>> void end(int sig) {
>>>
>>>   rt_sem_unbind(&semA);
>>>   exit(0);
>>> }
>>>
>>> int main() {
>>> int err;
>>>
>>> signal (SIGINT, end);
>>>
>>> mlockall(MCL_CURRENT|MCL_FUTURE);
>>> err=rt_task_shadow(&task, "readtest", 10, 0);
>>>
>>> err= rt_sem_bind(&semA, "semA", TM_INFINITE);
>>> printf("After bind= %d\n", err);
>>>
>>> while(1) ;
>>>
>>> }
>>>
>>> I started the first one and the semaphore was created without problems.
>>> When I started the second, it remained blocked on the rt_sem_bind.
>>>
>>
>> Are you starting both apps in sequence on a single shell command line,
>> like "./sem_create_app; ./sem_bind_app"?
>>
> 
> I started them in two different xterms. I also tried a single shell
> command line, with the same result.
> 

You need to have both processes belong to the same session:
http://xenomai.org/2015/05/application-setup-and-init/#Standard_Xenomai_command_line_options

e.g.
# ./app1 --session=foo
# ./app2 --session=foo

Common session names for processes cause resources to be shared between
them. Otherwise, the binder expects the semaphore to be created in its
own private scope, which never happens.
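
Applied to your two test programs, that would be for instance (binary
names and the session label are only an example):

# ./writetest --session=demo   (first xterm)
# ./readtest --session=demo    (second xterm)

Any label works, provided it is identical for all the processes which
should see each other's objects.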

And yes, the documentation is terse about this and many other things; it
is hardly well explained, if explained at all.

-- 
Philippe.


