* [Xenomai-help] netrpc
@ 2010-03-31 12:34 Michel He
  2010-03-31 13:33 ` Jan Kiszka
  0 siblings, 1 reply; 6+ messages in thread
From: Michel He @ 2010-03-31 12:34 UTC (permalink / raw)
  To: xenomai

Hello all,

    I'm currently trying to port xrtai-lab to Xenomai. Inside it, the
netrpc interface is used for communication between tasks. However,
there is no equivalent in Xenomai, which makes the port seem quite
impossible! So is there any chance of achieving this, with or without
RTnet? I suppose it would involve some socket programming. Any
relevant experience is welcome.


Thanks.




* Re: [Xenomai-help] netrpc
  2010-03-31 12:34 [Xenomai-help] netrpc Michel He
@ 2010-03-31 13:33 ` Jan Kiszka
  2010-04-13  8:13   ` Michel He
  2010-04-15 14:45   ` Michel He
  0 siblings, 2 replies; 6+ messages in thread
From: Jan Kiszka @ 2010-03-31 13:33 UTC (permalink / raw)
  To: Michel He; +Cc: xenomai

Michel He wrote:
> Hello all,
> 
>     I'm currently trying to port xrtai-lab to Xenomai. Inside it, the
> netrpc interface is used for communication between tasks. However,
> there is no equivalent in Xenomai, which makes the port seem quite
> impossible!

Nothing is impossible. :)

> So is there any chance of achieving this, with or without RTnet? I
> suppose it would involve some socket programming. Any relevant
> experience is welcome.

Well, you could start by mapping the existing RTAI API calls in
xrtai-lab onto local Native calls. That will already give you a
non-distributed port.

But there is also no magic behind netrpc. It just uses RTnet for remote
calls, and that works at least equally well for Xenomai. You could
simply write an RPC API extension for libnative (a pure user space job).
That lib would do the routing, encapsulating and forwarding non-local
calls to sockets provided via the RTDM API.

BTW, the same should be feasible for a POSIX-based API extension, which
would have the advantage of making the result more easily portable to
plain Linux.
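
Roughly, the routing part of such an extension could look like this.
Only a sketch: rpc_task_send(), the frame layout and RPC_PORT are
invented for illustration, it assumes the user space rt_dev_* RTDM
calls with an RTnet UDP socket underneath, and the reply path is
omitted.

#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <native/task.h>
#include <rtdm/rtdm.h>

#define RPC_PORT 7777                      /* invented for the example */

struct rpc_frame {                         /* invented wire format */
	int opcode;
	size_t size;
	char payload[64];
};

/*
 * Send an MCB either to a local task or, when 'node' is non-NULL,
 * encapsulate it and push it to the peer node over an RTnet UDP socket.
 * A real extension would keep the socket open and wait for the reply
 * frame; both are left out to keep the sketch short.
 */
static int rpc_task_send(RT_TASK *local_task, const struct in_addr *node,
			 RT_TASK_MCB *mcb, RT_TASK_MCB *reply, RTIME timeout)
{
	struct rpc_frame frame;
	struct sockaddr_in dst;
	int sock, ret;

	if (node == NULL)                  /* local: plain Native call */
		return rt_task_send(local_task, mcb, reply, timeout);

	frame.opcode = mcb->opcode;        /* remote: serialize the call... */
	frame.size = mcb->size;
	memcpy(frame.payload, mcb->data,
	       mcb->size < sizeof(frame.payload) ? mcb->size
						 : sizeof(frame.payload));

	sock = rt_dev_socket(AF_INET, SOCK_DGRAM, 0);
	if (sock < 0)
		return sock;

	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(RPC_PORT);
	dst.sin_addr = *node;

	ret = rt_dev_sendto(sock, &frame, sizeof(frame), 0,   /* ...and forward it */
			    (struct sockaddr *)&dst, sizeof(dst));
	rt_dev_close(sock);
	return ret;
}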

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux



* Re: [Xenomai-help] netrpc
  2010-03-31 13:33 ` Jan Kiszka
@ 2010-04-13  8:13   ` Michel He
  2010-04-13 23:59     ` Jan Kiszka
  2010-04-15 14:45   ` Michel He
  1 sibling, 1 reply; 6+ messages in thread
From: Michel He @ 2010-04-13  8:13 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

Hello Jan,

      Thank you for your quick answer. In the meantime I have found
that Xenomai's IPC is strongly related to the RPC implementation, but
it lacks network addressing. Looking into the code, we cannot specify
a remote destination in a variable such as an sipc_address (which does
not exist). So I imagine that AF_RTIPC is something purely local, like
shared memory or queues. Is it possible to make a remote connection in
Xenomai with the IPC protocols?

thanks


* Re: [Xenomai-help] netrpc
  2010-04-13  8:13   ` Michel He
@ 2010-04-13 23:59     ` Jan Kiszka
  0 siblings, 0 replies; 6+ messages in thread
From: Jan Kiszka @ 2010-04-13 23:59 UTC (permalink / raw)
  To: Michel He; +Cc: xenomai

Michel He wrote:
> Hello Jan,
> 
>       Thank you for your quick answer. In the meantime I have found
> that Xenomai's IPC is strongly related to the RPC implementation, but
> it lacks network addressing. Looking into the code, we cannot specify
> a remote destination in a variable such as an sipc_address (which does
> not exist). So I imagine that AF_RTIPC is something purely local, like
> shared memory or queues. Is it possible to make a remote connection in
> Xenomai with the IPC protocols?

PF_RTIPC primarily targets local communication channels. If you think
that this programming model is already sufficient to distribute
xrtai-lab, it should also be possible to map it onto other socket types
such as UDP (provided by RTnet) - and then you have node addressability.
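
For illustration, the two addressing models side by side. A sketch
only: the port numbers and the peer IP are placeholders, and it
assumes the rt_dev_* RTDM wrappers plus the AF_RTIPC/IDDP interface
from Xenomai 2.5.

#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <rtdm/rtdm.h>
#include <rtdm/rtipc.h>

/* Local endpoint: AF_RTIPC/IDDP datagrams, addressed by a port only. */
static int open_local_endpoint(void)
{
	struct sockaddr_ipc saddr;
	int s = rt_dev_socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_IDDP);

	if (s < 0)
		return s;
	memset(&saddr, 0, sizeof(saddr));
	saddr.sipc_family = AF_RTIPC;
	saddr.sipc_port = 42;                     /* placeholder port */
	if (rt_dev_bind(s, (struct sockaddr *)&saddr, sizeof(saddr)) < 0) {
		rt_dev_close(s);
		return -1;
	}
	return s;
}

/* Remote-capable endpoint: same datagram model, but sockaddr_in carries
 * an IP address, i.e. the identity of a node on the RTnet segment. */
static int open_remote_endpoint(const char *peer_ip, struct sockaddr_in *dst)
{
	int s = rt_dev_socket(AF_INET, SOCK_DGRAM, 0);

	if (s < 0)
		return s;
	memset(dst, 0, sizeof(*dst));
	dst->sin_family = AF_INET;
	dst->sin_port = htons(4242);              /* placeholder port */
	inet_aton(peer_ip, &dst->sin_addr);       /* e.g. "10.0.0.2" */
	return s;                                 /* then rt_dev_sendto(s, ..., dst) */
}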

Jan


* Re: [Xenomai-help] netrpc
  2010-03-31 13:33 ` Jan Kiszka
  2010-04-13  8:13   ` Michel He
@ 2010-04-15 14:45   ` Michel He
  2010-04-15 15:03     ` Philippe Gerum
  1 sibling, 1 reply; 6+ messages in thread
From: Michel He @ 2010-04-15 14:45 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: xenomai

For message passing from task to task, I use the three available
procedures: rt_task_send(), rt_task_reply() and rt_task_receive().

The problem is that when a message is emitted from a source to one
destination, it can be caught by another task (not the intended
correspondent). In Xenomai, when the message is NOT for a given task,
that task cannot pass it on to the next one. Tasks can receive any
message but are not able to relay it to its destination (there is no
such function mentioned in the API documentation).

I have built an example (see the code sample below).

Any help is welcome.

code sample :

/*
  *
  *  Created on: 15 Apr. 2010
  *      Author: hemichel
  */

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <sys/time.h>
#include <sys/io.h>
#include <sys/mman.h>
#include <native/task.h>
#include <native/queue.h>
#include <native/intr.h>
#include <rtdk.h>	/* rt_printf() */

#define STACK_SIZE 8192
#define STD_PRIO 1

RT_TASK test_task_ptr,test_task2_ptr,test_task3_ptr;
int int_count = 0;
int end = 0;

#define PEER_RATE_NS 10000000
//                     --s-ms-us-ns
RTIME task_period_ns =   1000000000llu;

void testtask(void *cookie) {
	RT_TASK_MCB mcb_send, mcb_reply;
	int flowid, i, rv;
	unsigned char datasend[16];
	unsigned char datareply[16];

	int count = 0;
	int ret;
	unsigned long overrun;
	ret = rt_task_set_periodic(NULL, TM_NOW, rt_timer_ns2ticks(task_period_ns));
	if (ret) {
		printf("error while set periodic, code %d\n",ret);
		return;
	}

	mcb_send.opcode = 0x03;
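	/* note: opcode 3 is handled by testtask3 below, but the message is
	   sent to testtask2 - this mismatch triggers the "not mine" case */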
	datasend[0]='a';
	mcb_send.data = datasend;
	mcb_send.size = sizeof(datasend);

	mcb_reply.size = sizeof(datareply);
	mcb_reply.data = datareply;

	while(!end){
		ret = rt_task_set_mode(0, T_PRIMARY, NULL);
		if (ret) {
			printf("error while rt_task_set_mode, code %d\n",ret);
			return;
		}
		ret = rt_task_wait_period(&overrun);
		if (ret) {
			printf("error while rt_task_wait_period, code %d\n",ret);
			return;
		}
		count++;
		printf("message from testtask: count=%d\n", count);

		rv = rt_task_send(&test_task2_ptr,&mcb_send,&mcb_reply,PEER_RATE_NS);
		if (rv < 0) printf("rt_task_send error\n");
		else rt_printf("response mcb_reply=%d\n",mcb_reply.data[0]);
		fflush(NULL);
	}
}


void testtask2(void *cookie) {
	RT_TASK_MCB mcb_rcv, mcb_reply;
	int flowid, i, rv;
	unsigned char datareply[16];

	int count = 12;
	int ret;
	unsigned long overrun;
	ret = rt_task_set_periodic(NULL, TM_NOW, rt_timer_ns2ticks(task_period_ns));
	if (ret) {
		printf("error while set periodic, code %d\n",ret);
		return;
	}

	while(!end){
		ret = rt_task_set_mode(0, T_PRIMARY, NULL);
		if (ret) {
			printf("error while rt_task_set_mode, code %d\n",ret);
			return;
		}
		ret = rt_task_wait_period(&overrun);
		if (ret) {
			printf("error while rt_task_wait_period, code %d\n",ret);
			return;
		}

		mcb_rcv.data = (caddr_t)datareply;
		mcb_rcv.size = sizeof(datareply);

		flowid = rt_task_receive(&mcb_rcv,PEER_RATE_NS);
		rt_printf("task 2: flowid=%d rcv.size=%d bytes to receive buf, opcode=%d\n",
				flowid, mcb_rcv.size, mcb_rcv.opcode);
		if(flowid >= 0)
		{
			if (mcb_rcv.opcode == 2) {
				//this is mine
				mcb_reply.opcode = 0x2;
				mcb_reply.size = 1;
				datareply[0]=count;
				mcb_reply.data = datareply;
				rt_task_reply(flowid, &mcb_reply);
				rt_printf("replied from task2 to flowid=%d\n",flowid);
			}
			else {
				rt_printf("task2: not mine\n");
				/* how to relay the caught msg to the next task? */
			}
		}

		fflush(NULL);
	}
}


void testtask3(void *cookie) {
	RT_TASK_MCB mcb_rcv, mcb_reply;
	int flowid, i, rv;
	unsigned char datareply[16];

	int count = 13;
	int ret;
	unsigned long overrun;
	ret = rt_task_set_periodic(NULL, TM_NOW, rt_timer_ns2ticks(task_period_ns));
	if (ret) {
		printf("error while set periodic, code %d\n",ret);
		return;
	}

	while(!end){
		ret = rt_task_set_mode(0, T_PRIMARY, NULL);
		if (ret) {
			printf("error while rt_task_set_mode, code %d\n",ret);
			return;
		}
		ret = rt_task_wait_period(&overrun);
		if (ret) {
			printf("error while rt_task_wait_period, code %d\n",ret);
			return;
		}

		mcb_rcv.data = (caddr_t)datareply;
		mcb_rcv.size = sizeof(datareply);

		flowid = rt_task_receive(&mcb_rcv,PEER_RATE_NS);
		rt_printf("task 3: flowid=%d rcv.size=%d bytes to receive buf, opcode=%d\n",
				flowid, mcb_rcv.size, mcb_rcv.opcode);
		if(flowid >= 0)
		{
			if (mcb_rcv.opcode == 3) {
				//this is mine
				mcb_reply.opcode = 0x3;
				mcb_reply.size = 1;
				datareply[0]=count;
				mcb_reply.data = datareply;
				rt_task_reply(flowid, &mcb_reply);
				rt_printf("replied from task3 to flowid=%d\n",flowid);
			}
			else {
				rt_printf("task3: not mine\n");
				/* how to relay the caught msg to the next task? */
			}

		}

		fflush(NULL);
	}
}
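

For completeness, the tasks are created and started with a minimal
main(); the task names, stack size and priority below are arbitrary.

void cleanup(int sig)
{
	end = 1;
}

int main(int argc, char *argv[])
{
	signal(SIGINT, cleanup);
	signal(SIGTERM, cleanup);
	mlockall(MCL_CURRENT | MCL_FUTURE);
	rt_print_auto_init(1);

	rt_task_create(&test_task_ptr,  "testtask",  STACK_SIZE, STD_PRIO, 0);
	rt_task_create(&test_task2_ptr, "testtask2", STACK_SIZE, STD_PRIO, 0);
	rt_task_create(&test_task3_ptr, "testtask3", STACK_SIZE, STD_PRIO, 0);

	rt_task_start(&test_task_ptr,  &testtask,  NULL);
	rt_task_start(&test_task2_ptr, &testtask2, NULL);
	rt_task_start(&test_task3_ptr, &testtask3, NULL);

	while (!end)
		sleep(1);

	rt_task_delete(&test_task_ptr);
	rt_task_delete(&test_task2_ptr);
	rt_task_delete(&test_task3_ptr);
	return 0;
}

This should build with the flags reported by xeno-config
(--xeno-cflags / --xeno-ldflags) plus -lnative and -lrtdk for rt_printf.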



Jan Kiszka <jan.kiszka@domain.hid> wrote:

> Well, you could start by mapping the existing RTAI API calls in
> xrtai-lab onto local Native calls. That will already give you a
> non-distributed port.




* Re: [Xenomai-help] netrpc
  2010-04-15 14:45   ` Michel He
@ 2010-04-15 15:03     ` Philippe Gerum
  0 siblings, 0 replies; 6+ messages in thread
From: Philippe Gerum @ 2010-04-15 15:03 UTC (permalink / raw)
  To: Michel He; +Cc: Jan Kiszka, xenomai

On Thu, 2010-04-15 at 16:45 +0200, Michel He wrote:
> For message passing from task to task, I use the three available
> procedures: rt_task_send(), rt_task_reply() and rt_task_receive().
> 
> The problem is that when a message is emitted from a source to one
> destination, it can be caught by another task (not the intended
> correspondent). In Xenomai, when the message is NOT for a given task,
> that task cannot pass it on to the next one. Tasks can receive any
> message but are not able to relay it to its destination (there is no
> such function mentioned in the API documentation).

Well, actually, no, there is none. Those services are aimed at being
plain simple client/server primitives, which assume that you do know
which server wants to receive your traffic. Providing a service to
re-queue the request for others to pick up does not seem the right
approach, since there is no way you can tell whether the-other-guy will
schedule in before the current server goes back to its receive point,
unless you synchronize both servers, which would end up being quite
silly. However, your application can emulate this behavior much more
sanely; you don't need kernel support for that.
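
For instance, a user-space relay could look roughly like this. Only a
sketch: forward_to() is an invented helper, the timeout constant
mirrors the one in your example, and how a task finds the owner of an
opcode is left to the application.

#include <sys/types.h>
#include <native/task.h>

#define PEER_RATE_NS 10000000	/* as in the posted example */

/*
 * When a server receives a request it does not own, it forwards the
 * payload to the responsible task with its own rt_task_send(), then
 * hands the answer back to the original client via rt_task_reply().
 */
static void forward_to(RT_TASK *owner, int flowid, RT_TASK_MCB *req)
{
	RT_TASK_MCB fwd, ans;
	unsigned char buf[16];

	fwd = *req;                       /* reuse opcode, data and size */
	ans.data = (caddr_t)buf;
	ans.size = sizeof(buf);

	if (rt_task_send(owner, &fwd, &ans, PEER_RATE_NS) < 0)
		ans.size = 0;             /* forwarding failed: empty reply */

	rt_task_reply(flowid, &ans);      /* relay the answer (or nothing)
					     back to the original client */
}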

Btw, you should really remove rt_task_set_mode(...T_PRIMARY...) from
your code, because it is unfortunately totally useless overhead. Not
really your fault; T_PRIMARY should not have been provided this way.



-- 
Philippe.





