* RTnet sendmmsg and ENOBUFS
@ 2019-11-13 15:10 Lange Norbert
  2019-11-13 17:39 ` Jan Kiszka
  0 siblings, 1 reply; 11+ messages in thread
From: Lange Norbert @ 2019-11-13 15:10 UTC (permalink / raw)
  To: Xenomai (xenomai@xenomai.org)

Hello,

For one of our applications, we (unfortunately) have a single Ethernet connection for both realtime and non-realtime traffic.

We solve this by sending timeslices with RT traffic first, then filling up the remaining space. When stressing the limits (quite possibly beyond them, if accounting for bugs),
the sendmmsg call over a raw socket returns ENOBUFS (even with a single small packet).
I was expecting this call to just block until the resources are available.

Timeslices are 1 ms, so that could be around 12 KByte total, or ~190 60-byte packets (theoretical maximum).

What variables are involved (what are the Xenomai buffer limits, and are they shared or per interface), and what choices do I have? (See the sketch after this list.)

- I could send the packets nonblocking and wait, or drop the remainder myself
- I could handle ENOBUFS the same way as EAGAIN (is there actually any difference?)
- I could raise the number of internal buffers somehow
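
For illustration, the kind of send loop I have in mind for the first two options, as a sketch only (whether ENOBUFS may really be treated like EAGAIN here is exactly my question):

/* Sketch: queue as many prepared packets as fit, nonblocking, and let the
 * caller drop the remainder. Treats ENOBUFS like EAGAIN. */
#include <errno.h>
#include <sys/socket.h>

static int send_slice(int fd, struct mmsghdr *msgs, unsigned int n)
{
	unsigned int done = 0;
	int ret;

	while (done < n) {
		ret = sendmmsg(fd, msgs + done, n - done, MSG_DONTWAIT);
		if (ret < 0) {
			if (errno == EAGAIN || errno == ENOBUFS)
				break;		/* out of buffers: caller drops the rest */
			return -errno;		/* a real error */
		}
		done += ret;
	}
	return done;	/* number of packets actually queued */
}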

Also, while stress testing I get these messages:

[ 5572.044934] hard_start_xmit returned 16
[ 5572.054989] hard_start_xmit returned 16
[ 5572.064007] hard_start_xmit returned 16
[ 5572.067893] hard_start_xmit returned 16
[ 5572.071739] hard_start_xmit returned 16
[ 5572.075586] hard_start_xmit returned 16
[ 5575.096116] hard_start_xmit returned 16
[ 5579.377038] hard_start_xmit returned 16

Kind regards, Norbert

* Re: RTnet sendmmsg and ENOBUFS
  2019-11-13 15:10 RTnet sendmmsg and ENOBUFS Lange Norbert
@ 2019-11-13 17:39 ` Jan Kiszka
  2019-11-13 17:53   ` Lange Norbert
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Kiszka @ 2019-11-13 17:39 UTC (permalink / raw)
  To: Lange Norbert, Xenomai (xenomai@xenomai.org)

On 13.11.19 16:10, Lange Norbert via Xenomai wrote:
> Hello,
> 
> For one of our applications, we (unfortunately) have a single Ethernet connection for both realtime and non-realtime traffic.
> 
> We solve this by sending timeslices with RT traffic first, then filling up the remaining space. When stressing the limits (quite possibly beyond them, if accounting for bugs),
> the sendmmsg call over a raw socket returns ENOBUFS (even with a single small packet).
> I was expecting this call to just block until the resources are available.

Blocking would mean that the sites which make buffers available again
would have to signal this. The original design idea was to avoid such
overhead and rather rely on the applications to schedule their
submissions properly and preallocate resources accordingly.

> 
> Timeslices are 1 ms, so that could be around 12 KByte total, or ~190 60-byte packets (theoretical maximum).
> 
> What variables are involved (what are the Xenomai buffer limits, and are they shared or per interface), and what choices do I have?
> 
> - I could send the packets nonblocking and wait, or drop the remainder myself
> - I could handle ENOBUFS the same way as EAGAIN (is there actually any difference?)
> - I could raise the number of internal buffers somehow

Check kernel/drivers/net/doc/README.pools
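
For the last point, the per-socket pool can also be resized at runtime via the
pool ioctls; a rough sketch (check rtnet.h in your tree for the exact
definitions and semantics):

#include <sys/ioctl.h>
#include <rtnet.h>	/* RTNET_RTIOC_EXTPOOL / RTNET_RTIOC_SHRPOOL */

/* Sketch: grow the socket's rtskb pool beyond the default. The ioctl is
 * expected to return the number of rtskbs actually added. */
static int extend_socket_pool(int fd, unsigned int add_rtskbs)
{
	return ioctl(fd, RTNET_RTIOC_EXTPOOL, &add_rtskbs);
}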

> 
> Also, while stress testing I get these messages:
> 
> [ 5572.044934] hard_start_xmit returned 16
> [ 5572.054989] hard_start_xmit returned 16
> [ 5572.064007] hard_start_xmit returned 16
> [ 5572.067893] hard_start_xmit returned 16
> [ 5572.071739] hard_start_xmit returned 16
> [ 5572.075586] hard_start_xmit returned 16
> [ 5575.096116] hard_start_xmit returned 16
> [ 5579.377038] hard_start_xmit returned 16

This likely comes from NETDEV_TX_BUSY signaled by the driver. Check the
driver you use for the reasons; they may include "I don't have buffers left".

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



* RE: RTnet sendmmsg and ENOBUFS
  2019-11-13 17:39 ` Jan Kiszka
@ 2019-11-13 17:53   ` Lange Norbert
  2019-11-14 17:55     ` Lange Norbert
  0 siblings, 1 reply; 11+ messages in thread
From: Lange Norbert @ 2019-11-13 17:53 UTC (permalink / raw)
  To: Jan Kiszka, Xenomai (xenomai@xenomai.org)



> On 13.11.19 16:10, Lange Norbert via Xenomai wrote:
> > Hello,
> >
> > For one of our applications, we (unfortunately) have a single Ethernet
> > connection for both realtime and non-realtime traffic.
> >
> > We solve this by sending timeslices with RT traffic first, then filling up
> > the remaining space. When stressing the limits (quite possibly beyond them,
> > if accounting for bugs), the sendmmsg call over a raw socket returns ENOBUFS
> > (even with a single small packet).
> > I was expecting this call to just block until the resources are available.
>
> Blocking would mean that the sites which make buffers available again would
> have to signal this. The original design idea was to avoid such overhead and
> rather rely on the applications to schedule their submissions properly and
> preallocate resources accordingly.

OK.
In other words, this is the same behaviour as using MSG_DONTWAIT
(with a different errno value).

>
> >
> > Timeslices are 1 ms, so that could be around 12 KByte total, or ~190
> > 60-byte packets (theoretical maximum).
> >
> > What variables are involved (what are the Xenomai buffer limits, and are
> > they shared or per interface), and what choices do I have?
> >
> > - I could send the packets nonblocking and wait, or drop the remainder
> > myself
> > - I could handle ENOBUFS the same way as EAGAIN (is there actually any
> > difference?)
> > - I could raise the number of internal buffers somehow
>
> Check kernel/drivers/net/doc/README.pools
>
> >
> > Also, while stress testing I get these messages:
> >
> > [ 5572.044934] hard_start_xmit returned 16
> > [ 5572.054989] hard_start_xmit returned 16
> > [ 5572.064007] hard_start_xmit returned 16
> > [ 5572.067893] hard_start_xmit returned 16
> > [ 5572.071739] hard_start_xmit returned 16
> > [ 5572.075586] hard_start_xmit returned 16
> > [ 5575.096116] hard_start_xmit returned 16
> > [ 5579.377038] hard_start_xmit returned 16
>
> This likely comes from NETDEV_TX_BUSY signaled by the driver. Check the
> driver you use for the reasons; they may include "I don't have buffers left".

Yes, it does. I was afraid this would indicate some leaked buffers.

Norbert

* RE: RTnet sendmmsg and ENOBUFS
  2019-11-13 17:53   ` Lange Norbert
@ 2019-11-14 17:55     ` Lange Norbert
  2019-11-14 18:18       ` Jan Kiszka
  0 siblings, 1 reply; 11+ messages in thread
From: Lange Norbert @ 2019-11-14 17:55 UTC (permalink / raw)
  To: Jan Kiszka, Xenomai (xenomai@xenomai.org)

So, for my setup, socket_rtskbs is 16; the rt_igp driver's rtskbs are 256 TX + 256 RX.

As said, our software prepares packets before a timeslice and aims to minimize system calls and interrupts;
packets are sent over raw RT sockets.

If I understand __rtdm_fd_sendmmsg and rt_packet_sendmsg correctly,
sendmsg will pick one socket rtskb, copy data from userspace, and
then pass this rtskb to rtdev_xmit.
I don't see how a free buffer gets passed back, like README.pools describes;
I guess rtskb_acquire should somehow do this.

So in short, I am using only one socket rtskb temporarily, as the function passes
the buffer down to the rtdev (rt_igp driver)?
I suppose the receive path works similarly.


Now if I wanted to send nonblocking, i.e. as many packets as possible,
exhausting the rtskbs, then I would expect an EAGAIN/EWOULDBLOCK error and to get
back the number of successfully queued packets (so I could drop them and send the remainder later).

According to the code in __rtdm_fd_sendmmsg, that's not what happens: ENOBUFS is returned instead,
and the number of sent packets is lost forever.

if (datagrams > 0 && (ret == 0 || ret == -EWOULDBLOCK)) {
	/* NOTE: SO_ERROR should be honored for other errors. */
	rtdm_fd_put(fd);
	return datagrams;
}

IMHO this condition would need to be added:
((flags & MSG_DONTWAIT) && ret == -ENOBUFS)
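
Spelled out in context, the check would then look roughly like this (untested sketch):

if (datagrams > 0 &&
    (ret == 0 || ret == -EWOULDBLOCK ||
     ((flags & MSG_DONTWAIT) && ret == -ENOBUFS))) {
	/* NOTE: SO_ERROR should be honored for other errors. */
	rtdm_fd_put(fd);
	return datagrams;
}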

(recvmmsg possibly needs the same, I haven't checked yet.)

Thanks for the help,
Norbert


* Re: RTnet sendmmsg and ENOBUFS
  2019-11-14 17:55     ` Lange Norbert
@ 2019-11-14 18:18       ` Jan Kiszka
  2019-11-14 18:39         ` Jan Kiszka
  2019-11-15 10:10         ` Lange Norbert
  0 siblings, 2 replies; 11+ messages in thread
From: Jan Kiszka @ 2019-11-14 18:18 UTC (permalink / raw)
  To: Lange Norbert, Xenomai (xenomai@xenomai.org)

On 14.11.19 18:55, Lange Norbert wrote:
> So, for my setup, socket_rtskbs is 16; the rt_igp driver's rtskbs are 256 TX + 256 RX.
> 
> As said, our software prepares packets before a timeslice and aims to minimize system calls and interrupts;
> packets are sent over raw RT sockets.
> 
> If I understand __rtdm_fd_sendmmsg and rt_packet_sendmsg correctly,
> sendmsg will pick one socket rtskb, copy data from userspace, and
> then pass this rtskb to rtdev_xmit.
> I don't see how a free buffer gets passed back, like README.pools describes;
> I guess rtskb_acquire should somehow do this.

A buffer returns (not necessarily the same one, though) once the packet
has truly been sent and the driver has run its TX cleanup. If you submit
many packets as one chunk, they may be tied up for a while.

> 
> So in short, I am using only one socket rtskb temporarily, as the function passes
> the buffer down to the rtdev (rt_igp driver)?

You are using as many rtskbs as it takes to forward the data you passed
down to the NIC as packets, and for as long as the NIC needs to DMA that
data to the transmitter.

> I suppose the receive path works similarly.
> 

RX works by accepting a global-pool buffer (this is where incoming
packets first end up) filled with data, in exchange for an empty rtskb
from the socket pool. That filled rtskb is returned to the socket pool
once the data has been transferred to userspace.
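
Conceptually, the exchange amounts to something like the following (a
paraphrased sketch with hypothetical helper names; see rtskb_acquire in the
RTnet sources for the real thing):

/* Take a spare rtskb from the receiver's pool and hand it to the current
 * owner's pool, so the filled rtskb can change ownership while every pool
 * keeps its size constant. Helper names are made up for illustration. */
static int exchange_rtskb(struct rtskb *filled, struct rtskb_pool *recv_pool)
{
	struct rtskb *spare = pool_dequeue(recv_pool);	/* hypothetical */

	if (!spare)
		return -ENOBUFS;	/* recipient has no free buffer: drop */

	pool_enqueue(filled->pool, spare);	/* hypothetical */
	filled->pool = recv_pool;		/* ownership transferred */
	return 0;
}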

> 
> Now if I wanted to send nonblocking, i.e. as many packets as possible,
> exhausting the rtskbs, then I would expect an EAGAIN/EWOULDBLOCK error and to get
> back the number of successfully queued packets (so I could drop them and send the remainder later).

I don't recall why anymore, but we decided to use a different error code
in RTnet for this back then, possibly to differentiate this "should
never ever happen in a deterministic network" case from other errors.

> 
> According to the code in __rtdm_fd_sendmmsg, that's not what happens: ENOBUFS is returned instead,
> and the number of sent packets is lost forever.
> 
> if (datagrams > 0 && (ret == 0 || ret == -EWOULDBLOCK)) {
> 	/* NOTE: SO_ERROR should be honored for other errors. */
> 	rtdm_fd_put(fd);
> 	return datagrams;
> }
> 
> IMHO this condition would need to be added:
> ((flags & MSG_DONTWAIT) && ret == -ENOBUFS)
> 
> (recvmmsg possibly needs the same, I haven't checked yet.)

sendmmsg was only added to Xenomai 3.1. There might be room for 
improvements, if not corrections. So, if we do not return the number of 
sent messages or signal an error where we should not (this is how I read 
the man page currently), this needs a patch...

Jan


-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



* Re: RTnet sendmmsg and ENOBUFS
  2019-11-14 18:18       ` Jan Kiszka
@ 2019-11-14 18:39         ` Jan Kiszka
  2019-11-15 10:10         ` Lange Norbert
  1 sibling, 0 replies; 11+ messages in thread
From: Jan Kiszka @ 2019-11-14 18:39 UTC (permalink / raw)
  To: Lange Norbert, Xenomai (xenomai@xenomai.org)

On 14.11.19 19:18, Jan Kiszka wrote:
> On 14.11.19 18:55, Lange Norbert wrote:
>> According to the code in __rtdm_fd_sendmmsg, that's not what happens:
>> ENOBUFS is returned instead, and the number of sent packets is lost forever.
>>
>> if (datagrams > 0 && (ret == 0 || ret == -EWOULDBLOCK)) {
>> 	/* NOTE: SO_ERROR should be honored for other errors. */
>> 	rtdm_fd_put(fd);
>> 	return datagrams;
>> }
>>
>> IMHO this condition would need to be added:
>> ((flags & MSG_DONTWAIT) && ret == -ENOBUFS)
>>
>> (recvmmsg possibly needs the same, I haven't checked yet.)
> 
> sendmmsg was only added to Xenomai 3.1. There might be room for 
> improvements, if not corrections. So, if we do not return the number of 
> sent messages or signal an error where we should not (this is how I read 
> the man page currently), this needs a patch...

The implementation of sendmmsg is wrong when compared to the man
page and the reference implementation in the kernel (as this call is Linux-only):

/* We only return an error if no datagrams were able to be sent */

says the kernel, for example, and does

         if (datagrams != 0)
                 return datagrams;

It also fails to trace on certain exit paths. I will write a patch.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



* RE: RTnet sendmmsg and ENOBUFS
  2019-11-14 18:18       ` Jan Kiszka
  2019-11-14 18:39         ` Jan Kiszka
@ 2019-11-15 10:10         ` Lange Norbert
  2019-11-15 11:30           ` Jan Kiszka
  1 sibling, 1 reply; 11+ messages in thread
From: Lange Norbert @ 2019-11-15 10:10 UTC (permalink / raw)
  To: Jan Kiszka, Xenomai (xenomai@xenomai.org)



> On 14.11.19 18:55, Lange Norbert wrote:
> > So, for my setup, socket_rtskbs is 16; the rt_igp driver's rtskbs are
> > 256 TX + 256 RX.
> >
> > As said, our software prepares packets before a timeslice and aims to
> > minimize system calls and interrupts; packets are sent over raw RT sockets.
> >
> > If I understand __rtdm_fd_sendmmsg and rt_packet_sendmsg correctly,
> > sendmsg will pick one socket rtskb, copy data from userspace, and
> > then pass this rtskb to rtdev_xmit.
> > I don't see how a free buffer gets passed back, like README.pools
> > describes; I guess rtskb_acquire should somehow do this.
>
> A buffer returns (not necessarily the same one, though) once the packet
> has truly been sent and the driver has run its TX cleanup. If you submit
> many packets as one chunk, they may be tied up for a while.
>
> >
> > So in short, I am using only one socket rtskb temporarily, as the
> > function passes the buffer down to the rtdev (rt_igp driver)?
>
> You are using as many rtskbs as it takes to forward the data you passed
> down to the NIC as packets, and for as long as the NIC needs to DMA that
> data to the transmitter.

I was talking about the pools. The socket pool has 16 rtskbs, the device pool has 512.
As I understand it, __rtdm_fd_sendmmsg picks one rtskb from the socket pool,
then exchanges this buffer with a free one from the device pool (rtskb_acquire?).
So sendmmsg requires a single "slot" from its pool, then gets that "slot" back
when passing the rtskb down to the device.

Or in other words, I could successfully sendmmsg 100 messages, as long as there is one free
slot in the socket pool and the device pool has enough free slots.

>
> > I suppose the receive path works similarly.
> >
>
> RX works by accepting a global-pool buffer (this is where incoming packets
> first end up) filled with data, in exchange for an empty rtskb from the socket
> pool. That filled rtskb is returned to the socket pool once the data has been
> transferred to userspace.

I suppose all pools can exchange rtskbs, so this is just a matter of which pool size is limiting.
If I want to recvmmsg 100 messages, will I get at most 16 (the socket pool size),
or will a single slot be used and exchanged with the driver's pool?
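
(For reference, the receive call I have in mind is a nonblocking drain, roughly
this sketch:)

#include <errno.h>
#include <sys/socket.h>

/* Sketch: fetch whatever has already arrived, without blocking. */
static int recv_slice(int fd, struct mmsghdr *msgs, unsigned int n)
{
	int ret = recvmmsg(fd, msgs, n, MSG_DONTWAIT, NULL);

	if (ret < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
		return -errno;		/* a real error */
	return ret < 0 ? 0 : ret;	/* number of messages received */
}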

>
> >
> > Now if I wanted to send nonblocking, i.e. as many packets as possible,
> > exhausting the rtskbs, then I would expect an EAGAIN/EWOULDBLOCK error
> > and to get back the number of successfully queued packets (so I could
> > drop them and send the remainder later).
>
> I don't recall why anymore, but we decided to use a different error code in
> RTnet for this back then, possibly to differentiate this "should never ever
> happen in a deterministic network" case from other errors.

Yes, I guess that makes sense in a lot of use cases. Mine is a bit different:
I use a service that just tunnels a TUN device to an RT packet socket, and once someone
connects to an IDDP socket for RT traffic, timeslices are used.

So the network is not always in "deterministic mode".

>
> >
> > According to the code in __rtdm_fd_sendmmsg, that's not what happens:
> > ENOBUFS is returned instead, and the number of sent packets is lost forever.
> >
> > if (datagrams > 0 && (ret == 0 || ret == -EWOULDBLOCK)) {
> > 	/* NOTE: SO_ERROR should be honored for other errors. */
> > 	rtdm_fd_put(fd);
> > 	return datagrams;
> > }
> >
> > IMHO this condition would need to be added:
> > ((flags & MSG_DONTWAIT) && ret == -ENOBUFS)
> >
> > (recvmmsg possibly needs the same, I haven't checked yet.)
>
> sendmmsg was only added to Xenomai 3.1. There might be room for
> improvements, if not corrections. So, if we do not return the number of sent
> messages or signal an error where we should not (this is how I read the man
> page currently), this needs a patch...

Yes, it seems you need to drop either the number of transmitted messages (unlike the Linux call)
or the error condition.
If you can pass both out of the kernel function, perhaps you could still set errno in case of a (real) error?
(I really don't need it, but it's worth considering.)

Thanks,
Norbert

* Re: RTnet sendmmsg and ENOBUFS
  2019-11-15 10:10         ` Lange Norbert
@ 2019-11-15 11:30           ` Jan Kiszka
  2019-11-15 12:37             ` Lange Norbert
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Kiszka @ 2019-11-15 11:30 UTC (permalink / raw)
  To: Lange Norbert, Xenomai (xenomai@xenomai.org)

On 15.11.19 11:10, Lange Norbert via Xenomai wrote:
> 
> 
>> On 14.11.19 18:55, Lange Norbert wrote:
>>> So, for my setup, socket_rtskbs is 16; the rt_igp driver's rtskbs are
>>> 256 TX + 256 RX.
>>>
>>> As said, our software prepares packets before a timeslice and aims to
>>> minimize system calls and interrupts; packets are sent over raw RT sockets.
>>>
>>> If I understand __rtdm_fd_sendmmsg and rt_packet_sendmsg correctly,
>>> sendmsg will pick one socket rtskb, copy data from userspace, and
>>> then pass this rtskb to rtdev_xmit.
>>> I don't see how a free buffer gets passed back, like README.pools
>>> describes; I guess rtskb_acquire should somehow do this.
>>
>> A buffer returns (not necessarily the same one, though) once the packet
>> has truly been sent and the driver has run its TX cleanup. If you submit
>> many packets as one chunk, they may be tied up for a while.
>>
>>>
>>> So in short, I am using only one socket rtskb temporarily, as the
>>> function passes the buffer down to the rtdev (rt_igp driver)?
>>
>> You are using as many rtskbs as it takes to forward the data you passed
>> down to the NIC as packets, and for as long as the NIC needs to DMA that
>> data to the transmitter.
> 
> I was talking about the pools. The socket pool has 16 rtskbs, the device pool has 512.
> As I understand it, __rtdm_fd_sendmmsg picks one rtskb from the socket pool,
> then exchanges this buffer with a free one from the device pool (rtskb_acquire?).
> So sendmmsg requires a single "slot" from its pool, then gets that "slot" back
> when passing the rtskb down to the device.
> 
> Or in other words, I could successfully sendmmsg 100 messages, as long as there is one free
> slot in the socket pool and the device pool has enough free slots.

Yep.

> 
>>
>>> I suppose the receive path works similarly.
>>>
>>
>> RX works by accepting a global-pool buffer (this is where incoming packets
>> first end up) filled with data, in exchange for an empty rtskb from the socket
>> pool. That filled rtskb is returned to the socket pool once the data has been
>> transferred to userspace.
> 
> I suppose all pools can exchange rtskbs, so this is just a matter of which pool size is limiting.
> If I want to recvmmsg 100 messages, will I get at most 16 (the socket pool size),
> or will a single slot be used and exchanged with the driver's pool?

One packet, one rtskb. So you have both the device and the socket pool 
as limiting factors.

> 
>>
>>>
>>> Now if I wanted to send nonblocking, i.e. as many packets as possible,
>>> exhausting the rtskbs, then I would expect an EAGAIN/EWOULDBLOCK error
>>> and to get back the number of successfully queued packets (so I could
>>> drop them and send the remainder later).
>>
>> I don't recall why anymore, but we decided to use a different error code in
>> RTnet for this back then, possibly to differentiate this "should never ever
>> happen in a deterministic network" case from other errors.
> 
> Yes, I guess that makes sense in a lot of use cases. Mine is a bit different:
> I use a service that just tunnels a TUN device to an RT packet socket, and once someone
> connects to an IDDP socket for RT traffic, timeslices are used.
> 
> So the network is not always in "deterministic mode".
> 
>>
>>>
>>> According to the code in __rtdm_fd_sendmmsg, that's not what happens:
>>> ENOBUFS is returned instead, and the number of sent packets is lost forever.
>>>
>>> if (datagrams > 0 && (ret == 0 || ret == -EWOULDBLOCK)) {
>>> 	/* NOTE: SO_ERROR should be honored for other errors. */
>>> 	rtdm_fd_put(fd);
>>> 	return datagrams;
>>> }
>>>
>>> IMHO this condition would need to be added:
>>> ((flags & MSG_DONTWAIT) && ret == -ENOBUFS)
>>>
>>> (recvmmsg possibly needs the same, I haven't checked yet.)
>>
>> sendmmsg was only added to Xenomai 3.1. There might be room for
>> improvements, if not corrections. So, if we do not return the number of sent
>> messages or signal an error where we should not (this is how I read the man
>> page currently), this needs a patch...
> 
> Yes, it seems you need to drop either the number of transmitted messages (unlike the Linux call)
> or the error condition.
> If you can pass both out of the kernel function, perhaps you could still set errno in case of a (real) error?
> (I really don't need it, but it's worth considering.)

Patches are on the list, feedback is appreciated. Just note that we try 
to follow the Linux interface.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



* RE: RTnet sendmmsg and ENOBUFS
  2019-11-15 11:30           ` Jan Kiszka
@ 2019-11-15 12:37             ` Lange Norbert
  2019-11-15 13:35               ` Jan Kiszka
  0 siblings, 1 reply; 11+ messages in thread
From: Lange Norbert @ 2019-11-15 12:37 UTC (permalink / raw)
  To: Jan Kiszka, Xenomai (xenomai@xenomai.org)



>
> >
> >>
> >>> I suppose the receive path works similarly.
> >>>
> >>
> >> RX works by accepting a global-pool buffer (this is where incoming packets
> >> first end up) filled with data, in exchange for an empty rtskb from the
> >> socket pool. That filled rtskb is returned to the socket pool once the data
> >> has been transferred to userspace.
> >
> > I suppose all pools can exchange rtskbs, so this is just a matter of which
> > pool size is limiting.
> > If I want to recvmmsg 100 messages, will I get at most 16 (the socket pool size),
> > or will a single slot be used and exchanged with the driver's pool?
>
> One packet, one rtskb. So you have both the device and the socket pool
> as limiting factors.

I guess this is different from the send path, as the device "pushes up" the rtskbs,
and the recvmmsg call then picks up the packets that are available?
(I am having some trouble following the path in the kernel sources.)

We have a legacy application that ran on an i.MX28, and we have to keep the protocols for now.
They are almost exclusively UDP/IPv4, but there is one message for distributing timestamps via an 802.2 SNAP packet
(there were some technical/implementation reasons, I can't remember now).
Currently I use an ETH_P_ALL packet socket for that reason, but looking at the code it seems beneficial to drop that:

-   there seem to be copies of the rtskb made
-   the packets are not consumed, but kicked further up the stack

What I would want is a socket that simply drains everything; the packet types in use are ETH_P_IP,
ETH_P_ARP, and whatever is necessary to send/receive those 802.2 SNAP packets.

I see ETH_P_802_EX1 used for such packets in examples, but I don't see how your stack identifies such packets.

-   I don't know which ETH_P type to use; there's ETH_P_802_2, ETH_P_802_EX1, ETH_P_SNAP, and potentially others.
-   Those constants seem to be largely missing from Xenomai's sources, so I don't know how you would test for a match.
-   Easiest for me would be an ETH_P_ALL_CONSUME type, where buffers just end up when such a socket is open (feasible? would it be accepted upstream?)

Regards, Norbert

* Re: RTnet sendmmsg and ENOBUFS
  2019-11-15 12:37             ` Lange Norbert
@ 2019-11-15 13:35               ` Jan Kiszka
  2019-11-15 15:00                 ` Lange Norbert
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Kiszka @ 2019-11-15 13:35 UTC (permalink / raw)
  To: Lange Norbert, Xenomai (xenomai@xenomai.org)

On 15.11.19 13:37, Lange Norbert wrote:
> 
> 
>>
>>>
>>>>
>>>>> I suppose the receive path works similarly.
>>>>>
>>>>
>>>> RX works by accepting a global-pool buffer (this is where incoming packets
>>>> first end up) filled with data, in exchange for an empty rtskb from the
>>>> socket pool. That filled rtskb is returned to the socket pool once the data
>>>> has been transferred to userspace.
>>>
>>> I suppose all pools can exchange rtskbs, so this is just a matter of which
>>> pool size is limiting.
>>> If I want to recvmmsg 100 messages, will I get at most 16 (the socket pool size),
>>> or will a single slot be used and exchanged with the driver's pool?
>>
>> One packet, one rtskb. So you have both the device and the socket pool
>> as limiting factors.
> 
> I guess this is different from the send path, as the device "pushes up" the rtskbs,

Actually, commit 91b3302284fd "aligned" the TX path to the RX path. But the
documents still state something else, and I now doubt that this commit was
going in the right direction.

In fact, it introduced a way for competing transmitters to starve each 
other by exhausting the now shared device pool for TX.

> and the recvmmsg call then picks up the packets that are available?
> (I am having some trouble following the path in the kernel sources.)

The point of the ownership transfer on RX is that, when receiving only 
with one queue (RTnet can't handle more in fact, though it should by 
now...), we first need to parse the packet content in order to dispatch 
it. That makes the packet first owned by the device (formerly the RX 
pool), and once the actual owner is known, it is transferred - or 
dropped if the recipient has no free buffer.

> 
> We have a legacy application that ran on an i.MX28, and we have to keep the protocols for now.
> They are almost exclusively UDP/IPv4, but there is one message for distributing timestamps via an 802.2 SNAP packet
> (there were some technical/implementation reasons, I can't remember now).
> Currently I use an ETH_P_ALL packet socket for that reason, but looking at the code it seems beneficial to drop that:
> 
> -   there seem to be copies of the rtskb made
> -   the packets are not consumed, but kicked further up the stack

ETH_P_ALL is a kind of snooping mode, hence the copies. You rather want
to register for the desired (non-IP) type(s).

> 
> What I would want is a socket that simply drains everything; the packet types in use are ETH_P_IP,
> ETH_P_ARP, and whatever is necessary to send/receive those 802.2 SNAP packets.

IP and ARP will be handled by RTnet; the other types are what you need
packet sockets for.

> 
> I see ETH_P_802_EX1 used for such packets in examples, but I don't see how your stack identifies such packets.

rt_packet_bind -> rtdev_add_pack. Dispatching is done in 
rt_stack_deliver, in rt-kernel thread context.

> 
> -   I don't know which ETH_P type to use; there's ETH_P_802_2, ETH_P_802_EX1, ETH_P_SNAP, and potentially others.
> -   Those constants seem to be largely missing from Xenomai's sources, so I don't know how you would test for a match.

You can register any value you like with AF_PACKET, just like in Linux.
Matching is not done by a switch-case, but via a list hashed over the
values.
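
A minimal userland sketch of such a subscription (the ethertype value and
SOCK_RAW vs. SOCK_DGRAM are assumptions, adjust to your setup):

#include <sys/socket.h>
#include <netpacket/packet.h>
#include <net/ethernet.h>
#include <arpa/inet.h>
#include <unistd.h>

/* Sketch: subscribe to one specific ethertype instead of ETH_P_ALL.
 * ETH_P_802_EX1 (0x88b5) would be one example value. */
static int open_rt_packet_socket(int ifindex, unsigned short ethertype)
{
	struct sockaddr_ll addr = {
		.sll_family   = AF_PACKET,
		.sll_protocol = htons(ethertype),
		.sll_ifindex  = ifindex,
	};
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ethertype));

	if (fd < 0)
		return -1;
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}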

> -   Easiest for me would be an ETH_P_ALL_CONSUME type, where buffers just end up when such a socket is open (feasible? would it be accepted upstream?)

I think you are better off with targeted subscriptions. Then all unknown
packets will simply be dropped, and much earlier at that.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



* RE: RTnet sendmmsg and ENOBUFS
  2019-11-15 13:35               ` Jan Kiszka
@ 2019-11-15 15:00                 ` Lange Norbert
  0 siblings, 0 replies; 11+ messages in thread
From: Lange Norbert @ 2019-11-15 15:00 UTC (permalink / raw)
  To: Jan Kiszka, Xenomai (xenomai@xenomai.org)



> On 15.11.19 13:37, Lange Norbert wrote:
> >
> >
> >>
> >>>
> >>>>
> >>>>> I suppose the receive path works similarly.
> >>>>>
> >>>>
> >>>> RX works by accepting a global-pool buffer (this is where incoming
> >>>> packets first end up) filled with data, in exchange for an empty
> >>>> rtskb from the socket pool. That filled rtskb is returned to the
> >>>> socket pool once the data has been transferred to userspace.
> >>>
> >>> I suppose all pools can exchange rtskbs, so this is just a matter of
> >>> which pool size is limiting.
> >>> If I want to recvmmsg 100 messages, will I get at most 16 (the socket
> >>> pool size), or will a single slot be used and exchanged with the driver's pool?
> >>
> >> One packet, one rtskb. So you have both the device and the socket
> >> pool as limiting factors.
> >
> > I guess this is different from the send path, as the device "pushes up"
> > the rtskbs,
>
> Actually, commit 91b3302284fd "aligned" the TX path to the RX path. But the
> documents still state something else, and I now doubt that this commit was
> going in the right direction.
>
> In fact, it introduced a way for competing transmitters to starve each other
> by exhausting the now shared device pool for TX.
>
> > and the recvmmsg call then picks up the packets that are available?
> > (I am having some trouble following the path in the kernel sources.)
>
> The point of the ownership transfer on RX is that, when receiving only with
> one queue (RTnet can't handle more in fact, though it should by now...), we
> first need to parse the packet content in order to dispatch it. That makes the
> packet first owned by the device (formerly the RX pool), and once the actual
> owner is known, it is transferred - or dropped if the recipient has no free
> buffer.

Right, but if I open up a packet socket with ETH_P_802_EX1, rt_stack_deliver will
try to match the incoming packets, so somewhere the Ethernet packet needs to be
dissected; is that rt_eth_type_trans?
That means the valid values for a packet socket would be ETH_P_802_2, ETH_P_802_3,
and the protocols >= 0x600 from 802.3.
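
(The classification I mean boils down to something like this sketch, paraphrased
from the usual eth_type_trans logic; check rt_eth_type_trans for RTnet's exact
variant:)

#include <linux/if_ether.h>
#include <arpa/inet.h>

/* Sketch: an ethertype field >= 0x0600 is a real protocol ID (Ethernet II);
 * anything smaller is an 802.3 length field, so such a frame could only be
 * delivered as ETH_P_802_2/ETH_P_802_3, never as e.g. ETH_P_802_EX1. */
static unsigned short classify_frame(const struct ethhdr *hdr)
{
	if (ntohs(hdr->h_proto) >= 0x0600)
		return hdr->h_proto;		/* DIX ethertype */
	return htons(ETH_P_802_2);		/* 802.3 length field: LLC */
}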

>
> >
> > We have a legacy application that ran on an i.MX28, and we have to keep
> > the protocols for now. They are almost exclusively UDP/IPv4, but there is
> > one message for distributing timestamps via an 802.2 SNAP packet
> > (there were some technical/implementation reasons, I can't remember now).
> > Currently I use an ETH_P_ALL packet socket for that reason, but looking
> > at the code it seems beneficial to drop that:
> >
> > -   there seem to be copies of the rtskb made
> > -   the packets are not consumed, but kicked further up the stack
>
> ETH_P_ALL is a kind of snooping mode, hence the copies. You rather want
> to register for the desired (non-IP) type(s).

See above; it seems the only usable types are ETH_P_802_2 and ETH_P_802_3?

> > What I would want is a socket that simply drains everything; the packet
> > types in use are ETH_P_IP, ETH_P_ARP, and whatever is necessary to
> > send/receive those 802.2 SNAP packets.
>
> IP and ARP will be handled by RTnet; the other types are what you need
> packet sockets for.

I use neither IP nor ARP from RTnet, and I don't have the choice right now.
Our time sync is via an external wire, and I have to burst out everything at that point.
This is no cleanroom design.

Further, a TFTP server is hooked into the RT connection via a TUN device,
so packets not meant for the RT application (a UDP port range) go to that device.

> > I see ETH_P_802_EX1 used for such packets in examples, but I don't see
> > how your stack identifies such packets.
>
> rt_packet_bind -> rtdev_add_pack. Dispatching is done in rt_stack_deliver,
> in rt-kernel thread context.

Yes, and I fail to see anything supported outside ETH_P_802_2, ETH_P_802_3,
and the protocols from 802.3.
Nowhere will an Ethernet packet be dissected and classified as ETH_P_802_EX1 or ETH_P_SNAP,
so what packets would end up in such a socket?

> > -   I don't know which ETH_P type to use; there's ETH_P_802_2,
> > ETH_P_802_EX1, ETH_P_SNAP, and potentially others.
> > -   Those constants seem to be largely missing from Xenomai's sources,
> > so I don't know how you would test for a match.
>
> You can register any value you like with AF_PACKET, just like in Linux.
> Matching is not done by a switch-case, but via a list hashed over the values.

That's for faster lookup; values are still compared afterwards.

>
> > -   Easiest for me would be an ETH_P_ALL_CONSUME type, where buffers just
> > end up when such a socket is open (feasible? would it be accepted upstream?)
>
> I think you are better off with targeted subscriptions. Then all unknown
> packets will simply be dropped, and much earlier at that.

Unknown packets need to be forwarded to the TUN device.
(To put it another way, I had a deep, direct queue to the network device before,
and software/protocols grew around that.)

Regards, Norbert.

end of thread, other threads:[~2019-11-15 15:00 UTC | newest]

Thread overview: 11+ messages
2019-11-13 15:10 RTnet sendmmsg and ENOBUFS Lange Norbert
2019-11-13 17:39 ` Jan Kiszka
2019-11-13 17:53   ` Lange Norbert
2019-11-14 17:55     ` Lange Norbert
2019-11-14 18:18       ` Jan Kiszka
2019-11-14 18:39         ` Jan Kiszka
2019-11-15 10:10         ` Lange Norbert
2019-11-15 11:30           ` Jan Kiszka
2019-11-15 12:37             ` Lange Norbert
2019-11-15 13:35               ` Jan Kiszka
2019-11-15 15:00                 ` Lange Norbert
