From: Philippe Gerum <rpm@xenomai.org>
To: Pintu Kumar <pintu.ping@gmail.com>
Cc: "Xenomai@xenomai.org" <xenomai@xenomai.org>
Subject: Re: [Xenomai] Simple application for invoking rtdm driver
Date: Tue, 20 Mar 2018 14:09:54 +0100
Message-ID: <20bffd5c-9f11-3bb2-236a-11764a129059@xenomai.org>
In-Reply-To: <CAOuPNLjP+6CPOf-4uYHZCLEfAzGjQOXy32r+5TBwMrVFFH9fOw@mail.gmail.com>

On 03/20/2018 01:00 PM, Pintu Kumar wrote:
> On Tue, Mar 20, 2018 at 5:15 PM, Philippe Gerum <rpm@xenomai.org> wrote:
>> On 03/20/2018 12:31 PM, Pintu Kumar wrote:
>>> On Tue, Mar 20, 2018 at 3:02 PM, Philippe Gerum <rpm@xenomai.org> wrote:
>>>> On 03/20/2018 08:26 AM, Pintu Kumar wrote:
>>>>> On Tue, Mar 20, 2018 at 10:57 AM, Pintu Kumar <pintu.ping@gmail.com> wrote:
>>>>>> On Tue, Mar 20, 2018 at 9:03 AM, Greg Gallagher <greg@embeddedgreg.com> wrote:
>>>>>>> If you want to use open, read and write, you need to specify in the
>>>>>>> Makefile that you are using the POSIX skin.  You need something like
>>>>>>> this in your Makefile:
>>>>>>>
>>>>>>> XENO_CONFIG := /usr/xenomai/bin/xeno-config
>>>>>>> CFLAGS := $(shell $(XENO_CONFIG) --posix --cflags)
>>>>>>> LDFLAGS := $(shell  $(XENO_CONFIG) --posix --ldflags)
>>>>>>>
>>>>>>
>>>>>> Oh yes, I forgot to mention that it is working with the POSIX skin.
>>>>>>
>>>>>> But I wanted to use the native API only, so I removed the POSIX skin
>>>>>> from the Makefile.
>>>>>>
>>>>>> For the native API, I am using rt_dev_{open, read, write}. Is this the
>>>>>> valid API for Xenomai 3.0?
>>>>>> Or is there something else?
>>>>>> Is there any reference?
>>>>>>
>>>>>
>>>>> Dear Greg,
>>>>>
>>>>> In my sample, I am just copying a string between user space and the
>>>>> kernel and printing it.
>>>>> For the normal driver, I get read/write latencies like this:
>>>>> write latency: 2.247 us
>>>>> read latency: 2.202 us
>>>>>
>>>>> For the Xenomai 3.0 RTDM driver, using rt_dev_{open, read, write},
>>>>> I get latencies like this:
>>>>> write latency: 7.668 us
>>>>> read latency: 5.558 us
>>>>>
>>>>> My concern is: why is the latency higher in the RTDM case?
>>>>> This is on an x86-64 machine.
>>>>>
>>>>
>>>> Did you stress your machine while your test was running? If not, you
>>>> were not measuring worst-case latency; you were measuring execution
>>>> time, which is different. If you want to actually measure latency for
>>>> real-time usage, you need to run your tests under a significant stress
>>>> load. Under such load, the RTDM version should perform reliably below a
>>>> reasonable latency limit, while the "normal" version will experience
>>>> jitter beyond that limit.
>>>>
>>>> A trivial stress load may be as simple as running a dd loop copying
>>>> 128MB blocks from /dev/zero to /dev/null in the background; you may
>>>> also add a kernel compilation keeping all CPUs busy.
>>>>
>>>
>>> OK, I tried both options. But the normal driver latency is still much lower.
>>> In fact, with a kernel build running in another terminal, the RTDM
>>> latency shoots much higher.
>>> Normal Kernel
>>> --------------------
>>> write latency: 3.084 us
>>> read latency: 3.186 us
>>>
>>> RTDM Kernel (native)
>>> ---------------------------------
>>> write latency: 12.676 us
>>> read latency: 9.858 us
>>>
>>> RTDM Kernel (posix)
>>> ---------------------------------
>>> write latency: 12.907 us
>>> read latency: 8.699 us
>>>
>>> At the beginning of the kernel build I even observed RTDM (native)
>>> going as high as:
>>> write latency: 4061.266 us
>>> read latency: 3947.836 us
>>>
>>> ---------------------------------
>>> As a quick reference, this is the snippet for the RTDM write method.
>>>
>>> --------------------------------
>>> static ssize_t rtdm_write(struct rtdm_fd *fd, const void __user *buf,
>>>                           size_t len)
>>> {
>>>         struct dummy_context *context;
>>>         int ret;
>>>
>>>         context = rtdm_fd_to_private(fd);
>>>
>>>         if (len > 4096)
>>>                 return -EINVAL;
>>>
>>>         memset(context->buffer, 0, 4096);
>>>         ret = rtdm_safe_copy_from_user(fd, context->buffer, buf, len);
>>>         if (ret)
>>>                 return ret;
>>>         rtdm_printk("write done\n");
>>>
>>>         return len;
>>> }
>>>
>>> The normal driver write is almost the same.
>>>
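For comparison, here is a minimal sketch of what such a plain character-device
write handler typically looks like; the context layout, the 4096-byte buffer
and the use of filp->private_data are assumptions mirroring the RTDM snippet
above, not taken from the original driver:

static ssize_t dummy_write(struct file *filp, const char __user *buf,
                           size_t len, loff_t *ppos)
{
        struct dummy_context *context = filp->private_data;

        if (len > 4096)
                return -EINVAL;

        memset(context->buffer, 0, 4096);
        if (copy_from_user(context->buffer, buf, len))
                return -EFAULT;
        printk("write done\n");

        return len;
}
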
>>> On the application side, I just invoke it like this:
>>>         t1 = rt_timer_read();
>>>         ret = rt_dev_write(fd, msg, len);
>>>         t2 = rt_timer_read();
>>>
>>> Is there anything wrong on the RTDM side?
>>> --------------------------------
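Presumably the latency figures above are then derived from the difference of
the two timestamps. Assuming the default nanosecond clock resolution of the
Alchemy skin, rt_timer_read() returns nanoseconds, so with t1 and t2 declared
as RTIME the conversion would simply be:

        printf("write latency: %.3f us\n", (t2 - t1) / 1000.0);
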
>>>
>>>> Besides, you need to make sure to disable I-pipe and Cobalt debug
>>>> options, particularly CONFIG_IPIPE_TRACE and
>>>> CONFIG_XENO_OPT_DEBUG_LOCKING when running the RTDM case.
>>>>
>>>
>>> Yes, these debug options are already disabled.
>>>
>>>>>
>>>>> Latency is a little better when using only the POSIX skin:
>>>>> write latency: 3.587 us
>>>>> read latency: 3.392 us
>>>>>
>>>>
>>>> This does not make much sense; see the excerpt from
>>>> include/trank/rtdm/rtdm.h below, which simply forwards the inline
>>>> rt_dev_write() call to Cobalt's POSIX call [__wrap_]write() from
>>>> lib/cobalt/rtdm.c:
>>>>
>>>
>>> OK sorry, there was a mistake in the POSIX latency values.
>>> I forgot to switch to the RTDM driver from the normal driver.
>>> With the POSIX skin, using exactly the same application as for the
>>> normal driver, the latency figures were almost the same as with the
>>> native skin:
>>> write latency: 7.044 us
>>> read latency: 6.786 us
>>>
>>>
>>>> #define rt_dev_call(__call, __args...)  \
>>>> ({                                      \
>>>>         int __ret;                      \
>>>>         __ret = __RT(__call(__args));   \
>>>>         __ret < 0 ? -errno : __ret;     \
>>>> })
>>>>
>>>> static inline ssize_t rt_dev_write(int fd, const void *buf, size_t len)
>>>> {
>>>>         return rt_dev_call(write, fd, buf, len);
>>>> }
>>>>
>>>> The way you measure the elapsed time may affect the measurement:
>>>> libalchemy's rt_timer_read() is definitely slower than libcobalt's
>>>> clock_gettime().
>>>
>>> For the normal kernel driver (and the RTDM driver with the POSIX skin)
>>> application, I am using clock_gettime().
>>> For the Xenomai RTDM driver with the native skin application, I am using
>>> rt_timer_read().
>>>
>>>
>>>>
>>>> The POSIX skin is generally faster than the Alchemy API, because it
>>>> implements wrappers around the corresponding Cobalt system calls (i.e.
>>>> libcobalt is Xenomai's libc equivalent). Alchemy has to traverse
>>>> libcopperplate before the actual syscalls can be issued by libcobalt,
>>>> which it depends on, because libalchemy needs the copperplate interface
>>>> layer to shield itself from Cobalt/Mercury differences.
>>>>
>>>
>>> Actually, in my previous experience with a simple thread application,
>>> rt_timer_read() with the native skin gave better latency than the POSIX
>>> skin with the clock API.
>>>
>>
>> This behavior does not make much sense, simply looking at the library
>> code: rt_timer_read() may be considered a superset of libcobalt's
>> clock_gettime().
>>
>> This could be a hint that you might not be testing with Cobalt's POSIX
>> API. You may want to check by running "nm" on your executable, verifying
>> that the __wrap_* calls are listed (e.g. __wrap_clock_gettime instead of
>> clock_gettime).
>>
> 
> Yes, the wrap calls are listed in the symbol table.
> 
> posix# nm -a my_app | grep clock
>                  U __wrap_clock_gettime
> 
> 

Then you may suspect an SMI problem, or less likely a hyperthreading
issue, given the magnitude of the latency figures. You may want to have a
look at this thread [1].
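
One way to confirm or rule out SMIs on Intel hardware is to read the
MSR_SMI_COUNT register (0x34) before and after a test run and check that it
does not increase. A minimal sketch, assuming an Intel CPU that implements
this counter and the msr kernel module loaded (modprobe msr), run as root:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_SMI_COUNT 0x34

int main(void)
{
        uint64_t count;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0) {
                perror("open /dev/cpu/0/msr");
                return 1;
        }
        /* The MSR address is passed as the file offset. */
        if (pread(fd, &count, sizeof(count), MSR_SMI_COUNT) != sizeof(count)) {
                perror("pread");
                return 1;
        }
        printf("SMI count on CPU0: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
}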

If this is an SMI issue, you should be able to see it on a regular kernel
as well, provided the testing protocol is right and actually exercises
the same thing for an equivalent period of time on both ends
(Xenomai / native).
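
For reference, a measurement loop along the following lines keeps the
protocol identical on both ends: the same plain write() and clock_gettime()
calls, many iterations under stress, and the worst case retained. The device
node name, priority and iteration count below are arbitrary assumptions;
when built with the Cobalt POSIX skin the calls resolve to the __wrap_*
libcobalt versions, and when built without it the same code exercises the
regular driver:

#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define LOOPS 100000

static long long ns_diff(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) * 1000000000LL
                + (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
        struct sched_param p = { .sched_priority = 50 };
        const char msg[] = "hello";
        struct timespec t1, t2;
        long long d, min = -1, max = 0, sum = 0;
        int i, fd;

        /* With the Cobalt POSIX skin this also attaches the thread to the
         * real-time core; on a plain kernel it merely sets SCHED_FIFO. */
        pthread_setschedparam(pthread_self(), SCHED_FIFO, &p);

        fd = open("/dev/mydrv", O_WRONLY);   /* example node name */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        for (i = 0; i < LOOPS; i++) {
                clock_gettime(CLOCK_MONOTONIC, &t1);
                if (write(fd, msg, sizeof(msg)) < 0)
                        perror("write");
                clock_gettime(CLOCK_MONOTONIC, &t2);
                d = ns_diff(&t1, &t2);
                sum += d;
                if (min < 0 || d < min)
                        min = d;
                if (d > max)
                        max = d;
        }

        printf("write: min %.3f us, avg %.3f us, max %.3f us\n",
               min / 1000.0, (double)sum / LOOPS / 1000.0, max / 1000.0);
        close(fd);
        return 0;
}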

Generally speaking, we don't know which configuration is under test on
your end: which I-pipe patch you have been using, which kernel release,
which Xenomai release (assuming you are not using the obsolete 3.0.0
original release). Actually, we don't know much about your hardware either
(the Intel micro-architecture is not the defining factor when it comes to
latency issues, the SoC may be, the BIOS usually is).

AFAIU, the symptoms you are referring to are either SMI-related, or might
be related to some GPU driver issuing costly instructions that badly
affect the latency (e.g. wbinvd, although a 4 ms latency spike is
completely insane and does not fit the typical footprint of those insns),
or might denote a massive I-pipe/Xenomai bug.

The latter is always possible, but not the most likely at the moment, as
this kind of bug tends to be noticed by many people, and nobody has
reported it so far with the stable Xenomai release and an official I-pipe
patch for x86, or any other architecture.

First, I would recommend reading [2] and then providing the missing
information.

Then, I would make 100% sure that your SoC is SMI-free and that HT is
disabled in the BIOS, running a vanilla kernel: no I-pipe, no Xenomai,
just a plain regular kernel.

[1] http://xenomai.org/pipermail/xenomai/2018-February/038393.html
[2] http://xenomai.org/asking-for-help/

-- 
Philippe.


Thread overview: 18+ messages
2018-03-20  1:42 [Xenomai] Simple application for invoking rtdm driver Pintu Kumar
2018-03-20  3:33 ` Greg Gallagher
2018-03-20  5:27   ` Pintu Kumar
2018-03-20  7:26     ` Pintu Kumar
2018-03-20  9:32       ` Philippe Gerum
2018-03-20 11:31         ` Pintu Kumar
2018-03-20 11:37           ` Philippe Gerum
2018-03-20 11:45           ` Philippe Gerum
2018-03-20 12:00             ` Pintu Kumar
2018-03-20 13:09               ` Philippe Gerum [this message]
2018-03-23 12:40                 ` Pintu Kumar
2018-03-25 12:09                   ` Philippe Gerum
2018-03-26 13:12                     ` Pintu Kumar
2018-03-26 15:09                       ` Philippe Gerum
2018-03-27 12:09                         ` Pintu Kumar
2018-03-27 13:05                           ` Philippe Gerum
2018-04-02 13:48                             ` Pintu Kumar
2018-04-03 10:44                               ` Pintu Kumar
