From: Russell Johnson <russell.johnson@kratosdefense.com>
To: Philippe Gerum <rpm@xenomai.org>
Cc: "xenomai@lists.linux.dev" <xenomai@lists.linux.dev>,
	Bryan Butler <Bryan.Butler@kratosdefense.com>
Subject: RE: [External] - Re: Conflicting EVL Processing Loops
Date: Thu, 2 Feb 2023 21:08:23 +0000	[thread overview]
Message-ID: <PH1P110MB1050E0B3A0465A0D0911F171E2D69@PH1P110MB1050.NAMP110.PROD.OUTLOOK.COM> (raw)
In-Reply-To: <87fscfboox.fsf@xenomai.org>


Philippe,

An update and question for you.

First, as you know, we found that the built-in heap management in Xenomai
4/EVL was causing us substantial performance problems, due to the need to
perform locking on all memory allocations. The nolock API is not an option
for us since we perform memory operations in most of our threads. We also
tried the TLSF manager, but saw similar performance issues.

The good news is that we've adapted the mimalloc memory management library,
which I believe implements something like the fast bins you mentioned in your
earlier email. The performance of mimalloc looks to be very good, and we are
able to get our dual processing loop system running within our real-time
constraints. The current implementation is still a bit "hackish", and we're
continuing to test it and clean it up. I hope to send you the specifics of
what we had to do to implement it, since I think it could be a good option
for other X4/EVL users. At a high level, mimalloc acts as a go-between, with
the Xenomai heap management as the bottom layer, replacing the low-level
sbrk() that would normally be used to obtain memory in a Linux run-time
environment.
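For illustration, the glue layer has roughly the shape sketched below. This
is not our exact code: it assumes mimalloc's arena interface
(mi_manage_os_memory(), which I believe is available in recent mimalloc
releases), and it backs the arena with a plain mmap()'d, mlock()'d region,
whereas our real version draws the region from the Xenomai/EVL heap layer
instead.

/*
 * Hypothetical sketch: hand mimalloc one fixed, pre-locked region up front,
 * so that (as long as the pool is sized generously) runtime allocations can
 * be satisfied without going back to the kernel -- no sbrk()/mmap() on the
 * hot path, hence no faults and no in-band switches there.
 */
#include <sys/mman.h>
#include <stdbool.h>
#include <mimalloc.h>

#define RT_POOL_SIZE  (256UL * 1024 * 1024)  /* sized for our worst case */

static void *rt_pool;

int rt_heap_setup(void)
{
    /* Reserve, populate and lock the backing store before going real-time. */
    rt_pool = mmap(NULL, RT_POOL_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    if (rt_pool == MAP_FAILED)
        return -1;

    if (mlock(rt_pool, RT_POOL_SIZE))
        return -1;

    /*
     * Register the region as a mimalloc arena: committed, no huge pages,
     * not assumed zeroed, no NUMA preference. mi_malloc()/mi_free() then
     * carve blocks out of this region.
     */
    if (!mi_manage_os_memory(rt_pool, RT_POOL_SIZE,
                             true,   /* is_committed */
                             false,  /* is_large */
                             false,  /* is_zero */
                             -1))    /* numa_node */
        return -1;

    return 0;
}

The point of the arrangement, for us, is that the allocator never has to
extend the heap once our threads are attached to EVL.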

One nagging problem is that we're still plagued by occasional page faults.
We have tried to prefault everything we can think of, but we're obviously
missing something. I go through each accessible section in the
/proc/self/maps file, prefaulting each one (this includes all code and data
segments). I have also added a hack to the kernel so that when the "switched
inband (fault)" notice occurs, the faulting address is printed to dmesg. So
far, all of the runtime page faults we see are in the heap section, which I
have attempted to prefault completely, even doing it multiple times during
startup, since the heap section seems to keep growing as we start up our
real-time threads.
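In case it helps to see it, the prefault pass looks roughly like this
(simplified; the real version also skips a few special regions and logs
what it touched):

/*
 * Simplified sketch of our prefault loop: walk /proc/self/maps and touch
 * one byte per page in every readable mapping. The volatile pointer keeps
 * the compiler from optimizing the touches away.
 */
#include <stdio.h>
#include <unistd.h>

static void prefault_self(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    char line[512], perms[8];
    unsigned long start, end, addr;
    long page = sysconf(_SC_PAGESIZE);

    if (!maps)
        return;

    while (fgets(line, sizeof(line), maps)) {
        if (sscanf(line, "%lx-%lx %7s", &start, &end, perms) != 3)
            continue;
        if (perms[0] != 'r')  /* skip guard/unreadable mappings */
            continue;
        for (addr = start; addr < end; addr += page) {
            volatile char *p = (volatile char *)addr;
            (void)*p;  /* read touch to fault the page in */
        }
    }
    fclose(maps);
}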

So, I'm looking at one of two possibilities:
1. My prefaulting code, which touches one memory location in each page, is
not actually doing what it is supposed to. I've declared the variables in
the prefaulting function "volatile" so that the touches don't get optimized
out, but I don't know of any way to really verify that the pages are being
mapped in and locked (the closest partial check I can think of is sketched
after this list).
2. A kernel bug, where the pages are not, in fact, being locked into memory.
We're calling "mlockall(MCL_CURRENT | MCL_FUTURE)", so even if the heap is
growing, I don't understand why any future pages are not being populated and
locked into memory from the very beginning. And the kernel should not be
unmapping any of our pages, but perhaps it is?
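
On the first possibility, the only partial check I've come up with is
mincore(2), which at least reports whether each page is resident after the
prefault pass; it says nothing about whether a page is locked, so it doesn't
fully answer the question. A rough, untested sketch:

/*
 * Rough, untested sketch: count how many pages in a range are not resident
 * after the prefault pass. mincore() only reports residency, not locking,
 * so this is a partial check at best. 'start' must be page-aligned (the
 * VMA start addresses from /proc/self/maps are).
 */
#define _DEFAULT_SOURCE   /* for mincore() */
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static long count_nonresident(void *start, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t i, npages = (len + page - 1) / page;
    unsigned char *vec = malloc(npages);  /* low bit set => page resident */
    long missing = 0;

    if (!vec)
        return -1;

    if (mincore(start, len, vec)) {
        free(vec);
        return -1;
    }

    for (i = 0; i < npages; i++)
        if (!(vec[i] & 1))
            missing++;

    free(vec);
    return missing;
}

The idea would be to run this over the heap range right after the prefault
pass, and again just before the real-time loops start, to see whether
anything has gone missing in between.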

I know this likely isn't a problem with the EVL code itself, but we're just
about out of ideas for how to find and kill it. I'm not much of a kernel
expert. If you have any ideas for how to isolate this, especially a way to
verify whether our process pages are really locked or not, they would be
greatly appreciated.



Thread overview: 9+ messages
2023-01-04 22:28 Conflicting EVL Processing Loops Russell Johnson
2023-01-05  7:49 ` Philippe Gerum
2023-01-11 15:57 ` Russell Johnson
2023-01-11 16:44   ` Russell Johnson
2023-01-11 20:33     ` Russell Johnson
2023-01-12 17:23       ` Philippe Gerum
2023-02-02 17:58         ` [External] - " Bryan Butler
2023-02-02 21:08         ` Russell Johnson [this message]
2023-02-05 17:29           ` Philippe Gerum
