From: Alex Belits <abelits@marvell.com>
To: Christoph Lameter <cl@linux.com>, Marcelo Tosatti <mtosatti@redhat.com>
Cc: "tglx@linutronix.de" <tglx@linutronix.de>,
	"pauld@redhat.com" <pauld@redhat.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"frederic@kernel.org" <frederic@kernel.org>,
	"willy@infradead.org" <willy@infradead.org>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: Re: [RFC] tentative prctl task isolation interface
Date: Fri, 15 Jan 2021 10:35:14 -0800	[thread overview]
Message-ID: <3fe6a794-a578-3564-acec-d1f4684abeee@marvell.com> (raw)
In-Reply-To: <alpine.DEB.2.22.394.2101151311440.48145@www.lameter.com>

On 1/15/21 05:24, Christoph Lameter wrote:

> ----------------------------------------------------------------------
> On Thu, 14 Jan 2021, Marcelo Tosatti wrote:
> 
>>> How does one do a oneshot flush of OS activities?
>>
>>          ret = prctl(PR_TASK_ISOLATION_REQUEST, ISOL_F_QUIESCE, 0, 0, 0);
>>          if (ret == -1) {
>>                  perror("prctl PR_TASK_ISOLATION_REQUEST");
>>                  exit(0);
>>          }
>>
>>>
>>> I.e. I have a polling loop over numerous shared and I/O devices in user
>>> space and I want to make sure that the system is quiet before I enter the
>>> loop.
>>
>> You could configure things in two ways: with syscalls allowed or not.
> 
> Well syscalls that do not cause deferred processing like getting the time
> or determining the current cpu should be ok to use.

Some of those syscalls go through the vDSO and don't enter the kernel at 
all -- nothing specific is necessary to allow them, and it would be 
pointless and difficult to prevent them.

For syscalls that do enter the kernel, it's often difficult to predict 
whether they will cause deferred processing, so I am afraid it won't be 
possible to provide a "safe" class of syscalls for this purpose without 
ending up with something minimal like reading /sys and /proc. Right now 
isolation only "allows" syscalls that exit isolation.

It may be possible for the kernel to set up a filter (allowing a few 
safe things like reading /proc) and let the user expand it by adding 
combinations of syscall and file descriptor. If some device is known to 
process operations safely, the user can open it and mark the file 
descriptor as allowed, say, for reading, as in the sketch below.
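
Purely as a hypothetical illustration of that idea (the flag name, the 
argument layout and the device path are made up; only 
PR_TASK_ISOLATION_REQUEST is taken from Marcelo's example above):

    /* Hypothetical fragment, in the style of the example above: neither
     * ISOL_F_ALLOW_FD nor this argument layout exists in any posted patch. */
    int fd = open("/dev/some_safe_device", O_RDONLY);  /* known not to defer work */

    /* Ask the kernel to also allow read(2) on this particular descriptor
     * while the task is isolated. */
    ret = prctl(PR_TASK_ISOLATION_REQUEST, ISOL_F_ALLOW_FD, SYS_read, fd, 0);
    if (ret == -1)
            perror("prctl ISOL_F_ALLOW_FD");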

> And I already said that I want the system to quiet down and allow system
> calls. Some indication that deferred actions have occurred may be useful
> by f.e. resetting the flag.

I think it should be possible to process a syscall and, if any deferred 
action occurred, exit isolation on return to userspace. Then there is 
the question of how userspace should be notified about isolation being 
lost. Normally this happens with a signal, but a signal is useful if we 
want the syscall to fail with EINTR, not to succeed. Make sure that the 
signal arrives after the successful syscall return but before the 
deferred action happens? Sounds convoluted. Maybe reflecting isolation 
status in the vDSO and having the user check it there would be a good 
solution.

When I worked on my implementation I encountered both the problem of 
interaction with the rest of the system from an isolated task (even for 
simple things such as logging) and the problem of handling entry to and 
exit from isolation when a task can be interrupted soon after entering 
isolation by events that were still in progress on other CPUs.

I ended up implementing a manager/helper task that talks to tasks over a 
socket (when they are not isolated) and over ring buffers in shared 
memory (when they are isolated). While the current implementation is 
rather limited, the intention is to delegate to it everything that an 
isolated task either can't do at all (such as writing logs) or that 
would be cumbersome to implement in the task itself (such as monitoring 
the state of the task, or determining whether deferred work is pending 
after the task returned to userspace).
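
For reference, the shared-memory side of that channel does not need 
much: a single-producer/single-consumer ring is enough for the logging 
case. The sketch below is not the actual implementation (names, sizes 
and layout are made up for illustration); it only shows how an isolated 
task can hand off messages without locks or syscalls:

    /* Illustrative SPSC log ring in shared memory: the isolated task is
     * the only writer, the helper task the only reader. */
    #include <stdatomic.h>
    #include <string.h>

    #define RING_SLOTS 256          /* must be a power of two */
    #define MSG_LEN    128

    struct log_ring {
            _Atomic unsigned int head;      /* advanced by the isolated task */
            _Atomic unsigned int tail;      /* advanced by the helper task */
            char msg[RING_SLOTS][MSG_LEN];
    };

    /* Called from the isolated task: no syscalls, no locks, bounded time. */
    static int isolated_log(struct log_ring *r, const char *text)
    {
            unsigned int head = atomic_load_explicit(&r->head, memory_order_relaxed);
            unsigned int tail = atomic_load_explicit(&r->tail, memory_order_acquire);

            if (head - tail == RING_SLOTS)
                    return -1;              /* full: drop rather than block */

            strncpy(r->msg[head % RING_SLOTS], text, MSG_LEN - 1);
            r->msg[head % RING_SLOTS][MSG_LEN - 1] = '\0';
            atomic_store_explicit(&r->head, head + 1, memory_order_release);
            return 0;
    }

    /* Called from the helper task on another CPU: free to use syscalls. */
    static int helper_drain(struct log_ring *r, char *out)
    {
            unsigned int tail = atomic_load_explicit(&r->tail, memory_order_relaxed);

            if (atomic_load_explicit(&r->head, memory_order_acquire) == tail)
                    return 0;               /* nothing queued */

            memcpy(out, r->msg[tail % RING_SLOTS], MSG_LEN);
            atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
            return 1;
    }

The helper polls the ring from a housekeeping CPU and writes the 
messages out with ordinary syscalls.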

It would be great if the complexity and amount of functionality of that 
manager/helper task could be reduced; however, I believe that having 
such a task is a legitimate way of implementing things that would 
otherwise require additional functionality in the kernel.

> 
>> 1) Add a new isolation feature ISOL_F_BLOCK_SYSCALLS (to block certain
>> syscalls) along with ISOL_F_SETUP_NOTIF (to notify upon isolation
>> breaking):
> 
> Well come up with a use case for that .... I know mine. What you propose
> could be useful for debugging for me but I would prefer the quiet down
> approach where I determine when I use some syscalls or not and will deal
> with the consequences.

For my purposes, breaking isolation on syscalls and notifying userspace 
about isolation breaking is a very useful approach -- this is why I kept 
it exactly as it was in the original implementation by Chris Metcalf.

In the applications that I intend to use isolation for, the primary 
concern is consistent timing of the code running in userspace, so 
syscalls should only be issued when the task is specifically not in 
isolated mode. If the program issues a syscall by mistake (and that may 
happen when some libraries are used, or when thread synchronization 
primitives are kept from the non-isolated version of the program, even 
though isolated tasks are not supposed to use them), it means not only 
that deferred work may cause a delay in the future, but also that 
additional time is spent in the kernel. This should be immediately 
visible to the developer, and the best way to achieve that is to break 
isolation on the syscall immediately.
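
To make the workflow concrete, the application side could look roughly 
like the sketch below. The quiesce request is taken from Marcelo's 
example earlier in the thread; the signal-based notification is only my 
guess at how ISOL_F_SETUP_NOTIF might end up being wired to a signal, so 
treat those details as illustrative:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/prctl.h>
    /* PR_TASK_ISOLATION_REQUEST and ISOL_F_* would come from the proposed
     * uapi header; they are not in current mainline. */

    static volatile sig_atomic_t isolation_lost;

    static void isol_broken(int sig)
    {
            isolation_lost = 1;     /* e.g. a stray syscall broke isolation */
    }

    int main(void)
    {
            struct sigaction sa;

            memset(&sa, 0, sizeof(sa));
            sa.sa_handler = isol_broken;
            sigaction(SIGUSR1, &sa, NULL);

            /* Flush deferred work (vmstat etc.) before entering the hot loop. */
            if (prctl(PR_TASK_ISOLATION_REQUEST, ISOL_F_QUIESCE, 0, 0, 0) == -1) {
                    perror("prctl PR_TASK_ISOLATION_REQUEST");
                    exit(1);
            }

            while (!isolation_lost) {
                    /* poll devices purely in userspace; any syscall here is
                     * a bug and should immediately become visible */
            }

            fprintf(stderr, "isolation lost -- find the offending syscall\n");
            return 0;
    }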

> 
>>
>>> Features that I think may be needed:
>>>
>>> F_ISOL_QUIESCE		-> quiet down now but allow all OS activities. OS
>>> 			activities reset flag
>>>
>>> F_ISOL_BAREMETAL_HARD	-> No OS interruptions. Fault on syscalls that
>>> 			require such actions in the future.
>>
>> Question: why BAREMETAL ?
> 
> To distinguish it from "Realtime". We want the processor for ourselves
> without anything else running on it.
> 
>> Two comments:
>>
>> 1) HARD mode could also block activities from different CPUs that can
>> interrupt this isolated CPU (for example CPU hotplug, or increasing
>> per-CPU trace buffer size).
> 
> Blocking? The app should fail if any deferred actions are triggered as a
> result of syscalls. It would give a warning with _WARN

There are many supposedly innocent things, nowhere near the scale of CPU 
hotplug, that happen in a system and result in synchronization 
implemented as an IPI to every online CPU. We should consider them an 
ordinary occurrence, so there is a choice:

1. Ignore them completely and allow them in isolated mode. This will 
delay userspace with no indication and no isolation breaking.

2. Allow them, and notify userspace afterwards (through vdso or through 
userspace helper/manager over shared memory). This may be useful in 
those rare situations when the consequences of delay can be mitigated 
afterwards.

3. Make them break isolation, with userspace notified normally (e.g. 
with a signal in the current implementation). I guess this can be used 
if most of the causes are somehow eliminated.

4. Prevent them from reaching the target CPU and make sure that whatever 
synchronization they are intended to cause happens when the target CPU 
enters the kernel later. Since we may have to synchronize things like 
code modification, some of this synchronization has to happen very early 
on kernel entry.

I am most interested in (4), so this is what was implemented in my 
version of the patch (and I am currently trying to achieve completeness 
and, if possible, elegance of the implementation).
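
In other words, instead of actually sending the IPI to a CPU that runs 
an isolated task, the sender only marks that CPU, and the equivalent 
synchronization is performed on that CPU's next kernel entry. A very 
rough sketch of the idea (the names below are mine, not the ones in the 
posted patch, and the real hook has to run before any kernel code that 
could depend on the deferred synchronization):

    /* Illustrative only -- not the actual patch. */
    #include <linux/percpu.h>
    #include <linux/atomic.h>
    #include <asm/sync_core.h>          /* sync_core(), x86 */

    static DEFINE_PER_CPU(atomic_t, isol_sync_pending);

    /* Called by the would-be IPI sender instead of interrupting the
     * isolated CPU. */
    void isol_defer_sync(int cpu)
    {
            atomic_set(&per_cpu(isol_sync_pending, cpu), 1);
    }

    /* Called very early on every kernel entry from an isolated task. */
    void isol_kernel_enter(void)
    {
            if (atomic_xchg(this_cpu_ptr(&isol_sync_pending), 0))
                    sync_core();        /* e.g. after remote code modification */
    }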

I guess that if we want to add more controls, we can allow the user to 
choose any of those four options, or a subset of them. In my opinion, if 
(4) is available and the only additional cost is the synchronization 
time spent in the isolation-breaking procedure, there is not much need 
for the other three. Without (4) I don't think the goal of providing a 
consistent, interruption-free environment is achieved at all, so not 
implementing it would be very bad.

>> 2) For a type of application it is the case that certain interruptions
>> can be tolerated, as long as they do not cross certain thresholds.
>> For example, one loses the flexibility to read/write MSRs
>> on the isolated CPUs (including performance counters,
>> RDT/MBM type MSRs, frequency/power statistics) by
>> forcing a "no interruptions" mode.
> 
> Does reading these really cause deferred actions by the OS? AFAICT you
> could map these into memory as well as read them without OS activities.

Access to those is hardware/architecture-specific, and in many cases 
there is indeed no need to issue a syscall at all.
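
On x86, for example, a counter set up once with perf_event_open() before 
entering isolation can later be read entirely in userspace through the 
mmap'ed perf page and the RDPMC instruction, so the hot loop never 
enters the kernel. A simplified sketch (assuming the kernel permits 
userspace RDPMC for the event; error handling and counter-width sign 
extension are omitted):

    #include <linux/perf_event.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <string.h>

    static inline uint64_t rdpmc(uint32_t counter)
    {
            uint32_t lo, hi;

            __asm__ volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(counter));
            return ((uint64_t)hi << 32) | lo;
    }

    /* Do this before entering isolation: count cycles for this task. */
    static struct perf_event_mmap_page *setup_cycles_counter(void)
    {
            struct perf_event_attr attr;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_CPU_CYCLES;

            fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
            return mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
    }

    /* Safe to call from the isolated loop: no kernel entry. */
    static uint64_t read_cycles(struct perf_event_mmap_page *pc)
    {
            uint32_t seq, idx;
            uint64_t count;

            do {
                    seq = pc->lock;
                    __sync_synchronize();
                    idx = pc->index;        /* hardware counter number + 1 */
                    count = pc->offset;
                    if (idx)
                            count += rdpmc(idx - 1);
                    __sync_synchronize();
            } while (pc->lock != seq);

            return count;
    }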

However, for many applications the model with a helper task performing 
interactions with the OS on a different core and exchanging data over 
shared memory may be sufficient, and it also provides a clear separation 
between operations that require consistent timing and those that don't.

> 
> "Interruptions that can be tolerated".... Well that is the wild west of
> "realtime" where you can define how much of a time slice is "real" and how
> much can be use by other processes. I do not think that any of that should
> come into this API.
> 

To be honest, I have no idea what can and cannot be tolerated by 
applications other than the ones I am familiar with. The applications 
that I know require no interruptions at all, so that is what I want to 
implement. I assume someone already uses existing CPU isolation for the 
purpose of providing a "nearly interrupt-less" environment.

I can imagine something like a task controlling a large, slow-updating 
LED display by bit-banging a strictly timed long serial message 
representing a frame or a frame update. If interrupted, it may, 
depending on the protocol, corrupt the state of a single LED or fail to 
update until the end of the screen, but the next start of a message will 
reset the state, and everything will work until the next interrupt. 
Maybe there are more realistic or useful examples.

-- 
Alex


