From: Joel Fernandes <joel@joelfernandes.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: "Tim Chen" <tim.c.chen@linux.intel.com>,
	"Julien Desfossez" <jdesfossez@digitalocean.com>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Vineeth Remanan Pillai" <vpillai@digitalocean.com>,
	"Aubrey Li" <aubrey.intel@gmail.com>,
	"Nishanth Aravamudan" <naravamudan@digitalocean.com>,
	"Ingo Molnar" <mingo@kernel.org>, "Paul Turner" <pjt@google.com>,
	"Linus Torvalds" <torvalds@linux-foundation.org>,
	"Linux List Kernel Mailing" <linux-kernel@vger.kernel.org>,
	"Dario Faggioli" <dfaggioli@suse.com>,
	"Frédéric Weisbecker" <fweisbec@gmail.com>,
	"Kees Cook" <keescook@chromium.org>,
	"Greg Kerr" <kerrnel@google.com>, "Phil Auld" <pauld@redhat.com>,
	"Aaron Lu" <aaron.lwe@gmail.com>,
	"Valentin Schneider" <valentin.schneider@arm.com>,
	"Mel Gorman" <mgorman@techsingularity.net>,
	"Pawan Gupta" <pawan.kumar.gupta@linux.intel.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Luck, Tony" <tony.luck@intel.com>
Subject: Re: [RFC PATCH v4 00/19] Core scheduling v4
Date: Wed, 18 Mar 2020 21:54:44 -0400
Message-ID: <20200319015444.GA224062@google.com>
In-Reply-To: <877dzhc21u.fsf@nanos.tec.linutronix.de>

Hi Thomas,

On Wed, Mar 18, 2020 at 12:53:01PM +0100, Thomas Gleixner wrote:
> Joel,
> 
> Joel Fernandes <joel@joelfernandes.org> writes:
> > We have only 2 cores (4 HTs) on many devices. It is not an option to
> > dedicate a core to only running trusted code; that would kill
> > performance. Another option is to designate a single HT of a
> > particular core to run both untrusted code and an interrupt handler --
> > but as Thomas pointed out, this does not work for per-CPU interrupts
> > or managed interrupts, and the softirqs that they trigger. But if we
> > consider only the interrupts whose affinities we can control (and
> > assuming that most interrupts can be controlled like that), then maybe
> > it will work? In the ChromeOS model, each untrusted task is in its own
> > domain (cookie), so untrusted tasks cannot benefit from parallelism
> > (in our case) anyway -- it therefore seems reasonable to run an
> > affinable interrupt and an untrusted task on a particular designated
> > core.
> >
> > (Just thinking out loud...)  Another option could be a patch that
> > Vineeth shared with me (that Peter experimentally wrote) where an
> > interrupt handler sends an IPI to a sibling running untrusted guest
> > code, which results in the guest getting paused. I am hoping something
> > like this could work on the host side as well (not just for guests).
> > We could also set per-core state from the interrupted HT, possibly
> > IPI'ing the untrusted sibling if we have to. If a sibling starts
> > running untrusted code *after* the other sibling's interrupt has
> > already started, then the schedule() loop on the untrusted sibling
> > would spin, knowing the other sibling has an interrupt in progress.
> > The softirq is a real problem though. Perhaps it can also set similar
> > per-core state.
> 
> There is not much difference between bringing the sibling out of guest
> mode and bringing it out of host user mode.
> 
> Adding state to force spinning until the other side has completed is not
> rocket science either.
> 
> But the whole concept is prone to starvation issues and full of nasty
> corner cases. From experiments I did back in the L1TF days I'm pretty
> much convinced that this can't result in a generally usable solution.
> Let me share a few thoughts on what might be doable with fewer horrors,
> but be aware that this is mostly a brain dump of half thought out ideas.
> 
> 1) Managed interrupts on multi queue devices
> 
>    It should be reasonably simple to force a reduced number of queues,
>    which would in turn allow shielding one or two CPUs from such
>    interrupts and queue handling, for the price of some indirection.
> 
> 2) Timers and softirqs
> 
>    If device interrupts are targeted to "safe" CPUs then the amount of
>    timers and soft interrupt processing will be reduced as well.
> 
>    That still leaves e.g. network TX side soft interrupts when the task
>    running on a shielded core does networking. Maybe that's a non-issue,
>    but I'm not familiar enough with the network maze to give an answer.
> 
>    A possible workaround would be to force softirq processing into
>    thread context so everything is under scheduler control. How well
>    that scales is a different story.
> 
>    That would bring out the timer_list timers and reduce the potential
>    surface to hrtimer expiry callbacks. Most of them should be fine
>    (doing wakeups or scheduler housekeeping of some sort). For the
>    others we might just utilize the mechanism which PREEMPT_RT uses and
>    force them off into softirq expiry mode.

Thanks a lot for your thoughts, it is really helpful.

One issue with interrupt affinities is that we have only 2 cores (4 HTs)
on many devices. If we assign 1 HT out of the 4 to receiving interrupts
(at least the ones whose affinity we can control), then we have to limit
that core to running only 1 trusted task at a time, since one of its 2
HTs is designated for interrupts. That leaves only the 2 HTs on the other
core for simultaneously running 2 trusted tasks, thus reducing the
parallelism.
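
To spell out the accounting under that assignment (core/HT labels are
just for illustration):

  Core 0: HT0 -> affinable interrupts, HT1 -> at most 1 trusted task
  Core 1: HT2 + HT3 -> the 2 HTs left for running trusted tasks in
                       parallel

So out of 4 HTs we are down to 3 that can run tasks at all, which is
where the loss of parallelism comes from.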

Thanks for raising the point about running interrupts and softirqs in
process context to solve these issues. It turns out we do use threaded
IRQs, though not for most interrupts. We could consider threading as
many IRQs (and softirqs) as possible, and for the ones we cannot thread,
maybe we can audit the code and assess the risk, possibly deferring the
processing of sensitive data to workqueue context. Perhaps we can also
conditionally force a softirq into process context, say if we detect
that an untrusted task is running on the sibling of the HT handling the
softirq, and if not, let the softirq execute in softirq context itself.
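
To make that last idea concrete, below is a rough sketch (not against
any real tree) of the kind of check I have in mind. task_is_untrusted()
is a hypothetical helper standing in for whatever per-task cookie state
the core-scheduling patches actually track, and the wakeup/ordering with
the sibling is completely ignored:

static bool sibling_running_untrusted(int cpu)
{
        int sibling;

        /* Walk the SMT siblings of @cpu. */
        for_each_cpu(sibling, cpu_smt_mask(cpu)) {
                if (sibling == cpu)
                        continue;
                /*
                 * Hypothetical helper: true if the task currently running
                 * on @sibling carries an untrusted core-scheduling cookie.
                 * cpu_curr() is scheduler-internal.
                 */
                if (task_is_untrusted(cpu_curr(sibling)))
                        return true;
        }
        return false;
}

/*
 * Conceptually called on the softirq entry path: if an untrusted task is
 * running on our SMT sibling, punt the pending softirqs to ksoftirqd
 * (process context) instead of handling them inline in softirq context.
 */
static bool softirq_should_defer_to_ksoftirqd(void)
{
        return sibling_running_untrusted(smp_processor_id());
}

Whether an extra check like this on every softirq entry is acceptable is
of course another question.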

Thanks, Thomas!!

 - Joel


