From: Jeff Kubascik <jeff.kubascik@dornerworks.com>
To: "Dario Faggioli" <dfaggioli@suse.com>,
	"Stewart Hildebrand" <Stewart.Hildebrand@dornerworks.com>,
	"Jürgen Groß" <jgross@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: xen-devel <xen-devel@dornerworks.com>,
	Josh Whitehead <Josh.Whitehead@dornerworks.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 5/5] sched/arinc653: Implement CAST-32A multicore scheduling
Date: Fri, 18 Sep 2020 16:03:31 -0400
Message-ID: <16401afe-9dfc-48d6-1fd5-bcfb519417ad@dornerworks.com>
In-Reply-To: <960f5e3b5a27326cd18ecb44a96f22bcf94f2498.camel@suse.com>

On 9/17/2020 1:30 PM, Dario Faggioli wrote:
>On Thu, 2020-09-17 at 15:59 +0000, Stewart Hildebrand wrote:
>> On Thursday, September 17, 2020 11:20 AM, Dario Faggioli wrote:
>>> On Thu, 2020-09-17 at 15:10 +0000, Stewart Hildebrand wrote:
>>>>> It might be worth considering using just the core scheduling
>>>>> framework in order to achieve this. Using a sched_granularity
>>>>> with the number of cpus in the cpupool running the ARINC653
>>>>> scheduler should already do the trick. There should be no
>>>>> further modification of the ARINC653 scheduler required.
>>>>>
>>>>
>>>> This CAST-32A multicore patch series allows you to have a
>>>> different number of vCPUs (UNITs, I guess) assigned to domUs.
>>>>
>>> And if you have domain A with 1 vCPU and domain B with 2 vCPUs,
>>> with sched-gran=core:
>>> - when the vCPU of domain A is scheduled on a pCPU of a core, no
>>>   vCPU from domain B can be scheduled on the same core;
>>> - when one of the vCPUs of domain B is scheduled on a pCPU of a
>>>   core, no other vCPU, except the other vCPU of domain B, can run
>>>   on the same core.
>>
>> Fascinating. Very cool, thanks for the insight. My understanding is
>> that core scheduling is not currently enabled on arm. This series
>> allows us to have multicore ARINC 653 on arm today without chasing
>> down potential issues with core scheduling on arm...
>>
>Yeah, but at the cost of quite a bit of churn, and of a lot more code
>in arinc653.c, basically duplicating the functionality.
>
>I appreciate how crude and inaccurate this is, but arinc653.c is
>currently 740 LOCs, and this patch is adding 601 and removing 204.
>
>Add to this the fact that the architecture specific part of core-
>scheduling should be limited to the handling of the context switches
>(and that it may even work already, as what we weren't able to do was
>proper testing).
>
>If I can cite an anecdote, back in the days when core-scheduling was
>being developed, I sent my own series implementing it, for both credit1
>and credit2. It had its issues, of course, but I think it had some
>merits, even if compared with the current implementation we have
>upstream (e.g., more flexibility, as core-scheduling could have been
>enabled on a per-domain basis).
>
>At least for me, a very big plus of the other approach that Juergen
>suggested and then also implemented was the fact that we would get the
>feature for all the schedulers at once. And this (i.e., the fact that
>it probably can be used for this purpose as well, without major changes
>needed inside ARINC653) seems to me a further confirmation that it was
>the right way forward.
>
>And don't think only of the need to write the code (as you kind of
>have it already), but also of testing. As in, the vast majority of the
>core-scheduling logic and code is scheduler independent, and hence has
>been stressed and tested already, even by people using schedulers
>other than ARINC653.

When is core scheduling expected to be available for ARM platforms? My
understanding is that this only works for Intel.

With core scheduling, is the pinning of vCPUs to pCPUs configurable? Or can the
scheduler change it at will? One advantage of this patch is that you can
explicitly pin a vCPU to a pCPU. This is a desirable feature for systems where
you are looking for determinism.
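
For what it's worth, the explicit pinning I have in mind can already be
expressed in a plain xl domain config (a sketch only; the vCPU count and
pCPU numbers here are illustrative):

```
# domU.cfg (illustrative): give the guest two vCPUs and hard-pin them,
# vCPU 0 -> pCPU 2 and vCPU 1 -> pCPU 3
vcpus = 2
cpus = ["2", "3"]
```

With a list, each entry applies to the corresponding vCPU in order, which
gives the deterministic 1:1 placement I described above.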

There are a few changes in this patch that I think should be pulled out
even if we go the core scheduling path. For instance, it avoids placing a
large structure on the stack. It also adds the concept of an epoch so that
the scheduler doesn't drift (important for ARINC653) and can recover if a
frame is somehow missed (I commonly saw this when using a debugger). I
also observed occasional kernel panics when using xl commands, which were
fixed by improving the locking scheme.

-Jeff


Thread overview: 34+ messages
2020-09-16 18:18 [PATCH 0/5] Multicore support for ARINC653 scheduler Jeff Kubascik
2020-09-16 18:18 ` [PATCH 1/5] sched/arinc653: Clean up comments Jeff Kubascik
2020-09-17 13:24   ` Andrew Cooper
2020-09-18 15:33     ` Jeff Kubascik
2020-09-16 18:18 ` [PATCH 2/5] sched/arinc653: Rename scheduler private structs Jeff Kubascik
2020-09-17 12:09   ` Andrew Cooper
2020-09-17 14:46     ` Dario Faggioli
2020-09-18 15:52       ` Jeff Kubascik
2020-09-16 18:18 ` [PATCH 3/5] sched/arinc653: Clean up function definitions Jeff Kubascik
2020-09-17  8:09   ` Jan Beulich
2020-09-17 14:40     ` Dario Faggioli
2020-09-18 17:43       ` Jeff Kubascik
2020-09-16 18:18 ` [PATCH 4/5] sched/arinc653: Reorganize function definition order Jeff Kubascik
2020-09-17  8:12   ` Jan Beulich
2020-09-17 14:16     ` Dario Faggioli
2020-09-18 18:21       ` Jeff Kubascik
2020-09-17 14:17     ` Andrew Cooper
2020-09-18 18:04       ` Jeff Kubascik
2020-09-18 18:05       ` Jeff Kubascik
2020-09-16 18:18 ` [PATCH 5/5] sched/arinc653: Implement CAST-32A multicore scheduling Jeff Kubascik
2020-09-17  9:04   ` Jürgen Groß
2020-09-17 15:10     ` Stewart Hildebrand
2020-09-17 15:18       ` Jürgen Groß
2020-09-17 15:20       ` Dario Faggioli
2020-09-17 15:59         ` Stewart Hildebrand
2020-09-17 17:30           ` Dario Faggioli
2020-09-18 20:03             ` Jeff Kubascik [this message]
2020-09-18 20:34               ` Dario Faggioli
2020-09-22 19:50               ` Andrew Cooper
2020-09-17 14:42   ` Andrew Cooper
2020-09-17 14:57     ` Stewart Hildebrand
2020-09-17 16:18       ` Andrew Cooper
2020-09-17 17:57         ` Stewart Hildebrand
2020-09-18 19:22           ` Jeff Kubascik
