From: "Jan H. Schönherr" <jschoenh@amazon.de>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	linux-kernel@vger.kernel.org, Paul Turner <pjt@google.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [RFC 00/60] Coscheduling for Linux
Date: Fri, 14 Sep 2018 18:25:44 +0200	[thread overview]
Message-ID: <1d86f497-9fef-0b19-50d6-d46ef1c0bffa@amazon.de> (raw)
In-Reply-To: <20180914111251.GC24106@hirez.programming.kicks-ass.net>

On 09/14/2018 01:12 PM, Peter Zijlstra wrote:
> On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
>> This patch series extends CFS with support for coscheduling. The
>> implementation is versatile enough to cover many different coscheduling
>> use-cases, while at the same time being non-intrusive, so that behavior of
>> legacy workloads does not change.
> 
> I don't call this non-intrusive.

Mm... there is certainly room for interpretation. :) For example, it is still
possible to set affinities, to use nice, and to tune all the other existing CFS
knobs. That is, if you have tuned the scheduler to your workload or your workload
depends on some CFS feature to work efficiently (whether on purpose or not), then
running with this patch set should not change the behavior of said workload.

This patch set should "just" give the user the additional ability to coordinate
scheduling decisions across multiple CPUs. At least, that's my goal.

If someone doesn't need it, they don't have to use it. Just like task groups.

But maybe people will start experimenting with coordinated scheduling decisions --
after all, there is a ton of research on what one *could* do, if there were
coscheduling. I did look over much of that research. What I didn't like about
many of those works is that the evaluation is based on a "prototype" that --
while making the point that coscheduling might be beneficial for that use case --
totally screws over the scheduler for every other use case. Like coscheduling
based on deterministic, timed context switches across all CPUs. Bye-bye
interactivity. That is what I call intrusive.

As mentioned before, existing scheduler features, like preemption, (should)
still work as before with this variant of coscheduling, with the same look and
feel.

And who knows, maybe someone will come up with a use case that moves coscheduling
out of its niche, just as the auto-grouping feature promoted the use of task groups.


>> Peter Zijlstra once called coscheduling a "scalability nightmare waiting to
>> happen". Well, with this patch series, coscheduling certainly happened.
> 
> I'll beg to differ; this isn't anywhere near something to consider
> merging. Also 'happened' suggests a certain stage of completeness, this
> again doesn't qualify.

I agree that this isn't ready to be merged. Still, the current state is good
enough to start a discussion about the involved mechanics.


>> However, I disagree on the scalability nightmare. :)
> 
> There are known scalability problems with the existing cgroup muck; you
> just made things a ton worse. The existing cgroup overhead is
> significant, you also made that many times worse.
> 
> The cgroup stuff needs cleanups and optimization, not this.

Are you referring to cgroups in general, or task groups (aka. the cpu
controller) specifically?


With respect to scalability: many coscheduling use cases don't require
synchronization across the whole system. With this patch set, only those
parts that are actually coscheduled are involved in synchronization.
So, conceptually, this scales to larger systems from that point of view.

If coscheduling of a larger fraction of the system is required, costs
increase. So what? It's a trade-off. It may *still* be beneficial for a
use case. If it is, it might get adopted. If not, that particular use
case may be considered impractical unless someone comes up with a better
implementation of coscheduling.


With respect to the need for cleanups and optimizations: I agree that
task groups are a bit messy. For example, here's my current wish list,
off the top of my head:

a) lazy scheduler operations; for example: when dequeuing a task, don't bother
   walking up the task group hierarchy to dequeue all the SEs -- do it lazily
   when encountering an empty CFS RQ during picking, when we hold the lock
   anyway (see the sketch after this list).

b) ability to move CFS RQs between CPUs: someone changed the affinity of
   a cpuset? No problem, just attach the runqueue with all the tasks elsewhere.
   No need to touch each and every task.

c) light-weight task groups: don't allocate a runqueue for every CPU in the
   system when it is known that tasks in the task group will only ever run
   on at most two CPUs, or so. (And while there is of course a use case for
   VMs in this, another class of use cases is auxiliary tasks; see, e.g., [1-5].)
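
To illustrate the idea behind (a), here is a stand-alone toy sketch. All names
(toy_rq, dequeue_lazy, pick_and_prune) are made up and the model is heavily
simplified -- it is not CFS code, just the shape of the idea: dequeue only
touches the leaf, and the pick path prunes empty levels when it reaches them
with the lock held anyway.

#include <stdio.h>

/*
 * Toy model only: made-up structures, not CFS. Each level accounts the
 * number of runnable entities below it in nr_running.
 */
struct toy_rq {
	struct toy_rq *parent;
	int nr_running;
	const char *name;
};

/* Eager variant (roughly what happens today): climb the whole hierarchy. */
void dequeue_eager(struct toy_rq *rq)
{
	for (; rq; rq = rq->parent)
		rq->nr_running--;
}

/* Lazy variant: O(1) at dequeue time, only the leaf is touched... */
void dequeue_lazy(struct toy_rq *rq)
{
	rq->nr_running--;
}

/*
 * ...and the pick path, which visits the levels with their locks held
 * anyway, dequeues the entities of levels it finds empty.
 */
struct toy_rq *pick_and_prune(struct toy_rq *top, struct toy_rq *leaf)
{
	struct toy_rq *rq = leaf;

	while (rq != top && rq->nr_running == 0) {
		rq->parent->nr_running--;	/* drop the now-empty SE */
		rq = rq->parent;
	}
	return rq->nr_running ? rq : NULL;	/* NULL: nothing left to pick */
}

int main(void)
{
	struct toy_rq root = { NULL,  1, "root" };
	struct toy_rq tg   = { &root, 1, "tg"   };
	struct toy_rq leaf = { &tg,   1, "leaf" };

	dequeue_lazy(&leaf);		/* instead of dequeue_eager(&leaf) */
	pick_and_prune(&root, &leaf);	/* hierarchy fixed up lazily */
	printf("root.nr_running = %d\n", root.nr_running);	/* prints 0 */
	return 0;
}

The eager variant costs a walk over the full depth on every dequeue; the lazy
variant defers that work to the next pick, which visits those levels anyway.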

Is this the level of optimization you're thinking about? Or do you want
to throw away the whole nested CFS RQ experience in the code?


>> B) Why would I want this?
> 
>>    In the L1TF context, it prevents other applications from loading
>>    additional data into the L1 cache, while one application tries to leak
>>    data.
> 
> That is the whole and only reason you did this;

It really isn't. But as your mind seems made up, I'm not going to bother
to argue.


> and it doesn't even
> begin to cover the requirements for it.
> 
> Not to mention I detest cgroups; for their inherent complexity and the
> performance costs associated with them.  _If_ we're going to do
> something for L1TF then I feel it should not depend on cgroups.
> 
> It is after all, perfectly possible to run a kvm thingy without cgroups.

Yes it is. But, for example, you won't have group-based fairness between
multiple kvm thingies.

Assuming there is a cgroup-less solution that can prevent simultaneous
execution of tasks on a core when they're not supposed to run together:
how would you tell the scheduler which tasks those are?


>> 1. Execute parallel applications that rely on active waiting or synchronous
>>    execution concurrently with other applications.
>>
>>    The prime example in this class are probably virtual machines. Here,
>>    coscheduling is an alternative to paravirtualized spinlocks, pause loop
>>    exiting, and other techniques with its own set of advantages and
>>    disadvantages over the other approaches.
> 
> Note that in order to avoid PLE and paravirt spinlocks and paravirt
> tlb-invalidate you have to gang-schedule the _entire_ VM, not just SMT
> siblings.
> 
> Now explain to me how you're going to gang-schedule a VM with a good
> number of vCPU threads (say spanning a number of nodes) and preserving
> the rest of CFS without it turning into a massive trainwreck?

You probably don't -- for the same reason that it is a bad idea to give
an endless loop realtime priority. It's just a bad idea. As I said in the
text you quoted: coscheduling comes with its own set of advantages and
disadvantages. Just because you can find one example where it is a bad idea
doesn't make it a bad thing in general.


> Such things (gang scheduling VMs) _are_ possible, but not within the
> confines of something like CFS, they are also fairly inefficient
> because, as you do note, you will have to explicitly schedule idle time
> for idle vCPUs.

With gang scheduling as defined by Feitelson and Rudolph [6], you'd have to
explicitly schedule idle time. With coscheduling as defined by Ousterhout [7],
you don't. In this patch set, the scheduling of idle time is "merely" a quirk
of the implementation. And even with this implementation, there's nothing
stopping you from down-sizing the width of the coscheduled set to take out
the idle vCPUs dynamically, cutting down on fragmentation.


> Things like the Tableau scheduler are what come to mind; but I'm not
> sure how to integrate that with a general purpose scheduling scheme. You
> pretty much have to dedicate a set of CPUs to just scheduling VMs with
> such a scheduler.
> 
> And that would call for cpuset-v2 integration along with a new
> scheduling class.
> 
> And then people will complain again that partitioning a system isn't
> dynamic enough and we need magic :/
> 
> (and this too would be tricky to virtualize itself)

Hence my "counter" suggestion in the form of this patch set: integrated
into a general-purpose scheduler, no need to partition off a part of the system,
and not tied to just VM use cases.


>> C) How does it work?
>> --------------------
>>
>> This patch series introduces hierarchical runqueues, that represent larger
>> and larger fractions of the system. By default, there is one runqueue per
>> scheduling domain. These additional levels of runqueues are activated by
>> the "cosched_max_level=" kernel command line argument. The bottom level is
>> 0.
> 
> You gloss over a ton of details here; 

Yes, I do. :) I wanted a summary, not a design document. Maybe I was a bit
too eager in condensing the design to just a few paragraphs...


> many of which are non trivial and
> marked broken in your patches. Unless you have solid suggestions on how
> to deal with all of them, this is a complete non-starter.

Address them one by one. Probably do some of the optimizations you suggested
just to get rid of some of them. It's a work in progress. Though, at this
stage, I am also really interested in things that are broken that I am not
yet aware of.


> The per-cpu IRQ/steal time accounting for example. The task timeline
> isn't the same on every CPU because of those.
> 
> You now basically require steal time and IRQ load to match between CPUs.
> That places very strict requirements and effectively breaks virt
> invariance. That is, the scheduler now behaves significantly different
> inside a VM than it does outside of it -- without the guest being gang
> scheduled itself and having physical pinning to reflect the same
> topology the coschedule=1 thing should not be exposed in a guest. And
> that is a major failing IMO.

I'll have to read up some more code to make a qualified statement here.


> Also; I think you're sharing a cfs_rq between CPUs:
> 
> +       init_cfs_rq(&sd->shared->rq.cfs);
> 
> that is broken, the virtual runtime stuff needs nontrivial modifications
> for multiple CPUs. And if you do that, I've no idea how you're dealing
> with SMP affinities.

It is not shared per se. There is only one CPU (the leader) making the scheduling
decisions for that runqueue, and if another CPU needs to modify the runqueue, it
works like it does for CPU runqueues as well: the other CPU works with the
leader's time. There are also never any tasks in a runqueue that is responsible
for more than one CPU.

Assuming that a runqueue is responsible for a core and that there are runnable
tasks within the task group on said core, there will be one SE enqueued in
that runqueue, a so-called SD-SE (scheduling domain SE, or synchronization
domain SE). This SD-SE represents the per-CPU runqueues of this core of this
task group. (As opposed to a "normal" task group SE (TG-SE), which represents
just one runqueue in a different task group.) Tasks are still only enqueued
in the per-CPU runqueues.
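
As a rough stand-alone sketch of that layout (all struct and entity names are
made up for illustration and the model is simplified to the bare minimum; this
is not the patch-set code):

#include <stdio.h>

/* Made-up entity kinds, just to mirror the wording above. */
enum toy_se_kind {
	TOY_TASK_SE,	/* an ordinary task */
	TOY_TG_SE,	/* "normal" TG-SE: one runqueue of a child task group */
	TOY_SD_SE,	/* SD-SE: the per-CPU runqueues of a task group below
			 * this synchronization domain level */
};

struct toy_se {
	enum toy_se_kind kind;
	const char *what;
};

struct toy_rq {
	const char *name;
	int leader_cpu;		/* only this CPU picks from the runqueue */
	struct toy_se *queued[4];
	int nr;
};

void enqueue(struct toy_rq *rq, struct toy_se *se)
{
	rq->queued[rq->nr++] = se;
}

int main(void)
{
	/* per-CPU runqueues of the coscheduled task group: tasks go here */
	struct toy_se task_a = { TOY_TASK_SE, "task A" };
	struct toy_se task_b = { TOY_TASK_SE, "task B" };
	struct toy_rq tg_cpu0 = { "tg:cpu0", 0 };
	struct toy_rq tg_cpu1 = { "tg:cpu1", 1 };

	/* core-level runqueue of the task group: one SD-SE, never any tasks */
	struct toy_se sd_se = { TOY_SD_SE, "tg's per-CPU runqueues on core 0" };
	struct toy_rq tg_core = { "tg:core0", 0 /* the leader */ };

	enqueue(&tg_cpu0, &task_a);
	enqueue(&tg_cpu1, &task_b);
	enqueue(&tg_core, &sd_se);	/* present only while tasks exist below */

	printf("%s: %d entity (an SD-SE); tasks stay in the per-CPU rqs\n",
	       tg_core.name, tg_core.nr);
	return 0;
}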


>> You currently have to explicitly set affinities of tasks within coscheduled
>> task groups, as load balancing is not implemented for them at this point.
> 
> You don't even begin to outline how you preserve smp-nice fairness.

Works as before (or will work as before): a coscheduled task group has its
own set of per-CPU runqueues that hold the tasks of this group (per CPU).
The load balancer will work on this subset of runqueues as it does on the
"normal" per-CPU runqueues -- smp-nice fairness and all.


>> D) What can I *not* do with this?
>> ---------------------------------
>>
>> Besides the missing load-balancing within coscheduled task-groups, this
>> implementation has the following properties, which might be considered
>> short-comings.
>>
>> This particular implementation focuses on SCHED_OTHER tasks managed by CFS
>> and allows coscheduling them. Interrupts as well as tasks in higher
>> scheduling classes are currently out-of-scope: they are assumed to be
>> negligible interruptions as far as coscheduling is concerned and they do
>> *not* cause a preemption of a whole group. This implementation could be
>> extended to cover higher scheduling classes. Interrupts, however, are an
>> orthogonal issue.
>>
>> The collective context switch from one coscheduled set of tasks to another
>> -- while fast -- is not atomic. If a use-case needs the absolute guarantee
>> that all tasks of the previous set have stopped executing before any task
>> of the next set starts executing, an additional hand-shake/barrier needs to
>> be added.
> 
> IOW it's completely friggin useless for L1TF.

Do you believe me now, that L1TF is not "the whole and only reason" I did this? :D


>> E) What's the overhead?
>> -----------------------
>>
>> Each (active) hierarchy level has roughly the same effect as one additional
>> level of nested cgroups. In addition -- at this stage -- there may be some
>> additional lock contention if you coschedule larger fractions of the system
>> with a dynamic task set.
> 
> Have you actually read your own code?
> 
> What about that atrocious locking you sprinkle all over the place?
> 'some additional lock contention' doesn't even begin to describe that
> horror show.

Currently, there are more code paths than I would like that climb up the se->parent
relation to the top. They need to go if we want to coschedule larger parts of
the system in a more efficient manner. Hence parts of my wish list further up.

That said, it is not as bad as you make it sound, for the following three reasons:

a) The number of CPUs that compete for a lock is currently governed by the
   "cosched_max_level" command line argument, making it a conscious decision to
   increase the overall overhead. Hence, coscheduling at, e.g., core level
   does not have too serious an impact on lock contention.

b) The runqueue locks are usually only taken by the leader of said runqueue.
   Hence, there is often only one user per lock, even at higher levels.
   The prominent exception at this stage of the patch set is that enqueue and
   dequeue operations walk up the hierarchy up to the "cosched_max_level".
   Even then, due to lock chaining, multiple enqueue/dequeue operations on
   different CPUs can bubble up the shared part of the hierarchy in parallel
   (see the sketch after this list).

c) The scheduling decision does not cause any lock contention by itself. Each
   CPU only accesses runqueues for which it is the leader. Hence, once you
   have a relatively stable situation, lock contention is not an issue.
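
To illustrate the lock chaining mentioned in (b), here is a minimal stand-alone
sketch with made-up names and plain pthread mutexes instead of runqueue locks
(so it only shows the pattern, not the patch-set code): while walking up, the
parent's lock is taken before the current level's lock is dropped, so two CPUs
can move up the shared part of the hierarchy in a pipelined fashion instead of
serializing on a single lock.

#include <pthread.h>
#include <stdio.h>

/* Made-up hierarchy level; in the patch set this would be a runqueue. */
struct toy_level {
	pthread_mutex_t lock;
	struct toy_level *parent;
	int nr_running;
};

static struct toy_level root  = { PTHREAD_MUTEX_INITIALIZER, NULL,  0 };
static struct toy_level core0 = { PTHREAD_MUTEX_INITIALIZER, &root, 0 };
static struct toy_level core1 = { PTHREAD_MUTEX_INITIALIZER, &root, 0 };

/* Enqueue at 'leaf' and propagate up to (and including) 'max_level'. */
void enqueue_chained(struct toy_level *leaf, struct toy_level *max_level)
{
	struct toy_level *lvl = leaf;

	pthread_mutex_lock(&lvl->lock);
	for (;;) {
		lvl->nr_running++;
		if (lvl == max_level || !lvl->parent)
			break;
		/* chain: take the parent's lock before dropping this one */
		pthread_mutex_lock(&lvl->parent->lock);
		pthread_mutex_unlock(&lvl->lock);
		lvl = lvl->parent;
	}
	pthread_mutex_unlock(&lvl->lock);
}

void *cpu_thread(void *arg)
{
	/* 'root' plays the role of the "cosched_max_level" boundary */
	enqueue_chained(arg, &root);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, cpu_thread, &core0);
	pthread_create(&b, NULL, cpu_thread, &core1);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	printf("root.nr_running = %d\n", root.nr_running);	/* prints 2 */
	return 0;
}

With a consistent child-to-parent lock order there is no deadlock, and at any
point in time contention is limited to adjacent levels of the hierarchy.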


> Hint: we're not going to increase the lockdep subclasses, and most
> certainly not for scheduler locking.

That's fine. Due to the overhead of nesting cgroups that you mentioned earlier,
that many levels in the runqueue hierarchy are likely to be impractical
anyway. For the future, I imagine a more dynamic variant of task groups/scheduling
domains that can provide all the flexibility one would want without such deep
nesting. At this stage, increasing the subclass count is just a way to experiment
with larger systems without having to disable lockdep.

Of course, if you have a suggestion for a different locking scheme, we can
discuss that as well. The current one is what I considered most suitable
among several alternatives, under the premise I was working under: integrate
coscheduling into an existing scheduler as an additional feature (instead of,
e.g., writing a scheduler capable of coscheduling from scratch). So, I probably
haven't considered all alternatives.


> All in all, I'm not inclined to consider this approach, it complicates
> an already overly complicated thing (cpu-cgroups) and has a ton of
> unresolved issues

Even if you're not inclined -- at this stage, if I may be so bold :) --
your feedback is valuable. Thank you for that.

Regards
Jan


References (for those that are into that kind of thing):

[1] D. Kim, S. S.-w. Liao, P. H. Wang, J. del Cuvillo, X. Tian, X. Zou,
    H. Wang, D. Yeung, M. Girkar, and J. P. Shen, “Physical experimentation
    with prefetching helper threads on Intel’s hyper-threaded processors,”
    in Proceedings of the International Symposium on Code Generation and
    Optimization (CGO ’04). Los Alamitos, CA, USA: IEEE Computer
    Society, Mar. 2004, pp. 27–38.

[2] C. Jung, D. Lim, J. Lee, and D. Solihin, “Helper thread prefetching for
    loosely-coupled multiprocessor systems,” in Parallel and Distributed
    Processing Symposium, 2006 (IPDPS 2006), 20th International, April 2006.

[3] C. G. Quiñones, C. Madriles, J. Sánchez, P. Marcuello, A. González,
    and D. M. Tullsen, “Mitosis compiler: An infrastructure for speculative
    threading based on pre-computation slices,” in Proceedings of the 2005
    ACM SIGPLAN Conference on Programming Language Design and
    Implementation, ser. PLDI ’05. New York, NY, USA: ACM, 2005, pp.
    269–279.

[4] J. Mars, L. Tang, and M. L. Soffa, “Directly characterizing cross
    core interference through contention synthesis,” in Proceedings of the
    6th International Conference on High Performance and Embedded
    Architectures and Compilers, ser. HiPEAC ’11. New York, NY, USA:
    ACM, 2011, pp. 167–176.

[5] Q. Zeng, D. Wu, and P. Liu, “Cruiser: Concurrent heap buffer overflow
    monitoring using lock-free data structures,” in Proceedings of the 32nd
    ACM SIGPLAN Conference on Programming Language Design and
    Implementation, ser. PLDI ’11. New York, NY, USA: ACM, 2011, pp.
    367–377.

[6] D. G. Feitelson and L. Rudolph, “Distributed hierarchical control for
    parallel processing,” Computer, vol. 23, no. 5, pp. 65–77, May 1990.

[7] J. Ousterhout, “Scheduling techniques for concurrent systems,” in
    Proceedings of the 3rd International Conference on Distributed Computing
    Systems (ICDCS ’82). Los Alamitos, CA, USA: IEEE Computer Society,
    Oct. 1982, pp. 22–30.

