From: Randy Dunlap <rdunlap@infradead.org>
To: "Joel Fernandes (Google)" <joel@joelfernandes.org>,
	Nishanth Aravamudan <naravamudan@digitalocean.com>,
	Julien Desfossez <jdesfossez@digitalocean.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Vineeth Pillai <viremana@linux.microsoft.com>,
	Aaron Lu <aaron.lwe@gmail.com>,
	Aubrey Li <aubrey.intel@gmail.com>,
	tglx@linutronix.de, linux-kernel@vger.kernel.org
Cc: mingo@kernel.org, torvalds@linux-foundation.org,
	fweisbec@gmail.com, keescook@chromium.org, kerrnel@google.com,
	Phil Auld <pauld@redhat.com>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	vineeth@bitbyteword.org, Chen Yu <yu.c.chen@intel.com>,
	Christian Brauner <christian.brauner@ubuntu.com>,
	Agata Gruza <agata.gruza@intel.com>,
	Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>,
	graf@amazon.com, konrad.wilk@oracle.com, dfaggioli@suse.com,
	pjt@google.com, rostedt@goodmis.org, derkling@google.com,
	benbjiang@tencent.com,
	Alexandre Chartre <alexandre.chartre@oracle.com>,
	James.Bottomley@hansenpartnership.com, OWeisse@umich.edu,
	Dhaval Giani <dhaval.giani@oracle.com>,
	Junaid Shahid <junaids@google.com>,
	jsbarnes@google.com, chris.hyser@oracle.com,
	Aubrey Li <aubrey.li@linux.intel.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Tim Chen <tim.c.chen@intel.com>
Subject: Re: [PATCH v8 -tip 25/26] Documentation: Add core scheduling documentation
Date: Mon, 19 Oct 2020 20:36:25 -0700
Message-ID: <e8fd7a2b-0439-719e-aef8-de789a564fbe@infradead.org>
In-Reply-To: <20201020014336.2076526-26-joel@joelfernandes.org>

Hi Joel,

On 10/19/20 6:43 PM, Joel Fernandes (Google) wrote:
> Document the usecases, design and interfaces for core scheduling.
> 
> Co-developed-by: Vineeth Pillai <viremana@linux.microsoft.com>
> Tested-by: Julien Desfossez <jdesfossez@digitalocean.com>
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> ---
>   .../admin-guide/hw-vuln/core-scheduling.rst   | 312 ++++++++++++++++++
>   Documentation/admin-guide/hw-vuln/index.rst   |   1 +
>   2 files changed, 313 insertions(+)
>   create mode 100644 Documentation/admin-guide/hw-vuln/core-scheduling.rst
> 
> diff --git a/Documentation/admin-guide/hw-vuln/core-scheduling.rst b/Documentation/admin-guide/hw-vuln/core-scheduling.rst
> new file mode 100644
> index 000000000000..eacafbb8fa3f
> --- /dev/null
> +++ b/Documentation/admin-guide/hw-vuln/core-scheduling.rst
> @@ -0,0 +1,312 @@
> +Core Scheduling
> +***************
> +Core scheduling support allows userspace to define groups of tasks that can
> +share a core. These groups can be specified either for security usecases (one
	group of tasks doesn't trust another), or for performance usecases (some
> +workloads may benefit from running on the same core as they don't need the same
> +hardware resources of the shared core).
> +
> +Security usecase
> +----------------
> +A cross-HT attack involves the attacker and victim running on different
> +Hyper Threads of the same core. MDS and L1TF are examples of such attacks.
> +Without core scheduling, the only full mitigation of cross-HT attacks is to
> +disable Hyper Threading (HT). Core scheduling allows HT to be turned on safely
> +by ensuring that trusted tasks can share a core. This increase in core sharing
> +can improve performance; however, it is not guaranteed that performance will
> +always improve, though that is seen to be the case with a number of real-world
> +workloads. In theory, core scheduling aims to perform at least as well as when
> +Hyper Threading is disabled. In practice, this is mostly the case though not
> +always: as synchronizing scheduling decisions across 2 or more CPUs in a core
> +involves additional overhead - especially when the system is lightly loaded
> +(``total_threads <= N/2``).

N is number of CPUs?

> +
> +Usage
> +-----
> +Core scheduling support is enabled via the ``CONFIG_SCHED_CORE`` config option.
> +Using this feature, userspace defines groups of tasks that trust each other.
> +The core scheduler uses this information to make sure that tasks that do not
> +trust each other will never run simultaneously on a core, while doing its best
> +to satisfy the system's scheduling requirements.
> +
> +There are 2 ways to use core-scheduling:
> +
> +CGroup
> +######
> +Core scheduling adds additional files to the CPU controller CGroup:
> +
> +* ``cpu.tag``
> +Writing ``1`` into this file results in all tasks in the group get tagged. This

                                                                   getting
or                                                                being

> +results in all the CGroup's tasks being allowed to run concurrently on a core's
> +hyperthreads (also called siblings).
> +
> +The file being a value of ``0`` means the tag state of the CGroup is inheritted

                                                                         inherited

> +from its parent hierarchy. If any ancestor of the CGroup is tagged, then the
> +group is tagged.
> +
> +.. note:: Once a CGroup is tagged via cpu.tag, it is not possible to set this
> +          for any descendant of the tagged group. For finer grained control, the
> +          ``cpu.tag_color`` file described next may be used.
> +
> +.. note:: When a CGroup is not tagged, all the tasks within the group can share
> +          a core with kernel threads and untagged system threads. For this reason,
> +          if a group has ``cpu.tag`` of 0, it is considered to be trusted.
> +
> +* ``cpu.tag_color``
> +For finer grained control over core sharing, a color can also be set in
> +addition to the tag. This allows further control of core sharing between child
> +CGroups within an already tagged CGroup. The color and the tag are both used to
> +generate a `cookie` which is used by the scheduler to identify the group.
> +
> +Upto 256 different colors can be set (0-255) by writing into this file.

   Up to

> +
> +A sample real-world usage of this file follows:
> +
> +Google uses DAC controls to make ``cpu.tag`` writeable only by root and the

$search tells me "writable".

> +``cpu.tag_color`` can be changed by anyone.
> +
> +The hierarchy looks like this::
> +
> +  Root group
> +     / \
> +    A   B    (These are created by the root daemon - borglet).
> +   / \   \
> +  C   D   E  (These are created by AppEngine within the container).
> +
> +A and B are containers for 2 different jobs or apps that are created by a root
> +daemon called borglet. borglet then tags each of these groups with the ``cpu.tag``
> +file. The job itself can create additional child CGroups which are colored by
> +the container's AppEngine with the ``cpu.tag_color`` file.
> +
> +The reason why Google uses this 2-level tagging system is that AppEngine wants to
> +allow a subset of child CGroups within a tagged parent CGroup to be co-scheduled on a
> +core while not being co-scheduled with other child CGroups. Think of these
> +child CGroups as belonging to the same customer or project.  Because these
> +child CGroups are created by AppEngine, they are not tracked by borglet (the
> +root daemon), therefore borglet won't have a chance to set a color for them.
> +That's where the ``cpu.tag_color`` file comes in. A color could be set by AppEngine,
> +and once set, the normal tasks within the subcgroup would not be able to
> +overwrite it. This is enforced by promoting the permission of the
> +``cpu.tag_color`` file in cgroupfs.
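
To make the two-level scheme concrete, a child group created by
AppEngine could be colored like this; a sketch only, where the group
paths and the color value are invented for the example and the parent
container "A" is assumed to be tagged already:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Color child group C inside the tagged container A. Tasks
         * in groups with the same tag and color share a cookie. */
        int fd = open("/sys/fs/cgroup/cpu/A/C/cpu.tag_color", O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, "42", 2) != 2)    /* any value 0-255 */
            perror("write");
        close(fd);
        return 0;
    }
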
> +
> +The color is an 8-bit value allowing for upto 256 unique colors.

                                             up to

> +
> +.. note:: Once a CGroup is colored, none of its descendants can be re-colored. Also
> +          coloring of a CGroup is possible only if either the group or one of its
> +          ancestors were tagged via the ``cpu.tag`` file.

                        was

> +
> +prctl interface
> +###############
> +A ``prctl(2)`` command ``PR_SCHED_CORE_SHARE`` is available to a process to request
> +sharing a core with another process.  For example, consider 2 processes ``P1``
> +and ``P2`` with PIDs 100 and 200. If process ``P1`` calls
> +``prctl(PR_SCHED_CORE_SHARE, 200)``, the kernel makes ``P1`` share a core with ``P2``.
> +The kernel performs ptrace access mode checks before granting the request.
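
A sketch of the call from P1's side, assuming the PR_SCHED_CORE_SHARE
constant from this series (the numeric value used below is a
placeholder, not the real one):

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/types.h>

    #ifndef PR_SCHED_CORE_SHARE
    #define PR_SCHED_CORE_SHARE 59      /* placeholder value */
    #endif

    int main(void)
    {
        pid_t p2 = 200;     /* PID of the task to share a core with */

        /* Ask the kernel to let the caller (P1) share a core with
         * P2; the kernel performs ptrace access mode checks first. */
        if (prctl(PR_SCHED_CORE_SHARE, p2, 0, 0, 0) == -1) {
            perror("prctl(PR_SCHED_CORE_SHARE)");
            return 1;
        }
        return 0;
    }
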
> +
> +.. note:: This operation is not commutative. P1 calling
> +          ``prctl(PR_SCHED_CORE_SHARE, pidof(P2))`` is not the same as P2 calling the
> +          same for P1. The former case is P1 joining P2's group of processes
> +          (which P2 would have joined with ``prctl(2)`` prior to P1's ``prctl(2)``).
> +
> +.. note:: The core-sharing granted with prctl(2) will be subject to
> +          core-sharing restrictions specified by the CGroup interface. For example
> +          if P1 and P2 are a part of 2 different tagged CGroups, then they will
> +          not share a core even if a prctl(2) call is made. This is analogous
> +          to how affinities are set using the cpuset interface.
> +
> +It is important to note that, on a ``CLONE_THREAD`` ``clone(2)`` syscall, the child
> +will be assigned the same tag as its parent and thus be allowed to share a core
> +with them. is design choice is because, for the security usecase, a

               ^^^ missing subject ...

> +``CLONE_THREAD`` child can access its parent's address space anyway, so there's
> +no point in not allowing them to share a core. If a different behavior is
> +desired, the child thread can call ``prctl(2)`` as needed.  This behavior is
> +specific to the ``prctl(2)`` interface. For the CGroup interface, the child of a
> +fork always share's a core with its parent's.  On the other hand, if a parent

                shares         with its parent's what?  "parent's" is possessive.


> +was previously tagged via ``prctl(2)`` and does a regular ``fork(2)`` syscall, the
> +child will receive a unique tag.
> +
> +Design/Implementation
> +---------------------
> +Each task that is tagged is assigned a cookie internally in the kernel. As
> +mentioned in `Usage`_, tasks with the same cookie value are assumed to trust
> +each other and share a core.
> +
> +The basic idea is that every schedule event tries to select tasks for all the
> +siblings of a core such that all the selected tasks running on a core are
> +trusted (same cookie) at any point in time. Kernel threads are assumed trusted.
> +The idle task is considered special, in that it trusts every thing.

                                         or                everything.

> +
> +During a ``schedule()`` event on any sibling of a core, the highest priority task for
> +that core is picked and assigned to the sibling calling ``schedule()`` if it has it

                                                            too many          it     it
They are not the same "it,", are they?

> +enqueued. For the rest of the siblings in the core, the highest priority task with
> +the same cookie is selected if there is one runnable in their individual run
> +queues. If a task with the same cookie is not available, the idle task is selected.
> +The idle task is globally trusted.
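
The selection rule can be modeled in a few lines of userspace C. A toy
model only: priorities and cookies are plain integers, and the special
handling of cookie 0 and of kernel threads is omitted:

    #include <stdio.h>

    struct task {
        int prio;       /* higher value = higher priority */
        long cookie;
    };

    /* Pick for one sibling: the highest-priority local task whose
     * cookie matches the core-wide highest-priority task, else idle
     * (prio -1), i.e. "forced idle". */
    static struct task pick_for_sibling(const struct task *rq, int n,
                                        struct task core_max)
    {
        struct task best = { .prio = -1, .cookie = core_max.cookie };

        for (int i = 0; i < n; i++)
            if (rq[i].cookie == core_max.cookie && rq[i].prio > best.prio)
                best = rq[i];
        return best;
    }

    int main(void)
    {
        struct task rq1[] = { { .prio = 5, .cookie = 1 } };
        struct task core_max = { .prio = 9, .cookie = 2 }; /* on HT2 */
        struct task pick = pick_for_sibling(rq1, 1, core_max);

        printf("HT1 picks prio %d%s\n", pick.prio,
               pick.prio < 0 ? " (forced idle)" : "");
        return 0;
    }
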
> +
> +Once a task has been selected for all the siblings in the core, an IPI is sent to
> +siblings for whom a new task was selected. Siblings on receiving the IPI, will

                                                                     no comma^

> +switch to the new task immediately. If an idle task is selected for a sibling,
> +then the sibling is considered to be in a `forced idle` state. i.e., it may

                                                                   I.e.,

> +have tasks on its own runqueue to run; however, it will still have to run idle.
> +More on this in the next section.
> +
> +Forced-idling of tasks
> +----------------------
> +The scheduler tries its best to find tasks that trust each other such that all
> +tasks selected to be scheduled are of the highest priority in a core.  However,
> +it is possible that some runqueues had tasks that were incompatibile with the

                                                           incompatible

> +highest priority ones in the core. Favoring security over fairness, one or more
> +siblings could be forced to select a lower priority task if the highest
> +priority task is not trusted with respect to the core wide highest priority
> +task.  If a sibling does not have a trusted task to run, it will be forced idle
> +by the scheduler(idle thread is scheduled to run).

           scheduler (idle

> +
> +When the highest priorty task is selected to run, a reschedule-IPI is sent to

                     priority

> +the sibling to force it into idle. This results in 4 cases which need to be
> +considered depending on whether a VM or a regular usermode process was running
> +on either HT::
> +
> +          HT1 (attack)            HT2 (victim)
> +   A      idle -> user space      user space -> idle
> +   B      idle -> user space      guest -> idle
> +   C      idle -> guest           user space -> idle
> +   D      idle -> guest           guest -> idle
> +
> +Note that for better performance, we do not wait for the destination CPU
> +(victim) to enter idle mode. This is because the sending of the IPI would bring
> +the destination CPU immediately into kernel mode from user space, or cause a VMEXIT
> +in the case of  guests. At best, this would only leak some scheduler metadata

   drop one space ^^

> +which may not be worth protecting. It is also possible that the IPI is received
> +too late on some architectures, but this has not been observed in the case of
> +x86.
> +
> +Kernel protection from untrusted tasks
> +--------------------------------------
> +The scheduler on its own cannot protect the kernel executing concurrently with
> +an untrusted task in a core. This is because the scheduler is unaware of
> +interrupts/syscalls at scheduling time. To mitigate this, an IPI is sent to
> +siblings on kernel entry (syscall and IRQ). This IPI forces the sibling to enter
> +kernel mode and wait before returning to user mode until all siblings of the
> +core have left kernel mode. This process is also known as stunning.  For good
> +performance, an IPI is sent to a sibling only if it is running a tagged
> +task. If a sibling is running a kernel thread or is idle, no IPI is sent.
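
Conceptually, the wait on kernel exit resembles the following toy
model (C11 atomics; this only illustrates the idea and is nothing like
the actual implementation):

    #include <stdatomic.h>

    /* Number of this core's siblings currently in kernel mode
     * (tracked per core in the real kernel). */
    static atomic_int siblings_in_kernel;

    static void kernel_enter(void)      /* syscall/IRQ entry */
    {
        atomic_fetch_add(&siblings_in_kernel, 1);
    }

    static void kernel_exit(void)       /* return to user */
    {
        atomic_fetch_sub(&siblings_in_kernel, 1);
        /* Stunning: don't resume user code while any sibling is
         * still executing in the kernel. */
        while (atomic_load(&siblings_in_kernel) > 0)
            ;   /* the real code waits, not busy-spins, here */
    }

    int main(void)
    {
        kernel_enter();
        kernel_exit();  /* returns at once: no other sibling inside */
        return 0;
    }
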
> +
> +The kernel protection feature can be turned off on the kernel command line by
> +passing ``sched_core_protect_kernel=0``.
> +
> +Other alternative ideas discussed for kernel protection are listed below just
> +for completeness. They all have limitations:
> +
> +1. Changing interrupt affinities to a trusted core which does not execute untrusted tasks
> +#########################################################################################
> +By changing the interrupt affinities to a designated safe-CPU which runs
> +only trusted tasks, IRQ data can be protected. One issue is that this involves
> +giving up a full CPU core of the system to run safe tasks. Another is that,
> +per-cpu interrupts such as the local timer interrupt cannot have their
> +affinity changed. also, sensitive timer callbacks such as the random entropy timer

                      Also,

> +can run in softirq on return from these interrupts and expose sensitive
> +data. In the future, that could be mitigated by forcing softirqs into threaded
> +mode by utilizing a mechanism similar to ``CONFIG_PREEMPT_RT``.
> +
> +Yet another issue with this is, for multiqueue devices with managed
> +interrupts, the IRQ affinities cannot be changed however it could be

                                             changed; however,

> +possible to force a reduced number of queues, which would in turn allow
> +shielding one or two CPUs from such interrupts and queue handling for the price
> +of indirection.
> +
> +2. Running IRQs as threaded-IRQs
> +################################
> +This would result in forcing IRQs into the scheduler which would then provide
> +the process-context mitigation. However, not all interrupts can be threaded.
> +Also, this does nothing about syscall entries.
> +
> +3. Kernel Address Space Isolation
> +#################################
> +System calls could run in a much more restricted address space which is
> +guarenteed not to leak any sensitive data. There are practical limitation in

    guaranteed                                                     limitations in


> +implementing this - the main concern being how to decide on an address space
> +that is guarenteed to not have any sensitive data.

            guaranteed

> +
> +4. Limited cookie-based protection
> +##################################
> +On a system call, change the cookie to the system trusted cookie and initiate a
> +schedule event. This would be better than pausing all the siblings during the
> +entire duration of the system call, but would still be a huge hit to
> +performance.
> +
> +Trust model
> +-----------
> +Core scheduling maintains trust relationships amongst groups of tasks by
> +assigning all tasks in a group the same cookie value.
> +When a system with core scheduling boots, all tasks are considered to trust
> +each other. This is because the core scheduler does not have information about
> +trust relationships until userspace uses the above-mentioned interfaces to
> +communicate them. In other words, all tasks have a default cookie value of 0
> +and are considered system-wide trusted. The stunning of siblings running
> +cookie-0 tasks is also avoided.
> +
> +Once userspace uses the above-mentioned interfaces to group sets of tasks, tasks
> +within such groups are considered to trust each other, but do not trust those
> +outside. Tasks outside the group also don't trust tasks within.
> +
> +Limitations
> +-----------
> +Core scheduling tries to guarentee that only trusted tasks run concurrently on a

                             guarantee

> +core. But there could be a small window of time during which untrusted tasks run
> +concurrently, or the kernel could be running concurrently with a task not trusted
> +by the kernel.
> +
> +1. IPI processing delays
> +########################
> +Core scheduling selects only trusted tasks to run together. An IPI is used to notify
> +the siblings to switch to the new task. But there could be hardware delays in
> +receiving the IPI on some architectures (on x86, this has not been observed). This may
> +cause an attacker task to start running on a cpu before its siblings receive the

                                                 CPU

> +IPI. Even though cache is flushed on entry to user mode, victim tasks on siblings
> +may populate data in the cache and micro acrhitectural buffers after the attacker

                                             architectural

> +starts to run, creating a possibility of data leakage.
> +
> +Open cross-HT issues that core scheduling does not solve
> +--------------------------------------------------------
> +1. For MDS
> +##########
> +Core scheduling cannot protect against MDS attacks between an HT running in
> +user mode and another running in kernel mode. Even though both HTs run tasks
> +which trust each other, kernel memory is still considered untrusted. Such
> +attacks are possible for any combination of sibling CPU modes (host or guest mode).
> +
> +2. For L1TF
> +###########
> +Core scheduling cannot protect against a L1TF guest attackers exploiting a

                                           an           attacker


> +guest or host victim. This is because the guest attacker can craft invalid
> +PTEs which are not inverted due to a vulnerable guest kernel. The only
> +solution is to disable EPT.

huh?  what is EPT?  where is it documented/discussed?

> +
> +For both MDS and L1TF, if the guest vCPUs are configured to not trust each
> +other (by tagging separately), then the guest-to-guest attacks would go away.
> +Or it could be a system admin policy which considers guest-to-guest attacks
> +a guest problem.
> +
> +Another approach to resolve these would be to make every untrusted task on the
> +system not trust every other untrusted task. While this could reduce
> +parallelism of the untrusted tasks, it would still solve the above issues while
> +allowing system processes (trusted tasks) to share a core.
> +
> +Use cases
> +---------
> +The main use case for core scheduling is mitigating the cross-HT vulnerabilities
> +with SMT enabled. There are other use cases where this feature could be used:
> +
> +- Isolating tasks that need a whole core: Examples include realtime tasks, tasks
> +  that use SIMD instructions, etc.
> +- Gang scheduling: Requirements for a group of tasks that needs to be scheduled
> +  together could also be realized using core scheduling. One example is vcpus of

                                                                            vCPUs

> +  a VM.
> +
> +Future work
> +-----------
> +Skipping per-HT mitigations if task is trusted
> +##############################################
> +If core scheduling is enabled, by default all tasks trust each other as
> +mentioned above. In such a scenario, it may be desirable to skip the same-HT
> +mitigations on return to the trusted user-mode to improve performance.

> diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
> index 21710f8609fe..361ccbbd9e54 100644
> --- a/Documentation/admin-guide/hw-vuln/index.rst
> +++ b/Documentation/admin-guide/hw-vuln/index.rst
> @@ -16,3 +16,4 @@ are configurable at compile, boot or run time.
>      multihit.rst
>      special-register-buffer-data-sampling.rst
>      l1d_flush.rst
> +   core-scheduling.rst

Might be an indentation problem there?  Can't tell for sure.


thanks.

