From: "Li, Aubrey" <aubrey.li@linux.intel.com>
To: "Ning, Hongyu" <hongyu.ning@linux.intel.com>,
	"Joel Fernandes (Google)" <joel@joelfernandes.org>,
	Nishanth Aravamudan <naravamudan@digitalocean.com>,
	Julien Desfossez <jdesfossez@digitalocean.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Vineeth Pillai <viremana@linux.microsoft.com>,
	Aaron Lu <aaron.lwe@gmail.com>,
	Aubrey Li <aubrey.intel@gmail.com>,
	tglx@linutronix.de, linux-kernel@vger.kernel.org
Cc: mingo@kernel.org, torvalds@linux-foundation.org,
	fweisbec@gmail.com, keescook@chromium.org, kerrnel@google.com,
	Phil Auld <pauld@redhat.com>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	vineeth@bitbyteword.org, Chen Yu <yu.c.chen@intel.com>,
	Christian Brauner <christian.brauner@ubuntu.com>,
	Agata Gruza <agata.gruza@intel.com>,
	Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>,
	graf@amazon.com, konrad.wilk@oracle.com, dfaggioli@suse.com,
	pjt@google.com, rostedt@goodmis.org, derkling@google.com,
	benbjiang@tencent.com,
	Alexandre Chartre <alexandre.chartre@oracle.com>,
	James.Bottomley@hansenpartnership.com, OWeisse@umich.edu,
	Dhaval Giani <dhaval.giani@oracle.com>,
	Junaid Shahid <junaids@google.com>,
	jsbarnes@google.com, chris.hyser@oracle.com,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Tim Chen <tim.c.chen@intel.com>
Subject: Re: [PATCH v8 -tip 00/26] Core scheduling
Date: Fri, 6 Nov 2020 10:58:58 +0800
Message-ID: <bf2ee997-1f53-0eef-40ad-1e98274da587@linux.intel.com>
In-Reply-To: <f7fc588b-12cf-95a8-6142-e4d112fb1689@linux.intel.com>

On 2020/10/30 21:26, Ning, Hongyu wrote:
> On 2020/10/20 9:43, Joel Fernandes (Google) wrote:
>> Eighth iteration of the Core-Scheduling feature.
>>
>> Core scheduling is a feature that allows only trusted tasks to run
>> concurrently on CPUs sharing compute resources (e.g., hyperthreads on
>> a core). The goal is to mitigate core-level side-channel attacks
>> without requiring SMT to be disabled (which has a significant impact
>> on performance in some situations). Core scheduling (as of v7)
>> mitigates user-space-to-user-space attacks and user-to-kernel attacks
>> when one of the siblings enters the kernel via an interrupt or system
>> call.
>>
>> By default, the feature doesn't change any of the current scheduler
>> behavior. The user decides which tasks can run simultaneously on the
>> same core (for now by having them in the same tagged cgroup). When a tag
>> is enabled in a cgroup and a task from that cgroup is running on a
>> hardware thread, the scheduler ensures that only idle or trusted tasks
>> run on the other sibling(s). Besides security concerns, this feature can
>> also be beneficial for RT and performance applications where we want to
>> control how tasks make use of SMT dynamically.
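As a config sketch of the cgroup tagging flow described above (the tag file name and cgroup path below are assumptions for illustration; the documentation patch in this series defines the actual interface):

```shell
# Sketch only: "cpu.core_tag" and the cgroup v1 path are placeholders,
# not necessarily the series' real file names.
mkdir -p /sys/fs/cgroup/cpu/trusted
echo 1 > /sys/fs/cgroup/cpu/trusted/cpu.core_tag   # tag the group
echo "$PID" > /sys/fs/cgroup/cpu/trusted/tasks     # members now share a cookie
```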
>>
>> This iteration focuses on the following:
>> - Redesigned API.
>> - Rework of Kernel Protection feature based on Thomas's entry work.
>> - Rework of hotplug fixes.
>> - Address review comments in v7
>>
>> Joel: Both a CGroup and Per-task interface via prctl(2) are provided for
>> configuring core sharing. More details are provided in documentation patch.
>> Kselftests are provided to verify the correctness/rules of the interface.
>>
>> Julien: TPCC tests showed improvements with core-scheduling. With kernel
>> protection enabled, it does not show any regression. Possibly ASI will improve
>> the performance for those who choose kernel protection (can be toggled through
>> sched_core_protect_kernel sysctl). Results:
>> v8				average		stdev		diff
>> baseline (SMT on)		1197.272	44.78312824	
>> core sched (   kernel protect)	412.9895	45.42734343	-65.51%
>> core sched (no kernel protect)	686.6515	71.77756931	-42.65%
>> nosmt				408.667		39.39042872	-65.87%
>>
>> v8 is rebased on tip/master.
>>
>> Future work
>> ===========
>> - Load balancing/Migration fixes for core scheduling.
>>   With v6, load balancing is partially coresched-aware, but has some
>>   issues w.r.t. process/taskgroup weights:
>>   https://lwn.net/ml/linux-kernel/20200225034438.GA617271@z...
>> - Core scheduling test framework: kselftests, torture tests etc
>>
>> Changes in v8
>> =============
>> - New interface/API implementation
>>   - Joel
>> - Revised kernel protection patch
>>   - Joel
>> - Revised Hotplug fixes
>>   - Joel
>> - Minor bug fixes and address review comments
>>   - Vineeth
>>
> 
>> create mode 100644 tools/testing/selftests/sched/config
>> create mode 100644 tools/testing/selftests/sched/test_coresched.c
>>
> 
> Adding test results for 4 workloads with Core Scheduling v8:
> 
> - kernel under test: coresched community v8 from https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git/log/?h=coresched-v5.9
> - workloads: 
> 	-- A. sysbench cpu (192 threads) + sysbench mysql (192 threads, mysqld forced into the same cgroup)
> 	-- B. sysbench cpu (192 threads) + sysbench cpu (192 threads)
> 	-- C. uperf netperf.xml (192 threads over TCP or UDP protocol separately)
> 	-- D. will-it-scale context_switch via pipe (192 threads)
> - test machine setup: 
> 	CPU(s):              192
> 	On-line CPU(s) list: 0-191
> 	Thread(s) per core:  2
> 	Core(s) per socket:  48
> 	Socket(s):           2
> 	NUMA node(s):        4
> - test results:
> 	-- workload A, no obvious performance drop in cs_on:
> 	+----------------------+------+----------------------+------------------------+
> 	|                      | **   | sysbench cpu * 192   | sysbench mysql * 192   |
> 	+======================+======+======================+========================+
> 	| cgroup               | **   | cg_sysbench_cpu_0    | cg_sysbench_mysql_0    |
> 	+----------------------+------+----------------------+------------------------+
> 	| record_item          | **   | Tput_avg (events/s)  | Tput_avg (events/s)    |
> 	+----------------------+------+----------------------+------------------------+
> 	| coresched_normalized | **   | 1.01                 | 0.87                   |
> 	+----------------------+------+----------------------+------------------------+
> 	| default_normalized   | **   | 1                    | 1                      |
> 	+----------------------+------+----------------------+------------------------+
> 	| smtoff_normalized    | **   | 0.59                 | 0.82                   |
> 	+----------------------+------+----------------------+------------------------+
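For reference, the *_normalized rows appear to be each configuration's throughput divided by the default baseline. A minimal sketch of that arithmetic (the numbers below are made up for illustration, not the raw data behind the tables):

```python
# Sketch of how the normalized rows are derived (assumption: each
# *_normalized value is that configuration's throughput divided by
# the default/baseline throughput, rounded to two decimals).
def normalize(tput: float, default_tput: float) -> float:
    return round(tput / default_tput, 2)

# Illustrative numbers only:
default = 1000.0    # hypothetical baseline events/s
coresched = 1010.0
smtoff = 590.0

print(normalize(coresched, default))  # -> 1.01
print(normalize(smtoff, default))     # -> 0.59
```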
> 
> 	-- workload B, no obvious performance drop in cs_on:
> 	+----------------------+------+----------------------+------------------------+
> 	|                      | **   | sysbench cpu * 192   | sysbench cpu * 192     |
> 	+======================+======+======================+========================+
> 	| cgroup               | **   | cg_sysbench_cpu_0    | cg_sysbench_cpu_1      |
> 	+----------------------+------+----------------------+------------------------+
> 	| record_item          | **   | Tput_avg (events/s)  | Tput_avg (events/s)    |
> 	+----------------------+------+----------------------+------------------------+
> 	| coresched_normalized | **   | 1.01                 | 0.98                   |
> 	+----------------------+------+----------------------+------------------------+
> 	| default_normalized   | **   | 1                    | 1                      |
> 	+----------------------+------+----------------------+------------------------+
> 	| smtoff_normalized    | **   | 0.6                  | 0.6                    |
> 	+----------------------+------+----------------------+------------------------+
> 
> 	-- workload C, known performance drop in cs_on since Core Scheduling v6:
> 	+----------------------+------+---------------------------+---------------------------+
> 	|                      | **   | uperf netperf TCP * 192   | uperf netperf UDP * 192   |
> 	+======================+======+===========================+===========================+
> 	| cgroup               | **   | cg_uperf                  | cg_uperf                  |
> 	+----------------------+------+---------------------------+---------------------------+
> 	| record_item          | **   | Tput_avg (Gb/s)           | Tput_avg (Gb/s)           |
> 	+----------------------+------+---------------------------+---------------------------+
> 	| coresched_normalized | **   | 0.46                      | 0.48                      |
> 	+----------------------+------+---------------------------+---------------------------+
> 	| default_normalized   | **   | 1                         | 1                         |
> 	+----------------------+------+---------------------------+---------------------------+
> 	| smtoff_normalized    | **   | 0.82                      | 0.79                      |
> 	+----------------------+------+---------------------------+---------------------------+

It is known that when coresched is on, uperf offloads softirq service to
ksoftirqd, and the cookie of ksoftirqd differs from the cookie of uperf.
As a result, ksoftirqd could previously run concurrently with uperf, but
now it cannot.

> 
> 	-- workload D, newly added syscall workload, performance drop in cs_on:
> 	+----------------------+------+-------------------------------+
> 	|                      | **   | will-it-scale  * 192          |
> 	|                      |      | (pipe based context_switch)   |
> 	+======================+======+===============================+
> 	| cgroup               | **   | cg_will-it-scale              |
> 	+----------------------+------+-------------------------------+
> 	| record_item          | **   | threads_avg                   |
> 	+----------------------+------+-------------------------------+
> 	| coresched_normalized | **   | 0.2                           |
> 	+----------------------+------+-------------------------------+
> 	| default_normalized   | **   | 1                             |
> 	+----------------------+------+-------------------------------+
> 	| smtoff_normalized    | **   | 0.89                          |
> 	+----------------------+------+-------------------------------+

will-it-scale may be a very extreme case. The story here is:
- On one sibling, a reader/writer gets blocked and tries to schedule another reader/writer in.
- The other sibling tries to wake up a reader/writer.

Both CPUs are acquiring rq->__lock.

So when coresched is off, these are two different locks; lock stat (1-second delta) below:

class name    con-bounces    contentions   waittime-min   waittime-max waittime-total   waittime-avg    acq-bounces   acquisitions   holdtime-min   holdtime-max holdtime-total   holdtime-avg
&rq->__lock:          210            210           0.10           3.04         180.87           0.86            797       79165021           0.03          20.69    60650198.34           0.77

But when coresched is on, they are actually the same lock; lock stat (1-second delta) below:

class name    con-bounces    contentions   waittime-min   waittime-max waittime-total   waittime-avg    acq-bounces   acquisitions   holdtime-min   holdtime-max holdtime-total   holdtime-avg
&rq->__lock:      6479459        6484857           0.05         216.46    60829776.85           9.38        8346319       15399739           0.03          95.56    81119515.38           5.27

This property of core scheduling may degrade the performance of similar workloads that context-switch frequently.

Any thoughts?

Thanks,
-Aubrey

