linux-kernel.vger.kernel.org archive mirror
From: Julien Desfossez <jdesfossez@digitalocean.com>
To: Peter Zijlstra <peterz@infradead.org>,
	mingo@kernel.org, tglx@linutronix.de, pjt@google.com,
	tim.c.chen@linux.intel.com, torvalds@linux-foundation.org
Cc: Julien Desfossez <jdesfossez@digitalocean.com>,
	linux-kernel@vger.kernel.org, subhra.mazumdar@oracle.com,
	fweisbec@gmail.com, keescook@chromium.org, kerrnel@google.com,
	Vineeth Pillai <vpillai@digitalocean.com>,
	Nishanth Aravamudan <naravamudan@digitalocean.com>
Subject: Re: [RFC][PATCH 00/16] sched: Core scheduling
Date: Thu, 14 Mar 2019 11:28:31 -0400	[thread overview]
Message-ID: <1552577311-8218-1-git-send-email-jdesfossez@digitalocean.com> (raw)
In-Reply-To: <20190218165620.383905466@infradead.org>

On 2/18/19 8:56 AM, Peter Zijlstra wrote:
> A much 'demanded' feature: core-scheduling :-(
>
> I still hate it with a passion, and that is part of why it took a little
> longer than 'promised'.
>
> While this one doesn't have all the 'features' of the previous (never
> published) version and isn't L1TF 'complete', I tend to like the structure
> better (relatively speaking: I hate it slightly less).
>
> This one is sched class agnostic and therefore, in principle, doesn't horribly
> wreck RT (in fact, RT could 'ab'use this by setting 'task->core_cookie = task'
> to force-idle siblings).
>
> Now, as hinted by that, there are semi sane reasons for actually having this.
> Various hardware features like Intel RDT - Memory Bandwidth Allocation, work
> per core (due to SMT fundamentally sharing caches) and therefore grouping
> related tasks on a core makes it more reliable.
>
> However; whichever way around you turn this cookie; it is expensive and nasty.

We are seeing this hard lockup within 1 hour of testing the patchset with 2
VMs using the core scheduling feature. The full dmesg is below; we also have
the kdump if more information is needed.
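(For context, we enabled the feature by tagging both VM cgroups through the
cgroup tagging interface from this series; a rough sketch, where the cgroup
paths are illustrative and the knob name follows the "quick and dirty cgroup
tagging interface" patch:)

    # tag each VM's cgroup so its vCPU threads share a core-wide cookie;
    # tasks from differently-tagged groups should then not share an SMT core
    echo 1 > /sys/fs/cgroup/cpu/vm1/cpu.tag
    echo 1 > /sys/fs/cgroup/cpu/vm2/cpu.tag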

[ 1989.647539] core sched enabled
[ 3353.211527] NMI: IOCK error (debug interrupt?) for reason 75 on CPU 0.
[ 3353.211528] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Not tainted 5.0-0.coresched-generic #1
[ 3353.211530] RIP: 0010:native_queued_spin_lock_slowpath+0x199/0x1e0
[ 3353.211532] Code: eb e8 c1 ee 12 83 e0 03 83 ee 01 48 c1 e0 05 48 63 f6 48 05 00 3a 02 00 48 03 04 f5 20 48 bb a6 48 89 10 8b 42 08 85 c0 75 09 <f3> 90 8b 42 08 85 c0 74 f7 48 8b 32 48 85 f6 74 8e 0f 18 0e eb 8f
[ 3353.211533] RSP: 0018:ffff97ba3f603e18 EFLAGS: 00000046
[ 3353.211535] RAX: 0000000000000000 RBX: 0000000000000202 RCX: 0000000000040000
[ 3353.211535] RDX: ffff97ba3f623a00 RSI: 0000000000000007 RDI: ffff97dabf822d40
[ 3353.211536] RBP: ffff97ba3f603e18 R08: 0000000000040000 R09: 0000000000018499
[ 3353.211537] R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000001
[ 3353.211538] R13: ffffffffa7340740 R14: 000000000000000c R15: 000000000000000c
[ 3353.211539] FS:  0000000000000000(0000) GS:ffff97ba3f600000(0000) knlGS:0000000000000000
[ 3353.211544] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3353.211545] CR2: 00007efeac310004 CR3: 0000001bf4c0e002 CR4: 00000000001626f0
[ 3353.211546] Call Trace:
[ 3353.211546]  <IRQ>
[ 3353.211547]  _raw_spin_lock_irqsave+0x35/0x40
[ 3353.211548]  update_blocked_averages+0x35/0x5d0
[ 3353.211549]  ? rebalance_domains+0x180/0x2c0
[ 3353.211549]  update_nohz_stats+0x48/0x60
[ 3353.211550]  _nohz_idle_balance+0xdf/0x290
[ 3353.211551]  run_rebalance_domains+0x97/0xa0
[ 3353.211551]  __do_softirq+0xe4/0x2f3
[ 3353.211552]  irq_exit+0xb6/0xc0
[ 3353.211553]  scheduler_ipi+0xe4/0x130
[ 3353.211553]  smp_reschedule_interrupt+0x39/0xe0
[ 3353.211554]  reschedule_interrupt+0xf/0x20
[ 3353.211555]  </IRQ>
[ 3353.211556] RIP: 0010:cpuidle_enter_state+0xbc/0x440
[ 3353.211557] Code: ff e8 d8 dd 86 ff 80 7d d3 00 74 17 9c 58 0f 1f 44 00 00 f6 c4 02 0f 85 54 03 00 00 31 ff e8 eb 1d 8d ff fb 66 0f 1f 44 00 00 <45> 85 f6 0f 88 1a 03 00 00 4c 2b 6d c8 48 ba cf f7 53 e3 a5 9b c4
[ 3353.211558] RSP: 0018:ffffffffa6e03df8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff02
[ 3353.211560] RAX: ffff97ba3f622d40 RBX: ffffffffa6f545e0 RCX: 000000000000001f
[ 3353.211561] RDX: 0000024c9b7d936c RSI: 0000000047318912 RDI: 0000000000000000
[ 3353.211562] RBP: ffffffffa6e03e38 R08: 0000000000000002 R09: 0000000000022600
[ 3353.211562] R10: ffffffffa6e03dc8 R11: 00000000000002dc R12: ffffd6c67f602968
[ 3353.211563] R13: 0000024c9b7d936c R14: 0000000000000004 R15: ffffffffa6f54760
[ 3353.211564]  ? cpuidle_enter_state+0x98/0x440
[ 3353.211565]  cpuidle_enter+0x17/0x20
[ 3353.211565]  call_cpuidle+0x23/0x40
[ 3353.211566]  do_idle+0x204/0x280
[ 3353.211567]  cpu_startup_entry+0x1d/0x20
[ 3353.211567]  rest_init+0xae/0xb0
[ 3353.211568]  arch_call_rest_init+0xe/0x1b
[ 3353.211569]  start_kernel+0x4f5/0x516
[ 3353.211569]  x86_64_start_reservations+0x24/0x26
[ 3353.211570]  x86_64_start_kernel+0x74/0x77
[ 3353.211571]  secondary_startup_64+0xa4/0xb0
[ 3353.211571] Kernel panic - not syncing: NMI IOCK error: Not continuing
[ 3353.211572] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Not tainted 5.0-0.coresched-generic #1
[ 3353.211574] Call Trace:
[ 3353.211575]  <NMI>
[ 3353.211575]  dump_stack+0x63/0x85
[ 3353.211576]  panic+0xfe/0x2a4
[ 3353.211576]  nmi_panic+0x39/0x40
[ 3353.211577]  io_check_error+0x92/0xa0
[ 3353.211578]  default_do_nmi+0x9e/0x110
[ 3353.211578]  do_nmi+0x119/0x180
[ 3353.211579]  end_repeat_nmi+0x16/0x50
[ 3353.211580] RIP: 0010:native_queued_spin_lock_slowpath+0x199/0x1e0
[ 3353.211581] Code: eb e8 c1 ee 12 83 e0 03 83 ee 01 48 c1 e0 05 48 63 f6 48 05 00 3a 02 00 48 03 04 f5 20 48 bb a6 48 89 10 8b 42 08 85 c0 75 09 <f3> 90 8b 42 08 85 c0 74 f7 48 8b 32 48 85 f6 74 8e 0f 18 0e eb 8f
[ 3353.211582] RSP: 0018:ffff97ba3f603e18 EFLAGS: 00000046
[ 3353.211583] RAX: 0000000000000000 RBX: 0000000000000202 RCX: 0000000000040000
[ 3353.211584] RDX: ffff97ba3f623a00 RSI: 0000000000000007 RDI: ffff97dabf822d40
[ 3353.211585] RBP: ffff97ba3f603e18 R08: 0000000000040000 R09: 0000000000018499
[ 3353.211586] R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000001
[ 3353.211587] R13: ffffffffa7340740 R14: 000000000000000c R15: 000000000000000c
[ 3353.211587]  ? native_queued_spin_lock_slowpath+0x199/0x1e0
[ 3353.211588]  ? native_queued_spin_lock_slowpath+0x199/0x1e0
[ 3353.211589]  </NMI>
[ 3353.211589]  <IRQ>
[ 3353.211590]  _raw_spin_lock_irqsave+0x35/0x40
[ 3353.211591]  update_blocked_averages+0x35/0x5d0
[ 3353.211591]  ? rebalance_domains+0x180/0x2c0
[ 3353.211592]  update_nohz_stats+0x48/0x60
[ 3353.211593]  _nohz_idle_balance+0xdf/0x290
[ 3353.211593]  run_rebalance_domains+0x97/0xa0
[ 3353.211594]  __do_softirq+0xe4/0x2f3
[ 3353.211595]  irq_exit+0xb6/0xc0
[ 3353.211595]  scheduler_ipi+0xe4/0x130
[ 3353.211596]  smp_reschedule_interrupt+0x39/0xe0
[ 3353.211597]  reschedule_interrupt+0xf/0x20
[ 3353.211597]  </IRQ>
[ 3353.211598] RIP: 0010:cpuidle_enter_state+0xbc/0x440
[ 3353.211599] Code: ff e8 d8 dd 86 ff 80 7d d3 00 74 17 9c 58 0f 1f 44 00 00 f6 c4 02 0f 85 54 03 00 00 31 ff e8 eb 1d 8d ff fb 66 0f 1f 44 00 00 <45> 85 f6 0f 88 1a 03 00 00 4c 2b 6d c8 48 ba cf f7 53 e3 a5 9b c4
[ 3353.211600] RSP: 0018:ffffffffa6e03df8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff02
[ 3353.211602] RAX: ffff97ba3f622d40 RBX: ffffffffa6f545e0 RCX: 000000000000001f
[ 3353.211603] RDX: 0000024c9b7d936c RSI: 0000000047318912 RDI: 0000000000000000
[ 3353.211603] RBP: ffffffffa6e03e38 R08: 0000000000000002 R09: 0000000000022600
[ 3353.211604] R10: ffffffffa6e03dc8 R11: 00000000000002dc R12: ffffd6c67f602968
[ 3353.211605] R13: 0000024c9b7d936c R14: 0000000000000004 R15: ffffffffa6f54760
[ 3353.211606]  ? cpuidle_enter_state+0x98/0x440
[ 3353.211607]  cpuidle_enter+0x17/0x20
[ 3353.211607]  call_cpuidle+0x23/0x40
[ 3353.211608]  do_idle+0x204/0x280
[ 3353.211609]  cpu_startup_entry+0x1d/0x20
[ 3353.211609]  rest_init+0xae/0xb0
[ 3353.211610]  arch_call_rest_init+0xe/0x1b
[ 3353.211611]  start_kernel+0x4f5/0x516
[ 3353.211611]  x86_64_start_reservations+0x24/0x26
[ 3353.211612]  x86_64_start_kernel+0x74/0x77
[ 3353.211613]  secondary_startup_64+0xa4/0xb0


Thread overview: 99+ messages
2019-02-18 16:56 [RFC][PATCH 00/16] sched: Core scheduling Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 01/16] stop_machine: Fix stop_cpus_in_progress ordering Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 02/16] sched: Fix kerneldoc comment for ia64_set_curr_task Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 03/16] sched: Wrap rq::lock access Peter Zijlstra
2019-02-19 16:13   ` Phil Auld
2019-02-19 16:22     ` Peter Zijlstra
2019-02-19 16:37       ` Phil Auld
2019-03-18 15:41   ` Julien Desfossez
2019-03-20  2:29     ` Subhra Mazumdar
2019-03-21 21:20       ` Julien Desfossez
2019-03-22 13:34         ` Peter Zijlstra
2019-03-22 20:59           ` Julien Desfossez
2019-03-23  0:06         ` Subhra Mazumdar
2019-03-27  1:02           ` Subhra Mazumdar
2019-03-29 13:35           ` Julien Desfossez
2019-03-29 22:23             ` Subhra Mazumdar
2019-04-01 21:35               ` Subhra Mazumdar
2019-04-03 20:16                 ` Julien Desfossez
2019-04-05  1:30                   ` Subhra Mazumdar
2019-04-02  7:42               ` Peter Zijlstra
2019-03-22 23:28       ` Tim Chen
2019-03-22 23:44         ` Tim Chen
2019-02-18 16:56 ` [RFC][PATCH 04/16] sched/{rt,deadline}: Fix set_next_task vs pick_next_task Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 05/16] sched: Add task_struct pointer to sched_class::set_curr_task Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 06/16] sched/fair: Export newidle_balance() Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 07/16] sched: Allow put_prev_task() to drop rq->lock Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 08/16] sched: Rework pick_next_task() slow-path Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 09/16] sched: Introduce sched_class::pick_task() Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 10/16] sched: Core-wide rq->lock Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 11/16] sched: Basic tracking of matching tasks Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 12/16] sched: A quick and dirty cgroup tagging interface Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 13/16] sched: Add core wide task selection and scheduling Peter Zijlstra
     [not found]   ` <20190402064612.GA46500@aaronlu>
2019-04-02  8:28     ` Peter Zijlstra
2019-04-02 13:20       ` Aaron Lu
2019-04-05 14:55       ` Aaron Lu
2019-04-09 18:09         ` Tim Chen
2019-04-10  4:36           ` Aaron Lu
2019-04-10 14:18             ` Aubrey Li
2019-04-11  2:11               ` Aaron Lu
2019-04-10 14:44             ` Peter Zijlstra
2019-04-11  3:05               ` Aaron Lu
2019-04-11  9:19                 ` Peter Zijlstra
2019-04-10  8:06           ` Peter Zijlstra
2019-04-10 19:58             ` Vineeth Remanan Pillai
2019-04-15 16:59             ` Julien Desfossez
2019-04-16 13:43       ` Aaron Lu
2019-04-09 18:38   ` Julien Desfossez
2019-04-10 15:01     ` Peter Zijlstra
2019-04-11  0:11     ` Subhra Mazumdar
2019-04-19  8:40       ` Ingo Molnar
2019-04-19 23:16         ` Subhra Mazumdar
2019-02-18 16:56 ` [RFC][PATCH 14/16] sched/fair: Add a few assertions Peter Zijlstra
2019-02-18 16:56 ` [RFC][PATCH 15/16] sched: Trivial forced-newidle balancer Peter Zijlstra
2019-02-21 16:19   ` Valentin Schneider
2019-02-21 16:41     ` Peter Zijlstra
2019-02-21 16:47       ` Peter Zijlstra
2019-02-21 18:28         ` Valentin Schneider
2019-04-04  8:31       ` Aubrey Li
2019-04-06  1:36         ` Aubrey Li
2019-02-18 16:56 ` [RFC][PATCH 16/16] sched: Debug bits Peter Zijlstra
2019-02-18 17:49 ` [RFC][PATCH 00/16] sched: Core scheduling Linus Torvalds
2019-02-18 20:40   ` Peter Zijlstra
2019-02-19  0:29     ` Linus Torvalds
2019-02-19 15:15       ` Ingo Molnar
2019-02-22 12:17     ` Paolo Bonzini
2019-02-22 14:20       ` Peter Zijlstra
2019-02-22 19:26         ` Tim Chen
2019-02-26  8:26           ` Aubrey Li
2019-02-27  7:54             ` Aubrey Li
2019-02-21  2:53   ` Subhra Mazumdar
2019-02-21 14:03     ` Peter Zijlstra
2019-02-21 18:44       ` Subhra Mazumdar
2019-02-22  0:34       ` Subhra Mazumdar
2019-02-22 12:45   ` Mel Gorman
2019-02-22 16:10     ` Mel Gorman
2019-03-08 19:44     ` Subhra Mazumdar
2019-03-11  4:23       ` Aubrey Li
2019-03-11 18:34         ` Subhra Mazumdar
2019-03-11 23:33           ` Subhra Mazumdar
2019-03-12  0:20             ` Greg Kerr
2019-03-12  0:47               ` Subhra Mazumdar
2019-03-12  7:33               ` Aaron Lu
2019-03-12  7:45             ` Aubrey Li
2019-03-13  5:55               ` Aubrey Li
2019-03-14  0:35                 ` Tim Chen
2019-03-14  5:30                   ` Aubrey Li
2019-03-14  6:07                     ` Li, Aubrey
2019-03-18  6:56             ` Aubrey Li
2019-03-12 19:07           ` Pawan Gupta
2019-03-26  7:32       ` Aaron Lu
2019-03-26  7:56         ` Aaron Lu
2019-02-19 22:07 ` Greg Kerr
2019-02-20  9:42   ` Peter Zijlstra
2019-02-20 18:33     ` Greg Kerr
2019-02-22 14:10       ` Peter Zijlstra
2019-03-07 22:06         ` Paolo Bonzini
2019-02-20 18:43     ` Subhra Mazumdar
2019-03-01  2:54 ` Subhra Mazumdar
2019-03-14 15:28 ` Julien Desfossez [this message]
