linux-kernel.vger.kernel.org archive mirror
* 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
@ 2020-02-28  7:54 Christian Borntraeger
  2020-02-28 12:04 ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-02-28  7:54 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, linux-kernel

Peter,

it seems that your new assert did trigger for me:

The system was running fine for 4 hours and then this happened.
Unfortunately I have no idea if this reproduces and if so how.

[15260.753944] ------------[ cut here ]------------
[15260.753949] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
[15260.753959] WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
[15260.753961] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs s390_trng mlx5_ib ghash_s390 prng ib_uverbs aes_s390 ib_core des_s390 libdes sha3_512_s390 genwqe_card vfio_ccw vfio_mdev sha3_256_s390 mdev crc_itu_t sha512_s390 vfio_iommu_type1 sha1_s390 vfio eadm_sch zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
[15260.754002] CPU: 48 PID: 17435 Comm: cc1 Not tainted 5.6.0-rc3+ #24
[15260.754004] Hardware name: IBM 3906 M04 704 (LPAR)
[15260.754005] Krnl PSW : 0404c00180000000 0000000942282e3c (enqueue_task_fair+0x32c/0x440)
[15260.754008]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
[15260.754010] Krnl GPRS: 00000000000003e0 0000001fbd60ee00 000000000000002d 00000009435347c2
[15260.754012]            000000000000002c 00000009428ec950 0000000900000000 0000000000000001
[15260.754013]            0000001fbd60ed00 0000001fbd60ed00 0000001fbd60ee00 0000000000000000
[15260.754014]            0000001c633ea000 0000000942c34670 0000000942282e38 000003e00140baf8
[15260.754066] Krnl Code: 0000000942282e2c: c020005d39d8	larl	%r2,0000000942e2a1dc
                          0000000942282e32: c0e5fffdcc3f	brasl	%r14,000000094223c6b0
                         #0000000942282e38: af000000		mc	0,0
                         >0000000942282e3c: a7f4ff22		brc	15,0000000942282c80
                          0000000942282e40: 41b06340		la	%r11,832(%r6)
                          0000000942282e44: e3d063480004	lg	%r13,840(%r6)
                          0000000942282e4a: b904004b		lgr	%r4,%r11
                          0000000942282e4e: b904003d		lgr	%r3,%r13
[15260.754080] Call Trace:
[15260.754083]  [<0000000942282e3c>] enqueue_task_fair+0x32c/0x440 
[15260.754085] ([<0000000942282e38>] enqueue_task_fair+0x328/0x440)
[15260.754087]  [<0000000942272d78>] activate_task+0x88/0xf0 
[15260.754088]  [<00000009422732e8>] ttwu_do_activate+0x58/0x78 
[15260.754090]  [<00000009422742ce>] try_to_wake_up+0x256/0x650 
[15260.754093]  [<000000094229248e>] swake_up_locked.part.0+0x2e/0x70 
[15260.754095]  [<00000009422927ac>] swake_up_one+0x54/0x88 
[15260.754151]  [<000003ff8044c15a>] kvm_vcpu_wake_up+0x52/0x78 [kvm] 
[15260.754161]  [<000003ff8046af02>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm] 
[15260.754171]  [<000003ff8046b68e>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm] 
[15260.754175]  [<00000009422dd05c>] __hrtimer_run_queues+0x114/0x2f0 
[15260.754178]  [<00000009422dddb4>] hrtimer_interrupt+0x12c/0x2a8 
[15260.754181]  [<0000000942200d3c>] do_IRQ+0xac/0xb0 
[15260.754185]  [<0000000942c25684>] ext_int_handler+0x130/0x134 
[15260.754186] Last Breaking-Event-Address:
[15260.754189]  [<000000094223c710>] __warn_printk+0x60/0x68
[15260.754190] ---[ end trace e84a48be72a8b514 ]---


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28  7:54 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440 Christian Borntraeger
@ 2020-02-28 12:04 ` Christian Borntraeger
  2020-02-28 13:32   ` Vincent Guittot
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-02-28 12:04 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, linux-kernel

I was able to reproduce this with 5.5.0


On 28.02.20 08:54, Christian Borntraeger wrote:
> Peter,
> 
> it seems that your new assert did trigger for me:
> 
> The system was running fine for 4 hours and then this happened.
> Unfortunately I have no idea if this reproduces and if so how.
> 
> [15260.753944] ------------[ cut here ]------------
> [15260.753949] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
> [15260.753959] WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
> [15260.753961] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs s390_trng mlx5_ib ghash_s390 prng ib_uverbs aes_s390 ib_core des_s390 libdes sha3_512_s390 genwqe_card vfio_ccw vfio_mdev sha3_256_s390 mdev crc_itu_t sha512_s390 vfio_iommu_type1 sha1_s390 vfio eadm_sch zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
> [15260.754002] CPU: 48 PID: 17435 Comm: cc1 Not tainted 5.6.0-rc3+ #24
> [15260.754004] Hardware name: IBM 3906 M04 704 (LPAR)
> [15260.754005] Krnl PSW : 0404c00180000000 0000000942282e3c (enqueue_task_fair+0x32c/0x440)
> [15260.754008]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
> [15260.754010] Krnl GPRS: 00000000000003e0 0000001fbd60ee00 000000000000002d 00000009435347c2
> [15260.754012]            000000000000002c 00000009428ec950 0000000900000000 0000000000000001
> [15260.754013]            0000001fbd60ed00 0000001fbd60ed00 0000001fbd60ee00 0000000000000000
> [15260.754014]            0000001c633ea000 0000000942c34670 0000000942282e38 000003e00140baf8
> [15260.754066] Krnl Code: 0000000942282e2c: c020005d39d8	larl	%r2,0000000942e2a1dc
>                           0000000942282e32: c0e5fffdcc3f	brasl	%r14,000000094223c6b0
>                          #0000000942282e38: af000000		mc	0,0
>                          >0000000942282e3c: a7f4ff22		brc	15,0000000942282c80
>                           0000000942282e40: 41b06340		la	%r11,832(%r6)
>                           0000000942282e44: e3d063480004	lg	%r13,840(%r6)
>                           0000000942282e4a: b904004b		lgr	%r4,%r11
>                           0000000942282e4e: b904003d		lgr	%r3,%r13
> [15260.754080] Call Trace:
> [15260.754083]  [<0000000942282e3c>] enqueue_task_fair+0x32c/0x440 
> [15260.754085] ([<0000000942282e38>] enqueue_task_fair+0x328/0x440)
> [15260.754087]  [<0000000942272d78>] activate_task+0x88/0xf0 
> [15260.754088]  [<00000009422732e8>] ttwu_do_activate+0x58/0x78 
> [15260.754090]  [<00000009422742ce>] try_to_wake_up+0x256/0x650 
> [15260.754093]  [<000000094229248e>] swake_up_locked.part.0+0x2e/0x70 
> [15260.754095]  [<00000009422927ac>] swake_up_one+0x54/0x88 
> [15260.754151]  [<000003ff8044c15a>] kvm_vcpu_wake_up+0x52/0x78 [kvm] 
> [15260.754161]  [<000003ff8046af02>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm] 
> [15260.754171]  [<000003ff8046b68e>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm] 
> [15260.754175]  [<00000009422dd05c>] __hrtimer_run_queues+0x114/0x2f0 
> [15260.754178]  [<00000009422dddb4>] hrtimer_interrupt+0x12c/0x2a8 
> [15260.754181]  [<0000000942200d3c>] do_IRQ+0xac/0xb0 
> [15260.754185]  [<0000000942c25684>] ext_int_handler+0x130/0x134 
> [15260.754186] Last Breaking-Event-Address:
> [15260.754189]  [<000000094223c710>] __warn_printk+0x60/0x68
> [15260.754190] ---[ end trace e84a48be72a8b514 ]---
> 



* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28 12:04 ` Christian Borntraeger
@ 2020-02-28 13:32   ` Vincent Guittot
  2020-02-28 13:43     ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Vincent Guittot @ 2020-02-28 13:32 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Fri, 28 Feb 2020 at 13:04, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
> I was able to reproduce this with 5.5.0

This might even predate 5.5, as there haven't been any changes in this
area recently.

Do you have more details about your setup? Are you using cgroup
bandwidth, for example?


>
>
> On 28.02.20 08:54, Christian Borntraeger wrote:
> > Peter,
> >
> > it seems that your new assert did trigger for me:
> >
> > The system was running fine for 4 hours and then this happened.
> > Unfortunately I have no idea if this reproduces and if so how.
> >
> > [15260.753944] ------------[ cut here ]------------
> > [15260.753949] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
> > [15260.753959] WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
> > [15260.753961] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs s390_trng mlx5_ib ghash_s390 prng ib_uverbs aes_s390 ib_core des_s390 libdes sha3_512_s390 genwqe_card vfio_ccw vfio_mdev sha3_256_s390 mdev crc_itu_t sha512_s390 vfio_iommu_type1 sha1_s390 vfio eadm_sch zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
> > [15260.754002] CPU: 48 PID: 17435 Comm: cc1 Not tainted 5.6.0-rc3+ #24
> > [15260.754004] Hardware name: IBM 3906 M04 704 (LPAR)
> > [15260.754005] Krnl PSW : 0404c00180000000 0000000942282e3c (enqueue_task_fair+0x32c/0x440)
> > [15260.754008]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
> > [15260.754010] Krnl GPRS: 00000000000003e0 0000001fbd60ee00 000000000000002d 00000009435347c2
> > [15260.754012]            000000000000002c 00000009428ec950 0000000900000000 0000000000000001
> > [15260.754013]            0000001fbd60ed00 0000001fbd60ed00 0000001fbd60ee00 0000000000000000
> > [15260.754014]            0000001c633ea000 0000000942c34670 0000000942282e38 000003e00140baf8
> > [15260.754066] Krnl Code: 0000000942282e2c: c020005d39d8      larl    %r2,0000000942e2a1dc
> >                           0000000942282e32: c0e5fffdcc3f      brasl   %r14,000000094223c6b0
> >                          #0000000942282e38: af000000          mc      0,0
> >                          >0000000942282e3c: a7f4ff22          brc     15,0000000942282c80
> >                           0000000942282e40: 41b06340          la      %r11,832(%r6)
> >                           0000000942282e44: e3d063480004      lg      %r13,840(%r6)
> >                           0000000942282e4a: b904004b          lgr     %r4,%r11
> >                           0000000942282e4e: b904003d          lgr     %r3,%r13
> > [15260.754080] Call Trace:
> > [15260.754083]  [<0000000942282e3c>] enqueue_task_fair+0x32c/0x440
> > [15260.754085] ([<0000000942282e38>] enqueue_task_fair+0x328/0x440)
> > [15260.754087]  [<0000000942272d78>] activate_task+0x88/0xf0
> > [15260.754088]  [<00000009422732e8>] ttwu_do_activate+0x58/0x78
> > [15260.754090]  [<00000009422742ce>] try_to_wake_up+0x256/0x650
> > [15260.754093]  [<000000094229248e>] swake_up_locked.part.0+0x2e/0x70
> > [15260.754095]  [<00000009422927ac>] swake_up_one+0x54/0x88
> > [15260.754151]  [<000003ff8044c15a>] kvm_vcpu_wake_up+0x52/0x78 [kvm]
> > [15260.754161]  [<000003ff8046af02>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm]
> > [15260.754171]  [<000003ff8046b68e>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm]
> > [15260.754175]  [<00000009422dd05c>] __hrtimer_run_queues+0x114/0x2f0
> > [15260.754178]  [<00000009422dddb4>] hrtimer_interrupt+0x12c/0x2a8
> > [15260.754181]  [<0000000942200d3c>] do_IRQ+0xac/0xb0
> > [15260.754185]  [<0000000942c25684>] ext_int_handler+0x130/0x134
> > [15260.754186] Last Breaking-Event-Address:
> > [15260.754189]  [<000000094223c710>] __warn_printk+0x60/0x68
> > [15260.754190] ---[ end trace e84a48be72a8b514 ]---
> >
>


* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28 13:32   ` Vincent Guittot
@ 2020-02-28 13:43     ` Christian Borntraeger
  2020-02-28 15:08       ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-02-28 13:43 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel



On 28.02.20 14:32, Vincent Guittot wrote:
> On Fri, 28 Feb 2020 at 13:04, Christian Borntraeger
> <borntraeger@de.ibm.com> wrote:
>>
>> I was able to reproduce this with 5.5.0
> 
> This might even predate 5.5, as there haven't been any changes in this
> area recently.
> 
> Do you have more details about your setup? Are you using cgroup
> bandwidth, for example?

These are KVM guests managed by libvirt. So all kind of cgroups are
active (with default values).

I will try to bisect. It seems to happen only after some hours, so this might take some time.


>>
>> On 28.02.20 08:54, Christian Borntraeger wrote:
>>> Peter,
>>>
>>> it seems that your new assert did trigger for me:
>>>
>>> The system was running fine for 4 hours and then this happened.
>>> Unfortunately I have no idea if this reproduces and if so how.
>>>
>>> [15260.753944] ------------[ cut here ]------------
>>> [15260.753949] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
>>> [15260.753959] WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
>>> [15260.753961] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs s390_trng mlx5_ib ghash_s390 prng ib_uverbs aes_s390 ib_core des_s390 libdes sha3_512_s390 genwqe_card vfio_ccw vfio_mdev sha3_256_s390 mdev crc_itu_t sha512_s390 vfio_iommu_type1 sha1_s390 vfio eadm_sch zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
>>> [15260.754002] CPU: 48 PID: 17435 Comm: cc1 Not tainted 5.6.0-rc3+ #24
>>> [15260.754004] Hardware name: IBM 3906 M04 704 (LPAR)
>>> [15260.754005] Krnl PSW : 0404c00180000000 0000000942282e3c (enqueue_task_fair+0x32c/0x440)
>>> [15260.754008]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
>>> [15260.754010] Krnl GPRS: 00000000000003e0 0000001fbd60ee00 000000000000002d 00000009435347c2
>>> [15260.754012]            000000000000002c 00000009428ec950 0000000900000000 0000000000000001
>>> [15260.754013]            0000001fbd60ed00 0000001fbd60ed00 0000001fbd60ee00 0000000000000000
>>> [15260.754014]            0000001c633ea000 0000000942c34670 0000000942282e38 000003e00140baf8
>>> [15260.754066] Krnl Code: 0000000942282e2c: c020005d39d8      larl    %r2,0000000942e2a1dc
>>>                           0000000942282e32: c0e5fffdcc3f      brasl   %r14,000000094223c6b0
>>>                          #0000000942282e38: af000000          mc      0,0
>>>                          >0000000942282e3c: a7f4ff22          brc     15,0000000942282c80
>>>                           0000000942282e40: 41b06340          la      %r11,832(%r6)
>>>                           0000000942282e44: e3d063480004      lg      %r13,840(%r6)
>>>                           0000000942282e4a: b904004b          lgr     %r4,%r11
>>>                           0000000942282e4e: b904003d          lgr     %r3,%r13
>>> [15260.754080] Call Trace:
>>> [15260.754083]  [<0000000942282e3c>] enqueue_task_fair+0x32c/0x440
>>> [15260.754085] ([<0000000942282e38>] enqueue_task_fair+0x328/0x440)
>>> [15260.754087]  [<0000000942272d78>] activate_task+0x88/0xf0
>>> [15260.754088]  [<00000009422732e8>] ttwu_do_activate+0x58/0x78
>>> [15260.754090]  [<00000009422742ce>] try_to_wake_up+0x256/0x650
>>> [15260.754093]  [<000000094229248e>] swake_up_locked.part.0+0x2e/0x70
>>> [15260.754095]  [<00000009422927ac>] swake_up_one+0x54/0x88
>>> [15260.754151]  [<000003ff8044c15a>] kvm_vcpu_wake_up+0x52/0x78 [kvm]
>>> [15260.754161]  [<000003ff8046af02>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm]
>>> [15260.754171]  [<000003ff8046b68e>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm]
>>> [15260.754175]  [<00000009422dd05c>] __hrtimer_run_queues+0x114/0x2f0
>>> [15260.754178]  [<00000009422dddb4>] hrtimer_interrupt+0x12c/0x2a8
>>> [15260.754181]  [<0000000942200d3c>] do_IRQ+0xac/0xb0
>>> [15260.754185]  [<0000000942c25684>] ext_int_handler+0x130/0x134
>>> [15260.754186] Last Breaking-Event-Address:
>>> [15260.754189]  [<000000094223c710>] __warn_printk+0x60/0x68
>>> [15260.754190] ---[ end trace e84a48be72a8b514 ]---
>>>
>>



* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28 13:43     ` Christian Borntraeger
@ 2020-02-28 15:08       ` Christian Borntraeger
  2020-02-28 15:37         ` Vincent Guittot
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-02-28 15:08 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

Also happened with 5.4:
Seems that I just happen to have an interesting test workload/system size interaction
on a newly installed system that triggers this.


[ 9761.439278] ------------[ cut here ]------------
[ 9761.439283] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
[ 9761.439300] WARNING: CPU: 58 PID: 17405 at kernel/sched/fair.c:381 enqueue_task_fair+0x7cc/0x9b0
[ 9761.439303] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs s390_trng ghash_s390 prng mlx5_ib aes_s390 ib_uverbs des_s390 libdes ib_core sha3_512_s390 sha3_256_s390 sha512_s390 genwqe_card sha1_s390 crc_itu_t vfio_ccw vfio_mdev mdev vfio_iommu_type1 eadm_sch vfio zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
[ 9761.439335] CPU: 58 PID: 17405 Comm: sh Not tainted 5.4.0 #27
[ 9761.439336] Hardware name: IBM 3906 M04 704 (LPAR)
[ 9761.439338] Krnl PSW : 0404c00180000000 00000007353f2d4c (enqueue_task_fair+0x7cc/0x9b0)
[ 9761.439340]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
[ 9761.439342] Krnl GPRS: 00000000000003e0 0400000735f500bc 000000000000002d 00000007365f4bc2
[ 9761.439343]            000000000000002c 0000000735a49388 0000000000000001 0400001f00000000
[ 9761.439344]            000003e0015ebc88 0000001fbd856c00 0000001fbd856d00 0000000000000000
[ 9761.439345]            0000001bc8a12000 0000000735d853c0 00000007353f2d48 000003e0015ebad0
[ 9761.439385] Krnl Code: 00000007353f2d3c: c020005ae9c0	larl	%r2,735f500bc
                          00000007353f2d42: c0e5fffdc487	brasl	%r14,7353ab650
                         #00000007353f2d48: a7f40001		brc	15,7353f2d4a
                         >00000007353f2d4c: a7f4fcda		brc	15,7353f2700
                          00000007353f2d50: e33073480004	lg	%r3,840(%r7)
                          00000007353f2d56: 41b07340		la	%r11,832(%r7)
                          00000007353f2d5a: b9040063		lgr	%r6,%r3
                          00000007353f2d5e: b904004b		lgr	%r4,%r11
[ 9761.439397] Call Trace:
[ 9761.439399] ([<00000007353f2d48>] enqueue_task_fair+0x7c8/0x9b0)
[ 9761.439401]  [<00000007353e1b48>] activate_task+0x88/0xf0 
[ 9761.439403]  [<00000007353e20c6>] ttwu_do_activate+0x56/0x80 
[ 9761.439405]  [<00000007353e3106>] try_to_wake_up+0x256/0x650 
[ 9761.439408]  [<000000073540353e>] swake_up_locked.part.0+0x2e/0x70 
[ 9761.439409]  [<0000000735403764>] swake_up_one+0x54/0x90 
[ 9761.439449]  [<000003ff8047be52>] kvm_vcpu_wake_up+0x52/0x80 [kvm] 
[ 9761.439458]  [<000003ff80498e3a>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm] 
[ 9761.439466]  [<000003ff8049959e>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm] 
[ 9761.439470]  [<000000073544acb4>] __hrtimer_run_queues+0x114/0x2f0 
[ 9761.439472]  [<000000073544b97c>] hrtimer_interrupt+0x12c/0x2b0 
[ 9761.439475]  [<0000000735370a1a>] do_IRQ+0xaa/0xb0 
[ 9761.439480]  [<0000000735d75998>] ext_int_handler+0x128/0x12c 
[ 9761.439485]  [<00000007355abd28>] get_page_from_freelist+0x528/0x1860 
[ 9761.439486] ([<00000007355abc36>] get_page_from_freelist+0x436/0x1860)
[ 9761.439488]  [<00000007355ae420>] __alloc_pages_nodemask+0x120/0x320 
[ 9761.439492]  [<00000007355cca8a>] alloc_pages_vma+0x9a/0x280 
[ 9761.439494]  [<0000000735588062>] wp_page_copy+0xb2/0x730 
[ 9761.439495]  [<000000073558b642>] do_wp_page+0xa2/0x760 
[ 9761.439497]  [<000000073558def2>] __handle_mm_fault+0x852/0x910 
[ 9761.439498]  [<000000073558e076>] handle_mm_fault+0xc6/0x180 
[ 9761.439500]  [<0000000735389c44>] do_protection_exception+0x164/0x4b0 
[ 9761.439502]  [<0000000735d7558c>] pgm_check_handler+0x1c8/0x220 
[ 9761.439502] Last Breaking-Event-Address:
[ 9761.439503]  [<00000007353f2d48>] enqueue_task_fair+0x7c8/0x9b0
[ 9761.439504] ---[ end trace 40ea9b5f62b01ed1 ]---



* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28 15:08       ` Christian Borntraeger
@ 2020-02-28 15:37         ` Vincent Guittot
  2020-02-28 15:42           ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Vincent Guittot @ 2020-02-28 15:37 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
> Also happened with 5.4:
> Seems that I just happen to have an interesting test workload/system size interaction
> on a newly installed system that triggers this.

You can probably go back as far as 5.1, which is the version where we
put back the deletion of unused cfs_rqs from the list, which can
trigger this warning:
commit 039ae8bcf7a5 ("sched/fair: Fix O(nr_cgroups) in the load balancing path")

AFAICT, we haven't changed this area since.

>
>
> [ 9761.439278] ------------[ cut here ]------------
> [ 9761.439283] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
> [ 9761.439300] WARNING: CPU: 58 PID: 17405 at kernel/sched/fair.c:381 enqueue_task_fair+0x7cc/0x9b0
> [ 9761.439303] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs s390_trng ghash_s390 prng mlx5_ib aes_s390 ib_uverbs des_s390 libdes ib_core sha3_512_s390 sha3_256_s390 sha512_s390 genwqe_card sha1_s390 crc_itu_t vfio_ccw vfio_mdev mdev vfio_iommu_type1 eadm_sch vfio zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
> [ 9761.439335] CPU: 58 PID: 17405 Comm: sh Not tainted 5.4.0 #27
> [ 9761.439336] Hardware name: IBM 3906 M04 704 (LPAR)
> [ 9761.439338] Krnl PSW : 0404c00180000000 00000007353f2d4c (enqueue_task_fair+0x7cc/0x9b0)
> [ 9761.439340]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
> [ 9761.439342] Krnl GPRS: 00000000000003e0 0400000735f500bc 000000000000002d 00000007365f4bc2
> [ 9761.439343]            000000000000002c 0000000735a49388 0000000000000001 0400001f00000000
> [ 9761.439344]            000003e0015ebc88 0000001fbd856c00 0000001fbd856d00 0000000000000000
> [ 9761.439345]            0000001bc8a12000 0000000735d853c0 00000007353f2d48 000003e0015ebad0
> [ 9761.439385] Krnl Code: 00000007353f2d3c: c020005ae9c0        larl    %r2,735f500bc
>                           00000007353f2d42: c0e5fffdc487        brasl   %r14,7353ab650
>                          #00000007353f2d48: a7f40001            brc     15,7353f2d4a
>                          >00000007353f2d4c: a7f4fcda            brc     15,7353f2700
>                           00000007353f2d50: e33073480004        lg      %r3,840(%r7)
>                           00000007353f2d56: 41b07340            la      %r11,832(%r7)
>                           00000007353f2d5a: b9040063            lgr     %r6,%r3
>                           00000007353f2d5e: b904004b            lgr     %r4,%r11
> [ 9761.439397] Call Trace:
> [ 9761.439399] ([<00000007353f2d48>] enqueue_task_fair+0x7c8/0x9b0)
> [ 9761.439401]  [<00000007353e1b48>] activate_task+0x88/0xf0
> [ 9761.439403]  [<00000007353e20c6>] ttwu_do_activate+0x56/0x80
> [ 9761.439405]  [<00000007353e3106>] try_to_wake_up+0x256/0x650
> [ 9761.439408]  [<000000073540353e>] swake_up_locked.part.0+0x2e/0x70
> [ 9761.439409]  [<0000000735403764>] swake_up_one+0x54/0x90
> [ 9761.439449]  [<000003ff8047be52>] kvm_vcpu_wake_up+0x52/0x80 [kvm]
> [ 9761.439458]  [<000003ff80498e3a>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm]
> [ 9761.439466]  [<000003ff8049959e>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm]
> [ 9761.439470]  [<000000073544acb4>] __hrtimer_run_queues+0x114/0x2f0
> [ 9761.439472]  [<000000073544b97c>] hrtimer_interrupt+0x12c/0x2b0
> [ 9761.439475]  [<0000000735370a1a>] do_IRQ+0xaa/0xb0
> [ 9761.439480]  [<0000000735d75998>] ext_int_handler+0x128/0x12c
> [ 9761.439485]  [<00000007355abd28>] get_page_from_freelist+0x528/0x1860
> [ 9761.439486] ([<00000007355abc36>] get_page_from_freelist+0x436/0x1860)
> [ 9761.439488]  [<00000007355ae420>] __alloc_pages_nodemask+0x120/0x320
> [ 9761.439492]  [<00000007355cca8a>] alloc_pages_vma+0x9a/0x280
> [ 9761.439494]  [<0000000735588062>] wp_page_copy+0xb2/0x730
> [ 9761.439495]  [<000000073558b642>] do_wp_page+0xa2/0x760
> [ 9761.439497]  [<000000073558def2>] __handle_mm_fault+0x852/0x910
> [ 9761.439498]  [<000000073558e076>] handle_mm_fault+0xc6/0x180
> [ 9761.439500]  [<0000000735389c44>] do_protection_exception+0x164/0x4b0
> [ 9761.439502]  [<0000000735d7558c>] pgm_check_handler+0x1c8/0x220
> [ 9761.439502] Last Breaking-Event-Address:
> [ 9761.439503]  [<00000007353f2d48>] enqueue_task_fair+0x7c8/0x9b0
> [ 9761.439504] ---[ end trace 40ea9b5f62b01ed1 ]---
>


* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28 15:37         ` Vincent Guittot
@ 2020-02-28 15:42           ` Christian Borntraeger
  2020-02-28 16:32             ` Qais Yousef
  2020-02-28 16:35             ` Vincent Guittot
  0 siblings, 2 replies; 28+ messages in thread
From: Christian Borntraeger @ 2020-02-28 15:42 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel



On 28.02.20 16:37, Vincent Guittot wrote:
> On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
> <borntraeger@de.ibm.com> wrote:
>>
>> Also happened with 5.4:
>> Seems that I just happen to have an interesting test workload/system size interaction
>> on a newly installed system that triggers this.
> 
> You can probably go back as far as 5.1, which is the version where we
> put back the deletion of unused cfs_rqs from the list, which can
> trigger this warning:
> commit 039ae8bcf7a5 ("sched/fair: Fix O(nr_cgroups) in the load balancing path")
> 
> AFAICT, we haven't changed this area since.

So, do you know what the problem is? If not, is there any debug option or
patch that I could apply to give you more information?



* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28 15:42           ` Christian Borntraeger
@ 2020-02-28 16:32             ` Qais Yousef
  2020-02-28 16:35             ` Vincent Guittot
  1 sibling, 0 replies; 28+ messages in thread
From: Qais Yousef @ 2020-02-28 16:32 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Vincent Guittot, Ingo Molnar, Peter Zijlstra, linux-kernel

On 02/28/20 16:42, Christian Borntraeger wrote:
> 
> 
> On 28.02.20 16:37, Vincent Guittot wrote:
> > On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
> > <borntraeger@de.ibm.com> wrote:
> >>
> >> Also happened with 5.4:
> >> Seems that I just happen to have an interesting test workload/system size interaction
> >> on a newly installed system that triggers this.
> > 
> > You can probably go back as far as 5.1, which is the version where we
> > put back the deletion of unused cfs_rqs from the list, which can
> > trigger this warning:
> > commit 039ae8bcf7a5 ("sched/fair: Fix O(nr_cgroups) in the load balancing path")
> > 
> > AFAICT, we haven't changed this area since.
> 
> So, do you know what the problem is? If not, is there any debug option or
> patch that I could apply to give you more information?
> 

It might be a long shot as I'm not particularly knowledgeable about this code
path, but could we be missing rcu_read_lock/unlock around the call to
unthrottle_cfs_rq() here?

---

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fc1dfc007604..56aa5cfbb7f1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7434,6 +7434,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)

        raw_spin_unlock_irq(&cfs_b->lock);

+       rcu_read_lock();
        for_each_online_cpu(i) {
                struct cfs_rq *cfs_rq = tg->cfs_rq[i];
                struct rq *rq = cfs_rq->rq;
@@ -7447,6 +7448,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
                        unthrottle_cfs_rq(cfs_rq);
                rq_unlock_irq(rq, &rf);
        }
+       rcu_read_unlock();
        if (runtime_was_enabled && !runtime_enabled)
                cfs_bandwidth_usage_dec();
 out_unlock:



* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28 15:42           ` Christian Borntraeger
  2020-02-28 16:32             ` Qais Yousef
@ 2020-02-28 16:35             ` Vincent Guittot
  2020-03-02 11:16               ` Christian Borntraeger
  1 sibling, 1 reply; 28+ messages in thread
From: Vincent Guittot @ 2020-02-28 16:35 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Friday, 28 Feb 2020 at 16:42:27 (+0100), Christian Borntraeger wrote:
> 
> 
> On 28.02.20 16:37, Vincent Guittot wrote:
> > On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
> > <borntraeger@de.ibm.com> wrote:
> >>
> >> Also happened with 5.4:
> >> Seems that I just happen to have an interesting test workload/system size interaction
> >> on a newly installed system that triggers this.
> > 
> > You can probably go back as far as 5.1, which is the version where we put
> > back the deletion of unused cfs_rq from the list that can trigger the
> > warning:
> > commit 039ae8bcf7a5 ("Fix O(nr_cgroups) in the load balancing path")
> > 
> > AFAICT, we haven't changed this since
> 
> So do you know what the problem is? If not, is there any debug option or
> patch that I could apply to give you more information?

No, I don't know what is happening. Your test probably goes through an unexpected path.

Would it be difficult for me to reproduce your test env?

There is an optimization in the code which could cause a problem if its assumption is
not true. Could you try the patch below?

---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3c8a379c357e..beb773c23e7d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		__enqueue_entity(cfs_rq, se);
 	se->on_rq = 1;
 
+	list_add_leaf_cfs_rq(cfs_rq);
 	if (cfs_rq->nr_running == 1) {
-		list_add_leaf_cfs_rq(cfs_rq);
 		check_enqueue_throttle(cfs_rq);
 	}
 }
-- 
2.17.1



> 

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-02-28 16:35             ` Vincent Guittot
@ 2020-03-02 11:16               ` Christian Borntraeger
  2020-03-02 18:17                 ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-02 11:16 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel



On 28.02.20 17:35, Vincent Guittot wrote:
> On Friday, 28 Feb 2020 at 16:42:27 (+0100), Christian Borntraeger wrote:
>>
>>
>> On 28.02.20 16:37, Vincent Guittot wrote:
>>> On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
>>> <borntraeger@de.ibm.com> wrote:
>>>>
>>>> Also happened with 5.4:
>>>> Seems that I just happen to have an interesting test workload/system size interaction
>>>> on a newly installed system that triggers this.
>>>
>>> You can probably go back as far as 5.1, which is the version where we put
>>> back the deletion of unused cfs_rq from the list that can trigger the
>>> warning:
>>> commit 039ae8bcf7a5 ("Fix O(nr_cgroups) in the load balancing path")
>>>
>>> AFAICT, we haven't changed this since
>>
>> So do you know what the problem is? If not, is there any debug option or
>> patch that I could apply to give you more information?
> 
> No, I don't know what is happening. Your test probably goes through an unexpected path.
> 
> Would it be difficult for me to reproduce your test env?

Not sure. It's a 32-CPU (SMT2 -> 64) host. I have about 10 KVM guests running, doing
different things.

> 
> There is an optimization in the code which could cause a problem if its assumption is
> not true. Could you try the patch below?
> 
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 3c8a379c357e..beb773c23e7d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  		__enqueue_entity(cfs_rq, se);
>  	se->on_rq = 1;
>  
> +	list_add_leaf_cfs_rq(cfs_rq);
>  	if (cfs_rq->nr_running == 1) {
> -		list_add_leaf_cfs_rq(cfs_rq);
>  		check_enqueue_throttle(cfs_rq);
>  	}
>  }

Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes 
the issue.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-02 11:16               ` Christian Borntraeger
@ 2020-03-02 18:17                 ` Christian Borntraeger
  2020-03-03  7:37                   ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-02 18:17 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On 02.03.20 12:16, Christian Borntraeger wrote:
> 
> 
> On 28.02.20 17:35, Vincent Guittot wrote:
>> On Friday, 28 Feb 2020 at 16:42:27 (+0100), Christian Borntraeger wrote:
>>>
>>>
>>> On 28.02.20 16:37, Vincent Guittot wrote:
>>>> On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
>>>> <borntraeger@de.ibm.com> wrote:
>>>>>
>>>>> Also happened with 5.4:
>>>>> Seems that I just happen to have an interesting test workload/system size interaction
>>>>> on a newly installed system that triggers this.
>>>>
>>>> You can probably go back as far as 5.1, which is the version where we put
>>>> back the deletion of unused cfs_rq from the list that can trigger the
>>>> warning:
>>>> commit 039ae8bcf7a5 ("Fix O(nr_cgroups) in the load balancing path")
>>>>
>>>> AFAICT, we haven't changed this since
>>>
>>> So do you know what the problem is? If not, is there any debug option or
>>> patch that I could apply to give you more information?
>>
>> No, I don't know what is happening. Your test probably goes through an unexpected path.
>>
>> Would it be difficult for me to reproduce your test env?
> 
> Not sure. It's a 32-CPU (SMT2 -> 64) host. I have about 10 KVM guests running, doing
> different things.
> 
>>
>> There is an optimization in the code which could cause a problem if its assumption is
>> not true. Could you try the patch below?
>>
>> ---
>>  kernel/sched/fair.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 3c8a379c357e..beb773c23e7d 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>  		__enqueue_entity(cfs_rq, se);
>>  	se->on_rq = 1;
>>  
>> +	list_add_leaf_cfs_rq(cfs_rq);
>>  	if (cfs_rq->nr_running == 1) {
>> -		list_add_leaf_cfs_rq(cfs_rq);
>>  		check_enqueue_throttle(cfs_rq);
>>  	}
>>  }
> 
> Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes 
> the issue.


Still running fine. I can tell for sure tomorrow, but I have the impression that this makes the
WARN_ON go away.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-02 18:17                 ` Christian Borntraeger
@ 2020-03-03  7:37                   ` Christian Borntraeger
  2020-03-03  7:55                     ` Vincent Guittot
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-03  7:37 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel



On 02.03.20 19:17, Christian Borntraeger wrote:
> On 02.03.20 12:16, Christian Borntraeger wrote:
>>
>>
>> On 28.02.20 17:35, Vincent Guittot wrote:
>>> On Friday, 28 Feb 2020 at 16:42:27 (+0100), Christian Borntraeger wrote:
>>>>
>>>>
>>>> On 28.02.20 16:37, Vincent Guittot wrote:
>>>>> On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
>>>>> <borntraeger@de.ibm.com> wrote:
>>>>>>
>>>>>> Also happened with 5.4:
>>>>>> Seems that I just happen to have an interesting test workload/system size interaction
>>>>>> on a newly installed system that triggers this.
>>>>>
>>>>> You can probably go back as far as 5.1, which is the version where we put
>>>>> back the deletion of unused cfs_rq from the list that can trigger the
>>>>> warning:
>>>>> commit 039ae8bcf7a5 ("Fix O(nr_cgroups) in the load balancing path")
>>>>>
>>>>> AFAICT, we haven't changed this since
>>>>
>>>> So do you know what the problem is? If not, is there any debug option or
>>>> patch that I could apply to give you more information?
>>>
>>> No, I don't know what is happening. Your test probably goes through an unexpected path.
>>>
>>> Would it be difficult for me to reproduce your test env?
>>
>> Not sure. It's a 32-CPU (SMT2 -> 64) host. I have about 10 KVM guests running, doing
>> different things.
>>
>>>
>>> There is an optimization in the code which could cause a problem if its assumption is
>>> not true. Could you try the patch below?
>>>
>>> ---
>>>  kernel/sched/fair.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 3c8a379c357e..beb773c23e7d 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>>  		__enqueue_entity(cfs_rq, se);
>>>  	se->on_rq = 1;
>>>  
>>> +	list_add_leaf_cfs_rq(cfs_rq);
>>>  	if (cfs_rq->nr_running == 1) {
>>> -		list_add_leaf_cfs_rq(cfs_rq);
>>>  		check_enqueue_throttle(cfs_rq);
>>>  	}
>>>  }
>>
>> Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes 
>> the issue.
> 
> 
> Still running fine. I can tell for sure tomorrow, but I have the impression that this makes the
> WARN_ON go away.

So I guess this change "fixed" the issue. If you want me to test additional patches, let me know.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-03  7:37                   ` Christian Borntraeger
@ 2020-03-03  7:55                     ` Vincent Guittot
  2020-03-04 15:26                       ` Vincent Guittot
  0 siblings, 1 reply; 28+ messages in thread
From: Vincent Guittot @ 2020-03-03  7:55 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Tue, 3 Mar 2020 at 08:37, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
>
>
> On 02.03.20 19:17, Christian Borntraeger wrote:
> > On 02.03.20 12:16, Christian Borntraeger wrote:
> >>
> >>
> >> On 28.02.20 17:35, Vincent Guittot wrote:
> >>> On Friday, 28 Feb 2020 at 16:42:27 (+0100), Christian Borntraeger wrote:
> >>>>
> >>>>
> >>>> On 28.02.20 16:37, Vincent Guittot wrote:
> >>>>> On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
> >>>>> <borntraeger@de.ibm.com> wrote:
> >>>>>>
> >>>>>> Also happened with 5.4:
> >>>>>> Seems that I just happen to have an interesting test workload/system size interaction
> >>>>>> on a newly installed system that triggers this.
> >>>>>
> >>>>> You can probably go back as far as 5.1, which is the version where we put
> >>>>> back the deletion of unused cfs_rq from the list that can trigger the
> >>>>> warning:
> >>>>> commit 039ae8bcf7a5 ("Fix O(nr_cgroups) in the load balancing path")
> >>>>>
> >>>>> AFAICT, we haven't changed this since
> >>>>
> >>>> So do you know what the problem is? If not, is there any debug option or
> >>>> patch that I could apply to give you more information?
> >>>
> >>> No, I don't know what is happening. Your test probably goes through an unexpected path.
> >>>
> >>> Would it be difficult for me to reproduce your test env?
> >>
> >> Not sure. It's a 32-CPU (SMT2 -> 64) host. I have about 10 KVM guests running, doing
> >> different things.
> >>
> >>>
> >>> There is an optimization in the code which could cause a problem if its assumption is
> >>> not true. Could you try the patch below?
> >>>
> >>> ---
> >>>  kernel/sched/fair.c | 2 +-
> >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >>> index 3c8a379c357e..beb773c23e7d 100644
> >>> --- a/kernel/sched/fair.c
> >>> +++ b/kernel/sched/fair.c
> >>> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >>>             __enqueue_entity(cfs_rq, se);
> >>>     se->on_rq = 1;
> >>>
> >>> +   list_add_leaf_cfs_rq(cfs_rq);
> >>>     if (cfs_rq->nr_running == 1) {
> >>> -           list_add_leaf_cfs_rq(cfs_rq);
> >>>             check_enqueue_throttle(cfs_rq);
> >>>     }
> >>>  }
> >>
> >> Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes
> >> the issue.
> >
> >
> > Still running fine. I can tell for sure tomorrow, but I have the impression that this makes the
> > WARN_ON go away.
>
> So I guess this change "fixed" the issue. If you want me to test additional patches, let me know.

Thanks for the test. For now, I don't have any other patch to test; I
have to look more deeply into how the situation happens.
I will let you know if I have another patch to test.

>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-03  7:55                     ` Vincent Guittot
@ 2020-03-04 15:26                       ` Vincent Guittot
  2020-03-04 17:42                         ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Vincent Guittot @ 2020-03-04 15:26 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Tue, 3 Mar 2020 at 08:55, Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> On Tue, 3 Mar 2020 at 08:37, Christian Borntraeger
> <borntraeger@de.ibm.com> wrote:
> >
> >
> >
[...]
> > >>> ---
> > >>>  kernel/sched/fair.c | 2 +-
> > >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> > >>>
> > >>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > >>> index 3c8a379c357e..beb773c23e7d 100644
> > >>> --- a/kernel/sched/fair.c
> > >>> +++ b/kernel/sched/fair.c
> > >>> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > >>>             __enqueue_entity(cfs_rq, se);
> > >>>     se->on_rq = 1;
> > >>>
> > >>> +   list_add_leaf_cfs_rq(cfs_rq);
> > >>>     if (cfs_rq->nr_running == 1) {
> > >>> -           list_add_leaf_cfs_rq(cfs_rq);
> > >>>             check_enqueue_throttle(cfs_rq);
> > >>>     }
> > >>>  }
> > >>
> > >> Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes
> > >> the issue.
> > >
> > >
> > > Still running fine. I can tell for sure tomorrow, but I have the impression that this makes the
> > > WARN_ON go away.
> >
> > So I guess this change "fixed" the issue. If you want me to test additional patches, let me know.
>
> Thanks for the test. For now, I don't have any other patch to test; I
> have to look more deeply into how the situation happens.
> I will let you know if I have another patch to test.

So I haven't been able to figure out how we reach this situation yet.
In the meantime I'm going to make a clean patch with the fix above.

Is it OK if I add a Reported-by and a Tested-by from you?

>
> >

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-04 15:26                       ` Vincent Guittot
@ 2020-03-04 17:42                         ` Christian Borntraeger
  2020-03-04 17:51                           ` Vincent Guittot
  2020-03-04 19:19                           ` Dietmar Eggemann
  0 siblings, 2 replies; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-04 17:42 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel



On 04.03.20 16:26, Vincent Guittot wrote:
> On Tue, 3 Mar 2020 at 08:55, Vincent Guittot <vincent.guittot@linaro.org> wrote:
>>
>> On Tue, 3 Mar 2020 at 08:37, Christian Borntraeger
>> <borntraeger@de.ibm.com> wrote:
>>>
>>>
>>>
> [...]
>>>>>> ---
>>>>>>  kernel/sched/fair.c | 2 +-
>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>>>> index 3c8a379c357e..beb773c23e7d 100644
>>>>>> --- a/kernel/sched/fair.c
>>>>>> +++ b/kernel/sched/fair.c
>>>>>> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>>>>>             __enqueue_entity(cfs_rq, se);
>>>>>>     se->on_rq = 1;
>>>>>>
>>>>>> +   list_add_leaf_cfs_rq(cfs_rq);
>>>>>>     if (cfs_rq->nr_running == 1) {
>>>>>> -           list_add_leaf_cfs_rq(cfs_rq);
>>>>>>             check_enqueue_throttle(cfs_rq);
>>>>>>     }
>>>>>>  }
>>>>>
>>>>> Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes
>>>>> the issue.
>>>>
>>>>
>>>> Still running fine. I can tell for sure tomorrow, but I have the impression that this makes the
>>>> WARN_ON go away.
>>>
>>> So I guess this change "fixed" the issue. If you want me to test additional patches, let me know.
>>
>> Thanks for the test. For now, I don't have any other patch to test; I
>> have to look more deeply into how the situation happens.
>> I will let you know if I have another patch to test.
> 
> So I haven't been able to figure out how we reach this situation yet.
> In the meantime I'm going to make a clean patch with the fix above.
> 
> Is it OK if I add a Reported-by and a Tested-by from you?

Sure.
I just realized that this system has something special. Some months ago I created 2 slices:
$ head /etc/systemd/system/*.slice
==> /etc/systemd/system/machine-production.slice <==
[Unit]
Description=VM production
Before=slices.target
Wants=machine.slice
[Slice]
CPUQuota=2000%
CPUWeight=1000

==> /etc/systemd/system/machine-test.slice <==
[Unit]
Description=VM production
Before=slices.target
Wants=machine.slice
[Slice]
CPUQuota=300%
CPUWeight=100


And the guests are then put into these slices. That also means that this test will never use more than 2300%,
no matter how many CPUs the system has.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-04 17:42                         ` Christian Borntraeger
@ 2020-03-04 17:51                           ` Vincent Guittot
  2020-03-04 19:19                           ` Dietmar Eggemann
  1 sibling, 0 replies; 28+ messages in thread
From: Vincent Guittot @ 2020-03-04 17:51 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Wed, 4 Mar 2020 at 18:42, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
>
>
> On 04.03.20 16:26, Vincent Guittot wrote:
> > On Tue, 3 Mar 2020 at 08:55, Vincent Guittot <vincent.guittot@linaro.org> wrote:
> >>
> >> On Tue, 3 Mar 2020 at 08:37, Christian Borntraeger
> >> <borntraeger@de.ibm.com> wrote:
> >>>
> >>>
> >>>
> > [...]
> >>>>>> ---
> >>>>>>  kernel/sched/fair.c | 2 +-
> >>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>>>>>
> >>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >>>>>> index 3c8a379c357e..beb773c23e7d 100644
> >>>>>> --- a/kernel/sched/fair.c
> >>>>>> +++ b/kernel/sched/fair.c
> >>>>>> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >>>>>>             __enqueue_entity(cfs_rq, se);
> >>>>>>     se->on_rq = 1;
> >>>>>>
> >>>>>> +   list_add_leaf_cfs_rq(cfs_rq);
> >>>>>>     if (cfs_rq->nr_running == 1) {
> >>>>>> -           list_add_leaf_cfs_rq(cfs_rq);
> >>>>>>             check_enqueue_throttle(cfs_rq);
> >>>>>>     }
> >>>>>>  }
> >>>>>
> >>>>> Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes
> >>>>> the issue.
> >>>>
> >>>>
> >>>> Still running fine. I can tell for sure tomorrow, but I have the impression that this makes the
> >>>> WARN_ON go away.
> >>>
> >>> So I guess this change "fixed" the issue. If you want me to test additional patches, let me know.
> >>
> >> Thanks for the test. For now, I don't have any other patch to test; I
> >> have to look more deeply into how the situation happens.
> >> I will let you know if I have another patch to test.
> >
> > So I haven't been able to figure out how we reach this situation yet.
> > In the meantime I'm going to make a clean patch with the fix above.
> >
> > Is it OK if I add a Reported-by and a Tested-by from you?
>
> Sure.
> I just realized that this system has something special. Some months ago I created 2 slices:
> $ head /etc/systemd/system/*.slice
> ==> /etc/systemd/system/machine-production.slice <==
> [Unit]
> Description=VM production
> Before=slices.target
> Wants=machine.slice
> [Slice]
> CPUQuota=2000%
> CPUWeight=1000
>
> ==> /etc/systemd/system/machine-test.slice <==
> [Unit]
> Description=VM production
> Before=slices.target
> Wants=machine.slice
> [Slice]
> CPUQuota=300%
> CPUWeight=100
>
>
> And the guests are then put into these slices. That also means that this test will never use more than 2300%,
> no matter how many CPUs the system has.

Thanks for the information, I will try to see how this could impact the enqueue

>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-04 17:42                         ` Christian Borntraeger
  2020-03-04 17:51                           ` Vincent Guittot
@ 2020-03-04 19:19                           ` Dietmar Eggemann
  2020-03-04 19:38                             ` Christian Borntraeger
  1 sibling, 1 reply; 28+ messages in thread
From: Dietmar Eggemann @ 2020-03-04 19:19 UTC (permalink / raw)
  To: Christian Borntraeger, Vincent Guittot
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

Hi Christian,

On 04/03/2020 18:42, Christian Borntraeger wrote:
> 
> 
> On 04.03.20 16:26, Vincent Guittot wrote:
>> On Tue, 3 Mar 2020 at 08:55, Vincent Guittot <vincent.guittot@linaro.org> wrote:
>>>
>>> On Tue, 3 Mar 2020 at 08:37, Christian Borntraeger
>>> <borntraeger@de.ibm.com> wrote:
>>>>
>>>>
>>>>
>> [...]
>>>>>>> ---
>>>>>>>  kernel/sched/fair.c | 2 +-
>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>>>>> index 3c8a379c357e..beb773c23e7d 100644
>>>>>>> --- a/kernel/sched/fair.c
>>>>>>> +++ b/kernel/sched/fair.c
>>>>>>> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>>>>>>             __enqueue_entity(cfs_rq, se);
>>>>>>>     se->on_rq = 1;
>>>>>>>
>>>>>>> +   list_add_leaf_cfs_rq(cfs_rq);
>>>>>>>     if (cfs_rq->nr_running == 1) {
>>>>>>> -           list_add_leaf_cfs_rq(cfs_rq);
>>>>>>>             check_enqueue_throttle(cfs_rq);
>>>>>>>     }
>>>>>>>  }
>>>>>>
>>>>>> Now running for 3 hours. I have not seen the issue yet. I can tell tomorrow if this fixes
>>>>>> the issue.
>>>>>
>>>>>
>>>>> Still running fine. I can tell for sure tomorrow, but I have the impression that this makes the
>>>>> WARN_ON go away.
>>>>
>>>> So I guess this change "fixed" the issue. If you want me to test additional patches, let me know.
>>>
>>> Thanks for the test. For now, I don't have any other patch to test; I
>>> have to look more deeply into how the situation happens.
>>> I will let you know if I have another patch to test.
>>
>> So I haven't been able to figure out how we reach this situation yet.
>> In the meantime I'm going to make a clean patch with the fix above.
>>
>> Is it OK if I add a Reported-by and a Tested-by from you?
> 
> Sure.
> I just realized that this system has something special. Some months ago I created 2 slices:
> $ head /etc/systemd/system/*.slice
> ==> /etc/systemd/system/machine-production.slice <==
> [Unit]
> Description=VM production
> Before=slices.target
> Wants=machine.slice
> [Slice]
> CPUQuota=2000%
> CPUWeight=1000
> 
> ==> /etc/systemd/system/machine-test.slice <==
> [Unit]
> Description=VM production
> Before=slices.target
> Wants=machine.slice
> [Slice]
> CPUQuota=300%
> CPUWeight=100
> 
> 
> And the guests are then put into these slices. That also means that this test will never use more than 2300%,
> no matter how many CPUs the system has.

If you could run this debug patch on top of your unpatched kernel, it would tell us which task (in the enqueue case)
and which taskgroup is causing that.

You could then further dump the appropriate taskgroup directory under the cpu cgroup mountpoint
(to see e.g. the CFS bandwidth data). 

I expect more than one hit since assert_list_leaf_cfs_rq() uses SCHED_WARN_ON, hence WARN_ONCE.

--8<--
From b709758f476ee4cfc260eceedc45ebcc50d93074 Mon Sep 17 00:00:00 2001
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
Date: Sat, 29 Feb 2020 11:07:05 +0000
Subject: [PATCH] test: rq->tmp_alone_branch != &rq->leaf_cfs_rq_list

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
 kernel/sched/fair.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3c8a379c357e..69fc30db7440 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4619,6 +4619,15 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 			break;
 	}
 
+	if (rq->tmp_alone_branch != &rq->leaf_cfs_rq_list) {
+		char path[64];
+
+		sched_trace_cfs_rq_path(cfs_rq, path, 64);
+
+		printk("CPU%d path=%s on_list=%d nr_running=%d\n",
+		       cpu_of(rq), path, cfs_rq->on_list, cfs_rq->nr_running);
+	}
+
 	assert_list_leaf_cfs_rq(rq);
 
 	if (!se)
@@ -5320,6 +5329,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		}
 	}
 
+	if (rq->tmp_alone_branch != &rq->leaf_cfs_rq_list) {
+		char path[64];
+
+		cfs_rq = cfs_rq_of(&p->se);
+
+		sched_trace_cfs_rq_path(cfs_rq, path, 64);
+
+		printk("CPU%d path=%s on_list=%d nr_running=%d p=[%s %d]\n",
+		       cpu_of(rq), path, cfs_rq->on_list, cfs_rq->nr_running,
+		       p->comm, p->pid);
+	}
+
 	assert_list_leaf_cfs_rq(rq);
 
 	hrtick_update(rq);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-04 19:19                           ` Dietmar Eggemann
@ 2020-03-04 19:38                             ` Christian Borntraeger
  2020-03-04 19:59                               ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-04 19:38 UTC (permalink / raw)
  To: Dietmar Eggemann, Vincent Guittot
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1273 bytes --]



On 04.03.20 20:19, Dietmar Eggemann wrote:
>> I just realized that this system has something special. Some months ago I created 2 slices:
>> $ head /etc/systemd/system/*.slice
>> ==> /etc/systemd/system/machine-production.slice <==
>> [Unit]
>> Description=VM production
>> Before=slices.target
>> Wants=machine.slice
>> [Slice]
>> CPUQuota=2000%
>> CPUWeight=1000
>>
>> ==> /etc/systemd/system/machine-test.slice <==
>> [Unit]
>> Description=VM production
>> Before=slices.target
>> Wants=machine.slice
>> [Slice]
>> CPUQuota=300%
>> CPUWeight=100
>>
>>
>> And the guests are then put into these slices. That also means that this test will never use more than 2300%,
>> no matter how many CPUs the system has.
> 
> If you could run this debug patch on top of your unpatched kernel, it would tell us which task (in the enqueue case)
> and which taskgroup is causing that.
> 
> You could then further dump the appropriate taskgroup directory under the cpu cgroup mountpoint
> (to see e.g. the CFS bandwidth data). 
> 
> I expect more than one hit since assert_list_leaf_cfs_rq() uses SCHED_WARN_ON, hence WARN_ONCE.

That was quick. FWIW, I messed up dumping the cgroup mountpoint (since I restarted my guests after this happened).
Will retry. See the dmesg attached. 

[-- Attachment #2: output --]
[-- Type: text/plain, Size: 49696 bytes --]

[    0.179552] Linux version 5.6.0-rc4+ (cborntra@m83lp52.lnxne.boe) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #157 SMP Wed Mar 4 20:28:33 CET 2020
[    0.179554] setup: Linux is running natively in 64-bit mode
[    0.179600] setup: The maximum memory size is 131072MB
[    0.179605] setup: Reserving 1024MB of memory at 130048MB for crashkernel (System RAM: 130048MB)
[    0.179616] numa: NUMA mode: plain
[    0.179690] cpu: 64 configured CPUs, 0 standby CPUs
[    0.179754] cpu: The CPU configuration topology of the machine is: 0 0 4 2 3 10 / 4
[    0.180454] Write protected kernel read-only data: 13532k
[    0.181204] Zone ranges:
[    0.181205]   DMA      [mem 0x0000000000000000-0x000000007fffffff]
[    0.181207]   Normal   [mem 0x0000000080000000-0x0000001fffffffff]
[    0.181208] Movable zone start for each node
[    0.181209] Early memory node ranges
[    0.181210]   node   0: [mem 0x0000000000000000-0x0000001fffffffff]
[    0.181217] Initmem setup node 0 [mem 0x0000000000000000-0x0000001fffffffff]
[    0.181218] On node 0 totalpages: 33554432
[    0.181218]   DMA zone: 8192 pages used for memmap
[    0.181219]   DMA zone: 0 pages reserved
[    0.181220]   DMA zone: 524288 pages, LIFO batch:63
[    0.198887]   Normal zone: 516096 pages used for memmap
[    0.198887]   Normal zone: 33030144 pages, LIFO batch:63
[    0.215007] percpu: Embedded 33 pages/cpu s97280 r8192 d29696 u135168
[    0.215015] pcpu-alloc: s97280 r8192 d29696 u135168 alloc=33*4096
[    0.215016] pcpu-alloc: [0] 000 [0] 001 [0] 002 [0] 003 
[    0.215018] pcpu-alloc: [0] 004 [0] 005 [0] 006 [0] 007 
[    0.215019] pcpu-alloc: [0] 008 [0] 009 [0] 010 [0] 011 
[    0.215020] pcpu-alloc: [0] 012 [0] 013 [0] 014 [0] 015 
[    0.215021] pcpu-alloc: [0] 016 [0] 017 [0] 018 [0] 019 
[    0.215022] pcpu-alloc: [0] 020 [0] 021 [0] 022 [0] 023 
[    0.215023] pcpu-alloc: [0] 024 [0] 025 [0] 026 [0] 027 
[    0.215025] pcpu-alloc: [0] 028 [0] 029 [0] 030 [0] 031 
[    0.215026] pcpu-alloc: [0] 032 [0] 033 [0] 034 [0] 035 
[    0.215027] pcpu-alloc: [0] 036 [0] 037 [0] 038 [0] 039 
[    0.215028] pcpu-alloc: [0] 040 [0] 041 [0] 042 [0] 043 
[    0.215029] pcpu-alloc: [0] 044 [0] 045 [0] 046 [0] 047 
[    0.215030] pcpu-alloc: [0] 048 [0] 049 [0] 050 [0] 051 
[    0.215032] pcpu-alloc: [0] 052 [0] 053 [0] 054 [0] 055 
[    0.215033] pcpu-alloc: [0] 056 [0] 057 [0] 058 [0] 059 
[    0.215034] pcpu-alloc: [0] 060 [0] 061 [0] 062 [0] 063 
[    0.215035] pcpu-alloc: [0] 064 [0] 065 [0] 066 [0] 067 
[    0.215036] pcpu-alloc: [0] 068 [0] 069 [0] 070 [0] 071 
[    0.215037] pcpu-alloc: [0] 072 [0] 073 [0] 074 [0] 075 
[    0.215038] pcpu-alloc: [0] 076 [0] 077 [0] 078 [0] 079 
[    0.215039] pcpu-alloc: [0] 080 [0] 081 [0] 082 [0] 083 
[    0.215040] pcpu-alloc: [0] 084 [0] 085 [0] 086 [0] 087 
[    0.215042] pcpu-alloc: [0] 088 [0] 089 [0] 090 [0] 091 
[    0.215043] pcpu-alloc: [0] 092 [0] 093 [0] 094 [0] 095 
[    0.215044] pcpu-alloc: [0] 096 [0] 097 [0] 098 [0] 099 
[    0.215045] pcpu-alloc: [0] 100 [0] 101 [0] 102 [0] 103 
[    0.215046] pcpu-alloc: [0] 104 [0] 105 [0] 106 [0] 107 
[    0.215047] pcpu-alloc: [0] 108 [0] 109 [0] 110 [0] 111 
[    0.215048] pcpu-alloc: [0] 112 [0] 113 [0] 114 [0] 115 
[    0.215050] pcpu-alloc: [0] 116 [0] 117 [0] 118 [0] 119 
[    0.215051] pcpu-alloc: [0] 120 [0] 121 [0] 122 [0] 123 
[    0.215052] pcpu-alloc: [0] 124 [0] 125 [0] 126 [0] 127 
[    0.215053] pcpu-alloc: [0] 128 [0] 129 [0] 130 [0] 131 
[    0.215054] pcpu-alloc: [0] 132 [0] 133 [0] 134 [0] 135 
[    0.215055] pcpu-alloc: [0] 136 [0] 137 [0] 138 [0] 139 
[    0.215056] pcpu-alloc: [0] 140 [0] 141 [0] 142 [0] 143 
[    0.215058] pcpu-alloc: [0] 144 [0] 145 [0] 146 [0] 147 
[    0.215059] pcpu-alloc: [0] 148 [0] 149 [0] 150 [0] 151 
[    0.215060] pcpu-alloc: [0] 152 [0] 153 [0] 154 [0] 155 
[    0.215061] pcpu-alloc: [0] 156 [0] 157 [0] 158 [0] 159 
[    0.215062] pcpu-alloc: [0] 160 [0] 161 [0] 162 [0] 163 
[    0.215063] pcpu-alloc: [0] 164 [0] 165 [0] 166 [0] 167 
[    0.215065] pcpu-alloc: [0] 168 [0] 169 [0] 170 [0] 171 
[    0.215066] pcpu-alloc: [0] 172 [0] 173 [0] 174 [0] 175 
[    0.215067] pcpu-alloc: [0] 176 [0] 177 [0] 178 [0] 179 
[    0.215068] pcpu-alloc: [0] 180 [0] 181 [0] 182 [0] 183 
[    0.215069] pcpu-alloc: [0] 184 [0] 185 [0] 186 [0] 187 
[    0.215070] pcpu-alloc: [0] 188 [0] 189 [0] 190 [0] 191 
[    0.215072] pcpu-alloc: [0] 192 [0] 193 [0] 194 [0] 195 
[    0.215073] pcpu-alloc: [0] 196 [0] 197 [0] 198 [0] 199 
[    0.215074] pcpu-alloc: [0] 200 [0] 201 [0] 202 [0] 203 
[    0.215075] pcpu-alloc: [0] 204 [0] 205 [0] 206 [0] 207 
[    0.215076] pcpu-alloc: [0] 208 [0] 209 [0] 210 [0] 211 
[    0.215077] pcpu-alloc: [0] 212 [0] 213 [0] 214 [0] 215 
[    0.215079] pcpu-alloc: [0] 216 [0] 217 [0] 218 [0] 219 
[    0.215080] pcpu-alloc: [0] 220 [0] 221 [0] 222 [0] 223 
[    0.215081] pcpu-alloc: [0] 224 [0] 225 [0] 226 [0] 227 
[    0.215082] pcpu-alloc: [0] 228 [0] 229 [0] 230 [0] 231 
[    0.215083] pcpu-alloc: [0] 232 [0] 233 [0] 234 [0] 235 
[    0.215084] pcpu-alloc: [0] 236 [0] 237 [0] 238 [0] 239 
[    0.215085] pcpu-alloc: [0] 240 [0] 241 [0] 242 [0] 243 
[    0.215087] pcpu-alloc: [0] 244 [0] 245 [0] 246 [0] 247 
[    0.215088] pcpu-alloc: [0] 248 [0] 249 [0] 250 [0] 251 
[    0.215089] pcpu-alloc: [0] 252 [0] 253 [0] 254 [0] 255 
[    0.215090] pcpu-alloc: [0] 256 [0] 257 [0] 258 [0] 259 
[    0.215091] pcpu-alloc: [0] 260 [0] 261 [0] 262 [0] 263 
[    0.215093] pcpu-alloc: [0] 264 [0] 265 [0] 266 [0] 267 
[    0.215094] pcpu-alloc: [0] 268 [0] 269 [0] 270 [0] 271 
[    0.215095] pcpu-alloc: [0] 272 [0] 273 [0] 274 [0] 275 
[    0.215096] pcpu-alloc: [0] 276 [0] 277 [0] 278 [0] 279 
[    0.215097] pcpu-alloc: [0] 280 [0] 281 [0] 282 [0] 283 
[    0.215098] pcpu-alloc: [0] 284 [0] 285 [0] 286 [0] 287 
[    0.215099] pcpu-alloc: [0] 288 [0] 289 [0] 290 [0] 291 
[    0.215101] pcpu-alloc: [0] 292 [0] 293 [0] 294 [0] 295 
[    0.215102] pcpu-alloc: [0] 296 [0] 297 [0] 298 [0] 299 
[    0.215103] pcpu-alloc: [0] 300 [0] 301 [0] 302 [0] 303 
[    0.215104] pcpu-alloc: [0] 304 [0] 305 [0] 306 [0] 307 
[    0.215105] pcpu-alloc: [0] 308 [0] 309 [0] 310 [0] 311 
[    0.215106] pcpu-alloc: [0] 312 [0] 313 [0] 314 [0] 315 
[    0.215108] pcpu-alloc: [0] 316 [0] 317 [0] 318 [0] 319 
[    0.215109] pcpu-alloc: [0] 320 [0] 321 [0] 322 [0] 323 
[    0.215110] pcpu-alloc: [0] 324 [0] 325 [0] 326 [0] 327 
[    0.215111] pcpu-alloc: [0] 328 [0] 329 [0] 330 [0] 331 
[    0.215112] pcpu-alloc: [0] 332 [0] 333 [0] 334 [0] 335 
[    0.215114] pcpu-alloc: [0] 336 [0] 337 [0] 338 [0] 339 
[    0.215137] Built 1 zonelists, mobility grouping on.  Total pages: 33030144
[    0.215138] Policy zone: Normal
[    0.215139] Kernel command line: root=/dev/disk/by-path/ccw-0.0.3318-part1 rd.dasd=0.0.3318 cio_ignore=all,!condev rd.znet=qeth,0.0.bd00,0.0.bd01,0.0.bd02,layer2=1,portno=0,portname=OSAPORT zfcp.allow_lun_scan=0 BOOT_IMAGE=0 crashkernel=1G dyndbg="module=vhost +plt" BOOT_IMAGE=
[    0.216115] printk: log_buf_len individual max cpu contribution: 4096 bytes
[    0.216115] printk: log_buf_len total cpu_extra contributions: 1388544 bytes
[    0.216116] printk: log_buf_len min size: 131072 bytes
[    0.216409] printk: log_buf_len: 2097152 bytes
[    0.216409] printk: early log buf free: 123836(94%)
[    0.225460] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
[    0.230020] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
[    0.230032] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.261020] Memory: 2316420K/134217728K available (10460K kernel code, 2016K rwdata, 3072K rodata, 3932K init, 852K bss, 3354384K reserved, 0K cma-reserved)
[    0.261421] SLUB: HWalign=256, Order=0-3, MinObjects=0, CPUs=340, Nodes=1
[    0.261450] ftrace: allocating 31562 entries in 124 pages
[    0.265986] ftrace: allocated 124 pages with 5 groups
[    0.266642] rcu: Hierarchical RCU implementation.
[    0.266642] rcu: 	RCU event tracing is enabled.
[    0.266643] rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=340.
[    0.266643] 	Tasks RCU enabled.
[    0.266644] rcu: RCU calculated value of scheduler-enlistment delay is 11 jiffies.
[    0.266644] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=340
[    0.269449] NR_IRQS: 3, nr_irqs: 3, preallocated irqs: 3
[    0.269541] clocksource: tod: mask: 0xffffffffffffffff max_cycles: 0x3b0a9be803b0a9, max_idle_ns: 1805497147909793 ns
[    0.269712] Console: colour dummy device 80x25
[    0.658102] printk: console [ttyS0] enabled
[    0.823432] Calibrating delay loop (skipped)... 21881.00 BogoMIPS preset
[    0.823432] pid_max: default: 348160 minimum: 2720
[    0.823585] LSM: Security Framework initializing
[    0.823616] SELinux:  Initializing.
[    0.823854] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.824000] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.825084] rcu: Hierarchical SRCU implementation.
[    0.827868] smp: Bringing up secondary CPUs ...
[    0.842401] smp: Brought up 1 node, 64 CPUs
[    1.744807] node 0 initialised, 32136731 pages in 900ms
[    1.772554] devtmpfs: initialized
[    1.773439] random: get_random_u32 called from bucket_table_alloc.isra.0+0x82/0x120 with crng_init=0
[    1.774015] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    1.774659] futex hash table entries: 131072 (order: 13, 33554432 bytes, vmalloc)
[    1.779230] xor: automatically using best checksumming function   xc        
[    1.779412] NET: Registered protocol family 16
[    1.779449] audit: initializing netlink subsys (disabled)
[    1.779522] audit: type=2000 audit(1583350198.232:1): state=initialized audit_enabled=0 res=1
[    1.779663] Spectre V2 mitigation: etokens
[    1.780392] random: fast init done
[    1.799643] HugeTLB registered 1.00 MiB page size, pre-allocated 0 pages
[    2.049545] raid6: vx128x8  gen() 21586 MB/s
[    2.219544] raid6: vx128x8  xor() 13355 MB/s
[    2.219546] raid6: using algorithm vx128x8 gen() 21586 MB/s
[    2.219547] raid6: .... xor() 13355 MB/s, rmw enabled
[    2.219548] raid6: using s390xc recovery algorithm
[    2.219866] iommu: Default domain type: Translated 
[    2.219980] SCSI subsystem initialized
[    2.232745] PCI host bridge to bus 0000:00
[    2.232750] pci_bus 0000:00: root bus resource [mem 0x8000000000000000-0x8000000007ffffff 64bit pref]
[    2.232752] pci_bus 0000:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.232817] pci 0000:00:00.0: [1014:044b] type 00 class 0x120000
[    2.232872] pci 0000:00:00.0: reg 0x10: [mem 0xffffd80008000000-0xffffd8000fffffff 64bit pref]
[    2.233159] pci 0000:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0000:00:00.0 (capable of 32.000 Gb/s with 5 GT/s x8 link)
[    2.233199] pci 0000:00:00.0: Adding to iommu group 0
[    2.233210] pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to 00
[    2.235224] PCI host bridge to bus 0001:00
[    2.235226] pci_bus 0001:00: root bus resource [mem 0x8001000000000000-0x80010000000fffff 64bit pref]
[    2.235228] pci_bus 0001:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.235318] pci 0001:00:00.0: [15b3:1016] type 00 class 0x020000
[    2.235415] pci 0001:00:00.0: reg 0x10: [mem 0xffffd40002000000-0xffffd400020fffff 64bit pref]
[    2.235565] pci 0001:00:00.0: enabling Extended Tags
[    2.236059] pci 0001:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0001:00:00.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
[    2.236106] pci 0001:00:00.0: Adding to iommu group 1
[    2.236114] pci_bus 0001:00: busn_res: [bus 00-ff] end is updated to 00
[    2.238133] PCI host bridge to bus 0002:00
[    2.238135] pci_bus 0002:00: root bus resource [mem 0x8002000000000000-0x80020000000fffff 64bit pref]
[    2.238137] pci_bus 0002:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.238226] pci 0002:00:00.0: [15b3:1016] type 00 class 0x020000
[    2.238326] pci 0002:00:00.0: reg 0x10: [mem 0xffffd40008000000-0xffffd400080fffff 64bit pref]
[    2.238483] pci 0002:00:00.0: enabling Extended Tags
[    2.238988] pci 0002:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0002:00:00.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
[    2.239022] pci 0002:00:00.0: Adding to iommu group 2
[    2.239028] pci_bus 0002:00: busn_res: [bus 00-ff] end is updated to 00
[    2.836711] VFS: Disk quotas dquot_6.6.0
[    2.836765] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    2.838349] NET: Registered protocol family 2
[    2.839186] tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    2.839680] random: crng init done
[    2.839770] TCP established hash table entries: 524288 (order: 10, 4194304 bytes, vmalloc)
[    2.841819] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    2.842274] TCP: Hash tables configured (established 524288 bind 65536)
[    2.842599] UDP hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)
[    2.843583] UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)
[    2.844962] NET: Registered protocol family 1
[    2.845169] Trying to unpack rootfs image as initramfs...
[    3.397729] Freeing initrd memory: 42272K
[    3.398935] alg: No test for crc32be (crc32be-vx)
[    3.403320] Initialise system trusted keyrings
[    3.403369] workingset: timestamp_bits=45 max_order=25 bucket_order=0
[    3.404472] fuse: init (API version 7.31)
[    3.404541] SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled
[    3.411259] Key type asymmetric registered
[    3.411261] Asymmetric key parser 'x509' registered
[    3.411267] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    3.411477] io scheduler mq-deadline registered
[    3.411479] io scheduler kyber registered
[    3.411501] io scheduler bfq registered
[    3.412309] atomic64_test: passed
[    3.412362] hvc_iucv: The z/VM IUCV HVC device driver cannot be used without z/VM
[    3.418786] brd: module loaded
[    3.419143] cio: Channel measurement facility initialized using format extended (mode autodetected)
[    3.419414] Discipline DIAG cannot be used without z/VM
[    4.942920] sclp_sd: No data is available for the config data entity
[    5.301505] qeth: loading core functions
[    5.301563] qeth: register layer 2 discipline
[    5.301564] qeth: register layer 3 discipline
[    5.302044] NET: Registered protocol family 10
[    5.302987] Segment Routing with IPv6
[    5.303005] NET: Registered protocol family 17
[    5.303014] Key type dns_resolver registered
[    5.303107] registered taskstats version 1
[    5.303112] Loading compiled-in X.509 certificates
[    5.343478] Loaded X.509 cert 'Build time autogenerated kernel key: c46ba92ee388c82c5891ee836c9c20b752cdfac5'
[    5.344136] zswap: default zpool zbud not available
[    5.344137] zswap: pool creation failed
[    5.344894] Key type ._fscrypt registered
[    5.344895] Key type .fscrypt registered
[    5.344896] Key type fscrypt-provisioning registered
[    5.345187] Btrfs loaded, crc32c=crc32c-vx
[    5.349875] Key type big_key registered
[    5.349880] ima: No TPM chip found, activating TPM-bypass!
[    5.349884] ima: Allocated hash algorithm: sha256
[    5.349891] ima: No architecture policies found
[    5.351364] Freeing unused kernel memory: 3932K
[    5.409626] Write protected read-only-after-init data: 68k
[    5.409629] Run /init as init process
[    5.409629]   with arguments:
[    5.409630]     /init
[    5.409630]   with environment:
[    5.409630]     HOME=/
[    5.409630]     TERM=linux
[    5.409630]     BOOT_IMAGE=
[    5.409631]     crashkernel=1G
[    5.409631]     dyndbg=module=vhost +plt
[    5.424112] systemd[1]: Inserted module 'autofs4'
[    5.425256] systemd[1]: systemd v243.7-1.fc31 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
[    5.425663] systemd[1]: Detected architecture s390x.
[    5.425665] systemd[1]: Running in initial RAM disk.
[    5.425719] systemd[1]: Set hostname to <m83lp52.lnxne.boe>.
[    5.465976] systemd[1]: Reached target Local File Systems.
[    5.466033] systemd[1]: Reached target Slices.
[    5.466055] systemd[1]: Reached target Swap.
[    5.466075] systemd[1]: Reached target Timers.
[    5.466165] systemd[1]: Listening on Journal Audit Socket.
[    5.466216] systemd[1]: Listening on Journal Socket (/dev/log).
[    5.753422] audit: type=1130 audit(1583350202.212:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.760976] audit: type=1130 audit(1583350202.222:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.730719] audit: type=1130 audit(1583350203.192:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.746217] audit: type=1130 audit(1583350203.202:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.746614] audit: type=1334 audit(1583350203.202:6): prog-id=6 op=LOAD
[    6.746638] audit: type=1334 audit(1583350203.202:7): prog-id=7 op=LOAD
[    6.972690] audit: type=1130 audit(1583350203.432:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.976683] qeth 0.0.bd00: Priority Queueing not supported
[    6.977305] qeth 0.0.bd00: portname is deprecated and is ignored
[    6.978774] dasd-eckd 0.0.3318: A channel path to the device has become operational
[    6.979092] dasd-eckd 0.0.331a: A channel path to the device has become operational
[    6.979379] dasd-eckd 0.0.3319: A channel path to the device has become operational
[    6.979432] dasd-eckd 0.0.331b: A channel path to the device has become operational
[    6.982509] qdio: 0.0.bd02 OSA on SC 159b using AI:1 QEBSM:0 PRI:1 TDD:1 SIGA: W AP
[    6.988753] dasd-eckd 0.0.331a: New DASD 3390/0C (CU 3990/01) with 30051 cylinders, 15 heads, 224 sectors
[    6.991372] dasd-eckd 0.0.331a: DASD with 4 KB/block, 21636720 KB total size, 48 KB/track, compatible disk layout
[    6.992179]  dasdb:VOL1/  0X331A:
[    6.993148] dasd-eckd 0.0.3318: New DASD 3390/0E (CU 3990/01) with 262668 cylinders, 15 heads, 224 sectors
[    6.995792] dasd-eckd 0.0.3318: DASD with 4 KB/block, 189120960 KB total size, 48 KB/track, compatible disk layout
[    6.996773]  dasda:VOL1/  0X3318: dasda1
[    6.997591] dasd-eckd 0.0.3319: New DASD 3390/0E (CU 3990/01) with 262668 cylinders, 15 heads, 224 sectors
[    7.000216] dasd-eckd 0.0.3319: DASD with 4 KB/block, 189120960 KB total size, 48 KB/track, compatible disk layout
[    7.001240]  dasdc:VOL1/  0X3319: dasdc1
[    7.002064] dasd-eckd 0.0.331b: New DASD 3390/0C (CU 3990/01) with 30051 cylinders, 15 heads, 224 sectors
[    7.005812] dasd-eckd 0.0.331b: DASD with 4 KB/block, 21636720 KB total size, 48 KB/track, compatible disk layout
[    7.006506]  dasdd:VOL1/  0X331B:
[    7.015493] audit: type=1130 audit(1583350203.472:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    7.022792] qeth 0.0.bd00: QDIO data connection isolation is deactivated
[    7.023272] qeth 0.0.bd00: The device represents a Bridge Capable Port
[    7.026647] qeth 0.0.bd00: MAC address ea:98:1f:2a:e3:e9 successfully registered
[    7.027175] qeth 0.0.bd00: Device is a OSD Express card (level: 0199)
               with link type OSD_10GIG.
[    7.030971] audit: type=1130 audit(1583350203.492:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=plymouth-start comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    7.055532] qeth 0.0.bd00: MAC address de:45:d7:61:c4:13 successfully registered
[    7.057769] qeth 0.0.bd00 encbd00: renamed from eth0
[    7.246654] mlx5_core 0001:00:00.0: enabling device (0000 -> 0002)
[    7.246743] mlx5_core 0001:00:00.0: firmware version: 14.23.1020
[    7.405839] dasdconf.sh Warning: 0.0.331a is already online, not configuring
[    7.550030] dasdconf.sh Warning: 0.0.3319 is already online, not configuring
[    7.665404] dasdconf.sh Warning: 0.0.331b is already online, not configuring
[    7.684688] dasdconf.sh Warning: 0.0.3318 is already online, not configuring
[    7.688844] mlx5_core 0002:00:00.0: enabling device (0000 -> 0002)
[    7.688928] mlx5_core 0002:00:00.0: firmware version: 14.23.1020
[    8.137519] mlx5_core 0001:00:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    8.268721] mlx5_core 0002:00:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    8.421011] mlx5_core 0002:00:00.0 enP2s564: renamed from eth1
[    8.680291] mlx5_core 0001:00:00.0 enP1s519: renamed from eth0
[    8.911022] audit: type=1130 audit(1583350205.372:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.927993] audit: type=1130 audit(1583350205.382:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.937455] EXT4-fs (dasda1): mounted filesystem with ordered data mode. Opts: (null)
[    8.960316] audit: type=1334 audit(1583350205.422:13): prog-id=5 op=UNLOAD
[    8.960319] audit: type=1334 audit(1583350205.422:14): prog-id=4 op=UNLOAD
[    8.960321] audit: type=1334 audit(1583350205.422:15): prog-id=3 op=UNLOAD
[    8.960346] audit: type=1334 audit(1583350205.422:16): prog-id=7 op=UNLOAD
[    8.965353] audit: type=1334 audit(1583350205.422:17): prog-id=6 op=UNLOAD
[    9.253873] systemd-journald[543]: Received SIGTERM from PID 1 (systemd).
[    9.272389] printk: systemd: 19 output lines suppressed due to ratelimiting
[    9.674803] SELinux:  Permission watch in class filesystem not defined in policy.
[    9.674808] SELinux:  Permission watch in class file not defined in policy.
[    9.674809] SELinux:  Permission watch_mount in class file not defined in policy.
[    9.674810] SELinux:  Permission watch_sb in class file not defined in policy.
[    9.674811] SELinux:  Permission watch_with_perm in class file not defined in policy.
[    9.674812] SELinux:  Permission watch_reads in class file not defined in policy.
[    9.674815] SELinux:  Permission watch in class dir not defined in policy.
[    9.674816] SELinux:  Permission watch_mount in class dir not defined in policy.
[    9.674817] SELinux:  Permission watch_sb in class dir not defined in policy.
[    9.674818] SELinux:  Permission watch_with_perm in class dir not defined in policy.
[    9.674819] SELinux:  Permission watch_reads in class dir not defined in policy.
[    9.674822] SELinux:  Permission watch in class lnk_file not defined in policy.
[    9.674823] SELinux:  Permission watch_mount in class lnk_file not defined in policy.
[    9.674824] SELinux:  Permission watch_sb in class lnk_file not defined in policy.
[    9.674825] SELinux:  Permission watch_with_perm in class lnk_file not defined in policy.
[    9.674826] SELinux:  Permission watch_reads in class lnk_file not defined in policy.
[    9.674828] SELinux:  Permission watch in class chr_file not defined in policy.
[    9.674844] SELinux:  Permission watch_mount in class chr_file not defined in policy.
[    9.674845] SELinux:  Permission watch_sb in class chr_file not defined in policy.
[    9.674846] SELinux:  Permission watch_with_perm in class chr_file not defined in policy.
[    9.674848] SELinux:  Permission watch_reads in class chr_file not defined in policy.
[    9.674850] SELinux:  Permission watch in class blk_file not defined in policy.
[    9.674851] SELinux:  Permission watch_mount in class blk_file not defined in policy.
[    9.674852] SELinux:  Permission watch_sb in class blk_file not defined in policy.
[    9.674853] SELinux:  Permission watch_with_perm in class blk_file not defined in policy.
[    9.674854] SELinux:  Permission watch_reads in class blk_file not defined in policy.
[    9.674856] SELinux:  Permission watch in class sock_file not defined in policy.
[    9.674857] SELinux:  Permission watch_mount in class sock_file not defined in policy.
[    9.674858] SELinux:  Permission watch_sb in class sock_file not defined in policy.
[    9.674859] SELinux:  Permission watch_with_perm in class sock_file not defined in policy.
[    9.674860] SELinux:  Permission watch_reads in class sock_file not defined in policy.
[    9.674863] SELinux:  Permission watch in class fifo_file not defined in policy.
[    9.674864] SELinux:  Permission watch_mount in class fifo_file not defined in policy.
[    9.674865] SELinux:  Permission watch_sb in class fifo_file not defined in policy.
[    9.674866] SELinux:  Permission watch_with_perm in class fifo_file not defined in policy.
[    9.674867] SELinux:  Permission watch_reads in class fifo_file not defined in policy.
[    9.674946] SELinux:  Class perf_event not defined in policy.
[    9.674947] SELinux:  Class lockdown not defined in policy.
[    9.674948] SELinux: the above unknown classes and permissions will be allowed
[    9.674961] SELinux:  policy capability network_peer_controls=1
[    9.674962] SELinux:  policy capability open_perms=1
[    9.674963] SELinux:  policy capability extended_socket_class=1
[    9.674963] SELinux:  policy capability always_check_network=0
[    9.674964] SELinux:  policy capability cgroup_seclabel=1
[    9.674965] SELinux:  policy capability nnp_nosuid_transition=1
[    9.753985] systemd[1]: Successfully loaded SELinux policy in 312.676ms.
[    9.812306] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.846ms.
[    9.814514] systemd[1]: systemd v243.7-1.fc31 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
[    9.814956] systemd[1]: Detected architecture s390x.
[    9.816221] systemd[1]: Set hostname to <m83lp52.lnxne.boe>.
[    9.951828] systemd[1]: /usr/lib/systemd/system/sssd.service:12: PIDFile= references a path below legacy directory /var/run/, updating /var/run/sssd.pid → /run/sssd.pid; please update the unit file accordingly.
[    9.955870] systemd[1]: /usr/lib/systemd/system/iscsid.service:11: PIDFile= references a path below legacy directory /var/run/, updating /var/run/iscsid.pid → /run/iscsid.pid; please update the unit file accordingly.
[    9.956083] systemd[1]: /usr/lib/systemd/system/iscsiuio.service:13: PIDFile= references a path below legacy directory /var/run/, updating /var/run/iscsiuio.pid → /run/iscsiuio.pid; please update the unit file accordingly.
[    9.983719] systemd[1]: /usr/lib/systemd/system/sssd-kcm.socket:7: ListenStream= references a path below legacy directory /var/run/, updating /var/run/.heim_org.h5l.kcm-socket → /run/.heim_org.h5l.kcm-socket; please update the unit file accordingly.
[   10.011461] systemd[1]: initrd-switch-root.service: Succeeded.
[   10.011556] systemd[1]: Stopped Switch Root.
[   10.011787] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
[   10.042867] EXT4-fs (dasda1): re-mounted. Opts: (null)
[   10.360085] systemd-journald[1087]: Received client request to flush runtime journal.
[   10.550102] VFIO - User Level meta-driver version: 0.3
[   10.608696] genwqe 0000:00:00.0: enabling device (0000 -> 0002)
[   10.609633] dasdconf.sh Warning: 0.0.3319 is already online, not configuring
[   10.609924] dasdconf.sh Warning: 0.0.331a is already online, not configuring
[   10.610250] dasdconf.sh Warning: 0.0.331b is already online, not configuring
[   10.613015] dasdconf.sh Warning: 0.0.3318 is already online, not configuring
[   10.951430] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0
[   11.102829] XFS (dasdc1): Mounting V5 Filesystem
[   11.113111] RPC: Registered named UNIX socket transport module.
[   11.113114] RPC: Registered udp transport module.
[   11.113115] RPC: Registered tcp transport module.
[   11.113116] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   11.128347] XFS (dasdc1): Ending clean mount
[   11.130744] xfs filesystem being mounted at /home supports timestamps until 2038 (0x7fffffff)
[   11.159172] RPC: Registered rdma transport module.
[   11.159175] RPC: Registered rdma backchannel transport module.
[   12.286326] mlx5_core 0001:00:00.0 enP1s519: Link up
[   12.289162] IPv6: ADDRCONF(NETDEV_CHANGE): enP1s519: link becomes ready
[   12.406381] mlx5_core 0002:00:00.0 enP2s564: Link up
[   12.511281] tun: Universal TUN/TAP device driver, 1.6
[   12.511994] virbr0: port 1(virbr0-nic) entered blocking state
[   12.511996] virbr0: port 1(virbr0-nic) entered disabled state
[   12.512065] device virbr0-nic entered promiscuous mode
[   12.806783] virbr0: port 1(virbr0-nic) entered blocking state
[   12.806787] virbr0: port 1(virbr0-nic) entered listening state
[   12.831055] virbr0: port 1(virbr0-nic) entered disabled state
[   13.309576] IPv6: ADDRCONF(NETDEV_CHANGE): enP2s564: link becomes ready
[   33.511826] hrtimer: interrupt took 650 ns
[  111.139686] CPU34 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 2/KVM 2445]
[  111.139692] ------------[ cut here ]------------
[  111.139693] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
[  111.139702] WARNING: CPU: 34 PID: 5615 at kernel/sched/fair.c:380 enqueue_task_fair+0x3f6/0x4a8
[  111.139704] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs mlx5_ib ib_uverbs s390_trng ghash_s390 prng ib_core aes_s390 des_s390 libdes sha3_512_s390 sha3_256_s390 sha512_s390 sha1_s390 genwqe_card vfio_ccw crc_itu_t vfio_mdev mdev vfio_iommu_type1 eadm_sch vfio zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
[  111.139743] CPU: 34 PID: 5615 Comm: genksyms Not tainted 5.6.0-rc4+ #157
[  111.139744] Hardware name: IBM 3906 M04 704 (LPAR)
[  111.139746] Krnl PSW : 0404c00180000000 00000012a3f3ea32 (enqueue_task_fair+0x3fa/0x4a8)
[  111.139748]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
[  111.139750] Krnl GPRS: 00000000000003e0 0000001eb0d35400 000000000000002d 00000012a51f07c2
[  111.139751]            000000000000002c 00000012a45aa3f8 0000000000000001 0400000000000000
[  111.139753]            0000001f695ba000 000003e00116bb58 0000001eb0d35400 0000001fbd4a4928
[  111.139780]            0000001a1a092000 0000001fbd4a3d00 00000012a3f3ea2e 000003e00116baa0
[  111.139787] Krnl Code: 00000012a3f3ea22: c020005d49be	larl	%r2,00000012a4ae7d9e
                          00000012a3f3ea28: c0e5fffdcc48	brasl	%r14,00000012a3ef82b8
                         #00000012a3f3ea2e: af000000		mc	0,0
                         >00000012a3f3ea32: a7f4febe		brc	15,00000012a3f3e7ae
                          00000012a3f3ea36: ec2cfe68017f	clij	%r2,1,12,00000012a3f3e706
                          00000012a3f3ea3c: e310dd200004	lg	%r1,3360(%r13)
                          00000012a3f3ea42: 58201098		l	%r2,152(%r1)
                          00000012a3f3ea46: ec26fe63007e	cij	%r2,0,6,00000012a3f3e70c
[  111.139802] Call Trace:
[  111.139805]  [<00000012a3f3ea32>] enqueue_task_fair+0x3fa/0x4a8 
[  111.139806] ([<00000012a3f3ea2e>] enqueue_task_fair+0x3f6/0x4a8)
[  111.139809]  [<00000012a3f2ebb8>] activate_task+0x88/0xf0 
[  111.139811]  [<00000012a3f2f128>] ttwu_do_activate+0x58/0x78 
[  111.139813]  [<00000012a3f30136>] try_to_wake_up+0x256/0x650 
[  111.139815]  [<00000012a3f4eb3e>] swake_up_locked.part.0+0x2e/0x70 
[  111.139817]  [<00000012a3f4ee5c>] swake_up_one+0x54/0x88 
[  111.139858]  [<000003ff8046c07a>] kvm_vcpu_wake_up+0x52/0x78 [kvm] 
[  111.139881]  [<000003ff804893a2>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm] 
[  111.139911]  [<000003ff80489b0e>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm] 
[  111.139918]  [<00000012a3f99ebc>] __hrtimer_run_queues+0x114/0x2f0 
[  111.139929]  [<00000012a3f9ac14>] hrtimer_interrupt+0x12c/0x2a8 
[  111.139934]  [<00000012a3ebcc0c>] do_IRQ+0xac/0xb0 
[  111.139938]  [<00000012a48e34c8>] ext_int_handler+0x128/0x12c 
[  111.139940] Last Breaking-Event-Address:
[  111.139942]  [<00000012a3ef8318>] __warn_printk+0x60/0x68
[  111.139943] ---[ end trace 63b8303def5e99c8 ]---
[  111.140565] CPU34 path=/user.slice on_list=1 nr_running=2 p=[make 5624]
[  111.140573] CPU34 path=/user.slice on_list=1 nr_running=3 p=[make 4545]
[  111.140576] CPU34 path=/user.slice on_list=1 nr_running=4 p=[make 4014]
[  111.140579] CPU34 path=/user.slice on_list=1 nr_running=5 p=[make 4330]
[  111.140583] CPU34 path=/user.slice on_list=1 nr_running=6 p=[make 4543]
[  111.140587] CPU34 path=/user.slice on_list=1 nr_running=7 p=[make 4191]
[  111.140589] CPU34 path=/user.slice on_list=1 nr_running=8 p=[make 4026]
[  111.140593] CPU34 path=/user.slice on_list=1 nr_running=9 p=[make 4013]
[  111.140595] CPU34 path=/user.slice on_list=1 nr_running=10 p=[make 4393]
[  111.140599] CPU34 path=/user.slice on_list=1 nr_running=11 p=[make 4463]
[  111.140602] CPU34 path=/user.slice on_list=1 nr_running=12 p=[make 4353]
[  111.140604] CPU34 path=/user.slice on_list=1 nr_running=13 p=[make 4012]
[  111.140607] CPU34 path=/user.slice on_list=1 nr_running=14 p=[make 4332]
[  111.140746] CPU34 path=/user.slice on_list=1 nr_running=2 p=[make 5624]
[  111.140756] CPU34 path=/user.slice on_list=1 nr_running=3 p=[make 4330]
[  111.140760] CPU34 path=/user.slice on_list=1 nr_running=4 p=[make 4014]
[  111.140763] CPU34 path=/user.slice on_list=1 nr_running=5 p=[make 4545]
[  111.140766] CPU34 path=/user.slice on_list=1 nr_running=6 p=[make 4393]
[  111.140773] CPU34 path=/user.slice on_list=1 nr_running=7 p=[make 4026]
[  111.140777] CPU34 path=/user.slice on_list=1 nr_running=8 p=[make 4191]
[  111.140784] CPU34 path=/user.slice on_list=1 nr_running=9 p=[make 4012]
[  111.140792] CPU34 path=/user.slice on_list=1 nr_running=10 p=[make 4463]
[  111.140799] CPU34 path=/user.slice on_list=1 nr_running=11 p=[make 4332]
[  111.142850] CPU34 path=/user.slice on_list=1 nr_running=2 p=[make 5624]
[  111.142862] CPU34 path=/user.slice on_list=1 nr_running=3 p=[make 4330]
[  111.142867] CPU34 path=/user.slice on_list=1 nr_running=4 p=[make 4014]
[  111.142870] CPU34 path=/user.slice on_list=1 nr_running=5 p=[make 4545]
[  111.142872] CPU34 path=/user.slice on_list=1 nr_running=6 p=[make 4393]
[  111.142881] CPU34 path=/user.slice on_list=1 nr_running=7 p=[make 4026]
[  111.142886] CPU34 path=/user.slice on_list=1 nr_running=8 p=[make 4191]
[  111.142893] CPU34 path=/user.slice on_list=1 nr_running=9 p=[make 4012]
[  111.142921] CPU34 path=/user.slice on_list=1 nr_running=10 p=[make 4463]
[  111.142930] CPU34 path=/user.slice on_list=1 nr_running=11 p=[make 4332]
[  111.143545] CPU34 path=/user.slice on_list=1 nr_running=1 p=[genksyms 5615]
[  111.144283] CPU34 path=/user.slice on_list=1 nr_running=1 p=[genksyms 5615]
[  111.145172] CPU34 path=/user.slice on_list=1 nr_running=1 p=[genksyms 5615]
[  111.145853] CPU34 path=/user.slice on_list=1 nr_running=1 p=[genksyms 5615]
[  114.145675] CPU41 path=/machine.slice/machine-test.slice/machine-qemu\x2d7\x2dtest10.s on_list=1 nr_running=1 p=[CPU 1/KVM 2440]
[  114.146949] CPU41 path=/user.slice on_list=1 nr_running=3 p=[gcc 10663]
[  114.147034] CPU41 path=/user.slice on_list=1 nr_running=3 p=[make 4072]
[  114.147250] CPU41 path=/user.slice on_list=1 nr_running=3 p=[make 4357]
[  114.147262] CPU41 path=/user.slice on_list=1 nr_running=4 p=[make 8067]
[  114.147264] CPU41 path=/user.slice on_list=1 nr_running=5 p=[make 4543]
[  114.148178] CPU41 path=/user.slice on_list=1 nr_running=2 p=[sh 10659]
[  114.148282] CPU41 path=/user.slice on_list=1 nr_running=2 p=[gcc 10658]
[  114.148305] CPU41 path=/user.slice on_list=1 nr_running=3 p=[gcc 10668]
[  114.150175] CPU41 path=/user.slice on_list=1 nr_running=2 p=[gcc 10658]
[  114.150232] CPU41 path=/user.slice on_list=1 nr_running=2 p=[cc1 10663]
[  114.169192] CPU41 path=/ on_list=1 nr_running=2 p=[ksoftirqd/41 216]
[  114.193276] CPU41 path=/user.slice on_list=1 nr_running=2 p=[make 4357]
[  114.193318] CPU41 path=/user.slice on_list=1 nr_running=3 p=[make 4543]
[  114.193325] CPU41 path=/user.slice on_list=1 nr_running=4 p=[make 8067]
[  114.649686] CPU43 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 0/KVM 2437]
[  114.655304] CPU43 path=/machine.slice/machine-production.slice/machine-qemu\x2d17\x2dt on_list=1 nr_running=1 p=[CPU 0/KVM 2587]
[  114.665302] CPU43 path=/machine.slice/machine-production.slice/machine-qemu\x2d17\x2dt on_list=1 nr_running=1 p=[CPU 0/KVM 2587]
[  114.670538] CPU43 path=/user.slice on_list=1 nr_running=2 p=[as 10614]
[  114.670582] CPU43 path=/user.slice on_list=1 nr_running=2 p=[as 10614]
[  114.675302] CPU43 path=/machine.slice/machine-production.slice/machine-qemu\x2d17\x2dt on_list=1 nr_running=1 p=[CPU 0/KVM 2587]
[  114.676725] CPU43 path=/user.slice on_list=1 nr_running=2 p=[as 10614]
[  114.939686] CPU42 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 0/KVM 2437]
[  120.639685] CPU33 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 0/KVM 2437]
[  122.145676] CPU54 path=/machine.slice/machine-test.slice/machine-qemu\x2d7\x2dtest10.s on_list=1 nr_running=1 p=[CPU 1/KVM 2440]
[  122.148917] CPU54 path=/user.slice on_list=1 nr_running=4 p=[as 19656]
[  122.149188] CPU54 path=/ on_list=1 nr_running=2 p=[kworker/54:1 453]
[  122.179197] CPU54 path=/ on_list=1 nr_running=2 p=[ksoftirqd/54 281]
[  122.193424] CPU54 path=/user.slice on_list=1 nr_running=4 p=[as 19656]
[  127.049685] CPU35 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 2/KVM 2445]
[  132.228297] CPU6 path=/machine.slice/machine-test.slice/machine-qemu\x2d10\x2dtest14. on_list=1 nr_running=1 p=[CPU 2/KVM 2579]
[  134.345678] CPU34 path=/machine.slice/machine-test.slice/machine-qemu\x2d7\x2dtest10.s on_list=1 nr_running=1 p=[CPU 2/KVM 2444]
[  134.348906] CPU34 path=/user.slice on_list=1 nr_running=2 p=[as 32056]
[  134.349078] CPU34 path=/user.slice on_list=1 nr_running=2 p=[as 32056]
[  134.353372] CPU34 path=/user.slice on_list=1 nr_running=2 p=[as 32056]
[  134.357981] CPU34 path=/user.slice on_list=1 nr_running=2 p=[as 32056]
[  135.149687] CPU58 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 2/KVM 2445]
[  135.150236] CPU58 path=/user.slice on_list=1 nr_running=1 p=[genksyms 35088]
[  135.152212] CPU58 path=/user.slice on_list=1 nr_running=1 p=[genksyms 35113]
[  135.152940] CPU58 path=/user.slice on_list=1 nr_running=1 p=[genksyms 35113]
[  135.165833] CPU58 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[qemu-system-s39 2330]
[  135.169227] CPU58 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[qemu-system-s39 2330]
[  135.194516] CPU58 path=/user.slice on_list=1 nr_running=2 p=[cc1 35115]
[  135.198656] CPU58 path=/user.slice on_list=1 nr_running=3 p=[sh 34629]
[  135.199181] CPU58 path=/system.slice on_list=1 nr_running=1 p=[systemd-journal 1087]
[  135.199205] CPU58 path=/ on_list=1 nr_running=3 p=[ksoftirqd/58 301]
[  146.075678] CPU32 path=/machine.slice/machine-test.slice/machine-qemu\x2d7\x2dtest10.s on_list=1 nr_running=1 p=[CPU 1/KVM 2440]
[  146.097410] CPU32 path=/user.slice on_list=1 nr_running=3 p=[as 46220]
[  146.097522] CPU32 path=/user.slice on_list=1 nr_running=3 p=[as 46220]
[  148.449694] CPU5 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 2/KVM 2445]
[  152.649687] CPU3 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 0/KVM 2437]
[  152.659181] CPU3 path=/ on_list=1 nr_running=2 p=[rcu_sched 11]
[  152.689179] CPU3 path=/ on_list=1 nr_running=2 p=[rcu_sched 11]
[  154.579686] CPU3 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 1/KVM 2441]
[  154.580233] CPU3 path=/ on_list=1 nr_running=2 p=[kworker/u680:0 8]
[  154.580242] CPU3 path=/user.slice on_list=1 nr_running=2 p=[sshd 1821]
[  154.585388] CPU3 path=/ on_list=1 nr_running=2 p=[kworker/u680:0 8]
[  154.585396] CPU3 path=/user.slice on_list=1 nr_running=2 p=[sshd 1821]
[  154.585447] CPU3 path=/ on_list=1 nr_running=2 p=[kworker/u680:0 8]
[  154.585452] CPU3 path=/user.slice on_list=1 nr_running=2 p=[sshd 1821]
[  154.592783] CPU3 path=/user.slice on_list=1 nr_running=1 p=[cc1 57925]
[  154.593378] CPU3 path=/user.slice on_list=1 nr_running=1 p=[make 58331]
[  154.593413] CPU3 path=/user.slice on_list=1 nr_running=1 p=[cc1 57128]
[  154.593733] CPU3 path=/user.slice on_list=1 nr_running=2 p=[gcc 57228]
[  154.593997] CPU3 path=/user.slice on_list=1 nr_running=2 p=[gcc 57228]
[  154.594007] CPU3 path=/user.slice on_list=1 nr_running=3 p=[sh 57226]
[  154.594365] CPU3 path=/user.slice on_list=1 nr_running=3 p=[sh 58345]
[  154.595243] CPU3 path=/ on_list=1 nr_running=2 p=[kworker/u680:0 8]
[  154.595249] CPU3 path=/user.slice on_list=1 nr_running=2 p=[sshd 1821]
[  154.598249] CPU3 path=/ on_list=1 nr_running=2 p=[kworker/u680:0 8]
[  154.598257] CPU3 path=/user.slice on_list=1 nr_running=2 p=[sshd 1821]
[  155.169688] CPU10 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 1/KVM 2441]
[  155.171423] CPU10 path=/user.slice on_list=1 nr_running=1 p=[gcc 57436]
[  155.171761] CPU10 path=/user.slice on_list=1 nr_running=1 p=[gcc 57436]
[  155.171789] CPU10 path=/ on_list=1 nr_running=1 p=[kworker/10:1 439]
[  155.172119] CPU10 path=/user.slice on_list=1 nr_running=1 p=[sh 58915]
[  155.176862] CPU10 path=/ on_list=1 nr_running=2 p=[kworker/10:1H 855]
[  155.176871] CPU10 path=/ on_list=1 nr_running=3 p=[ksoftirqd/10 61]
[  155.176880] CPU10 path=/user.slice on_list=1 nr_running=1 p=[fixdep 58915]
[  155.176904] CPU10 path=/ on_list=1 nr_running=1 p=[kworker/10:1 439]
[  155.177016] CPU10 path=/user.slice on_list=1 nr_running=1 p=[sh 58939]
[  155.229683] CPU10 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 1/KVM 2441]
[  155.229843] CPU10 path=/user.slice on_list=1 nr_running=1 p=[as 58433]
[  155.229886] CPU10 path=/user.slice on_list=1 nr_running=1 p=[as 58433]
[  155.229926] CPU10 path=/user.slice on_list=1 nr_running=1 p=[as 58433]
[  155.229932] CPU10 path=/machine.slice/machine-production.slice/machine-qemu\x2d17\x2dt on_list=1 nr_running=1 p=[CPU 0/KVM 2587]
[  155.229960] CPU10 path=/machine.slice/machine-production.slice/machine-qemu\x2d17\x2dt on_list=1 nr_running=1 p=[CPU 0/KVM 2587]
[  155.230074] CPU10 path=/machine.slice/machine-production.slice/machine-qemu\x2d17\x2dt on_list=1 nr_running=1 p=[CPU 0/KVM 2587]
[  155.230564] CPU10 path=/user.slice on_list=1 nr_running=1 p=[cc1 58432]
[  155.231238] CPU10 path=/user.slice on_list=1 nr_running=1 p=[make 16571]
[  155.231563] CPU10 path=/user.slice on_list=1 nr_running=2 p=[make 59029]
[  155.231706] CPU10 path=/user.slice on_list=1 nr_running=2 p=[make 16571]
[  155.232025] CPU10 path=/user.slice on_list=1 nr_running=3 p=[make 59030]
[  155.233386] CPU10 path=/ on_list=1 nr_running=2 p=[kworker/10:1 439]
[  155.233401] CPU10 path=/user.slice on_list=1 nr_running=2 p=[sh 59029]
[  155.233500] CPU10 path=/user.slice on_list=1 nr_running=2 p=[make 16571]
[  155.233504] CPU10 path=/ on_list=1 nr_running=2 p=[kworker/10:1 439]
[  155.233707] CPU10 path=/user.slice on_list=1 nr_running=3 p=[make 59032]
[  155.234966] CPU10 path=/user.slice on_list=1 nr_running=2 p=[sh 59030]
[  155.234976] CPU10 path=/ on_list=1 nr_running=2 p=[kworker/10:1 439]
[  155.235158] CPU10 path=/user.slice on_list=1 nr_running=1 p=[make 16571]
[  155.235303] CPU10 path=/machine.slice/machine-production.slice/machine-qemu\x2d17\x2dt on_list=1 nr_running=1 p=[CPU 0/KVM 2587]
[  155.235376] CPU10 path=/user.slice on_list=1 nr_running=2 p=[make 59034]
[  155.235466] CPU10 path=/user.slice on_list=1 nr_running=2 p=[make 16571]
[  155.235630] CPU10 path=/user.slice on_list=1 nr_running=3 p=[make 59036]
[  155.236974] CPU10 path=/user.slice on_list=1 nr_running=2 p=[sh 59034]
[  155.237072] CPU10 path=/ on_list=1 nr_running=1 p=[kworker/10:1 439]
[  155.237173] CPU10 path=/user.slice on_list=1 nr_running=1 p=[make 16571]
[  155.237374] CPU10 path=/user.slice on_list=1 nr_running=2 p=[make 59038]
[  155.237469] CPU10 path=/user.slice on_list=1 nr_running=2 p=[make 16571]
[  155.237634] CPU10 path=/user.slice on_list=1 nr_running=3 p=[make 59040]
[  155.845677] CPU46 path=/machine.slice/machine-test.slice/machine-qemu\x2d7\x2dtest10.s on_list=1 nr_running=1 p=[CPU 0/KVM 2435]
[  156.219684] CPU20 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 0/KVM 2437]
[  156.222028] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2451]
[  156.222035] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 0/KVM 2449]
[  156.222273] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2451]
[  156.222362] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2451]
[  156.223501] CPU20 path=/user.slice on_list=1 nr_running=1 p=[as 59244]
[  156.223618] CPU20 path=/user.slice on_list=1 nr_running=1 p=[as 59244]
[  156.223736] CPU20 path=/user.slice on_list=1 nr_running=1 p=[as 59244]
[  156.223975] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2451]
[  156.224332] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2451]
[  156.224367] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2451]
[  156.224392] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2451]
[  156.224545] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d5\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2451]
[  156.227839] CPU20 path=/machine.slice/machine-production.slice/machine-qemu\x2d3\x2dte on_list=1 nr_running=1 p=[CPU 1/KVM 2442]
[  175.939686] CPU52 path=/machine.slice/machine-test.slice/machine-qemu\x2d4\x2dtest11.s on_list=1 nr_running=1 p=[CPU 0/KVM 2437]
[  175.939947] CPU52 path=/user.slice on_list=1 nr_running=2 p=[cc1 60934]
[  175.943973] CPU52 path=/user.slice on_list=1 nr_running=1 p=[as 60936]
[  175.943979] CPU52 path=/user.slice on_list=1 nr_running=2 p=[gcc 60930]
[  175.943988] CPU52 path=/ on_list=1 nr_running=2 p=[kworker/52:2 1635]
[  175.944207] CPU52 path=/user.slice on_list=1 nr_running=1 p=[gcc 60930]
[  175.944217] CPU52 path=/ on_list=1 nr_running=1 p=[kworker/52:2 1635]
[  200.985573] ctcm: CTCM driver initialized
[  201.016223] lcs: Loading LCS driver
[  224.672306] CPU22 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest2.s on_list=1 nr_running=1 p=[IO mon_iothread 2543]
[  224.672461] CPU22 path=/ on_list=1 nr_running=1 p=[kworker/u680:1 344]
[  224.672543] CPU22 path=/user.slice on_list=1 nr_running=1 p=[virsh 80195]
[  224.672563] CPU22 path=/user.slice on_list=1 nr_running=1 p=[virsh 80195]
[  224.672642] CPU22 path=/system.slice on_list=1 nr_running=1 p=[sssd_nss 1535]
[  224.673148] CPU22 path=/system.slice on_list=1 nr_running=1 p=[sssd_nss 1535]
[  224.673208] CPU22 path=/system.slice on_list=1 nr_running=1 p=[sssd_nss 1535]
[  224.673318] CPU22 path=/user.slice on_list=1 nr_running=1 p=[virsh 80195]
[  224.673342] CPU22 path=/user.slice on_list=1 nr_running=1 p=[virsh 80195]
[  224.673363] CPU22 path=/user.slice on_list=1 nr_running=1 p=[virsh 80195]
[  224.673378] CPU22 path=/user.slice on_list=1 nr_running=1 p=[virsh 80195]
[  224.673408] CPU22 path=/system.slice on_list=1 nr_running=1 p=[libvirtd 1597]
[  224.673414] CPU22 path=/user.slice on_list=1 nr_running=1 p=[virsh 80186]
[  224.674737] CPU22 path=/user.slice on_list=1 nr_running=1 p=[virsh 80186]
[  224.674749] CPU22 path=/ on_list=1 nr_running=1 p=[kworker/22:1 436]


* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-04 19:38                             ` Christian Borntraeger
@ 2020-03-04 19:59                               ` Christian Borntraeger
  2020-03-05  9:30                                 ` Vincent Guittot
  0 siblings, 1 reply; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-04 19:59 UTC (permalink / raw)
  To: Dietmar Eggemann, Vincent Guittot
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1413 bytes --]


On 04.03.20 20:38, Christian Borntraeger wrote:
> 
> 
> On 04.03.20 20:19, Dietmar Eggemann wrote:
>>> I just realized that this system has something special. Some months ago I created 2 slices:
>>> $ head /etc/systemd/system/*.slice
>>> ==> /etc/systemd/system/machine-production.slice <==
>>> [Unit]
>>> Description=VM production
>>> Before=slices.target
>>> Wants=machine.slice
>>> [Slice]
>>> CPUQuota=2000%
>>> CPUWeight=1000
>>>
>>> ==> /etc/systemd/system/machine-test.slice <==
>>> [Unit]
>>> Description=VM production
>>> Before=slices.target
>>> Wants=machine.slice
>>> [Slice]
>>> CPUQuota=300%
>>> CPUWeight=100
>>>
>>>
>>> And the guests are then put into these slices. That also means that this test will never use more than 2300%,
>>> no matter how many CPUs the system has.
>>
>> If you could run this debug patch on top of your un-patched kernel, it would tell us which task (in the enqueue case)
>> and which taskgroup is causing that.
>>
>> You could then further dump the appropriate taskgroup directory under the cpu cgroup mountpoint
>> (to see e.g. the CFS bandwidth data). 
>>
>> I expect more than one hit since assert_list_leaf_cfs_rq() uses SCHED_WARN_ON, hence WARN_ONCE.
> 
> That was quick. FWIW, I messed up dumping the cgroup mountpoint (since I restarted my guests after this happened).
> Will retry. See the dmesg attached. 

New occurrence (with just one extra debug line).
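(Editorial sketch, not part of the original mail: the attached dump repeats debug lines of the shape `CPUn path=... on_list=... nr_running=... p=[comm pid]`. A small hypothetical helper, assuming exactly that format, to tally which cgroup paths show up most often:)

```python
import re
from collections import Counter

# Hypothetical helper (not from the thread): count the debug-patch lines
# per cgroup path, e.g. to see which taskgroups are involved most often.
LINE_RE = re.compile(
    r'CPU(?P<cpu>\d+) path=(?P<path>\S+) on_list=(?P<on_list>\d) '
    r'nr_running=(?P<nr>\d+) p=\[(?P<comm>.+) (?P<pid>\d+)\]'
)

def tally_paths(lines):
    """Return a Counter mapping cgroup path -> number of debug hits."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[m.group('path')] += 1
    return counts

sample = [
    "[  114.147250] CPU41 path=/user.slice on_list=1 nr_running=3 p=[make 4357]",
    "[  114.169192] CPU41 path=/ on_list=1 nr_running=2 p=[ksoftirqd/41 216]",
    "[  114.193276] CPU41 path=/user.slice on_list=1 nr_running=2 p=[make 4357]",
]
print(tally_paths(sample))  # Counter({'/user.slice': 2, '/': 1})
```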




[-- Attachment #2: output --]
[-- Type: text/plain, Size: 33632 bytes --]

[    0.229052] Linux version 5.6.0-rc3+ (cborntra@m83lp52.lnxne.boe) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #159 SMP Wed Mar 4 20:36:45 CET 2020
[    0.229055] setup: Linux is running natively in 64-bit mode
[    0.229106] setup: The maximum memory size is 131072MB
[    0.229113] setup: Reserving 1024MB of memory at 130048MB for crashkernel (System RAM: 130048MB)
[    0.229202] cpu: 64 configured CPUs, 0 standby CPUs
[    0.229271] cpu: The CPU configuration topology of the machine is: 0 0 4 2 3 10 / 4
[    0.230115] Write protected kernel read-only data: 13524k
[    0.230852] Zone ranges:
[    0.230853]   DMA      [mem 0x0000000000000000-0x000000007fffffff]
[    0.230854]   Normal   [mem 0x0000000080000000-0x0000001fffffffff]
[    0.230855] Movable zone start for each node
[    0.230856] Early memory node ranges
[    0.230857]   node   0: [mem 0x0000000000000000-0x0000001fffffffff]
[    0.230865] Initmem setup node 0 [mem 0x0000000000000000-0x0000001fffffffff]
[    0.230866] On node 0 totalpages: 33554432
[    0.230867]   DMA zone: 8192 pages used for memmap
[    0.230867]   DMA zone: 0 pages reserved
[    0.230868]   DMA zone: 524288 pages, LIFO batch:63
[    0.244964]   Normal zone: 516096 pages used for memmap
[    0.244965]   Normal zone: 33030144 pages, LIFO batch:63
[    0.264910] percpu: Embedded 33 pages/cpu s97280 r8192 d29696 u135168
[    0.264919] pcpu-alloc: s97280 r8192 d29696 u135168 alloc=33*4096
[    0.264919] pcpu-alloc: [0] 000 [0] 001 [0] 002 [0] 003 
[    0.264921] pcpu-alloc: [0] 004 [0] 005 [0] 006 [0] 007 
[    0.264922] pcpu-alloc: [0] 008 [0] 009 [0] 010 [0] 011 
[    0.264924] pcpu-alloc: [0] 012 [0] 013 [0] 014 [0] 015 
[    0.264925] pcpu-alloc: [0] 016 [0] 017 [0] 018 [0] 019 
[    0.264926] pcpu-alloc: [0] 020 [0] 021 [0] 022 [0] 023 
[    0.264927] pcpu-alloc: [0] 024 [0] 025 [0] 026 [0] 027 
[    0.264929] pcpu-alloc: [0] 028 [0] 029 [0] 030 [0] 031 
[    0.264930] pcpu-alloc: [0] 032 [0] 033 [0] 034 [0] 035 
[    0.264931] pcpu-alloc: [0] 036 [0] 037 [0] 038 [0] 039 
[    0.264932] pcpu-alloc: [0] 040 [0] 041 [0] 042 [0] 043 
[    0.264933] pcpu-alloc: [0] 044 [0] 045 [0] 046 [0] 047 
[    0.264935] pcpu-alloc: [0] 048 [0] 049 [0] 050 [0] 051 
[    0.264936] pcpu-alloc: [0] 052 [0] 053 [0] 054 [0] 055 
[    0.264937] pcpu-alloc: [0] 056 [0] 057 [0] 058 [0] 059 
[    0.264938] pcpu-alloc: [0] 060 [0] 061 [0] 062 [0] 063 
[    0.264939] pcpu-alloc: [0] 064 [0] 065 [0] 066 [0] 067 
[    0.264941] pcpu-alloc: [0] 068 [0] 069 [0] 070 [0] 071 
[    0.264942] pcpu-alloc: [0] 072 [0] 073 [0] 074 [0] 075 
[    0.264943] pcpu-alloc: [0] 076 [0] 077 [0] 078 [0] 079 
[    0.264944] pcpu-alloc: [0] 080 [0] 081 [0] 082 [0] 083 
[    0.264946] pcpu-alloc: [0] 084 [0] 085 [0] 086 [0] 087 
[    0.264947] pcpu-alloc: [0] 088 [0] 089 [0] 090 [0] 091 
[    0.264948] pcpu-alloc: [0] 092 [0] 093 [0] 094 [0] 095 
[    0.264949] pcpu-alloc: [0] 096 [0] 097 [0] 098 [0] 099 
[    0.264951] pcpu-alloc: [0] 100 [0] 101 [0] 102 [0] 103 
[    0.264952] pcpu-alloc: [0] 104 [0] 105 [0] 106 [0] 107 
[    0.264953] pcpu-alloc: [0] 108 [0] 109 [0] 110 [0] 111 
[    0.264954] pcpu-alloc: [0] 112 [0] 113 [0] 114 [0] 115 
[    0.264956] pcpu-alloc: [0] 116 [0] 117 [0] 118 [0] 119 
[    0.264957] pcpu-alloc: [0] 120 [0] 121 [0] 122 [0] 123 
[    0.264958] pcpu-alloc: [0] 124 [0] 125 [0] 126 [0] 127 
[    0.264959] pcpu-alloc: [0] 128 [0] 129 [0] 130 [0] 131 
[    0.264961] pcpu-alloc: [0] 132 [0] 133 [0] 134 [0] 135 
[    0.264962] pcpu-alloc: [0] 136 [0] 137 [0] 138 [0] 139 
[    0.264963] pcpu-alloc: [0] 140 [0] 141 [0] 142 [0] 143 
[    0.264964] pcpu-alloc: [0] 144 [0] 145 [0] 146 [0] 147 
[    0.264966] pcpu-alloc: [0] 148 [0] 149 [0] 150 [0] 151 
[    0.264967] pcpu-alloc: [0] 152 [0] 153 [0] 154 [0] 155 
[    0.264968] pcpu-alloc: [0] 156 [0] 157 [0] 158 [0] 159 
[    0.264969] pcpu-alloc: [0] 160 [0] 161 [0] 162 [0] 163 
[    0.264971] pcpu-alloc: [0] 164 [0] 165 [0] 166 [0] 167 
[    0.264972] pcpu-alloc: [0] 168 [0] 169 [0] 170 [0] 171 
[    0.264973] pcpu-alloc: [0] 172 [0] 173 [0] 174 [0] 175 
[    0.264974] pcpu-alloc: [0] 176 [0] 177 [0] 178 [0] 179 
[    0.264976] pcpu-alloc: [0] 180 [0] 181 [0] 182 [0] 183 
[    0.264977] pcpu-alloc: [0] 184 [0] 185 [0] 186 [0] 187 
[    0.264978] pcpu-alloc: [0] 188 [0] 189 [0] 190 [0] 191 
[    0.264979] pcpu-alloc: [0] 192 [0] 193 [0] 194 [0] 195 
[    0.264981] pcpu-alloc: [0] 196 [0] 197 [0] 198 [0] 199 
[    0.264982] pcpu-alloc: [0] 200 [0] 201 [0] 202 [0] 203 
[    0.264983] pcpu-alloc: [0] 204 [0] 205 [0] 206 [0] 207 
[    0.264984] pcpu-alloc: [0] 208 [0] 209 [0] 210 [0] 211 
[    0.264985] pcpu-alloc: [0] 212 [0] 213 [0] 214 [0] 215 
[    0.264987] pcpu-alloc: [0] 216 [0] 217 [0] 218 [0] 219 
[    0.264988] pcpu-alloc: [0] 220 [0] 221 [0] 222 [0] 223 
[    0.264989] pcpu-alloc: [0] 224 [0] 225 [0] 226 [0] 227 
[    0.264990] pcpu-alloc: [0] 228 [0] 229 [0] 230 [0] 231 
[    0.264991] pcpu-alloc: [0] 232 [0] 233 [0] 234 [0] 235 
[    0.264993] pcpu-alloc: [0] 236 [0] 237 [0] 238 [0] 239 
[    0.264994] pcpu-alloc: [0] 240 [0] 241 [0] 242 [0] 243 
[    0.264995] pcpu-alloc: [0] 244 [0] 245 [0] 246 [0] 247 
[    0.264996] pcpu-alloc: [0] 248 [0] 249 [0] 250 [0] 251 
[    0.264998] pcpu-alloc: [0] 252 [0] 253 [0] 254 [0] 255 
[    0.264999] pcpu-alloc: [0] 256 [0] 257 [0] 258 [0] 259 
[    0.265000] pcpu-alloc: [0] 260 [0] 261 [0] 262 [0] 263 
[    0.265001] pcpu-alloc: [0] 264 [0] 265 [0] 266 [0] 267 
[    0.265002] pcpu-alloc: [0] 268 [0] 269 [0] 270 [0] 271 
[    0.265004] pcpu-alloc: [0] 272 [0] 273 [0] 274 [0] 275 
[    0.265005] pcpu-alloc: [0] 276 [0] 277 [0] 278 [0] 279 
[    0.265006] pcpu-alloc: [0] 280 [0] 281 [0] 282 [0] 283 
[    0.265007] pcpu-alloc: [0] 284 [0] 285 [0] 286 [0] 287 
[    0.265009] pcpu-alloc: [0] 288 [0] 289 [0] 290 [0] 291 
[    0.265010] pcpu-alloc: [0] 292 [0] 293 [0] 294 [0] 295 
[    0.265011] pcpu-alloc: [0] 296 [0] 297 [0] 298 [0] 299 
[    0.265012] pcpu-alloc: [0] 300 [0] 301 [0] 302 [0] 303 
[    0.265013] pcpu-alloc: [0] 304 [0] 305 [0] 306 [0] 307 
[    0.265015] pcpu-alloc: [0] 308 [0] 309 [0] 310 [0] 311 
[    0.265016] pcpu-alloc: [0] 312 [0] 313 [0] 314 [0] 315 
[    0.265017] pcpu-alloc: [0] 316 [0] 317 [0] 318 [0] 319 
[    0.265018] pcpu-alloc: [0] 320 [0] 321 [0] 322 [0] 323 
[    0.265019] pcpu-alloc: [0] 324 [0] 325 [0] 326 [0] 327 
[    0.265021] pcpu-alloc: [0] 328 [0] 329 [0] 330 [0] 331 
[    0.265022] pcpu-alloc: [0] 332 [0] 333 [0] 334 [0] 335 
[    0.265023] pcpu-alloc: [0] 336 [0] 337 [0] 338 [0] 339 
[    0.265049] Built 1 zonelists, mobility grouping on.  Total pages: 33030144
[    0.265050] Policy zone: Normal
[    0.265051] Kernel command line: root=/dev/disk/by-path/ccw-0.0.3318-part1 rd.dasd=0.0.3318 cio_ignore=all,!condev rd.znet=qeth,0.0.bd00,0.0.bd01,0.0.bd02,layer2=1,portno=0,portname=OSAPORT zfcp.allow_lun_scan=0 BOOT_IMAGE=0 crashkernel=1G dyndbg="module=vhost +plt" BOOT_IMAGE=
[    0.266109] printk: log_buf_len individual max cpu contribution: 4096 bytes
[    0.266110] printk: log_buf_len total cpu_extra contributions: 1388544 bytes
[    0.266111] printk: log_buf_len min size: 131072 bytes
[    0.266445] printk: log_buf_len: 2097152 bytes
[    0.266446] printk: early log buf free: 123876(94%)
[    0.276285] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
[    0.280904] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
[    0.280941] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.317267] Memory: 2315096K/134217728K available (10452K kernel code, 2024K rwdata, 3072K rodata, 3932K init, 852K bss, 3355708K reserved, 0K cma-reserved)
[    0.317724] SLUB: HWalign=256, Order=0-3, MinObjects=0, CPUs=340, Nodes=1
[    0.317774] ftrace: allocating 31563 entries in 124 pages
[    0.322372] ftrace: allocated 124 pages with 5 groups
[    0.323313] rcu: Hierarchical RCU implementation.
[    0.323313] rcu: 	RCU event tracing is enabled.
[    0.323314] rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=340.
[    0.323315] 	Tasks RCU enabled.
[    0.323316] rcu: RCU calculated value of scheduler-enlistment delay is 11 jiffies.
[    0.323317] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=340
[    0.326356] NR_IRQS: 3, nr_irqs: 3, preallocated irqs: 3
[    0.326494] clocksource: tod: mask: 0xffffffffffffffff max_cycles: 0x3b0a9be803b0a9, max_idle_ns: 1805497147909793 ns
[    0.326764] Console: colour dummy device 80x25
[    0.431448] printk: console [ttyS0] enabled
[    0.526088] Calibrating delay loop (skipped)... 21881.00 BogoMIPS preset
[    0.526089] pid_max: default: 348160 minimum: 2720
[    0.526240] LSM: Security Framework initializing
[    0.526272] SELinux:  Initializing.
[    0.526529] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.526675] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.527718] rcu: Hierarchical SRCU implementation.
[    0.530478] smp: Bringing up secondary CPUs ...
[    0.544916] smp: Brought up 1 node, 64 CPUs
[    1.570355] node 0 initialised, 32136731 pages in 1020ms
[    1.597908] devtmpfs: initialized
[    1.598796] random: get_random_u32 called from bucket_table_alloc.isra.0+0x82/0x120 with crng_init=0
[    1.599376] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    1.600153] futex hash table entries: 131072 (order: 13, 33554432 bytes, vmalloc)
[    1.604749] xor: automatically using best checksumming function   xc        
[    1.604926] NET: Registered protocol family 16
[    1.604962] audit: initializing netlink subsys (disabled)
[    1.605034] audit: type=2000 audit(1583350720.705:1): state=initialized audit_enabled=0 res=1
[    1.605170] Spectre V2 mitigation: etokens
[    1.605877] random: fast init done
[    1.612650] HugeTLB registered 1.00 MiB page size, pre-allocated 0 pages
[    1.866553] raid6: vx128x8  gen() 21598 MB/s
[    2.036503] raid6: vx128x8  xor() 13323 MB/s
[    2.036505] raid6: using algorithm vx128x8 gen() 21598 MB/s
[    2.036505] raid6: .... xor() 13323 MB/s, rmw enabled
[    2.036506] raid6: using s390xc recovery algorithm
[    2.036881] iommu: Default domain type: Translated 
[    2.037025] SCSI subsystem initialized
[    2.100086] PCI host bridge to bus 0000:00
[    2.100093] pci_bus 0000:00: root bus resource [mem 0x8000000000000000-0x8000000007ffffff 64bit pref]
[    2.100096] pci_bus 0000:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.100170] pci 0000:00:00.0: [1014:044b] type 00 class 0x120000
[    2.100231] pci 0000:00:00.0: reg 0x10: [mem 0xffffd80008000000-0xffffd8000fffffff 64bit pref]
[    2.100547] pci 0000:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0000:00:00.0 (capable of 32.000 Gb/s with 5 GT/s x8 link)
[    2.100590] pci 0000:00:00.0: Adding to iommu group 0
[    2.100601] pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to 00
[    2.102924] PCI host bridge to bus 0001:00
[    2.102926] pci_bus 0001:00: root bus resource [mem 0x8001000000000000-0x80010000000fffff 64bit pref]
[    2.102929] pci_bus 0001:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.103023] pci 0001:00:00.0: [15b3:1016] type 00 class 0x020000
[    2.103129] pci 0001:00:00.0: reg 0x10: [mem 0xffffd40002000000-0xffffd400020fffff 64bit pref]
[    2.103289] pci 0001:00:00.0: enabling Extended Tags
[    2.103793] pci 0001:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0001:00:00.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
[    2.103831] pci 0001:00:00.0: Adding to iommu group 1
[    2.103840] pci_bus 0001:00: busn_res: [bus 00-ff] end is updated to 00
[    2.106095] PCI host bridge to bus 0002:00
[    2.106097] pci_bus 0002:00: root bus resource [mem 0x8002000000000000-0x80020000000fffff 64bit pref]
[    2.106099] pci_bus 0002:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.106184] pci 0002:00:00.0: [15b3:1016] type 00 class 0x020000
[    2.106284] pci 0002:00:00.0: reg 0x10: [mem 0xffffd40008000000-0xffffd400080fffff 64bit pref]
[    2.106439] pci 0002:00:00.0: enabling Extended Tags
[    2.107033] pci 0002:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0002:00:00.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
[    2.107068] pci 0002:00:00.0: Adding to iommu group 2
[    2.107074] pci_bus 0002:00: busn_res: [bus 00-ff] end is updated to 00
[    2.669299] VFS: Disk quotas dquot_6.6.0
[    2.669354] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    2.670694] random: crng init done
[    2.671085] NET: Registered protocol family 2
[    2.671733] tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    2.672293] TCP established hash table entries: 524288 (order: 10, 4194304 bytes, vmalloc)
[    2.674302] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    2.674808] TCP: Hash tables configured (established 524288 bind 65536)
[    2.675133] UDP hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)
[    2.676140] UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)
[    2.677625] NET: Registered protocol family 1
[    2.677828] Trying to unpack rootfs image as initramfs...
[    3.248308] Freeing initrd memory: 43596K
[    3.249509] alg: No test for crc32be (crc32be-vx)
[    3.253769] Initialise system trusted keyrings
[    3.253823] workingset: timestamp_bits=45 max_order=25 bucket_order=0
[    3.254971] fuse: init (API version 7.31)
[    3.255041] SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled
[    3.261726] Key type asymmetric registered
[    3.261728] Asymmetric key parser 'x509' registered
[    3.261733] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    3.261934] io scheduler mq-deadline registered
[    3.261936] io scheduler kyber registered
[    3.261957] io scheduler bfq registered
[    3.262765] atomic64_test: passed
[    3.262816] hvc_iucv: The z/VM IUCV HVC device driver cannot be used without z/VM
[    3.269334] brd: module loaded
[    3.269693] cio: Channel measurement facility initialized using format extended (mode autodetected)
[    3.269968] Discipline DIAG cannot be used without z/VM
[    5.124916] sclp_sd: No data is available for the config data entity
[    5.341297] qeth: loading core functions
[    5.341358] qeth: register layer 2 discipline
[    5.341360] qeth: register layer 3 discipline
[    5.341883] NET: Registered protocol family 10
[    5.342820] Segment Routing with IPv6
[    5.342835] NET: Registered protocol family 17
[    5.342845] Key type dns_resolver registered
[    5.342935] registered taskstats version 1
[    5.342940] Loading compiled-in X.509 certificates
[    5.382486] Loaded X.509 cert 'Build time autogenerated kernel key: c46ba92ee388c82c5891ee836c9c20b752cdfac5'
[    5.383144] zswap: default zpool zbud not available
[    5.383145] zswap: pool creation failed
[    5.383866] Key type ._fscrypt registered
[    5.383867] Key type .fscrypt registered
[    5.383868] Key type fscrypt-provisioning registered
[    5.384137] Btrfs loaded, crc32c=crc32c-vx
[    5.388723] Key type big_key registered
[    5.388729] ima: No TPM chip found, activating TPM-bypass!
[    5.388732] ima: Allocated hash algorithm: sha256
[    5.388740] ima: No architecture policies found
[    5.389834] Freeing unused kernel memory: 3932K
[    5.446574] Write protected read-only-after-init data: 68k
[    5.446577] Run /init as init process
[    5.446577]   with arguments:
[    5.446578]     /init
[    5.446578]   with environment:
[    5.446578]     HOME=/
[    5.446578]     TERM=linux
[    5.446578]     BOOT_IMAGE=
[    5.446579]     crashkernel=1G
[    5.446579]     dyndbg=module=vhost +plt
[    5.461019] systemd[1]: Inserted module 'autofs4'
[    5.462181] systemd[1]: systemd v243.7-1.fc31 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
[    5.462869] systemd[1]: Detected architecture s390x.
[    5.462871] systemd[1]: Running in initial RAM disk.
[    5.462923] systemd[1]: Set hostname to <m83lp52.lnxne.boe>.
[    5.503062] systemd[1]: Reached target Local File Systems.
[    5.503120] systemd[1]: Reached target Slices.
[    5.503143] systemd[1]: Reached target Swap.
[    5.503160] systemd[1]: Reached target Timers.
[    5.503254] systemd[1]: Listening on Journal Audit Socket.
[    5.503305] systemd[1]: Listening on Journal Socket (/dev/log).
[    5.782907] audit: type=1130 audit(1583350724.885:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.792394] audit: type=1130 audit(1583350724.895:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.869558] audit: type=1130 audit(1583350725.975:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.885592] audit: type=1130 audit(1583350725.985:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.886066] audit: type=1334 audit(1583350725.985:6): prog-id=6 op=LOAD
[    6.886093] audit: type=1334 audit(1583350725.985:7): prog-id=7 op=LOAD
[    7.106613] audit: type=1130 audit(1583350726.215:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    7.110846] qeth 0.0.bd00: Priority Queueing not supported
[    7.111597] qeth 0.0.bd00: portname is deprecated and is ignored
[    7.112828] dasd-eckd 0.0.3318: A channel path to the device has become operational
[    7.112971] dasd-eckd 0.0.3319: A channel path to the device has become operational
[    7.113686] dasd-eckd 0.0.331a: A channel path to the device has become operational
[    7.113910] dasd-eckd 0.0.331b: A channel path to the device has become operational
[    7.117112] qdio: 0.0.bd02 OSA on SC 159b using AI:1 QEBSM:0 PRI:1 TDD:1 SIGA: W AP
[    7.123509] dasd-eckd 0.0.3319: New DASD 3390/0E (CU 3990/01) with 262668 cylinders, 15 heads, 224 sectors
[    7.126422] dasd-eckd 0.0.3319: DASD with 4 KB/block, 189120960 KB total size, 48 KB/track, compatible disk layout
[    7.127590]  dasdb:VOL1/  0X3319: dasdb1
[    7.128367] dasd-eckd 0.0.3318: New DASD 3390/0E (CU 3990/01) with 262668 cylinders, 15 heads, 224 sectors
[    7.130987] dasd-eckd 0.0.3318: DASD with 4 KB/block, 189120960 KB total size, 48 KB/track, compatible disk layout
[    7.132054]  dasda:VOL1/  0X3318: dasda1
[    7.133315] dasd-eckd 0.0.331a: New DASD 3390/0C (CU 3990/01) with 30051 cylinders, 15 heads, 224 sectors
[    7.136326] dasd-eckd 0.0.331a: DASD with 4 KB/block, 21636720 KB total size, 48 KB/track, compatible disk layout
[    7.137145]  dasdc:VOL1/  0X331A:
[    7.138060] dasd-eckd 0.0.331b: New DASD 3390/0C (CU 3990/01) with 30051 cylinders, 15 heads, 224 sectors
[    7.140842] dasd-eckd 0.0.331b: DASD with 4 KB/block, 21636720 KB total size, 48 KB/track, compatible disk layout
[    7.141538]  dasdd:VOL1/  0X331B:
[    7.147527] audit: type=1130 audit(1583350726.255:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    7.149679] qeth 0.0.bd00: QDIO data connection isolation is deactivated
[    7.150155] qeth 0.0.bd00: The device represents a Bridge Capable Port
[    7.153543] qeth 0.0.bd00: MAC address 8e:dc:f9:1b:1d:48 successfully registered
[    7.154070] qeth 0.0.bd00: Device is a OSD Express card (level: 0199)
               with link type OSD_10GIG.
[    7.163550] audit: type=1130 audit(1583350726.265:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=plymouth-start comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    7.189372] qeth 0.0.bd00: MAC address de:45:d7:61:c4:13 successfully registered
[    7.191686] qeth 0.0.bd00 encbd00: renamed from eth0
[    7.380543] mlx5_core 0001:00:00.0: enabling device (0000 -> 0002)
[    7.380634] mlx5_core 0001:00:00.0: firmware version: 14.23.1020
[    7.464556] dasdconf.sh Warning: 0.0.3318 is already online, not configuring
[    7.513188] dasdconf.sh Warning: 0.0.331b is already online, not configuring
[    7.513319] dasdconf.sh Warning: 0.0.331a is already online, not configuring
[    7.524280] dasdconf.sh Warning: 0.0.3319 is already online, not configuring
[    7.822902] mlx5_core 0002:00:00.0: enabling device (0000 -> 0002)
[    7.822988] mlx5_core 0002:00:00.0: firmware version: 14.23.1020
[    8.272590] mlx5_core 0001:00:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    8.411681] mlx5_core 0002:00:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    8.573287] mlx5_core 0001:00:00.0 enP1s519: renamed from eth0
[    8.827283] mlx5_core 0002:00:00.0 enP2s564: renamed from eth1
[    9.027834] audit: type=1130 audit(1583350728.135:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    9.045379] audit: type=1130 audit(1583350728.145:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    9.054299] EXT4-fs (dasda1): mounted filesystem with ordered data mode. Opts: (null)
[    9.063290] audit: type=1334 audit(1583350728.165:13): prog-id=7 op=UNLOAD
[    9.380815] systemd-journald[543]: Received SIGTERM from PID 1 (systemd).
[    9.395001] printk: systemd: 19 output lines suppressed due to ratelimiting
[    9.661651] SELinux:  Permission watch in class filesystem not defined in policy.
[    9.661656] SELinux:  Permission watch in class file not defined in policy.
[    9.661657] SELinux:  Permission watch_mount in class file not defined in policy.
[    9.661658] SELinux:  Permission watch_sb in class file not defined in policy.
[    9.661659] SELinux:  Permission watch_with_perm in class file not defined in policy.
[    9.661660] SELinux:  Permission watch_reads in class file not defined in policy.
[    9.661662] SELinux:  Permission watch in class dir not defined in policy.
[    9.661663] SELinux:  Permission watch_mount in class dir not defined in policy.
[    9.661664] SELinux:  Permission watch_sb in class dir not defined in policy.
[    9.661665] SELinux:  Permission watch_with_perm in class dir not defined in policy.
[    9.661666] SELinux:  Permission watch_reads in class dir not defined in policy.
[    9.661670] SELinux:  Permission watch in class lnk_file not defined in policy.
[    9.661670] SELinux:  Permission watch_mount in class lnk_file not defined in policy.
[    9.661672] SELinux:  Permission watch_sb in class lnk_file not defined in policy.
[    9.661673] SELinux:  Permission watch_with_perm in class lnk_file not defined in policy.
[    9.661674] SELinux:  Permission watch_reads in class lnk_file not defined in policy.
[    9.661676] SELinux:  Permission watch in class chr_file not defined in policy.
[    9.661690] SELinux:  Permission watch_mount in class chr_file not defined in policy.
[    9.661691] SELinux:  Permission watch_sb in class chr_file not defined in policy.
[    9.661692] SELinux:  Permission watch_with_perm in class chr_file not defined in policy.
[    9.661693] SELinux:  Permission watch_reads in class chr_file not defined in policy.
[    9.661695] SELinux:  Permission watch in class blk_file not defined in policy.
[    9.661696] SELinux:  Permission watch_mount in class blk_file not defined in policy.
[    9.661697] SELinux:  Permission watch_sb in class blk_file not defined in policy.
[    9.661698] SELinux:  Permission watch_with_perm in class blk_file not defined in policy.
[    9.661699] SELinux:  Permission watch_reads in class blk_file not defined in policy.
[    9.661702] SELinux:  Permission watch in class sock_file not defined in policy.
[    9.661702] SELinux:  Permission watch_mount in class sock_file not defined in policy.
[    9.661704] SELinux:  Permission watch_sb in class sock_file not defined in policy.
[    9.661705] SELinux:  Permission watch_with_perm in class sock_file not defined in policy.
[    9.661706] SELinux:  Permission watch_reads in class sock_file not defined in policy.
[    9.661708] SELinux:  Permission watch in class fifo_file not defined in policy.
[    9.661710] SELinux:  Permission watch_mount in class fifo_file not defined in policy.
[    9.661710] SELinux:  Permission watch_sb in class fifo_file not defined in policy.
[    9.661711] SELinux:  Permission watch_with_perm in class fifo_file not defined in policy.
[    9.661712] SELinux:  Permission watch_reads in class fifo_file not defined in policy.
[    9.661793] SELinux:  Class perf_event not defined in policy.
[    9.661794] SELinux:  Class lockdown not defined in policy.
[    9.661795] SELinux: the above unknown classes and permissions will be allowed
[    9.661808] SELinux:  policy capability network_peer_controls=1
[    9.661809] SELinux:  policy capability open_perms=1
[    9.661810] SELinux:  policy capability extended_socket_class=1
[    9.661811] SELinux:  policy capability always_check_network=0
[    9.661811] SELinux:  policy capability cgroup_seclabel=1
[    9.661812] SELinux:  policy capability nnp_nosuid_transition=1
[    9.741220] systemd[1]: Successfully loaded SELinux policy in 291.310ms.
[    9.789736] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.825ms.
[    9.791767] systemd[1]: systemd v243.7-1.fc31 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
[    9.792457] systemd[1]: Detected architecture s390x.
[    9.793656] systemd[1]: Set hostname to <m83lp52.lnxne.boe>.
[    9.902467] systemd[1]: /usr/lib/systemd/system/sssd.service:12: PIDFile= references a path below legacy directory /var/run/, updating /var/run/sssd.pid → /run/sssd.pid; please update the unit file accordingly.
[    9.906424] systemd[1]: /usr/lib/systemd/system/iscsid.service:11: PIDFile= references a path below legacy directory /var/run/, updating /var/run/iscsid.pid → /run/iscsid.pid; please update the unit file accordingly.
[    9.906622] systemd[1]: /usr/lib/systemd/system/iscsiuio.service:13: PIDFile= references a path below legacy directory /var/run/, updating /var/run/iscsiuio.pid → /run/iscsiuio.pid; please update the unit file accordingly.
[    9.934051] systemd[1]: /usr/lib/systemd/system/sssd-kcm.socket:7: ListenStream= references a path below legacy directory /var/run/, updating /var/run/.heim_org.h5l.kcm-socket → /run/.heim_org.h5l.kcm-socket; please update the unit file accordingly.
[    9.961533] systemd[1]: initrd-switch-root.service: Succeeded.
[    9.961634] systemd[1]: Stopped Switch Root.
[    9.961890] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
[    9.989554] EXT4-fs (dasda1): re-mounted. Opts: (null)
[   10.299160] systemd-journald[1085]: Received client request to flush runtime journal.
[   10.499707] VFIO - User Level meta-driver version: 0.3
[   10.530145] genwqe 0000:00:00.0: enabling device (0000 -> 0002)
[   10.532037] dasdconf.sh Warning: 0.0.331a is already online, not configuring
[   10.534362] dasdconf.sh Warning: 0.0.331b is already online, not configuring
[   10.534490] dasdconf.sh Warning: 0.0.3319 is already online, not configuring
[   10.534516] dasdconf.sh Warning: 0.0.3318 is already online, not configuring
[   10.768265] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0
[   10.954777] XFS (dasdb1): Mounting V5 Filesystem
[   10.967218] RPC: Registered named UNIX socket transport module.
[   10.967221] RPC: Registered udp transport module.
[   10.967223] RPC: Registered tcp transport module.
[   10.967224] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   10.985393] XFS (dasdb1): Ending clean mount
[   10.987810] xfs filesystem being mounted at /home supports timestamps until 2038 (0x7fffffff)
[   11.002317] RPC: Registered rdma transport module.
[   11.002319] RPC: Registered rdma backchannel transport module.
[   11.943320] mlx5_core 0001:00:00.0 enP1s519: Link up
[   11.945973] IPv6: ADDRCONF(NETDEV_CHANGE): enP1s519: link becomes ready
[   12.063453] mlx5_core 0002:00:00.0 enP2s564: Link up
[   12.136089] tun: Universal TUN/TAP device driver, 1.6
[   12.137058] virbr0: port 1(virbr0-nic) entered blocking state
[   12.137060] virbr0: port 1(virbr0-nic) entered disabled state
[   12.137150] device virbr0-nic entered promiscuous mode
[   12.536173] virbr0: port 1(virbr0-nic) entered blocking state
[   12.536176] virbr0: port 1(virbr0-nic) entered listening state
[   12.560143] virbr0: port 1(virbr0-nic) entered disabled state
[   12.976588] IPv6: ADDRCONF(NETDEV_CHANGE): enP2s564: link becomes ready
[   25.680326] CPU62 path=/machine.slice/machine-test.slice/machine-qemu\x2d16\x2dtest14. on_list=1 nr_running=1 p=[CPU 1/KVM 2543]
[   25.680334] ------------[ cut here ]------------
[   25.680335] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
[   25.680351] WARNING: CPU: 61 PID: 2535 at kernel/sched/fair.c:380 enqueue_task_fair+0x3f6/0x4a8
[   25.680353] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs mlx5_ib s390_trng ghash_s390 prng aes_s390 ib_uverbs des_s390 libdes sha3_512_s390 ib_core sha3_256_s390 sha512_s390 sha1_s390 genwqe_card vfio_ccw crc_itu_t vfio_mdev mdev vfio_iommu_type1 vfio eadm_sch zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
[   25.680397] CPU: 61 PID: 2535 Comm: CPU 0/KVM Not tainted 5.6.0-rc3+ #159
[   25.680398] Hardware name: IBM 3906 M04 704 (LPAR)
[   25.680399] Krnl PSW : 0404c00180000000 0000001b0ed9ef0a (enqueue_task_fair+0x3fa/0x4a8)
[   25.680402]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
[   25.680404] Krnl GPRS: 00000000000003e0 0000001e40060400 000000000000002d 0000001b100507c2
[   25.680405]            000000000000002c 0000001b0f4089d0 0000000000000001 0400001b00000000
[   25.680407]            0000001eb757e000 000003e00167bb58 0000001e40060400 0000001fbd840928
[   25.680454]            0000001ebfc0a000 0000001fbd83fd00 0000001b0ed9ef06 000003e00167baa0
[   25.680461] Krnl Code: 0000001b0ed9eefa: c020005d398a	larl	%r2,0000001b0f94620e
                          0000001b0ed9ef00: c0e5fffdcbd8	brasl	%r14,0000001b0ed586b0
                         #0000001b0ed9ef06: af000000		mc	0,0
                         >0000001b0ed9ef0a: a7f4febe		brc	15,0000001b0ed9ec86
                          0000001b0ed9ef0e: ec2cfe68017f	clij	%r2,1,12,0000001b0ed9ebde
                          0000001b0ed9ef14: e310dd200004	lg	%r1,3360(%r13)
                          0000001b0ed9ef1a: 58201098		l	%r2,152(%r1)
                          0000001b0ed9ef1e: ec26fe63007e	cij	%r2,0,6,0000001b0ed9ebe4
[   25.680475] Call Trace:
[   25.680477]  [<0000001b0ed9ef0a>] enqueue_task_fair+0x3fa/0x4a8 
[   25.680479] ([<0000001b0ed9ef06>] enqueue_task_fair+0x3f6/0x4a8)
[   25.680482]  [<0000001b0ed8ed78>] activate_task+0x88/0xf0 
[   25.680483]  [<0000001b0ed8f2e8>] ttwu_do_activate+0x58/0x78 
[   25.680485]  [<0000001b0ed902ce>] try_to_wake_up+0x256/0x650 
[   25.680489]  [<0000001b0edae50e>] swake_up_locked.part.0+0x2e/0x70 
[   25.680490]  [<0000001b0edae82c>] swake_up_one+0x54/0x88 
[   25.680536]  [<000003ff8042315a>] kvm_vcpu_wake_up+0x52/0x78 [kvm] 
[   25.680545]  [<000003ff80441f0a>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm] 
[   25.680554]  [<000003ff80442696>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm] 
[   25.680559]  [<0000001b0edf90dc>] __hrtimer_run_queues+0x114/0x2f0 
[   25.680562]  [<0000001b0edf9e34>] hrtimer_interrupt+0x12c/0x2a8 
[   25.680564]  [<0000001b0ed1cd3c>] do_IRQ+0xac/0xb0 
[   25.680570]  [<0000001b0f741704>] ext_int_handler+0x130/0x134 
[   25.680572]  [<0000001b0f740dc6>] sie_exit+0x0/0x46 
[   25.680580] ([<000003ff8043a452>] __vcpu_run+0x3a2/0xcb0 [kvm])
[   25.680589]  [<000003ff8043b7c0>] kvm_arch_vcpu_ioctl_run+0x248/0x880 [kvm] 
[   25.680597]  [<000003ff804261d4>] kvm_vcpu_ioctl+0x284/0x7b0 [kvm] 
[   25.680602]  [<0000001b0efdac0e>] ksys_ioctl+0xae/0xe8 
[   25.680604]  [<0000001b0efdacb2>] __s390x_sys_ioctl+0x2a/0x38 
[   25.680605]  [<0000001b0f7410b2>] system_call+0x2a6/0x2c8 
[   25.680606] Last Breaking-Event-Address:
[   25.680609]  [<0000001b0ed58710>] __warn_printk+0x60/0x68
[   25.680610] ---[ end trace 1298e6d8f1f0ce77 ]---

[-- Attachment #3: sysfs.tar.lz4 --]
[-- Type: application/x-lz4, Size: 90058 bytes --]


* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-04 19:59                               ` Christian Borntraeger
@ 2020-03-05  9:30                                 ` Vincent Guittot
  2020-03-05 11:28                                   ` Christian Borntraeger
  2020-03-05 11:54                                   ` Dietmar Eggemann
  0 siblings, 2 replies; 28+ messages in thread
From: Vincent Guittot @ 2020-03-05  9:30 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, linux-kernel

On Wednesday, 4 March 2020 at 20:59:33 (+0100), Christian Borntraeger wrote:
> 
> On 04.03.20 20:38, Christian Borntraeger wrote:
> > 
> > 
> > On 04.03.20 20:19, Dietmar Eggemann wrote:
> >>> I just realized that this system has something special. Some months ago I created two slices:
> >>> $ head /etc/systemd/system/*.slice
> >>> ==> /etc/systemd/system/machine-production.slice <==
> >>> [Unit]
> >>> Description=VM production
> >>> Before=slices.target
> >>> Wants=machine.slice
> >>> [Slice]
> >>> CPUQuota=2000%
> >>> CPUWeight=1000
> >>>
> >>> ==> /etc/systemd/system/machine-test.slice <==
> >>> [Unit]
> >>> Description=VM production
> >>> Before=slices.target
> >>> Wants=machine.slice
> >>> [Slice]
> >>> CPUQuota=300%
> >>> CPUWeight=100
> >>>
> >>>
> >>> And the guests are then put into these slices. That also means that this test will never use more than the combined 2300%,
> >>> no matter how many CPUs the system has.
> >>
> >> If you could run this debug patch on top of your un-patched kernel, it would tell us which task (in the enqueue case)
> >> and which taskgroup are causing that.
> >>
> >> You could then further dump the appropriate taskgroup directory under the cpu cgroup mountpoint
> >> (to see e.g. the CFS bandwidth data). 
> >>
> >> I expect more than one hit since assert_list_leaf_cfs_rq() uses SCHED_WARN_ON, hence WARN_ONCE.
> > 
> > That was quick. FWIW, I messed up dumping the cgroup mountpoint (since I restarted my guests after this happened).
> > Will retry. See the dmesg attached. 
> 
> New occurrence (with just one extra debug line):

Could you try the patch below on top of Dietmar's so that we get the status of
each level of the hierarchy?
The 1st level seems OK, but something goes wrong while walking up the hierarchy.

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 69fc30db7440..9ccde775e02e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5331,14 +5331,17 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
        if (rq->tmp_alone_branch != &rq->leaf_cfs_rq_list) {
                char path[64];
+               se = &p->se;
 
-               cfs_rq = cfs_rq_of(&p->se);
+               for_each_sched_entity(se) {
+                       cfs_rq = cfs_rq_of(se);
 
-               sched_trace_cfs_rq_path(cfs_rq, path, 64);
+                       sched_trace_cfs_rq_path(cfs_rq, path, 64);
 
-               printk("CPU%d path=%s on_list=%d nr_running=%d p=[%s %d]\n",
-                      cpu_of(rq), path, cfs_rq->on_list, cfs_rq->nr_running,
+                       printk("CPU%d path=%s on_list=%d nr_running=%d throttled=%d p=[%s %d]\n",
+                      cpu_of(rq), path, cfs_rq->on_list, cfs_rq->nr_running, cfs_rq_throttled(cfs_rq),
                       p->comm, p->pid);
+               }
        }
 
        assert_list_leaf_cfs_rq(rq);


> 
> 
> 

> [    0.229052] Linux version 5.6.0-rc3+ (cborntra@m83lp52.lnxne.boe) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #159 SMP Wed Mar 4 20:36:45 CET 2020
> [    0.229055] setup: Linux is running natively in 64-bit mode
> [    0.229106] setup: The maximum memory size is 131072MB
> [    0.229113] setup: Reserving 1024MB of memory at 130048MB for crashkernel (System RAM: 130048MB)
> [    0.229202] cpu: 64 configured CPUs, 0 standby CPUs
> [    0.229271] cpu: The CPU configuration topology of the machine is: 0 0 4 2 3 10 / 4
> [    0.230115] Write protected kernel read-only data: 13524k
> [    0.230852] Zone ranges:
> [    0.230853]   DMA      [mem 0x0000000000000000-0x000000007fffffff]
> [    0.230854]   Normal   [mem 0x0000000080000000-0x0000001fffffffff]
> [    0.230855] Movable zone start for each node
> [    0.230856] Early memory node ranges
> [    0.230857]   node   0: [mem 0x0000000000000000-0x0000001fffffffff]
> [    0.230865] Initmem setup node 0 [mem 0x0000000000000000-0x0000001fffffffff]
> [    0.230866] On node 0 totalpages: 33554432
> [    0.230867]   DMA zone: 8192 pages used for memmap
> [    0.230867]   DMA zone: 0 pages reserved
> [    0.230868]   DMA zone: 524288 pages, LIFO batch:63
> [    0.244964]   Normal zone: 516096 pages used for memmap
> [    0.244965]   Normal zone: 33030144 pages, LIFO batch:63
> [    0.264910] percpu: Embedded 33 pages/cpu s97280 r8192 d29696 u135168
> [    0.264919] pcpu-alloc: s97280 r8192 d29696 u135168 alloc=33*4096
> [    0.264919] pcpu-alloc: [0] 000 [0] 001 [0] 002 [0] 003 
> [    0.264921] pcpu-alloc: [0] 004 [0] 005 [0] 006 [0] 007 
> [    0.264922] pcpu-alloc: [0] 008 [0] 009 [0] 010 [0] 011 
> [    0.264924] pcpu-alloc: [0] 012 [0] 013 [0] 014 [0] 015 
> [    0.264925] pcpu-alloc: [0] 016 [0] 017 [0] 018 [0] 019 
> [    0.264926] pcpu-alloc: [0] 020 [0] 021 [0] 022 [0] 023 
> [    0.264927] pcpu-alloc: [0] 024 [0] 025 [0] 026 [0] 027 
> [    0.264929] pcpu-alloc: [0] 028 [0] 029 [0] 030 [0] 031 
> [    0.264930] pcpu-alloc: [0] 032 [0] 033 [0] 034 [0] 035 
> [    0.264931] pcpu-alloc: [0] 036 [0] 037 [0] 038 [0] 039 
> [    0.264932] pcpu-alloc: [0] 040 [0] 041 [0] 042 [0] 043 
> [    0.264933] pcpu-alloc: [0] 044 [0] 045 [0] 046 [0] 047 
> [    0.264935] pcpu-alloc: [0] 048 [0] 049 [0] 050 [0] 051 
> [    0.264936] pcpu-alloc: [0] 052 [0] 053 [0] 054 [0] 055 
> [    0.264937] pcpu-alloc: [0] 056 [0] 057 [0] 058 [0] 059 
> [    0.264938] pcpu-alloc: [0] 060 [0] 061 [0] 062 [0] 063 
> [    0.264939] pcpu-alloc: [0] 064 [0] 065 [0] 066 [0] 067 
> [    0.264941] pcpu-alloc: [0] 068 [0] 069 [0] 070 [0] 071 
> [    0.264942] pcpu-alloc: [0] 072 [0] 073 [0] 074 [0] 075 
> [    0.264943] pcpu-alloc: [0] 076 [0] 077 [0] 078 [0] 079 
> [    0.264944] pcpu-alloc: [0] 080 [0] 081 [0] 082 [0] 083 
> [    0.264946] pcpu-alloc: [0] 084 [0] 085 [0] 086 [0] 087 
> [    0.264947] pcpu-alloc: [0] 088 [0] 089 [0] 090 [0] 091 
> [    0.264948] pcpu-alloc: [0] 092 [0] 093 [0] 094 [0] 095 
> [    0.264949] pcpu-alloc: [0] 096 [0] 097 [0] 098 [0] 099 
> [    0.264951] pcpu-alloc: [0] 100 [0] 101 [0] 102 [0] 103 
> [    0.264952] pcpu-alloc: [0] 104 [0] 105 [0] 106 [0] 107 
> [    0.264953] pcpu-alloc: [0] 108 [0] 109 [0] 110 [0] 111 
> [    0.264954] pcpu-alloc: [0] 112 [0] 113 [0] 114 [0] 115 
> [    0.264956] pcpu-alloc: [0] 116 [0] 117 [0] 118 [0] 119 
> [    0.264957] pcpu-alloc: [0] 120 [0] 121 [0] 122 [0] 123 
> [    0.264958] pcpu-alloc: [0] 124 [0] 125 [0] 126 [0] 127 
> [    0.264959] pcpu-alloc: [0] 128 [0] 129 [0] 130 [0] 131 
> [    0.264961] pcpu-alloc: [0] 132 [0] 133 [0] 134 [0] 135 
> [    0.264962] pcpu-alloc: [0] 136 [0] 137 [0] 138 [0] 139 
> [    0.264963] pcpu-alloc: [0] 140 [0] 141 [0] 142 [0] 143 
> [    0.264964] pcpu-alloc: [0] 144 [0] 145 [0] 146 [0] 147 
> [    0.264966] pcpu-alloc: [0] 148 [0] 149 [0] 150 [0] 151 
> [    0.264967] pcpu-alloc: [0] 152 [0] 153 [0] 154 [0] 155 
> [    0.264968] pcpu-alloc: [0] 156 [0] 157 [0] 158 [0] 159 
> [    0.264969] pcpu-alloc: [0] 160 [0] 161 [0] 162 [0] 163 
> [    0.264971] pcpu-alloc: [0] 164 [0] 165 [0] 166 [0] 167 
> [    0.264972] pcpu-alloc: [0] 168 [0] 169 [0] 170 [0] 171 
> [    0.264973] pcpu-alloc: [0] 172 [0] 173 [0] 174 [0] 175 
> [    0.264974] pcpu-alloc: [0] 176 [0] 177 [0] 178 [0] 179 
> [    0.264976] pcpu-alloc: [0] 180 [0] 181 [0] 182 [0] 183 
> [    0.264977] pcpu-alloc: [0] 184 [0] 185 [0] 186 [0] 187 
> [    0.264978] pcpu-alloc: [0] 188 [0] 189 [0] 190 [0] 191 
> [    0.264979] pcpu-alloc: [0] 192 [0] 193 [0] 194 [0] 195 
> [    0.264981] pcpu-alloc: [0] 196 [0] 197 [0] 198 [0] 199 
> [    0.264982] pcpu-alloc: [0] 200 [0] 201 [0] 202 [0] 203 
> [    0.264983] pcpu-alloc: [0] 204 [0] 205 [0] 206 [0] 207 
> [    0.264984] pcpu-alloc: [0] 208 [0] 209 [0] 210 [0] 211 
> [    0.264985] pcpu-alloc: [0] 212 [0] 213 [0] 214 [0] 215 
> [    0.264987] pcpu-alloc: [0] 216 [0] 217 [0] 218 [0] 219 
> [    0.264988] pcpu-alloc: [0] 220 [0] 221 [0] 222 [0] 223 
> [    0.264989] pcpu-alloc: [0] 224 [0] 225 [0] 226 [0] 227 
> [    0.264990] pcpu-alloc: [0] 228 [0] 229 [0] 230 [0] 231 
> [    0.264991] pcpu-alloc: [0] 232 [0] 233 [0] 234 [0] 235 
> [    0.264993] pcpu-alloc: [0] 236 [0] 237 [0] 238 [0] 239 
> [    0.264994] pcpu-alloc: [0] 240 [0] 241 [0] 242 [0] 243 
> [    0.264995] pcpu-alloc: [0] 244 [0] 245 [0] 246 [0] 247 
> [    0.264996] pcpu-alloc: [0] 248 [0] 249 [0] 250 [0] 251 
> [    0.264998] pcpu-alloc: [0] 252 [0] 253 [0] 254 [0] 255 
> [    0.264999] pcpu-alloc: [0] 256 [0] 257 [0] 258 [0] 259 
> [    0.265000] pcpu-alloc: [0] 260 [0] 261 [0] 262 [0] 263 
> [    0.265001] pcpu-alloc: [0] 264 [0] 265 [0] 266 [0] 267 
> [    0.265002] pcpu-alloc: [0] 268 [0] 269 [0] 270 [0] 271 
> [    0.265004] pcpu-alloc: [0] 272 [0] 273 [0] 274 [0] 275 
> [    0.265005] pcpu-alloc: [0] 276 [0] 277 [0] 278 [0] 279 
> [    0.265006] pcpu-alloc: [0] 280 [0] 281 [0] 282 [0] 283 
> [    0.265007] pcpu-alloc: [0] 284 [0] 285 [0] 286 [0] 287 
> [    0.265009] pcpu-alloc: [0] 288 [0] 289 [0] 290 [0] 291 
> [    0.265010] pcpu-alloc: [0] 292 [0] 293 [0] 294 [0] 295 
> [    0.265011] pcpu-alloc: [0] 296 [0] 297 [0] 298 [0] 299 
> [    0.265012] pcpu-alloc: [0] 300 [0] 301 [0] 302 [0] 303 
> [    0.265013] pcpu-alloc: [0] 304 [0] 305 [0] 306 [0] 307 
> [    0.265015] pcpu-alloc: [0] 308 [0] 309 [0] 310 [0] 311 
> [    0.265016] pcpu-alloc: [0] 312 [0] 313 [0] 314 [0] 315 
> [    0.265017] pcpu-alloc: [0] 316 [0] 317 [0] 318 [0] 319 
> [    0.265018] pcpu-alloc: [0] 320 [0] 321 [0] 322 [0] 323 
> [    0.265019] pcpu-alloc: [0] 324 [0] 325 [0] 326 [0] 327 
> [    0.265021] pcpu-alloc: [0] 328 [0] 329 [0] 330 [0] 331 
> [    0.265022] pcpu-alloc: [0] 332 [0] 333 [0] 334 [0] 335 
> [    0.265023] pcpu-alloc: [0] 336 [0] 337 [0] 338 [0] 339 
> [    0.265049] Built 1 zonelists, mobility grouping on.  Total pages: 33030144
> [    0.265050] Policy zone: Normal
> [    0.265051] Kernel command line: root=/dev/disk/by-path/ccw-0.0.3318-part1 rd.dasd=0.0.3318 cio_ignore=all,!condev rd.znet=qeth,0.0.bd00,0.0.bd01,0.0.bd02,layer2=1,portno=0,portname=OSAPORT zfcp.allow_lun_scan=0 BOOT_IMAGE=0 crashkernel=1G dyndbg="module=vhost +plt" BOOT_IMAGE=
> [    0.266109] printk: log_buf_len individual max cpu contribution: 4096 bytes
> [    0.266110] printk: log_buf_len total cpu_extra contributions: 1388544 bytes
> [    0.266111] printk: log_buf_len min size: 131072 bytes
> [    0.266445] printk: log_buf_len: 2097152 bytes
> [    0.266446] printk: early log buf free: 123876(94%)
> [    0.276285] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
> [    0.280904] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
> [    0.280941] mem auto-init: stack:off, heap alloc:off, heap free:off
> [    0.317267] Memory: 2315096K/134217728K available (10452K kernel code, 2024K rwdata, 3072K rodata, 3932K init, 852K bss, 3355708K reserved, 0K cma-reserved)
> [    0.317724] SLUB: HWalign=256, Order=0-3, MinObjects=0, CPUs=340, Nodes=1
> [    0.317774] ftrace: allocating 31563 entries in 124 pages
> [    0.322372] ftrace: allocated 124 pages with 5 groups
> [    0.323313] rcu: Hierarchical RCU implementation.
> [    0.323313] rcu: 	RCU event tracing is enabled.
> [    0.323314] rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=340.
> [    0.323315] 	Tasks RCU enabled.
> [    0.323316] rcu: RCU calculated value of scheduler-enlistment delay is 11 jiffies.
> [    0.323317] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=340
> [    0.326356] NR_IRQS: 3, nr_irqs: 3, preallocated irqs: 3
> [    0.326494] clocksource: tod: mask: 0xffffffffffffffff max_cycles: 0x3b0a9be803b0a9, max_idle_ns: 1805497147909793 ns
> [    0.326764] Console: colour dummy device 80x25
> [    0.431448] printk: console [ttyS0] enabled
> [    0.526088] Calibrating delay loop (skipped)... 21881.00 BogoMIPS preset
> [    0.526089] pid_max: default: 348160 minimum: 2720
> [    0.526240] LSM: Security Framework initializing
> [    0.526272] SELinux:  Initializing.
> [    0.526529] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> [    0.526675] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> [    0.527718] rcu: Hierarchical SRCU implementation.
> [    0.530478] smp: Bringing up secondary CPUs ...
> [    0.544916] smp: Brought up 1 node, 64 CPUs
> [    1.570355] node 0 initialised, 32136731 pages in 1020ms
> [    1.597908] devtmpfs: initialized
> [    1.598796] random: get_random_u32 called from bucket_table_alloc.isra.0+0x82/0x120 with crng_init=0
> [    1.599376] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
> [    1.600153] futex hash table entries: 131072 (order: 13, 33554432 bytes, vmalloc)
> [    1.604749] xor: automatically using best checksumming function   xc        
> [    1.604926] NET: Registered protocol family 16
> [    1.604962] audit: initializing netlink subsys (disabled)
> [    1.605034] audit: type=2000 audit(1583350720.705:1): state=initialized audit_enabled=0 res=1
> [    1.605170] Spectre V2 mitigation: etokens
> [    1.605877] random: fast init done
> [    1.612650] HugeTLB registered 1.00 MiB page size, pre-allocated 0 pages
> [    1.866553] raid6: vx128x8  gen() 21598 MB/s
> [    2.036503] raid6: vx128x8  xor() 13323 MB/s
> [    2.036505] raid6: using algorithm vx128x8 gen() 21598 MB/s
> [    2.036505] raid6: .... xor() 13323 MB/s, rmw enabled
> [    2.036506] raid6: using s390xc recovery algorithm
> [    2.036881] iommu: Default domain type: Translated 
> [    2.037025] SCSI subsystem initialized
> [    2.100086] PCI host bridge to bus 0000:00
> [    2.100093] pci_bus 0000:00: root bus resource [mem 0x8000000000000000-0x8000000007ffffff 64bit pref]
> [    2.100096] pci_bus 0000:00: No busn resource found for root bus, will use [bus 00-ff]
> [    2.100170] pci 0000:00:00.0: [1014:044b] type 00 class 0x120000
> [    2.100231] pci 0000:00:00.0: reg 0x10: [mem 0xffffd80008000000-0xffffd8000fffffff 64bit pref]
> [    2.100547] pci 0000:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0000:00:00.0 (capable of 32.000 Gb/s with 5 GT/s x8 link)
> [    2.100590] pci 0000:00:00.0: Adding to iommu group 0
> [    2.100601] pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to 00
> [    2.102924] PCI host bridge to bus 0001:00
> [    2.102926] pci_bus 0001:00: root bus resource [mem 0x8001000000000000-0x80010000000fffff 64bit pref]
> [    2.102929] pci_bus 0001:00: No busn resource found for root bus, will use [bus 00-ff]
> [    2.103023] pci 0001:00:00.0: [15b3:1016] type 00 class 0x020000
> [    2.103129] pci 0001:00:00.0: reg 0x10: [mem 0xffffd40002000000-0xffffd400020fffff 64bit pref]
> [    2.103289] pci 0001:00:00.0: enabling Extended Tags
> [    2.103793] pci 0001:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0001:00:00.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
> [    2.103831] pci 0001:00:00.0: Adding to iommu group 1
> [    2.103840] pci_bus 0001:00: busn_res: [bus 00-ff] end is updated to 00
> [    2.106095] PCI host bridge to bus 0002:00
> [    2.106097] pci_bus 0002:00: root bus resource [mem 0x8002000000000000-0x80020000000fffff 64bit pref]
> [    2.106099] pci_bus 0002:00: No busn resource found for root bus, will use [bus 00-ff]
> [    2.106184] pci 0002:00:00.0: [15b3:1016] type 00 class 0x020000
> [    2.106284] pci 0002:00:00.0: reg 0x10: [mem 0xffffd40008000000-0xffffd400080fffff 64bit pref]
> [    2.106439] pci 0002:00:00.0: enabling Extended Tags
> [    2.107033] pci 0002:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0002:00:00.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
> [    2.107068] pci 0002:00:00.0: Adding to iommu group 2
> [    2.107074] pci_bus 0002:00: busn_res: [bus 00-ff] end is updated to 00
> [    2.669299] VFS: Disk quotas dquot_6.6.0
> [    2.669354] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> [    2.670694] random: crng init done
> [    2.671085] NET: Registered protocol family 2
> [    2.671733] tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear)
> [    2.672293] TCP established hash table entries: 524288 (order: 10, 4194304 bytes, vmalloc)
> [    2.674302] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
> [    2.674808] TCP: Hash tables configured (established 524288 bind 65536)
> [    2.675133] UDP hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)
> [    2.676140] UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)
> [    2.677625] NET: Registered protocol family 1
> [    2.677828] Trying to unpack rootfs image as initramfs...
> [    3.248308] Freeing initrd memory: 43596K
> [    3.249509] alg: No test for crc32be (crc32be-vx)
> [    3.253769] Initialise system trusted keyrings
> [    3.253823] workingset: timestamp_bits=45 max_order=25 bucket_order=0
> [    3.254971] fuse: init (API version 7.31)
> [    3.255041] SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled
> [    3.261726] Key type asymmetric registered
> [    3.261728] Asymmetric key parser 'x509' registered
> [    3.261733] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
> [    3.261934] io scheduler mq-deadline registered
> [    3.261936] io scheduler kyber registered
> [    3.261957] io scheduler bfq registered
> [    3.262765] atomic64_test: passed
> [    3.262816] hvc_iucv: The z/VM IUCV HVC device driver cannot be used without z/VM
> [    3.269334] brd: module loaded
> [    3.269693] cio: Channel measurement facility initialized using format extended (mode autodetected)
> [    3.269968] Discipline DIAG cannot be used without z/VM
> [    5.124916] sclp_sd: No data is available for the config data entity
> [    5.341297] qeth: loading core functions
> [    5.341358] qeth: register layer 2 discipline
> [    5.341360] qeth: register layer 3 discipline
> [    5.341883] NET: Registered protocol family 10
> [    5.342820] Segment Routing with IPv6
> [    5.342835] NET: Registered protocol family 17
> [    5.342845] Key type dns_resolver registered
> [    5.342935] registered taskstats version 1
> [    5.342940] Loading compiled-in X.509 certificates
> [    5.382486] Loaded X.509 cert 'Build time autogenerated kernel key: c46ba92ee388c82c5891ee836c9c20b752cdfac5'
> [    5.383144] zswap: default zpool zbud not available
> [    5.383145] zswap: pool creation failed
> [    5.383866] Key type ._fscrypt registered
> [    5.383867] Key type .fscrypt registered
> [    5.383868] Key type fscrypt-provisioning registered
> [    5.384137] Btrfs loaded, crc32c=crc32c-vx
> [    5.388723] Key type big_key registered
> [    5.388729] ima: No TPM chip found, activating TPM-bypass!
> [    5.388732] ima: Allocated hash algorithm: sha256
> [    5.388740] ima: No architecture policies found
> [    5.389834] Freeing unused kernel memory: 3932K
> [    5.446574] Write protected read-only-after-init data: 68k
> [    5.446577] Run /init as init process
> [    5.446577]   with arguments:
> [    5.446578]     /init
> [    5.446578]   with environment:
> [    5.446578]     HOME=/
> [    5.446578]     TERM=linux
> [    5.446578]     BOOT_IMAGE=
> [    5.446579]     crashkernel=1G
> [    5.446579]     dyndbg=module=vhost +plt
> [    5.461019] systemd[1]: Inserted module 'autofs4'
> [    5.462181] systemd[1]: systemd v243.7-1.fc31 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
> [    5.462869] systemd[1]: Detected architecture s390x.
> [    5.462871] systemd[1]: Running in initial RAM disk.
> [    5.462923] systemd[1]: Set hostname to <m83lp52.lnxne.boe>.
> [    5.503062] systemd[1]: Reached target Local File Systems.
> [    5.503120] systemd[1]: Reached target Slices.
> [    5.503143] systemd[1]: Reached target Swap.
> [    5.503160] systemd[1]: Reached target Timers.
> [    5.503254] systemd[1]: Listening on Journal Audit Socket.
> [    5.503305] systemd[1]: Listening on Journal Socket (/dev/log).
> [    5.782907] audit: type=1130 audit(1583350724.885:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    5.792394] audit: type=1130 audit(1583350724.895:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    6.869558] audit: type=1130 audit(1583350725.975:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    6.885592] audit: type=1130 audit(1583350725.985:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    6.886066] audit: type=1334 audit(1583350725.985:6): prog-id=6 op=LOAD
> [    6.886093] audit: type=1334 audit(1583350725.985:7): prog-id=7 op=LOAD
> [    7.106613] audit: type=1130 audit(1583350726.215:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    7.110846] qeth 0.0.bd00: Priority Queueing not supported
> [    7.111597] qeth 0.0.bd00: portname is deprecated and is ignored
> [    7.112828] dasd-eckd 0.0.3318: A channel path to the device has become operational
> [    7.112971] dasd-eckd 0.0.3319: A channel path to the device has become operational
> [    7.113686] dasd-eckd 0.0.331a: A channel path to the device has become operational
> [    7.113910] dasd-eckd 0.0.331b: A channel path to the device has become operational
> [    7.117112] qdio: 0.0.bd02 OSA on SC 159b using AI:1 QEBSM:0 PRI:1 TDD:1 SIGA: W AP
> [    7.123509] dasd-eckd 0.0.3319: New DASD 3390/0E (CU 3990/01) with 262668 cylinders, 15 heads, 224 sectors
> [    7.126422] dasd-eckd 0.0.3319: DASD with 4 KB/block, 189120960 KB total size, 48 KB/track, compatible disk layout
> [    7.127590]  dasdb:VOL1/  0X3319: dasdb1
> [    7.128367] dasd-eckd 0.0.3318: New DASD 3390/0E (CU 3990/01) with 262668 cylinders, 15 heads, 224 sectors
> [    7.130987] dasd-eckd 0.0.3318: DASD with 4 KB/block, 189120960 KB total size, 48 KB/track, compatible disk layout
> [    7.132054]  dasda:VOL1/  0X3318: dasda1
> [    7.133315] dasd-eckd 0.0.331a: New DASD 3390/0C (CU 3990/01) with 30051 cylinders, 15 heads, 224 sectors
> [    7.136326] dasd-eckd 0.0.331a: DASD with 4 KB/block, 21636720 KB total size, 48 KB/track, compatible disk layout
> [    7.137145]  dasdc:VOL1/  0X331A:
> [    7.138060] dasd-eckd 0.0.331b: New DASD 3390/0C (CU 3990/01) with 30051 cylinders, 15 heads, 224 sectors
> [    7.140842] dasd-eckd 0.0.331b: DASD with 4 KB/block, 21636720 KB total size, 48 KB/track, compatible disk layout
> [    7.141538]  dasdd:VOL1/  0X331B:
> [    7.147527] audit: type=1130 audit(1583350726.255:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    7.149679] qeth 0.0.bd00: QDIO data connection isolation is deactivated
> [    7.150155] qeth 0.0.bd00: The device represents a Bridge Capable Port
> [    7.153543] qeth 0.0.bd00: MAC address 8e:dc:f9:1b:1d:48 successfully registered
> [    7.154070] qeth 0.0.bd00: Device is a OSD Express card (level: 0199)
>                with link type OSD_10GIG.
> [    7.163550] audit: type=1130 audit(1583350726.265:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=plymouth-start comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    7.189372] qeth 0.0.bd00: MAC address de:45:d7:61:c4:13 successfully registered
> [    7.191686] qeth 0.0.bd00 encbd00: renamed from eth0
> [    7.380543] mlx5_core 0001:00:00.0: enabling device (0000 -> 0002)
> [    7.380634] mlx5_core 0001:00:00.0: firmware version: 14.23.1020
> [    7.464556] dasdconf.sh Warning: 0.0.3318 is already online, not configuring
> [    7.513188] dasdconf.sh Warning: 0.0.331b is already online, not configuring
> [    7.513319] dasdconf.sh Warning: 0.0.331a is already online, not configuring
> [    7.524280] dasdconf.sh Warning: 0.0.3319 is already online, not configuring
> [    7.822902] mlx5_core 0002:00:00.0: enabling device (0000 -> 0002)
> [    7.822988] mlx5_core 0002:00:00.0: firmware version: 14.23.1020
> [    8.272590] mlx5_core 0001:00:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
> [    8.411681] mlx5_core 0002:00:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
> [    8.573287] mlx5_core 0001:00:00.0 enP1s519: renamed from eth0
> [    8.827283] mlx5_core 0002:00:00.0 enP2s564: renamed from eth1
> [    9.027834] audit: type=1130 audit(1583350728.135:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    9.045379] audit: type=1130 audit(1583350728.145:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [    9.054299] EXT4-fs (dasda1): mounted filesystem with ordered data mode. Opts: (null)
> [    9.063290] audit: type=1334 audit(1583350728.165:13): prog-id=7 op=UNLOAD
> [    9.380815] systemd-journald[543]: Received SIGTERM from PID 1 (systemd).
> [    9.395001] printk: systemd: 19 output lines suppressed due to ratelimiting
> [    9.661651] SELinux:  Permission watch in class filesystem not defined in policy.
> [    9.661656] SELinux:  Permission watch in class file not defined in policy.
> [    9.661657] SELinux:  Permission watch_mount in class file not defined in policy.
> [    9.661658] SELinux:  Permission watch_sb in class file not defined in policy.
> [    9.661659] SELinux:  Permission watch_with_perm in class file not defined in policy.
> [    9.661660] SELinux:  Permission watch_reads in class file not defined in policy.
> [    9.661662] SELinux:  Permission watch in class dir not defined in policy.
> [    9.661663] SELinux:  Permission watch_mount in class dir not defined in policy.
> [    9.661664] SELinux:  Permission watch_sb in class dir not defined in policy.
> [    9.661665] SELinux:  Permission watch_with_perm in class dir not defined in policy.
> [    9.661666] SELinux:  Permission watch_reads in class dir not defined in policy.
> [    9.661670] SELinux:  Permission watch in class lnk_file not defined in policy.
> [    9.661670] SELinux:  Permission watch_mount in class lnk_file not defined in policy.
> [    9.661672] SELinux:  Permission watch_sb in class lnk_file not defined in policy.
> [    9.661673] SELinux:  Permission watch_with_perm in class lnk_file not defined in policy.
> [    9.661674] SELinux:  Permission watch_reads in class lnk_file not defined in policy.
> [    9.661676] SELinux:  Permission watch in class chr_file not defined in policy.
> [    9.661690] SELinux:  Permission watch_mount in class chr_file not defined in policy.
> [    9.661691] SELinux:  Permission watch_sb in class chr_file not defined in policy.
> [    9.661692] SELinux:  Permission watch_with_perm in class chr_file not defined in policy.
> [    9.661693] SELinux:  Permission watch_reads in class chr_file not defined in policy.
> [    9.661695] SELinux:  Permission watch in class blk_file not defined in policy.
> [    9.661696] SELinux:  Permission watch_mount in class blk_file not defined in policy.
> [    9.661697] SELinux:  Permission watch_sb in class blk_file not defined in policy.
> [    9.661698] SELinux:  Permission watch_with_perm in class blk_file not defined in policy.
> [    9.661699] SELinux:  Permission watch_reads in class blk_file not defined in policy.
> [    9.661702] SELinux:  Permission watch in class sock_file not defined in policy.
> [    9.661702] SELinux:  Permission watch_mount in class sock_file not defined in policy.
> [    9.661704] SELinux:  Permission watch_sb in class sock_file not defined in policy.
> [    9.661705] SELinux:  Permission watch_with_perm in class sock_file not defined in policy.
> [    9.661706] SELinux:  Permission watch_reads in class sock_file not defined in policy.
> [    9.661708] SELinux:  Permission watch in class fifo_file not defined in policy.
> [    9.661710] SELinux:  Permission watch_mount in class fifo_file not defined in policy.
> [    9.661710] SELinux:  Permission watch_sb in class fifo_file not defined in policy.
> [    9.661711] SELinux:  Permission watch_with_perm in class fifo_file not defined in policy.
> [    9.661712] SELinux:  Permission watch_reads in class fifo_file not defined in policy.
> [    9.661793] SELinux:  Class perf_event not defined in policy.
> [    9.661794] SELinux:  Class lockdown not defined in policy.
> [    9.661795] SELinux: the above unknown classes and permissions will be allowed
> [    9.661808] SELinux:  policy capability network_peer_controls=1
> [    9.661809] SELinux:  policy capability open_perms=1
> [    9.661810] SELinux:  policy capability extended_socket_class=1
> [    9.661811] SELinux:  policy capability always_check_network=0
> [    9.661811] SELinux:  policy capability cgroup_seclabel=1
> [    9.661812] SELinux:  policy capability nnp_nosuid_transition=1
> [    9.741220] systemd[1]: Successfully loaded SELinux policy in 291.310ms.
> [    9.789736] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.825ms.
> [    9.791767] systemd[1]: systemd v243.7-1.fc31 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
> [    9.792457] systemd[1]: Detected architecture s390x.
> [    9.793656] systemd[1]: Set hostname to <m83lp52.lnxne.boe>.
> [    9.902467] systemd[1]: /usr/lib/systemd/system/sssd.service:12: PIDFile= references a path below legacy directory /var/run/, updating /var/run/sssd.pid → /run/sssd.pid; please update the unit file accordingly.
> [    9.906424] systemd[1]: /usr/lib/systemd/system/iscsid.service:11: PIDFile= references a path below legacy directory /var/run/, updating /var/run/iscsid.pid → /run/iscsid.pid; please update the unit file accordingly.
> [    9.906622] systemd[1]: /usr/lib/systemd/system/iscsiuio.service:13: PIDFile= references a path below legacy directory /var/run/, updating /var/run/iscsiuio.pid → /run/iscsiuio.pid; please update the unit file accordingly.
> [    9.934051] systemd[1]: /usr/lib/systemd/system/sssd-kcm.socket:7: ListenStream= references a path below legacy directory /var/run/, updating /var/run/.heim_org.h5l.kcm-socket → /run/.heim_org.h5l.kcm-socket; please update the unit file accordingly.
> [    9.961533] systemd[1]: initrd-switch-root.service: Succeeded.
> [    9.961634] systemd[1]: Stopped Switch Root.
> [    9.961890] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
> [    9.989554] EXT4-fs (dasda1): re-mounted. Opts: (null)
> [   10.299160] systemd-journald[1085]: Received client request to flush runtime journal.
> [   10.499707] VFIO - User Level meta-driver version: 0.3
> [   10.530145] genwqe 0000:00:00.0: enabling device (0000 -> 0002)
> [   10.532037] dasdconf.sh Warning: 0.0.331a is already online, not configuring
> [   10.534362] dasdconf.sh Warning: 0.0.331b is already online, not configuring
> [   10.534490] dasdconf.sh Warning: 0.0.3319 is already online, not configuring
> [   10.534516] dasdconf.sh Warning: 0.0.3318 is already online, not configuring
> [   10.768265] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0
> [   10.954777] XFS (dasdb1): Mounting V5 Filesystem
> [   10.967218] RPC: Registered named UNIX socket transport module.
> [   10.967221] RPC: Registered udp transport module.
> [   10.967223] RPC: Registered tcp transport module.
> [   10.967224] RPC: Registered tcp NFSv4.1 backchannel transport module.
> [   10.985393] XFS (dasdb1): Ending clean mount
> [   10.987810] xfs filesystem being mounted at /home supports timestamps until 2038 (0x7fffffff)
> [   11.002317] RPC: Registered rdma transport module.
> [   11.002319] RPC: Registered rdma backchannel transport module.
> [   11.943320] mlx5_core 0001:00:00.0 enP1s519: Link up
> [   11.945973] IPv6: ADDRCONF(NETDEV_CHANGE): enP1s519: link becomes ready
> [   12.063453] mlx5_core 0002:00:00.0 enP2s564: Link up
> [   12.136089] tun: Universal TUN/TAP device driver, 1.6
> [   12.137058] virbr0: port 1(virbr0-nic) entered blocking state
> [   12.137060] virbr0: port 1(virbr0-nic) entered disabled state
> [   12.137150] device virbr0-nic entered promiscuous mode
> [   12.536173] virbr0: port 1(virbr0-nic) entered blocking state
> [   12.536176] virbr0: port 1(virbr0-nic) entered listening state
> [   12.560143] virbr0: port 1(virbr0-nic) entered disabled state
> [   12.976588] IPv6: ADDRCONF(NETDEV_CHANGE): enP2s564: link becomes ready
> [   25.680326] CPU62 path=/machine.slice/machine-test.slice/machine-qemu\x2d16\x2dtest14. on_list=1 nr_running=1 p=[CPU 1/KVM 2543]
> [   25.680334] ------------[ cut here ]------------
> [   25.680335] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
> [   25.680351] WARNING: CPU: 61 PID: 2535 at kernel/sched/fair.c:380 enqueue_task_fair+0x3f6/0x4a8
> [   25.680353] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs mlx5_ib s390_trng ghash_s390 prng aes_s390 ib_uverbs des_s390 libdes sha3_512_s390 ib_core sha3_256_s390 sha512_s390 sha1_s390 genwqe_card vfio_ccw crc_itu_t vfio_mdev mdev vfio_iommu_type1 vfio eadm_sch zcrypt_cex4 sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
> [   25.680397] CPU: 61 PID: 2535 Comm: CPU 0/KVM Not tainted 5.6.0-rc3+ #159
> [   25.680398] Hardware name: IBM 3906 M04 704 (LPAR)
> [   25.680399] Krnl PSW : 0404c00180000000 0000001b0ed9ef0a (enqueue_task_fair+0x3fa/0x4a8)
> [   25.680402]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
> [   25.680404] Krnl GPRS: 00000000000003e0 0000001e40060400 000000000000002d 0000001b100507c2
> [   25.680405]            000000000000002c 0000001b0f4089d0 0000000000000001 0400001b00000000
> [   25.680407]            0000001eb757e000 000003e00167bb58 0000001e40060400 0000001fbd840928
> [   25.680454]            0000001ebfc0a000 0000001fbd83fd00 0000001b0ed9ef06 000003e00167baa0
> [   25.680461] Krnl Code: 0000001b0ed9eefa: c020005d398a	larl	%r2,0000001b0f94620e
>                           0000001b0ed9ef00: c0e5fffdcbd8	brasl	%r14,0000001b0ed586b0
>                          #0000001b0ed9ef06: af000000		mc	0,0
>                          >0000001b0ed9ef0a: a7f4febe		brc	15,0000001b0ed9ec86
>                           0000001b0ed9ef0e: ec2cfe68017f	clij	%r2,1,12,0000001b0ed9ebde
>                           0000001b0ed9ef14: e310dd200004	lg	%r1,3360(%r13)
>                           0000001b0ed9ef1a: 58201098		l	%r2,152(%r1)
>                           0000001b0ed9ef1e: ec26fe63007e	cij	%r2,0,6,0000001b0ed9ebe4
> [   25.680475] Call Trace:
> [   25.680477]  [<0000001b0ed9ef0a>] enqueue_task_fair+0x3fa/0x4a8 
> [   25.680479] ([<0000001b0ed9ef06>] enqueue_task_fair+0x3f6/0x4a8)
> [   25.680482]  [<0000001b0ed8ed78>] activate_task+0x88/0xf0 
> [   25.680483]  [<0000001b0ed8f2e8>] ttwu_do_activate+0x58/0x78 
> [   25.680485]  [<0000001b0ed902ce>] try_to_wake_up+0x256/0x650 
> [   25.680489]  [<0000001b0edae50e>] swake_up_locked.part.0+0x2e/0x70 
> [   25.680490]  [<0000001b0edae82c>] swake_up_one+0x54/0x88 
> [   25.680536]  [<000003ff8042315a>] kvm_vcpu_wake_up+0x52/0x78 [kvm] 
> [   25.680545]  [<000003ff80441f0a>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm] 
> [   25.680554]  [<000003ff80442696>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm] 
> [   25.680559]  [<0000001b0edf90dc>] __hrtimer_run_queues+0x114/0x2f0 
> [   25.680562]  [<0000001b0edf9e34>] hrtimer_interrupt+0x12c/0x2a8 
> [   25.680564]  [<0000001b0ed1cd3c>] do_IRQ+0xac/0xb0 
> [   25.680570]  [<0000001b0f741704>] ext_int_handler+0x130/0x134 
> [   25.680572]  [<0000001b0f740dc6>] sie_exit+0x0/0x46 
> [   25.680580] ([<000003ff8043a452>] __vcpu_run+0x3a2/0xcb0 [kvm])
> [   25.680589]  [<000003ff8043b7c0>] kvm_arch_vcpu_ioctl_run+0x248/0x880 [kvm] 
> [   25.680597]  [<000003ff804261d4>] kvm_vcpu_ioctl+0x284/0x7b0 [kvm] 
> [   25.680602]  [<0000001b0efdac0e>] ksys_ioctl+0xae/0xe8 
> [   25.680604]  [<0000001b0efdacb2>] __s390x_sys_ioctl+0x2a/0x38 
> [   25.680605]  [<0000001b0f7410b2>] system_call+0x2a6/0x2c8 
> [   25.680606] Last Breaking-Event-Address:
> [   25.680609]  [<0000001b0ed58710>] __warn_printk+0x60/0x68
> [   25.680610] ---[ end trace 1298e6d8f1f0ce77 ]---



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-05  9:30                                 ` Vincent Guittot
@ 2020-03-05 11:28                                   ` Christian Borntraeger
  2020-03-05 12:12                                     ` Dietmar Eggemann
  2020-03-05 12:14                                     ` Vincent Guittot
  2020-03-05 11:54                                   ` Dietmar Eggemann
  1 sibling, 2 replies; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-05 11:28 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1939 bytes --]


On 05.03.20 10:30, Vincent Guittot wrote:
> On Wednesday, March 4, 2020 at 20:59:33 (+0100), Christian Borntraeger wrote:
>>
>> On 04.03.20 20:38, Christian Borntraeger wrote:
>>>
>>>
>>> On 04.03.20 20:19, Dietmar Eggemann wrote:
>>>>> I just realized that this system has something special. Some months ago I created two slices:
>>>>> $ head /etc/systemd/system/*.slice
>>>>> ==> /etc/systemd/system/machine-production.slice <==
>>>>> [Unit]
>>>>> Description=VM production
>>>>> Before=slices.target
>>>>> Wants=machine.slice
>>>>> [Slice]
>>>>> CPUQuota=2000%
>>>>> CPUWeight=1000
>>>>>
>>>>> ==> /etc/systemd/system/machine-test.slice <==
>>>>> [Unit]
>>>>> Description=VM production
>>>>> Before=slices.target
>>>>> Wants=machine.slice
>>>>> [Slice]
>>>>> CPUQuota=300%
>>>>> CPUWeight=100
>>>>>
>>>>>
>>>>> The guests are then put into these slices, which also means that this test will never use more
>>>>> than 2300%, no matter how many CPUs the system has.
>>>>
>>>> If you could run this debug patch on top of your un-patched kernel, it would tell us which task (in the enqueue case)
>>>> and which taskgroup is causing that.
>>>>
>>>> You could then further dump the appropriate taskgroup directory under the cpu cgroup mountpoint
>>>> (to see e.g. the CFS bandwidth data). 
>>>>
>>>> I expect more than one hit since assert_list_leaf_cfs_rq() uses SCHED_WARN_ON, hence WARN_ONCE.
>>>
>>> That was quick. FWIW, I messed up dumping the cgroup mountpoint (since I restarted my guests after this happened).
>>> Will retry. See the dmesg attached. 
>>
>> New occurence (with just one extra debug line)
> 
> Could you try to add the patch below on top of Dietmar's, so that we get the status of
> each level of the hierarchy?
> The first level seems fine, but something goes wrong while walking the hierarchy.

Running a compile job in parallel on the host seems to make the issue reproduce faster:

Do you also need the sysfs tree?

[-- Attachment #2: output --]
[-- Type: text/plain, Size: 60107 bytes --]

[    0.171250] Linux version 5.6.0-rc3+ (cborntra@m83lp52.lnxne.boe) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #160 SMP Thu Mar 5 12:24:20 CET 2020
[    0.171252] setup: Linux is running natively in 64-bit mode
[    0.171297] setup: The maximum memory size is 131072MB
[    0.171302] setup: Reserving 1024MB of memory at 130048MB for crashkernel (System RAM: 130048MB)
[    0.171384] cpu: 64 configured CPUs, 0 standby CPUs
[    0.171449] cpu: The CPU configuration topology of the machine is: 0 0 4 2 3 10 / 4
[    0.172160] Write protected kernel read-only data: 13524k
[    0.172899] Zone ranges:
[    0.172900]   DMA      [mem 0x0000000000000000-0x000000007fffffff]
[    0.172902]   Normal   [mem 0x0000000080000000-0x0000001fffffffff]
[    0.172903] Movable zone start for each node
[    0.172904] Early memory node ranges
[    0.172905]   node   0: [mem 0x0000000000000000-0x0000001fffffffff]
[    0.172912] Initmem setup node 0 [mem 0x0000000000000000-0x0000001fffffffff]
[    0.172913] On node 0 totalpages: 33554432
[    0.172914]   DMA zone: 8192 pages used for memmap
[    0.172915]   DMA zone: 0 pages reserved
[    0.172915]   DMA zone: 524288 pages, LIFO batch:63
[    0.186744]   Normal zone: 516096 pages used for memmap
[    0.186744]   Normal zone: 33030144 pages, LIFO batch:63
[    0.202282] percpu: Embedded 33 pages/cpu s97280 r8192 d29696 u135168
[    0.202290] pcpu-alloc: s97280 r8192 d29696 u135168 alloc=33*4096
[    0.202290] pcpu-alloc: [0] 000 [0] 001 [0] 002 [0] 003 
[    0.202292] pcpu-alloc: [0] 004 [0] 005 [0] 006 [0] 007 
[    0.202293] pcpu-alloc: [0] 008 [0] 009 [0] 010 [0] 011 
[    0.202295] pcpu-alloc: [0] 012 [0] 013 [0] 014 [0] 015 
[    0.202296] pcpu-alloc: [0] 016 [0] 017 [0] 018 [0] 019 
[    0.202297] pcpu-alloc: [0] 020 [0] 021 [0] 022 [0] 023 
[    0.202299] pcpu-alloc: [0] 024 [0] 025 [0] 026 [0] 027 
[    0.202300] pcpu-alloc: [0] 028 [0] 029 [0] 030 [0] 031 
[    0.202301] pcpu-alloc: [0] 032 [0] 033 [0] 034 [0] 035 
[    0.202303] pcpu-alloc: [0] 036 [0] 037 [0] 038 [0] 039 
[    0.202304] pcpu-alloc: [0] 040 [0] 041 [0] 042 [0] 043 
[    0.202305] pcpu-alloc: [0] 044 [0] 045 [0] 046 [0] 047 
[    0.202307] pcpu-alloc: [0] 048 [0] 049 [0] 050 [0] 051 
[    0.202308] pcpu-alloc: [0] 052 [0] 053 [0] 054 [0] 055 
[    0.202309] pcpu-alloc: [0] 056 [0] 057 [0] 058 [0] 059 
[    0.202311] pcpu-alloc: [0] 060 [0] 061 [0] 062 [0] 063 
[    0.202312] pcpu-alloc: [0] 064 [0] 065 [0] 066 [0] 067 
[    0.202313] pcpu-alloc: [0] 068 [0] 069 [0] 070 [0] 071 
[    0.202315] pcpu-alloc: [0] 072 [0] 073 [0] 074 [0] 075 
[    0.202316] pcpu-alloc: [0] 076 [0] 077 [0] 078 [0] 079 
[    0.202317] pcpu-alloc: [0] 080 [0] 081 [0] 082 [0] 083 
[    0.202318] pcpu-alloc: [0] 084 [0] 085 [0] 086 [0] 087 
[    0.202320] pcpu-alloc: [0] 088 [0] 089 [0] 090 [0] 091 
[    0.202321] pcpu-alloc: [0] 092 [0] 093 [0] 094 [0] 095 
[    0.202322] pcpu-alloc: [0] 096 [0] 097 [0] 098 [0] 099 
[    0.202324] pcpu-alloc: [0] 100 [0] 101 [0] 102 [0] 103 
[    0.202325] pcpu-alloc: [0] 104 [0] 105 [0] 106 [0] 107 
[    0.202326] pcpu-alloc: [0] 108 [0] 109 [0] 110 [0] 111 
[    0.202327] pcpu-alloc: [0] 112 [0] 113 [0] 114 [0] 115 
[    0.202329] pcpu-alloc: [0] 116 [0] 117 [0] 118 [0] 119 
[    0.202330] pcpu-alloc: [0] 120 [0] 121 [0] 122 [0] 123 
[    0.202331] pcpu-alloc: [0] 124 [0] 125 [0] 126 [0] 127 
[    0.202332] pcpu-alloc: [0] 128 [0] 129 [0] 130 [0] 131 
[    0.202334] pcpu-alloc: [0] 132 [0] 133 [0] 134 [0] 135 
[    0.202335] pcpu-alloc: [0] 136 [0] 137 [0] 138 [0] 139 
[    0.202336] pcpu-alloc: [0] 140 [0] 141 [0] 142 [0] 143 
[    0.202337] pcpu-alloc: [0] 144 [0] 145 [0] 146 [0] 147 
[    0.202338] pcpu-alloc: [0] 148 [0] 149 [0] 150 [0] 151 
[    0.202340] pcpu-alloc: [0] 152 [0] 153 [0] 154 [0] 155 
[    0.202341] pcpu-alloc: [0] 156 [0] 157 [0] 158 [0] 159 
[    0.202342] pcpu-alloc: [0] 160 [0] 161 [0] 162 [0] 163 
[    0.202343] pcpu-alloc: [0] 164 [0] 165 [0] 166 [0] 167 
[    0.202344] pcpu-alloc: [0] 168 [0] 169 [0] 170 [0] 171 
[    0.202346] pcpu-alloc: [0] 172 [0] 173 [0] 174 [0] 175 
[    0.202347] pcpu-alloc: [0] 176 [0] 177 [0] 178 [0] 179 
[    0.202348] pcpu-alloc: [0] 180 [0] 181 [0] 182 [0] 183 
[    0.202349] pcpu-alloc: [0] 184 [0] 185 [0] 186 [0] 187 
[    0.202350] pcpu-alloc: [0] 188 [0] 189 [0] 190 [0] 191 
[    0.202352] pcpu-alloc: [0] 192 [0] 193 [0] 194 [0] 195 
[    0.202353] pcpu-alloc: [0] 196 [0] 197 [0] 198 [0] 199 
[    0.202354] pcpu-alloc: [0] 200 [0] 201 [0] 202 [0] 203 
[    0.202355] pcpu-alloc: [0] 204 [0] 205 [0] 206 [0] 207 
[    0.202357] pcpu-alloc: [0] 208 [0] 209 [0] 210 [0] 211 
[    0.202358] pcpu-alloc: [0] 212 [0] 213 [0] 214 [0] 215 
[    0.202359] pcpu-alloc: [0] 216 [0] 217 [0] 218 [0] 219 
[    0.202360] pcpu-alloc: [0] 220 [0] 221 [0] 222 [0] 223 
[    0.202361] pcpu-alloc: [0] 224 [0] 225 [0] 226 [0] 227 
[    0.202363] pcpu-alloc: [0] 228 [0] 229 [0] 230 [0] 231 
[    0.202364] pcpu-alloc: [0] 232 [0] 233 [0] 234 [0] 235 
[    0.202365] pcpu-alloc: [0] 236 [0] 237 [0] 238 [0] 239 
[    0.202366] pcpu-alloc: [0] 240 [0] 241 [0] 242 [0] 243 
[    0.202367] pcpu-alloc: [0] 244 [0] 245 [0] 246 [0] 247 
[    0.202368] pcpu-alloc: [0] 248 [0] 249 [0] 250 [0] 251 
[    0.202370] pcpu-alloc: [0] 252 [0] 253 [0] 254 [0] 255 
[    0.202371] pcpu-alloc: [0] 256 [0] 257 [0] 258 [0] 259 
[    0.202372] pcpu-alloc: [0] 260 [0] 261 [0] 262 [0] 263 
[    0.202373] pcpu-alloc: [0] 264 [0] 265 [0] 266 [0] 267 
[    0.202375] pcpu-alloc: [0] 268 [0] 269 [0] 270 [0] 271 
[    0.202376] pcpu-alloc: [0] 272 [0] 273 [0] 274 [0] 275 
[    0.202377] pcpu-alloc: [0] 276 [0] 277 [0] 278 [0] 279 
[    0.202378] pcpu-alloc: [0] 280 [0] 281 [0] 282 [0] 283 
[    0.202379] pcpu-alloc: [0] 284 [0] 285 [0] 286 [0] 287 
[    0.202380] pcpu-alloc: [0] 288 [0] 289 [0] 290 [0] 291 
[    0.202382] pcpu-alloc: [0] 292 [0] 293 [0] 294 [0] 295 
[    0.202383] pcpu-alloc: [0] 296 [0] 297 [0] 298 [0] 299 
[    0.202384] pcpu-alloc: [0] 300 [0] 301 [0] 302 [0] 303 
[    0.202385] pcpu-alloc: [0] 304 [0] 305 [0] 306 [0] 307 
[    0.202386] pcpu-alloc: [0] 308 [0] 309 [0] 310 [0] 311 
[    0.202388] pcpu-alloc: [0] 312 [0] 313 [0] 314 [0] 315 
[    0.202389] pcpu-alloc: [0] 316 [0] 317 [0] 318 [0] 319 
[    0.202390] pcpu-alloc: [0] 320 [0] 321 [0] 322 [0] 323 
[    0.202391] pcpu-alloc: [0] 324 [0] 325 [0] 326 [0] 327 
[    0.202392] pcpu-alloc: [0] 328 [0] 329 [0] 330 [0] 331 
[    0.202393] pcpu-alloc: [0] 332 [0] 333 [0] 334 [0] 335 
[    0.202395] pcpu-alloc: [0] 336 [0] 337 [0] 338 [0] 339 
[    0.202417] Built 1 zonelists, mobility grouping on.  Total pages: 33030144
[    0.202418] Policy zone: Normal
[    0.202419] Kernel command line: root=/dev/disk/by-path/ccw-0.0.3318-part1 rd.dasd=0.0.3318 cio_ignore=all,!condev rd.znet=qeth,0.0.bd00,0.0.bd01,0.0.bd02,layer2=1,portno=0,portname=OSAPORT zfcp.allow_lun_scan=0 BOOT_IMAGE=0 crashkernel=1G dyndbg="module=vhost +plt" BOOT_IMAGE=
[    0.203394] printk: log_buf_len individual max cpu contribution: 4096 bytes
[    0.203394] printk: log_buf_len total cpu_extra contributions: 1388544 bytes
[    0.203395] printk: log_buf_len min size: 131072 bytes
[    0.203682] printk: log_buf_len: 2097152 bytes
[    0.203683] printk: early log buf free: 123876(94%)
[    0.212690] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
[    0.217250] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
[    0.217262] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.248512] Memory: 2316400K/134217728K available (10452K kernel code, 2024K rwdata, 3072K rodata, 3932K init, 852K bss, 3354404K reserved, 0K cma-reserved)
[    0.248904] SLUB: HWalign=256, Order=0-3, MinObjects=0, CPUs=340, Nodes=1
[    0.248937] ftrace: allocating 31563 entries in 124 pages
[    0.253582] ftrace: allocated 124 pages with 5 groups
[    0.254235] rcu: Hierarchical RCU implementation.
[    0.254236] rcu: 	RCU event tracing is enabled.
[    0.254236] rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=340.
[    0.254237] 	Tasks RCU enabled.
[    0.254238] rcu: RCU calculated value of scheduler-enlistment delay is 11 jiffies.
[    0.254238] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=340
[    0.257047] NR_IRQS: 3, nr_irqs: 3, preallocated irqs: 3
[    0.257140] clocksource: tod: mask: 0xffffffffffffffff max_cycles: 0x3b0a9be803b0a9, max_idle_ns: 1805497147909793 ns
[    0.257321] Console: colour dummy device 80x25
[    0.358594] printk: console [ttyS0] enabled
[    0.452223] Calibrating delay loop (skipped)... 21881.00 BogoMIPS preset
[    0.452224] pid_max: default: 348160 minimum: 2720
[    0.452357] LSM: Security Framework initializing
[    0.452386] SELinux:  Initializing.
[    0.452635] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.452781] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.453630] rcu: Hierarchical SRCU implementation.
[    0.456334] smp: Bringing up secondary CPUs ...
[    0.470120] smp: Brought up 1 node, 64 CPUs
[    1.414395] node 0 initialised, 32136731 pages in 940ms
[    1.441873] devtmpfs: initialized
[    1.442634] random: get_random_u32 called from bucket_table_alloc.isra.0+0x82/0x120 with crng_init=0
[    1.443234] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    1.443909] futex hash table entries: 131072 (order: 13, 33554432 bytes, vmalloc)
[    1.448551] xor: automatically using best checksumming function   xc        
[    1.448734] NET: Registered protocol family 16
[    1.448774] audit: initializing netlink subsys (disabled)
[    1.448847] audit: type=2000 audit(1583407550.141:1): state=initialized audit_enabled=0 res=1
[    1.448979] Spectre V2 mitigation: etokens
[    1.449697] random: fast init done
[    1.471488] HugeTLB registered 1.00 MiB page size, pre-allocated 0 pages
[    1.717202] raid6: vx128x8  gen() 21453 MB/s
[    1.887146] raid6: vx128x8  xor() 13330 MB/s
[    1.887148] raid6: using algorithm vx128x8 gen() 21453 MB/s
[    1.887149] raid6: .... xor() 13330 MB/s, rmw enabled
[    1.887149] raid6: using s390xc recovery algorithm
[    1.887456] iommu: Default domain type: Translated 
[    1.887571] SCSI subsystem initialized
[    1.968735] PCI host bridge to bus 0000:00
[    1.968741] pci_bus 0000:00: root bus resource [mem 0x8000000000000000-0x8000000007ffffff 64bit pref]
[    1.968744] pci_bus 0000:00: No busn resource found for root bus, will use [bus 00-ff]
[    1.968816] pci 0000:00:00.0: [1014:044b] type 00 class 0x120000
[    1.968871] pci 0000:00:00.0: reg 0x10: [mem 0xffffd80008000000-0xffffd8000fffffff 64bit pref]
[    1.969176] pci 0000:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0000:00:00.0 (capable of 32.000 Gb/s with 5 GT/s x8 link)
[    1.969217] pci 0000:00:00.0: Adding to iommu group 0
[    1.969228] pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to 00
[    1.971556] PCI host bridge to bus 0001:00
[    1.971559] pci_bus 0001:00: root bus resource [mem 0x8001000000000000-0x80010000000fffff 64bit pref]
[    1.971561] pci_bus 0001:00: No busn resource found for root bus, will use [bus 00-ff]
[    1.971655] pci 0001:00:00.0: [15b3:1016] type 00 class 0x020000
[    1.971758] pci 0001:00:00.0: reg 0x10: [mem 0xffffd40002000000-0xffffd400020fffff 64bit pref]
[    1.971917] pci 0001:00:00.0: enabling Extended Tags
[    1.972422] pci 0001:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0001:00:00.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
[    1.972480] pci 0001:00:00.0: Adding to iommu group 1
[    1.972488] pci_bus 0001:00: busn_res: [bus 00-ff] end is updated to 00
[    1.974744] PCI host bridge to bus 0002:00
[    1.974746] pci_bus 0002:00: root bus resource [mem 0x8002000000000000-0x80020000000fffff 64bit pref]
[    1.974749] pci_bus 0002:00: No busn resource found for root bus, will use [bus 00-ff]
[    1.974842] pci 0002:00:00.0: [15b3:1016] type 00 class 0x020000
[    1.974945] pci 0002:00:00.0: reg 0x10: [mem 0xffffd40008000000-0xffffd400080fffff 64bit pref]
[    1.975109] pci 0002:00:00.0: enabling Extended Tags
[    1.975633] pci 0002:00:00.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown speed x0 link at 0002:00:00.0 (capable of 63.008 Gb/s with 8 GT/s x8 link)
[    1.975672] pci 0002:00:00.0: Adding to iommu group 2
[    1.975680] pci_bus 0002:00: busn_res: [bus 00-ff] end is updated to 00
[    2.487070] VFS: Disk quotas dquot_6.6.0
[    2.487123] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    2.488166] random: crng init done
[    2.488807] NET: Registered protocol family 2
[    2.489419] tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    2.490014] TCP established hash table entries: 524288 (order: 10, 4194304 bytes, vmalloc)
[    2.492020] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    2.492524] TCP: Hash tables configured (established 524288 bind 65536)
[    2.492862] UDP hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)
[    2.493835] UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, vmalloc)
[    2.495268] NET: Registered protocol family 1
[    2.495471] Trying to unpack rootfs image as initramfs...
[    3.046216] Freeing initrd memory: 42292K
[    3.047413] alg: No test for crc32be (crc32be-vx)
[    3.051604] Initialise system trusted keyrings
[    3.051650] workingset: timestamp_bits=45 max_order=25 bucket_order=0
[    3.052786] fuse: init (API version 7.31)
[    3.052840] SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled
[    3.059346] Key type asymmetric registered
[    3.059348] Asymmetric key parser 'x509' registered
[    3.059354] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    3.059560] io scheduler mq-deadline registered
[    3.059561] io scheduler kyber registered
[    3.059582] io scheduler bfq registered
[    3.060366] atomic64_test: passed
[    3.060417] hvc_iucv: The z/VM IUCV HVC device driver cannot be used without z/VM
[    3.067173] brd: module loaded
[    3.067578] cio: Channel measurement facility initialized using format extended (mode autodetected)
[    3.067839] Discipline DIAG cannot be used without z/VM
[    4.430809] sclp_sd: No data is available for the config data entity
[    4.799564] qeth: loading core functions
[    4.799599] qeth: register layer 2 discipline
[    4.799600] qeth: register layer 3 discipline
[    4.799930] NET: Registered protocol family 10
[    4.800703] Segment Routing with IPv6
[    4.800716] NET: Registered protocol family 17
[    4.800727] Key type dns_resolver registered
[    4.800793] registered taskstats version 1
[    4.800799] Loading compiled-in X.509 certificates
[    4.842259] Loaded X.509 cert 'Build time autogenerated kernel key: c46ba92ee388c82c5891ee836c9c20b752cdfac5'
[    4.842918] zswap: default zpool zbud not available
[    4.842919] zswap: pool creation failed
[    4.843577] Key type ._fscrypt registered
[    4.843578] Key type .fscrypt registered
[    4.843579] Key type fscrypt-provisioning registered
[    4.843821] Btrfs loaded, crc32c=crc32c-vx
[    4.848174] Key type big_key registered
[    4.848180] ima: No TPM chip found, activating TPM-bypass!
[    4.848184] ima: Allocated hash algorithm: sha256
[    4.848192] ima: No architecture policies found
[    4.849249] Freeing unused kernel memory: 3932K
[    4.917229] Write protected read-only-after-init data: 68k
[    4.917231] Run /init as init process
[    4.917232]   with arguments:
[    4.917232]     /init
[    4.917232]   with environment:
[    4.917232]     HOME=/
[    4.917233]     TERM=linux
[    4.917233]     BOOT_IMAGE=
[    4.917233]     crashkernel=1G
[    4.917233]     dyndbg=module=vhost +plt
[    4.931265] systemd[1]: Inserted module 'autofs4'
[    4.932350] systemd[1]: systemd v243.7-1.fc31 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
[    4.933041] systemd[1]: Detected architecture s390x.
[    4.933043] systemd[1]: Running in initial RAM disk.
[    4.933096] systemd[1]: Set hostname to <m83lp52.lnxne.boe>.
[    4.971242] systemd[1]: Reached target Local File Systems.
[    4.971288] systemd[1]: Reached target Slices.
[    4.971308] systemd[1]: Reached target Swap.
[    4.971323] systemd[1]: Reached target Timers.
[    4.971406] systemd[1]: Listening on Journal Audit Socket.
[    4.971455] systemd[1]: Listening on Journal Socket (/dev/log).
[    5.251909] audit: type=1130 audit(1583407553.941:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    5.259974] audit: type=1130 audit(1583407553.951:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.243043] audit: type=1130 audit(1583407554.931:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.258832] audit: type=1130 audit(1583407554.951:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.259366] audit: type=1334 audit(1583407554.951:6): prog-id=6 op=LOAD
[    6.259392] audit: type=1334 audit(1583407554.951:7): prog-id=7 op=LOAD
[    6.478188] audit: type=1130 audit(1583407555.171:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.482613] qeth 0.0.bd00: Priority Queueing not supported
[    6.483213] qeth 0.0.bd00: portname is deprecated and is ignored
[    6.484281] dasd-eckd 0.0.3318: A channel path to the device has become operational
[    6.484962] dasd-eckd 0.0.3319: A channel path to the device has become operational
[    6.485154] dasd-eckd 0.0.331a: A channel path to the device has become operational
[    6.486537] dasd-eckd 0.0.331b: A channel path to the device has become operational
[    6.488280] qdio: 0.0.bd02 OSA on SC 159b using AI:1 QEBSM:0 PRI:1 TDD:1 SIGA: W AP
[    6.494401] dasd-eckd 0.0.3318: New DASD 3390/0E (CU 3990/01) with 262668 cylinders, 15 heads, 224 sectors
[    6.497066] dasd-eckd 0.0.3318: DASD with 4 KB/block, 189120960 KB total size, 48 KB/track, compatible disk layout
[    6.498452]  dasda:VOL1/  0X3318: dasda1
[    6.498984] dasd-eckd 0.0.3319: New DASD 3390/0E (CU 3990/01) with 262668 cylinders, 15 heads, 224 sectors
[    6.501652] dasd-eckd 0.0.3319: DASD with 4 KB/block, 189120960 KB total size, 48 KB/track, compatible disk layout
[    6.502708]  dasdb:VOL1/  0X3319: dasdb1
[    6.503449] dasd-eckd 0.0.331a: New DASD 3390/0C (CU 3990/01) with 30051 cylinders, 15 heads, 224 sectors
[    6.505999] dasd-eckd 0.0.331a: DASD with 4 KB/block, 21636720 KB total size, 48 KB/track, compatible disk layout
[    6.506784]  dasdc:VOL1/  0X331A:
[    6.507996] dasd-eckd 0.0.331b: New DASD 3390/0C (CU 3990/01) with 30051 cylinders, 15 heads, 224 sectors
[    6.510535] dasd-eckd 0.0.331b: DASD with 4 KB/block, 21636720 KB total size, 48 KB/track, compatible disk layout
[    6.511283]  dasdd:VOL1/  0X331B:
[    6.518394] audit: type=1130 audit(1583407555.211:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.520128] qeth 0.0.bd00: QDIO data connection isolation is deactivated
[    6.520605] qeth 0.0.bd00: The device represents a Bridge Capable Port
[    6.523982] qeth 0.0.bd00: MAC address 26:e5:ac:a6:de:01 successfully registered
[    6.524501] qeth 0.0.bd00: Device is a OSD Express card (level: 0199) with link type OSD_10GIG.
[    6.533725] audit: type=1130 audit(1583407555.221:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=plymouth-start comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.563482] qeth 0.0.bd00: MAC address de:45:d7:61:c4:13 successfully registered
[    6.565753] qeth 0.0.bd00 encbd00: renamed from eth0
[    6.748205] mlx5_core 0001:00:00.0: enabling device (0000 -> 0002)
[    6.748292] mlx5_core 0001:00:00.0: firmware version: 14.23.1020
[    6.963751] dasdconf.sh Warning: 0.0.331a is already online, not configuring
[    6.976290] dasdconf.sh Warning: 0.0.3319 is already online, not configuring
[    6.992525] dasdconf.sh Warning: 0.0.3318 is already online, not configuring
[    7.102843] dasdconf.sh Warning: 0.0.331b is already online, not configuring
[    7.194835] mlx5_core 0002:00:00.0: enabling device (0000 -> 0002)
[    7.194919] mlx5_core 0002:00:00.0: firmware version: 14.23.1020
[    7.646538] mlx5_core 0001:00:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    7.779209] mlx5_core 0002:00:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    7.933351] mlx5_core 0002:00:00.0 enP2s564: renamed from eth1
[    8.297822] mlx5_core 0001:00:00.0 enP1s519: renamed from eth0
[    8.388805] audit: type=1130 audit(1583407557.081:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.413241] audit: type=1130 audit(1583407557.101:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.422812] EXT4-fs (dasda1): mounted filesystem with ordered data mode. Opts: (null)
[    8.431304] audit: type=1334 audit(1583407557.121:13): prog-id=7 op=UNLOAD
[    8.431306] audit: type=1334 audit(1583407557.121:14): prog-id=6 op=UNLOAD
[    8.431990] audit: type=1334 audit(1583407557.121:15): prog-id=5 op=UNLOAD
[    8.431992] audit: type=1334 audit(1583407557.121:16): prog-id=4 op=UNLOAD
[    8.437004] audit: type=1334 audit(1583407557.121:17): prog-id=3 op=UNLOAD
[    8.961876] systemd-journald[542]: Received SIGTERM from PID 1 (systemd).
[    9.126691] printk: systemd: 19 output lines suppressed due to ratelimiting
[    9.670187] SELinux:  Permission watch in class filesystem not defined in policy.
[    9.670192] SELinux:  Permission watch in class file not defined in policy.
[    9.670193] SELinux:  Permission watch_mount in class file not defined in policy.
[    9.670194] SELinux:  Permission watch_sb in class file not defined in policy.
[    9.670195] SELinux:  Permission watch_with_perm in class file not defined in policy.
[    9.670196] SELinux:  Permission watch_reads in class file not defined in policy.
[    9.670198] SELinux:  Permission watch in class dir not defined in policy.
[    9.670199] SELinux:  Permission watch_mount in class dir not defined in policy.
[    9.670200] SELinux:  Permission watch_sb in class dir not defined in policy.
[    9.670201] SELinux:  Permission watch_with_perm in class dir not defined in policy.
[    9.670202] SELinux:  Permission watch_reads in class dir not defined in policy.
[    9.670205] SELinux:  Permission watch in class lnk_file not defined in policy.
[    9.670206] SELinux:  Permission watch_mount in class lnk_file not defined in policy.
[    9.670207] SELinux:  Permission watch_sb in class lnk_file not defined in policy.
[    9.670208] SELinux:  Permission watch_with_perm in class lnk_file not defined in policy.
[    9.670209] SELinux:  Permission watch_reads in class lnk_file not defined in policy.
[    9.670211] SELinux:  Permission watch in class chr_file not defined in policy.
[    9.670223] SELinux:  Permission watch_mount in class chr_file not defined in policy.
[    9.670224] SELinux:  Permission watch_sb in class chr_file not defined in policy.
[    9.670225] SELinux:  Permission watch_with_perm in class chr_file not defined in policy.
[    9.670227] SELinux:  Permission watch_reads in class chr_file not defined in policy.
[    9.670229] SELinux:  Permission watch in class blk_file not defined in policy.
[    9.670230] SELinux:  Permission watch_mount in class blk_file not defined in policy.
[    9.670231] SELinux:  Permission watch_sb in class blk_file not defined in policy.
[    9.670232] SELinux:  Permission watch_with_perm in class blk_file not defined in policy.
[    9.670233] SELinux:  Permission watch_reads in class blk_file not defined in policy.
[    9.670235] SELinux:  Permission watch in class sock_file not defined in policy.
[    9.670236] SELinux:  Permission watch_mount in class sock_file not defined in policy.
[    9.670237] SELinux:  Permission watch_sb in class sock_file not defined in policy.
[    9.670237] SELinux:  Permission watch_with_perm in class sock_file not defined in policy.
[    9.670238] SELinux:  Permission watch_reads in class sock_file not defined in policy.
[    9.670241] SELinux:  Permission watch in class fifo_file not defined in policy.
[    9.670242] SELinux:  Permission watch_mount in class fifo_file not defined in policy.
[    9.670243] SELinux:  Permission watch_sb in class fifo_file not defined in policy.
[    9.670244] SELinux:  Permission watch_with_perm in class fifo_file not defined in policy.
[    9.670245] SELinux:  Permission watch_reads in class fifo_file not defined in policy.
[    9.670324] SELinux:  Class perf_event not defined in policy.
[    9.670325] SELinux:  Class lockdown not defined in policy.
[    9.670326] SELinux: the above unknown classes and permissions will be allowed
[    9.670339] SELinux:  policy capability network_peer_controls=1
[    9.670339] SELinux:  policy capability open_perms=1
[    9.670340] SELinux:  policy capability extended_socket_class=1
[    9.670341] SELinux:  policy capability always_check_network=0
[    9.670342] SELinux:  policy capability cgroup_seclabel=1
[    9.670343] SELinux:  policy capability nnp_nosuid_transition=1
[    9.751744] systemd[1]: Successfully loaded SELinux policy in 333.160ms.
[   10.036043] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.873ms.
[   10.049528] systemd[1]: systemd v243.7-1.fc31 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
[   10.050264] systemd[1]: Detected architecture s390x.
[   10.058846] systemd[1]: Set hostname to <m83lp52.lnxne.boe>.
[   10.616443] systemd[1]: /usr/lib/systemd/system/sssd.service:12: PIDFile= references a path below legacy directory /var/run/, updating /var/run/sssd.pid → /run/sssd.pid; please update the unit file accordingly.
[   10.725368] systemd[1]: /usr/lib/systemd/system/iscsid.service:11: PIDFile= references a path below legacy directory /var/run/, updating /var/run/iscsid.pid → /run/iscsid.pid; please update the unit file accordingly.
[   10.725577] systemd[1]: /usr/lib/systemd/system/iscsiuio.service:13: PIDFile= references a path below legacy directory /var/run/, updating /var/run/iscsiuio.pid → /run/iscsiuio.pid; please update the unit file accordingly.
[   10.784710] systemd[1]: /usr/lib/systemd/system/sssd-kcm.socket:7: ListenStream= references a path below legacy directory /var/run/, updating /var/run/.heim_org.h5l.kcm-socket → /run/.heim_org.h5l.kcm-socket; please update the unit file accordingly.
[   10.812468] systemd[1]: initrd-switch-root.service: Succeeded.
[   10.812591] systemd[1]: Stopped Switch Root.
[   10.812815] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
[   10.851253] EXT4-fs (dasda1): re-mounted. Opts: (null)
[   11.156827] systemd-journald[1089]: Received client request to flush runtime journal.
[   11.530811] VFIO - User Level meta-driver version: 0.3
[   11.751774] dasdconf.sh Warning: 0.0.3319 is already online, not configuring
[   11.752896] dasdconf.sh Warning: 0.0.3318 is already online, not configuring
[   11.752898] dasdconf.sh Warning: 0.0.331a is already online, not configuring
[   11.752995] dasdconf.sh Warning: 0.0.331b is already online, not configuring
[   11.793409] genwqe 0000:00:00.0: enabling device (0000 -> 0002)
[   12.138538] kauditd_printk_skb: 68 callbacks suppressed
[   12.138539] audit: type=1130 audit(1583407560.831:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=device_cio_free comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.198793] audit: type=1130 audit(1583407560.891:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.232058] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0
[   12.288725] audit: type=1130 audit(1583407560.981:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rdma-load-modules@infiniband comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.288936] audit: type=1130 audit(1583407560.981:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rdma-load-modules@roce comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.289856] audit: type=1130 audit(1583407560.981:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rdma-ndd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.298304] audit: type=1130 audit(1583407560.991:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.353610] audit: type=1130 audit(1583407561.041:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dpath-ccw\x2d0.0.3319\x2dpart1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.381129] XFS (dasdb1): Mounting V5 Filesystem
[   12.406696] XFS (dasdb1): Ending clean mount
[   12.408702] xfs filesystem being mounted at /home supports timestamps until 2038 (0x7fffffff)
[   12.471768] audit: type=1130 audit(1583407561.161:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dracut-shutdown comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.689396] audit: type=1130 audit(1583407561.381:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=plymouth-read-write comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.689409] audit: type=1131 audit(1583407561.381:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=plymouth-read-write comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   12.745351] RPC: Registered named UNIX socket transport module.
[   12.745353] RPC: Registered udp transport module.
[   12.745354] RPC: Registered tcp transport module.
[   12.745355] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   12.773152] RPC: Registered rdma transport module.
[   12.773154] RPC: Registered rdma backchannel transport module.
[   14.792600] mlx5_core 0001:00:00.0 enP1s519: Link up
[   14.795295] IPv6: ADDRCONF(NETDEV_CHANGE): enP1s519: link becomes ready
[   14.903780] mlx5_core 0002:00:00.0 enP2s564: Link up
[   15.123712] tun: Universal TUN/TAP device driver, 1.6
[   15.124587] virbr0: port 1(virbr0-nic) entered blocking state
[   15.124588] virbr0: port 1(virbr0-nic) entered disabled state
[   15.124655] device virbr0-nic entered promiscuous mode
[   15.399381] virbr0: port 1(virbr0-nic) entered blocking state
[   15.399385] virbr0: port 1(virbr0-nic) entered listening state
[   15.423952] virbr0: port 1(virbr0-nic) entered disabled state
[   15.857186] IPv6: ADDRCONF(NETDEV_CHANGE): enP2s564: link becomes ready
[   87.428277] CPU1 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2621]
[   87.428285] CPU1 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2621]
[   87.428288] CPU1 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 2/KVM 2621]
[   87.428291] CPU1 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 2/KVM 2621]
[   87.428301] CPU1 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2621]
[   87.428302] ------------[ cut here ]------------
[   87.428303] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
[   87.428326] WARNING: CPU: 1 PID: 6144 at kernel/sched/fair.c:380 enqueue_task_fair+0x1aa/0x508
[   87.428331] Modules linked in: kvm xt_CHECKSUM xt_MASQUERADE nf_nat_tftp nf_conntrack_tftp xt_CT tun bridge stp llc xt_tcpudp ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nfnetlink ip6table_filter ip6_tables iptable_filter rpcrdma sunrpc rdma_ucm rdma_cm iw_cm ib_cm configfs mlx5_ib s390_trng ib_uverbs ghash_s390 prng aes_s390 ib_core des_s390 libdes sha3_512_s390 genwqe_card sha3_256_s390 sha512_s390 crc_itu_t vfio_ccw sha1_s390 vfio_mdev zcrypt_cex4 mdev vfio_iommu_type1 eadm_sch vfio sch_fq_codel ip_tables x_tables mlx5_core sha256_s390 sha_common pkey zcrypt rng_core autofs4
[   87.428372] CPU: 1 PID: 6144 Comm: cc1 Not tainted 5.6.0-rc3+ #160
[   87.428373] Hardware name: IBM 3906 M04 704 (LPAR)
[   87.428375] Krnl PSW : 0404c00180000000 00000003ddab2cbe (enqueue_task_fair+0x1ae/0x508)
[   87.428434]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
[   87.428438] Krnl GPRS: 00000000000003e0 000003e000b3bb00 000000000000002d 00000003ded647c2
[   87.428440]            000000000000002c 00000003de11ca30 0000000000000001 040003ff00000000
[   87.428442]            000003e000000000 0000001fbd062e00 000003e000b3bb58 0000001fbd063928
[   87.428443]            00000019d9cfc000 0000001fbd062d00 00000003ddab2cba 000003e000b3ba90
[   87.428450] Krnl Code: 00000003ddab2cae: c020005d3ab7	larl	%r2,00000003de65a21c
                          00000003ddab2cb4: c0e5fffdccfe	brasl	%r14,00000003dda6c6b0
                         #00000003ddab2cba: af000000		mc	0,0
                         >00000003ddab2cbe: a7f400a5		brc	15,00000003ddab2e08
                          00000003ddab2cc2: e310a3800012	lt	%r1,896(%r10)
                          00000003ddab2cc8: a784ff77		brc	8,00000003ddab2bb6
                          00000003ddab2ccc: b9940077		llcr	%r7,%r7
                          00000003ddab2cd0: e320b1580004	lg	%r2,344(%r11)
[   87.428465] Call Trace:
[   87.428470]  [<00000003ddab2cbe>] enqueue_task_fair+0x1ae/0x508 
[   87.428472] ([<00000003ddab2cba>] enqueue_task_fair+0x1aa/0x508)
[   87.428475]  [<00000003ddaa2d78>] activate_task+0x88/0xf0 
[   87.428477]  [<00000003ddaa32e8>] ttwu_do_activate+0x58/0x78 
[   87.428479]  [<00000003ddaa42ce>] try_to_wake_up+0x256/0x650 
[   87.428483]  [<00000003ddac256e>] swake_up_locked.part.0+0x2e/0x70 
[   87.428484]  [<00000003ddac288c>] swake_up_one+0x54/0x88 
[   87.428524]  [<000003ff8051a15a>] kvm_vcpu_wake_up+0x52/0x78 [kvm] 
[   87.428534]  [<000003ff80538f0a>] kvm_s390_vcpu_wakeup+0x2a/0x40 [kvm] 
[   87.428544]  [<000003ff80539696>] kvm_s390_idle_wakeup+0x6e/0xa0 [kvm] 
[   87.428549]  [<00000003ddb0d13c>] __hrtimer_run_queues+0x114/0x2f0 
[   87.428551]  [<00000003ddb0de94>] hrtimer_interrupt+0x12c/0x2a8 
[   87.428553]  [<00000003dda30d3c>] do_IRQ+0xac/0xb0 
[   87.428558]  [<00000003de455764>] ext_int_handler+0x130/0x134 
[   87.428559] Last Breaking-Event-Address:
[   87.428562]  [<00000003dda6c710>] __warn_printk+0x60/0x68
[   87.428563] ---[ end trace 98451e23506cc8c7 ]---
[   87.932552] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
[   87.932559] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2662]
[   87.932562] CPU23 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 2/KVM 2662]
[   87.932564] CPU23 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 2/KVM 2662]
[   87.932566] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
[   87.951872] CPU23 path=/ on_list=1 nr_running=2 throttled=0 p=[ksoftirqd/23 126]
[   87.987528] CPU23 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 6737]
[   87.987533] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[as 6737]
[   89.025280] CPU18 path=/machine.slice/machine-test.slice/machine-qemu\x2d16\x2dtest4.s on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2631]
[   89.025286] CPU18 path=/machine.slice/machine-test.slice/machine-qemu\x2d16\x2dtest4.s on_list=0 nr_running=2 throttled=0 p=[CPU 1/KVM 2631]
[   89.025289] CPU18 path=/machine.slice/machine-test.slice on_list=1 nr_running=2 throttled=1 p=[CPU 1/KVM 2631]
[   89.025291] CPU18 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 1/KVM 2631]
[   89.025293] CPU18 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2631]
[   90.552898] CPU30 path=/machine.slice/machine-test.slice/machine-qemu\x2d12\x2dtest14. on_list=1 nr_running=1 throttled=0 p=[qemu-system-s39 2450]
[   90.552906] CPU30 path=/machine.slice/machine-test.slice/machine-qemu\x2d12\x2dtest14. on_list=0 nr_running=2 throttled=0 p=[qemu-system-s39 2450]
[   90.552909] CPU30 path=/machine.slice/machine-test.slice on_list=1 nr_running=2 throttled=1 p=[qemu-system-s39 2450]
[   90.552912] CPU30 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[qemu-system-s39 2450]
[   90.552914] CPU30 path=/ on_list=1 nr_running=0 throttled=0 p=[qemu-system-s39 2450]
[   90.553810] CPU30 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[as 7931]
[   90.553814] CPU30 path=/ on_list=1 nr_running=1 throttled=0 p=[as 7931]
[   94.938241] CPU42 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2620]
[   94.938250] CPU42 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=0 nr_running=3 throttled=0 p=[CPU 1/KVM 2620]
[   94.938253] CPU42 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 1/KVM 2620]
[   94.938255] CPU42 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 1/KVM 2620]
[   94.938256] CPU42 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2620]
[   94.951865] CPU42 path=/ on_list=1 nr_running=2 throttled=0 p=[ksoftirqd/42 221]
[   94.974365] CPU42 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 17010]
[   94.974369] CPU42 path=/ on_list=1 nr_running=1 throttled=0 p=[as 17010]
[   94.977650] CPU42 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 17010]
[   94.977653] CPU42 path=/ on_list=1 nr_running=1 throttled=0 p=[as 17010]
[   94.982466] CPU42 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 17010]
[   94.982469] CPU42 path=/ on_list=1 nr_running=1 throttled=0 p=[as 17010]
[   94.988983] CPU42 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 17010]
[   94.988985] CPU42 path=/ on_list=1 nr_running=1 throttled=0 p=[as 17010]
[   95.828238] CPU43 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2620]
[   95.828247] CPU43 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=0 nr_running=3 throttled=0 p=[CPU 1/KVM 2620]
[   95.828250] CPU43 path=/machine.slice/machine-test.slice on_list=1 nr_running=2 throttled=1 p=[CPU 1/KVM 2620]
[   95.828252] CPU43 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 1/KVM 2620]
[   95.828254] CPU43 path=/ on_list=1 nr_running=0 throttled=0 p=[CPU 1/KVM 2620]
[   95.832491] CPU43 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[cc1 18181]
[   95.832495] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[cc1 18181]
[   95.834324] CPU43 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[cc1 18181]
[   95.834327] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[cc1 18181]
[   95.835183] CPU43 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 18467]
[   95.835187] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[make 18467]
[   95.836778] CPU43 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 18469]
[   95.836780] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 18469]
[   95.836864] CPU43 path=/user.slice on_list=1 nr_running=3 throttled=0 p=[sh 18470]
[   95.836866] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 18470]
[   95.840220] CPU43 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[grep 18470]
[   95.840224] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[grep 18470]
[   95.840411] CPU43 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[objdump 18469]
[   95.840454] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[objdump 18469]
[   95.840466] CPU43 path=/user.slice on_list=1 nr_running=3 throttled=0 p=[sh 18467]
[   95.840468] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 18467]
[   95.840877] CPU43 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[grep 18470]
[   95.840881] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[grep 18470]
[   95.840888] CPU43 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 18467]
[   95.840890] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 18467]
[   95.842615] CPU43 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 18467]
[   95.842620] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 18467]
[   95.842945] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[kworker/43:1 465]
[   95.842964] CPU43 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[cc1 18266]
[   95.842966] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[cc1 18266]
[   95.843684] CPU43 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[make 18481]
[   95.843687] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[make 18481]
[   95.861847] CPU43 path=/ on_list=1 nr_running=2 throttled=0 p=[ksoftirqd/43 226]
[   95.863251] CPU43 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 18481]
[   95.863253] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 18481]
[   95.863266] CPU43 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/43:1 465]
[   95.866701] CPU43 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[make 18499]
[   95.866710] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[make 18499]
[   95.866809] CPU43 path=/user.slice on_list=1 nr_running=3 throttled=0 p=[sh 18509]
[   95.866813] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 18509]
[   95.874109] CPU43 path=/user.slice on_list=1 nr_running=3 throttled=0 p=[sh 18499]
[   95.874113] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 18499]
[   95.874129] CPU43 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/43:1 465]
[   95.892716] CPU43 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[rm 18509]
[   95.892720] CPU43 path=/ on_list=1 nr_running=1 throttled=0 p=[rm 18509]
[   95.892735] CPU43 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/43:1 465]
[  100.592021] CPU10 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2661]
[  100.592028] CPU10 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=0 nr_running=3 throttled=0 p=[CPU 1/KVM 2661]
[  100.592031] CPU10 path=/machine.slice/machine-test.slice on_list=1 nr_running=2 throttled=1 p=[CPU 1/KVM 2661]
[  100.592033] CPU10 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 1/KVM 2661]
[  100.592034] CPU10 path=/ on_list=1 nr_running=0 throttled=0 p=[CPU 1/KVM 2661]
[  100.592042] CPU10 path=/ on_list=1 nr_running=1 throttled=0 p=[kworker/u680:0 8]
[  100.592054] CPU10 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sshd 1892]
[  100.592056] CPU10 path=/ on_list=1 nr_running=2 throttled=0 p=[sshd 1892]
[  100.592178] CPU10 path=/ on_list=1 nr_running=1 throttled=0 p=[kworker/u680:0 8]
[  100.592699] CPU10 path=/ on_list=1 nr_running=1 throttled=0 p=[kworker/u680:0 8]
[  100.592733] CPU10 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/10:1H 856]
[  100.592875] CPU10 path=/ on_list=1 nr_running=1 throttled=0 p=[kworker/10:1H 856]
[  102.148240] CPU10 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2619]
[  102.148248] CPU10 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=0 nr_running=3 throttled=0 p=[CPU 0/KVM 2619]
[  102.148251] CPU10 path=/machine.slice/machine-test.slice on_list=1 nr_running=2 throttled=1 p=[CPU 0/KVM 2619]
[  102.148253] CPU10 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 0/KVM 2619]
[  102.148254] CPU10 path=/ on_list=1 nr_running=0 throttled=0 p=[CPU 0/KVM 2619]
[  102.149862] CPU10 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[cc1 25551]
[  102.149868] CPU10 path=/ on_list=1 nr_running=1 throttled=0 p=[cc1 25551]
[  102.149933] CPU10 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2610]
[  102.149937] CPU10 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2610]
[  102.149940] CPU10 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2610]
[  102.149942] CPU10 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 1/KVM 2610]
[  102.149944] CPU10 path=/ on_list=1 nr_running=2 throttled=0 p=[CPU 1/KVM 2610]
[  102.151145] CPU10 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[cc1 25551]
[  102.151149] CPU10 path=/ on_list=1 nr_running=1 throttled=0 p=[cc1 25551]
[  102.822021] CPU9 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
[  102.822031] CPU9 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2662]
[  102.822034] CPU9 path=/machine.slice/machine-test.slice on_list=1 nr_running=2 throttled=1 p=[CPU 2/KVM 2662]
[  102.822036] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
[  102.822038] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[CPU 2/KVM 2662]
[  102.824421] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[qemu-system-s39 2440]
[  102.824427] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[qemu-system-s39 2440]
[  102.824430] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[qemu-system-s39 2440]
[  102.824432] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[qemu-system-s39 2440]
[  102.824434] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[qemu-system-s39 2440]
[  102.824886] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[ld 26532]
[  102.824889] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[ld 26532]
[  102.824906] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/9:1 431]
[  102.825047] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 26560]
[  102.825050] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26560]
[  102.826451] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[mv 26560]
[  102.826507] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[mv 26560]
[  102.826513] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 25583]
[  102.826515] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 25583]
[  102.826523] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/9:1 431]
[  102.826667] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 26563]
[  102.826669] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26563]
[  102.827520] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[rm 26563]
[  102.827522] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[rm 26563]
[  102.827527] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 25583]
[  102.827529] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 25583]
[  102.827815] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 25583]
[  102.827818] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 25583]
[  102.827824] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[make 21757]
[  102.827826] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 21757]
[  102.827833] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/9:1 431]
[  102.827896] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[make 26567]
[  102.827899] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26567]
[  102.828001] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[make 21757]
[  102.828004] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 21757]
[  102.829257] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 26567]
[  102.829259] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26567]
[  102.829264] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[make 21757]
[  102.829266] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 21757]
[  102.829930] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.829934] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.829936] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.829938] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.829940] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.830538] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 26572]
[  102.830540] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26572]
[  102.831432] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.831435] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.831437] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.831440] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.831442] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[CPU 0/KVM 2606]
[  102.831941] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 26572]
[  102.831943] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[sh 26572]
[  102.832149] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[cc1 26207]
[  102.832151] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[cc1 26207]
[  102.834545] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[cc1 26207]
[  102.834548] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[cc1 26207]
[  102.834575] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[kworker/9:1 431]
[  102.835904] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 26585]
[  102.835908] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26585]
[  102.839933] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.839937] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.839940] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.839942] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.839944] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.841331] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[gcc 26592]
[  102.841335] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[gcc 26592]
[  102.849985] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.849990] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.849993] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.849995] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.849997] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.852669] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 25708]
[  102.852673] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 25708]
[  102.853107] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[grep 26590]
[  102.853111] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[grep 26590]
[  102.853278] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[grep 26590]
[  102.853280] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[grep 26590]
[  102.853403] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[grep 26590]
[  102.853407] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[grep 26590]
[  102.857501] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 25708]
[  102.857505] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 25708]
[  102.858330] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 26185]
[  102.858333] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26185]
[  102.858623] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 26185]
[  102.858626] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26185]
[  102.859929] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.859932] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.859934] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.859936] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.859938] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.860440] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.860444] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.860447] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.860449] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.860451] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.860826] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[gcc 26627]
[  102.860829] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[gcc 26627]
[  102.861307] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.861311] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.861313] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.861314] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.861316] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[CPU 0/KVM 2606]
[  102.863122] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 26630]
[  102.863126] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26630]
[  102.863151] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.863155] CPU9 path=/machine.slice/machine-production.slice/machine-qemu\x2d11\x2dt on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.863158] CPU9 path=/machine.slice/machine-production.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.863160] CPU9 path=/machine.slice on_list=1 nr_running=1 throttled=0 p=[CPU 0/KVM 2606]
[  102.863162] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[CPU 0/KVM 2606]
[  102.866472] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 26630]
[  102.866474] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26630]
[  102.868330] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 26606]
[  102.868333] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26606]
[  102.868631] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 26630]
[  102.868635] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26630]
[  102.868654] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/9:1 431]
[  102.868863] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 26606]
[  102.868866] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26606]
[  102.868885] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 26638]
[  102.868887] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26638]
[  102.868893] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/9:1 431]
[  102.870219] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 26638]
[  102.870222] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26638]
[  102.870238] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 26620]
[  102.870240] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26620]
[  102.870253] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/9:1 431]
[  102.871658] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[sh 26620]
[  102.871660] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26620]
[  102.871672] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[make 26640]
[  102.871675] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26640]
[  102.871677] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[make 26632]
[  102.871679] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[make 26632]
[  102.871685] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/9:1 431]
[  102.872986] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 26632]
[  102.872988] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26632]
[  102.872999] CPU9 path=/ on_list=1 nr_running=2 throttled=0 p=[kworker/9:1 431]
[  102.874840] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[sh 26646]
[  102.874843] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[sh 26646]
[  102.876304] CPU9 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[gcc 26649]
[  102.876306] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[gcc 26649]
[  102.876331] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[gcc 26649]
[  102.876333] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[gcc 26649]
[  102.878277] CPU9 path=/user.slice on_list=1 nr_running=1 throttled=0 p=[cc1 26649]
[  102.878281] CPU9 path=/ on_list=1 nr_running=1 throttled=0 p=[cc1 26649]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-05  9:30                                 ` Vincent Guittot
  2020-03-05 11:28                                   ` Christian Borntraeger
@ 2020-03-05 11:54                                   ` Dietmar Eggemann
  1 sibling, 0 replies; 28+ messages in thread
From: Dietmar Eggemann @ 2020-03-05 11:54 UTC (permalink / raw)
  To: Vincent Guittot, Christian Borntraeger
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On 05/03/2020 10:30, Vincent Guittot wrote:
> On Wednesday 04 March 2020 at 20:59:33 (+0100), Christian Borntraeger wrote:
>>
>> On 04.03.20 20:38, Christian Borntraeger wrote:
>>>
>>> On 04.03.20 20:19, Dietmar Eggemann wrote:

[...]

> Could you try to add the patch below on top of Dietmar's one, so we will have the status of
> each level of the hierarchy?
> The 1st level seems ok, but something goes wrong while walking the hierarchy.
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 69fc30db7440..9ccde775e02e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5331,14 +5331,17 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  
>         if (rq->tmp_alone_branch != &rq->leaf_cfs_rq_list) {
>                 char path[64];
> +               se = &p->se;
>  
> -               cfs_rq = cfs_rq_of(&p->se);
> +               for_each_sched_entity(se) {
> +                       cfs_rq = cfs_rq_of(se);
>  
> -               sched_trace_cfs_rq_path(cfs_rq, path, 64);
> +                       sched_trace_cfs_rq_path(cfs_rq, path, 64);
>  
> -               printk("CPU%d path=%s on_list=%d nr_running=%d p=[%s %d]\n",
> -                      cpu_of(rq), path, cfs_rq->on_list, cfs_rq->nr_running,
> +                       printk("CPU%d path=%s on_list=%d nr_running=%d throttled=%d p=[%s %d]\n",
> +                      cpu_of(rq), path, cfs_rq->on_list, cfs_rq->nr_running, cfs_rq_throttled(cfs_rq),
>                        p->comm, p->pid);
> +               }
>         }
>  
>         assert_list_leaf_cfs_rq(rq);

Yeah, that's better.

The fact that the task 'CPU 1/KVM' in
'machine-qemu\x2d16\x2dtest14.scope' hit the assert only tells us that
some earlier list_[add|del]_leaf_cfs_rq() on CPU62 left
rq->tmp_alone_branch != &rq->leaf_cfs_rq_list behind.

I see that cgroup-v2 is used here.

>> [   25.680326] CPU62 path=/machine.slice/machine-test.slice/machine-qemu\x2d16\x2dtest14. on_list=1 nr_running=1 p=[CPU 1/KVM 2543]
>> [   25.680334] ------------[ cut here ]------------
>> [   25.680335] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
>> [   25.680351] WARNING: CPU: 61 PID: 2535 at kernel/sched/fair.c:380 enqueue_task_fair+0x3f6/0x4a8

[...]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-05 11:28                                   ` Christian Borntraeger
@ 2020-03-05 12:12                                     ` Dietmar Eggemann
  2020-03-05 12:33                                       ` Vincent Guittot
  2020-03-05 12:14                                     ` Vincent Guittot
  1 sibling, 1 reply; 28+ messages in thread
From: Dietmar Eggemann @ 2020-03-05 12:12 UTC (permalink / raw)
  To: Christian Borntraeger, Vincent Guittot
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On 05/03/2020 12:28, Christian Borntraeger wrote:
> 
> On 05.03.20 10:30, Vincent Guittot wrote:
>> On Wednesday 04 March 2020 at 20:59:33 (+0100), Christian Borntraeger wrote:
>>>
>>> On 04.03.20 20:38, Christian Borntraeger wrote:
>>>>
>>>>
>>>> On 04.03.20 20:19, Dietmar Eggemann wrote:

[...]

> It seems to speed up the issue when I do a compile job in parallel on the host:
> 
> Do you also need the sysfs tree?

[   87.932552] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
[   87.932559] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2662]
[   87.932562] CPU23 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 2/KVM 2662]
[   87.932564] CPU23 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 2/KVM 2662]
[   87.932566] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
[   87.951872] CPU23 path=/ on_list=1 nr_running=2 throttled=0 p=[ksoftirqd/23 126]
[   87.987528] CPU23 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 6737]
[   87.987533] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[as 6737]

Arrh, looks like 'char path[64]' is too small to hold 'machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10.scope/vcpuX' !
                                                                                                                    ^  
But I guess that the 'on_list=0' for 'machine-qemu\x2d18\x2dtest10.scope' could be the missing hint?

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-05 11:28                                   ` Christian Borntraeger
  2020-03-05 12:12                                     ` Dietmar Eggemann
@ 2020-03-05 12:14                                     ` Vincent Guittot
  1 sibling, 0 replies; 28+ messages in thread
From: Vincent Guittot @ 2020-03-05 12:14 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, linux-kernel

On Thu, 5 Mar 2020 at 12:29, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
>
> On 05.03.20 10:30, Vincent Guittot wrote:
> > On Wednesday 04 March 2020 at 20:59:33 (+0100), Christian Borntraeger wrote:
> >>
> >> On 04.03.20 20:38, Christian Borntraeger wrote:
> >>>
> >>>
> >>> On 04.03.20 20:19, Dietmar Eggemann wrote:
> >>>>> I just realized that this system has something special. Some month ago I created 2 slices
> >>>>> $ head /etc/systemd/system/*.slice
> >>>>> ==> /etc/systemd/system/machine-production.slice <==
> >>>>> [Unit]
> >>>>> Description=VM production
> >>>>> Before=slices.target
> >>>>> Wants=machine.slice
> >>>>> [Slice]
> >>>>> CPUQuota=2000%
> >>>>> CPUWeight=1000
> >>>>>
> >>>>> ==> /etc/systemd/system/machine-test.slice <==
> >>>>> [Unit]
> >>>>> Description=VM production
> >>>>> Before=slices.target
> >>>>> Wants=machine.slice
> >>>>> [Slice]
> >>>>> CPUQuota=300%
> >>>>> CPUWeight=100
> >>>>>
> >>>>>
> >>>>> And the guests are then put into these slices. that also means that this test will never use more than the 2300%.
> >>>>> No matter how much CPUs the system has.
> >>>>
> >>>> If you could run this debug patch on top of your un-patched kernel, it would tell us which task (in the enqueue case)
> >>>> and which taskgroup is causing that.
> >>>>
> >>>> You could then further dump the appropriate taskgroup directory under the cpu cgroup mountpoint
> >>>> (to see e.g. the CFS bandwidth data).
> >>>>
> >>>> I expect more than one hit since assert_list_leaf_cfs_rq() uses SCHED_WARN_ON, hence WARN_ONCE.
> >>>
> >>> That was quick. FWIW, I messed up dumping the cgroup mountpoint (since I restarted my guests after this happened).
> >>> Will retry. See the dmesg attached.
> >>
> >> New occurence (with just one extra debug line)
> >
> > Could you try to add the patch below on top of Dietmar's one, so we will have the status of
> > each level of the hierarchy?
> > The 1st level seems ok, but something goes wrong while walking the hierarchy.
>
> It seems to speed up the issue when I do a compile job in parallel on the host:
>
> Do you also need the sysfs tree?

No, that's enough to understand the problem.

All child cfs_rqs are removed from the list when a cfs_rq is throttled,
which means that the first 3 cfs_rqs had been removed when
machine.slice/machine-test.slice was throttled.
But they are added back when we enqueue a task, to make sure we go
through the full tree, which has probably already happened too:
[   87.428277] CPU1 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2621]

The group entity has been removed from the leaf list when the parent was
throttled, but will not be added back because nr_running > 1:

[   87.428285] CPU1 path=/machine.slice/machine-test.slice/machine-qemu\x2d14\x2dtest11. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2621]

This one was removed when throttled, but has already been added back by a
previous enqueue_task:

[   87.428288] CPU1 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 2/KVM 2621]

This one was also added back during a previous enqueue_task on the
throttled child above:

[   87.428291] CPU1 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 2/KVM 2621]
[   87.428301] CPU1 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2621]

After we have added the 1st cgroup back, we don't add the other cfs_rqs
needed to finish the full hierarchy.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-05 12:12                                     ` Dietmar Eggemann
@ 2020-03-05 12:33                                       ` Vincent Guittot
  2020-03-05 12:48                                         ` Christian Borntraeger
  0 siblings, 1 reply; 28+ messages in thread
From: Vincent Guittot @ 2020-03-05 12:33 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: Christian Borntraeger, Ingo Molnar, Peter Zijlstra, linux-kernel

On Thursday 05 March 2020 at 13:12:39 (+0100), Dietmar Eggemann wrote:
> On 05/03/2020 12:28, Christian Borntraeger wrote:
> > 
> > On 05.03.20 10:30, Vincent Guittot wrote:
> >> On Wednesday 04 March 2020 at 20:59:33 (+0100), Christian Borntraeger wrote:
> >>>
> >>> On 04.03.20 20:38, Christian Borntraeger wrote:
> >>>>
> >>>>
> >>>> On 04.03.20 20:19, Dietmar Eggemann wrote:
> 
> [...]
> 
> > It seems to speed up the issue when I do a compile job in parallel on the host:
> > 
> > Do you also need the sysfs tree?
> 
> [   87.932552] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
> [   87.932559] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2662]
> [   87.932562] CPU23 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 2/KVM 2662]
> [   87.932564] CPU23 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 2/KVM 2662]
> [   87.932566] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
> [   87.951872] CPU23 path=/ on_list=1 nr_running=2 throttled=0 p=[ksoftirqd/23 126]
> [   87.987528] CPU23 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 6737]
> [   87.987533] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[as 6737]
> 
> Arrh, looks like 'char path[64]' is too small to hold 'machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10.scope/vcpuX' !
>                                                                                                                     ^  
> But I guess that the 'on_list=0' for 'machine-qemu\x2d18\x2dtest10.scope' could be the missing hint?

yes, the if (cfs_bandwidth_used()) check at the end of enqueue_task_fair is not
enough to ensure that all cfs_rqs will be added back. It will "work" for the 1st
enqueue, because the throttled cfs_rq will be added and will reset
tmp_alone_branch, but not for the next one.

Compared to the previously proposed fix, we can optimize it a bit with:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9ccde775e02e..3b19e508641d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4035,10 +4035,16 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
                __enqueue_entity(cfs_rq, se);
        se->on_rq = 1;

-       if (cfs_rq->nr_running == 1) {
+       /*
+        * When bandwidth control is enabled, a cfs_rq might have been removed
+        * because of a parent being throttled while cfs_rq->nr_running > 1.
+        * Try to add it back unconditionally.
+        */
+       if (cfs_rq->nr_running == 1 || cfs_bandwidth_used())
                list_add_leaf_cfs_rq(cfs_rq);
+
+       if (cfs_rq->nr_running == 1)
                check_enqueue_throttle(cfs_rq);
-       }
 }

 static void __clear_buddies_last(struct sched_entity *se)





^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-05 12:33                                       ` Vincent Guittot
@ 2020-03-05 12:48                                         ` Christian Borntraeger
  2020-03-05 13:02                                           ` Vincent Guittot
  2020-03-05 13:18                                           ` Christian Borntraeger
  0 siblings, 2 replies; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-05 12:48 UTC (permalink / raw)
  To: Vincent Guittot, Dietmar Eggemann
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel



On 05.03.20 13:33, Vincent Guittot wrote:
> Le jeudi 05 mars 2020 à 13:12:39 (+0100), Dietmar Eggemann a écrit :
>> On 05/03/2020 12:28, Christian Borntraeger wrote:
>>>
>>> On 05.03.20 10:30, Vincent Guittot wrote:
>>>> Le mercredi 04 mars 2020 à 20:59:33 (+0100), Christian Borntraeger a écrit :
>>>>>
>>>>> On 04.03.20 20:38, Christian Borntraeger wrote:
>>>>>>
>>>>>>
>>>>>> On 04.03.20 20:19, Dietmar Eggemann wrote:
>>
>> [...]
>>
>>> It seems to speed up the issue when I do a compile job in parallel on the host:
>>>
>>> Do you also need the sysfs tree?
>>
>> [   87.932552] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
>> [   87.932559] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2662]
>> [   87.932562] CPU23 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 2/KVM 2662]
>> [   87.932564] CPU23 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 2/KVM 2662]
>> [   87.932566] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
>> [   87.951872] CPU23 path=/ on_list=1 nr_running=2 throttled=0 p=[ksoftirqd/23 126]
>> [   87.987528] CPU23 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 6737]
>> [   87.987533] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[as 6737]
>>
>> Arrh, looks like 'char path[64]' is too small to hold 'machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10.scope/vcpuX' !
>>                                                                                                                     ^  
>> But I guess that the 'on_list=0' for 'machine-qemu\x2d18\x2dtest10.scope' could be the missing hint?
> 
> yes, the if (cfs_bandwidth_used()) check at the end of enqueue_task_fair is not
> enough to ensure that all cfs_rqs will be added back. It will "work" for the 1st
> enqueue, because the throttled cfs_rq will be added and will reset
> tmp_alone_branch, but not for the next one.
> 
> Compared to the previously proposed fix, we can optimize it a bit with:
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9ccde775e02e..3b19e508641d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4035,10 +4035,16 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>                 __enqueue_entity(cfs_rq, se);
>         se->on_rq = 1;
> 
> -       if (cfs_rq->nr_running == 1) {
> +       /*
> +        * When bandwidth control is enabled, a cfs_rq might have been removed
> +        * because of a parent being throttled while cfs_rq->nr_running > 1.
> +        * Try to add it back unconditionally.
> +        */
> +       if (cfs_rq->nr_running == 1 || cfs_bandwidth_used())

This needs a forward declaration for cfs_bandwidth_used, but with that it compiles fine
and it seems to work fine so far. Will keep it running for a while.

>                 list_add_leaf_cfs_rq(cfs_rq);
> +
> +       if (cfs_rq->nr_running == 1)
>                 check_enqueue_throttle(cfs_rq);
> -       }
>  }
> 
>  static void __clear_buddies_last(struct sched_entity *se)


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-05 12:48                                         ` Christian Borntraeger
@ 2020-03-05 13:02                                           ` Vincent Guittot
  2020-03-05 13:18                                           ` Christian Borntraeger
  1 sibling, 0 replies; 28+ messages in thread
From: Vincent Guittot @ 2020-03-05 13:02 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, linux-kernel

On Thu, 5 Mar 2020 at 13:49, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
>
>
> On 05.03.20 13:33, Vincent Guittot wrote:
> > Le jeudi 05 mars 2020 à 13:12:39 (+0100), Dietmar Eggemann a écrit :
> >> On 05/03/2020 12:28, Christian Borntraeger wrote:
> >>>
> >>> On 05.03.20 10:30, Vincent Guittot wrote:
> >>>> Le mercredi 04 mars 2020 à 20:59:33 (+0100), Christian Borntraeger a écrit :
> >>>>>
> >>>>> On 04.03.20 20:38, Christian Borntraeger wrote:
> >>>>>>
> >>>>>>
> >>>>>> On 04.03.20 20:19, Dietmar Eggemann wrote:
> >>
> >> [...]
> >>
> >>> It seems to speed up the issue when I do a compile job in parallel on the host:
> >>>
> >>> Do you also need the sysfs tree?
> >>
> >> [   87.932552] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
> >> [   87.932559] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2662]
> >> [   87.932562] CPU23 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 2/KVM 2662]
> >> [   87.932564] CPU23 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 2/KVM 2662]
> >> [   87.932566] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
> >> [   87.951872] CPU23 path=/ on_list=1 nr_running=2 throttled=0 p=[ksoftirqd/23 126]
> >> [   87.987528] CPU23 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 6737]
> >> [   87.987533] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[as 6737]
> >>
> >> Arrh, looks like 'char path[64]' is too small to hold 'machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10.scope/vcpuX' !
> >>                                                                                                                     ^
> >> But I guess that the 'on_list=0' for 'machine-qemu\x2d18\x2dtest10.scope' could be the missing hint?
> >
> > yes, the if (cfs_bandwidth_used()) check at the end of enqueue_task_fair is not
> > enough to ensure that all cfs_rqs will be added back. It will "work" for the 1st
> > enqueue, because the throttled cfs_rq will be added and will reset
> > tmp_alone_branch, but not for the next one.
> >
> > Compared to the previously proposed fix, we can optimize it a bit with:
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 9ccde775e02e..3b19e508641d 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -4035,10 +4035,16 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >                 __enqueue_entity(cfs_rq, se);
> >         se->on_rq = 1;
> >
> > -       if (cfs_rq->nr_running == 1) {
> > +       /*
> > +        * When bandwidth control is enabled, a cfs_rq might have been removed
> > +        * because of a parent being throttled while cfs_rq->nr_running > 1.
> > +        * Try to add it back unconditionally.
> > +        */
> > +       if (cfs_rq->nr_running == 1 || cfs_bandwidth_used())
>
> This needs a forward declaration for cfs_bandwidth_used, but with that it compiles fine
> and it seems to work fine so far. Will keep it running for a while.

ok. Thanks

>
> >                 list_add_leaf_cfs_rq(cfs_rq);
> > +
> > +       if (cfs_rq->nr_running == 1)
> >                 check_enqueue_throttle(cfs_rq);
> > -       }
> >  }
> >
> >  static void __clear_buddies_last(struct sched_entity *se)
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
  2020-03-05 12:48                                         ` Christian Borntraeger
  2020-03-05 13:02                                           ` Vincent Guittot
@ 2020-03-05 13:18                                           ` Christian Borntraeger
  1 sibling, 0 replies; 28+ messages in thread
From: Christian Borntraeger @ 2020-03-05 13:18 UTC (permalink / raw)
  To: Vincent Guittot, Dietmar Eggemann
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel



On 05.03.20 13:48, Christian Borntraeger wrote:
> 
> 
> On 05.03.20 13:33, Vincent Guittot wrote:
>> Le jeudi 05 mars 2020 à 13:12:39 (+0100), Dietmar Eggemann a écrit :
>>> On 05/03/2020 12:28, Christian Borntraeger wrote:
>>>>
>>>> On 05.03.20 10:30, Vincent Guittot wrote:
>>>>> Le mercredi 04 mars 2020 à 20:59:33 (+0100), Christian Borntraeger a écrit :
>>>>>>
>>>>>> On 04.03.20 20:38, Christian Borntraeger wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 04.03.20 20:19, Dietmar Eggemann wrote:
>>>
>>> [...]
>>>
>>>> It seems to speed up the issue when I do a compile job in parallel on the host:
>>>>
>>>> Do you also need the sysfs tree?
>>>
>>> [   87.932552] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
>>> [   87.932559] CPU23 path=/machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10. on_list=0 nr_running=3 throttled=0 p=[CPU 2/KVM 2662]
>>> [   87.932562] CPU23 path=/machine.slice/machine-test.slice on_list=1 nr_running=1 throttled=1 p=[CPU 2/KVM 2662]
>>> [   87.932564] CPU23 path=/machine.slice on_list=1 nr_running=0 throttled=0 p=[CPU 2/KVM 2662]
>>> [   87.932566] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[CPU 2/KVM 2662]
>>> [   87.951872] CPU23 path=/ on_list=1 nr_running=2 throttled=0 p=[ksoftirqd/23 126]
>>> [   87.987528] CPU23 path=/user.slice on_list=1 nr_running=2 throttled=0 p=[as 6737]
>>> [   87.987533] CPU23 path=/ on_list=1 nr_running=1 throttled=0 p=[as 6737]
>>>
>>> Arrh, looks like 'char path[64]' is too small to hold 'machine.slice/machine-test.slice/machine-qemu\x2d18\x2dtest10.scope/vcpuX' !
>>>                                                                                                                     ^  
>>> But I guess that the 'on_list=0' for 'machine-qemu\x2d18\x2dtest10.scope' could be the missing hint?
>>
>> yes, the if (cfs_bandwidth_used()) check at the end of enqueue_task_fair is not
>> enough to ensure that all cfs_rqs will be added back. It will "work" for the 1st
>> enqueue, because the throttled cfs_rq will be added and will reset
>> tmp_alone_branch, but not for the next one.
>>
>> Compared to the previously proposed fix, we can optimize it a bit with:
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 9ccde775e02e..3b19e508641d 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4035,10 +4035,16 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>>                 __enqueue_entity(cfs_rq, se);
>>         se->on_rq = 1;
>>
>> -       if (cfs_rq->nr_running == 1) {
>> +       /*
>> +        * When bandwidth control is enabled, a cfs_rq might have been removed
>> +        * because of a parent being throttled while cfs_rq->nr_running > 1.
>> +        * Try to add it back unconditionally.
>> +        */
>> +       if (cfs_rq->nr_running == 1 || cfs_bandwidth_used())
> 
> This needs a forward declaration for cfs_bandwidth_used, but with that it compiles fine
> and it seems to work fine so far. Will keep it running for a while.

So I am no longer able to reproduce this issue; it has not triggered in the
last 30 minutes. As I had been able to reproduce the issue pretty quickly in
the latest trials (more guests, more gcc threads), it looks like this patch
fixes the issue. I will keep it running for a day or so, but I think I can
already say:


Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>


^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2020-03-05 13:18 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-28  7:54 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440 Christian Borntraeger
2020-02-28 12:04 ` Christian Borntraeger
2020-02-28 13:32   ` Vincent Guittot
2020-02-28 13:43     ` Christian Borntraeger
2020-02-28 15:08       ` Christian Borntraeger
2020-02-28 15:37         ` Vincent Guittot
2020-02-28 15:42           ` Christian Borntraeger
2020-02-28 16:32             ` Qais Yousef
2020-02-28 16:35             ` Vincent Guittot
2020-03-02 11:16               ` Christian Borntraeger
2020-03-02 18:17                 ` Christian Borntraeger
2020-03-03  7:37                   ` Christian Borntraeger
2020-03-03  7:55                     ` Vincent Guittot
2020-03-04 15:26                       ` Vincent Guittot
2020-03-04 17:42                         ` Christian Borntraeger
2020-03-04 17:51                           ` Vincent Guittot
2020-03-04 19:19                           ` Dietmar Eggemann
2020-03-04 19:38                             ` Christian Borntraeger
2020-03-04 19:59                               ` Christian Borntraeger
2020-03-05  9:30                                 ` Vincent Guittot
2020-03-05 11:28                                   ` Christian Borntraeger
2020-03-05 12:12                                     ` Dietmar Eggemann
2020-03-05 12:33                                       ` Vincent Guittot
2020-03-05 12:48                                         ` Christian Borntraeger
2020-03-05 13:02                                           ` Vincent Guittot
2020-03-05 13:18                                           ` Christian Borntraeger
2020-03-05 12:14                                     ` Vincent Guittot
2020-03-05 11:54                                   ` Dietmar Eggemann

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).