kvm.vger.kernel.org archive mirror
From: bugzilla-daemon@kernel.org
To: kvm@vger.kernel.org
Subject: [Bug 216388] On Host, kernel errors in KVM, on guests, it shows CPU stalls
Date: Thu, 01 Sep 2022 06:09:17 +0000	[thread overview]
Message-ID: <bug-216388-28872-L9iQIQTrXh@https.bugzilla.kernel.org/> (raw)
In-Reply-To: <bug-216388-28872@https.bugzilla.kernel.org/>

https://bugzilla.kernel.org/show_bug.cgi?id=216388

--- Comment #6 from Robert Dinse (nanook@eskimo.com) ---
I installed 5.19.6 on a couple of machines today and am still getting CPU stalls, but in
random locations:

[    6.601788] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 4-... } 3 jiffies s: 53 root: 0x10/.
[    6.601802] rcu: blocking rcu_node structures (internal RCU debug):          
[    6.601806] Task dump for CPU 4:                                             
[    6.601808] task:systemd-udevd   state:R  running task     stack:    0 pid: 468 ppid:   454 flags:0x0000400a
[    6.604313] Call Trace:                                                      
[    6.604324]  <TASK>                                                          
[    6.604326]  ? cpumask_any_but+0x35/0x50                                     
[    6.604336]  ? x2apic_send_IPI_allbutself+0x2f/0x40                          
[    6.604339]  ? do_sync_core+0x2a/0x30                                        
[    6.604342]  ? cpumask_next+0x23/0x30                                        
[    6.604344]  ? smp_call_function_many_cond+0xea/0x370                        
[    6.604347]  ? text_poke_memset+0x20/0x20                                    
[    6.604350]  ? arch_unregister_cpu+0x50/0x50                                 
[    6.604352]  ? on_each_cpu_cond_mask+0x1d/0x30                               
[    6.604354]  ? text_poke_bp_batch+0x1fb/0x210                                
[    6.604358]  ? enter_smm.constprop.0+0x51a/0xa70 [kvm]                       
[    6.604414]  ? vmx_set_cr0+0x16f0/0x16f0 [kvm_intel]                         
[    6.604457]  ? enter_smm.constprop.0+0x519/0xa70 [kvm]                       
[    6.604501]  ? text_poke_bp+0x49/0x70                                        
[    6.604504]  ? __static_call_transform+0x7f/0x120                            
[    6.604506]  ? arch_static_call_transform+0x87/0xa0                          
[    6.604508]  ? enter_smm.constprop.0+0x519/0xa70 [kvm]                       
[    6.604552]  ? __static_call_update+0x16e/0x220                              
[    6.604554]  ? vmx_set_cr0+0x16f0/0x16f0 [kvm_intel]                         
[    6.604567]  ? kvm_arch_hardware_setup+0x35a/0x17f0 [kvm]                    
[    6.604611]  ? __kmalloc_node+0x16c/0x380                                    
[    6.604615]  ? kvm_init+0xa2/0x400 [kvm]                                     
[    6.604654]  ? hardware_setup+0x7e2/0x8cc [kvm_intel]                        
[    6.604666]  ? vmx_init+0xf9/0x201 [kvm_intel]                               
[    6.604676]  ? hardware_setup+0x8cc/0x8cc [kvm_intel]                        
[    6.604685]  ? do_one_initcall+0x47/0x1e0                                    
[    6.604689]  ? kmem_cache_alloc_trace+0x16c/0x2b0                            
[    6.604692]  ? do_init_module+0x50/0x1f0                                     
[    6.604694]  ? load_module+0x21bd/0x25e0                                     
[    6.604696]  ? ima_post_read_file+0xd5/0x100                                 
[    6.604700]  ? kernel_read_file+0x23d/0x2e0                                  
[    6.604703]  ? __do_sys_finit_module+0xbd/0x130                              
[    6.604705]  ? __do_sys_finit_module+0xbd/0x130                              
[    6.604708]  ? __x64_sys_finit_module+0x18/0x20                              
[    6.604710]  ? do_syscall_64+0x58/0x80                                       
[    6.604713]  ? syscall_exit_to_user_mode+0x1b/0x40                           
[    6.604715]  ? do_syscall_64+0x67/0x80                                       
[    6.604718]  ? switch_fpu_return+0x4e/0xc0                                   
[    6.604720]  ? exit_to_user_mode_prepare+0x184/0x1e0                         
[    6.604723]  ? syscall_exit_to_user_mode+0x1b/0x40                           
[    6.604725]  ? do_syscall_64+0x67/0x80                                       
[    6.604728]  ? do_syscall_64+0x67/0x80                                       
[    6.604730]  ? do_syscall_64+0x67/0x80                                       
[    6.604732]  ? sysvec_call_function+0x4b/0xa0                                
[    6.604735]  ? entry_SYSCALL_64_after_hwframe+0x63/0xcd                      
[    6.604739]  </TASK>     

[    6.697044] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 4-... } 13 jiffies s: 53 root: 0x10/.
[    6.697051] rcu: blocking rcu_node structures (internal RCU debug):          
[    6.697052] Task dump for CPU 4:                                             
[    6.697053] task:systemd-udevd   state:R  running task     stack:    0 pid: 468 ppid:   454 flags:0x0000400a
[    6.697057] Call Trace:                                                      
[    6.697058]  <TASK>                                                          
[    6.697059]  ? cpumask_any_but+0x35/0x50                                     
[    6.697065]  ? x2apic_send_IPI_allbutself+0x2f/0x40                          
[    6.697068]  ? do_sync_core+0x2a/0x30                                        
[    6.697071]  ? cpumask_next+0x23/0x30                                        
[    6.697072]  ? smp_call_function_many_cond+0xea/0x370                        
[    6.697075]  ? text_poke_memset+0x20/0x20                                    
[    6.697077]  ? arch_unregister_cpu+0x50/0x50                                 
[    6.697080]  ? on_each_cpu_cond_mask+0x1d/0x30                               
[    6.697081]  ? text_poke_bp_batch+0x1fb/0x210                                
[    6.697084]  ? kvm_set_msr_common+0x939/0x1060 [kvm]                         
[    6.697133]  ? vmx_set_efer.part.0+0x160/0x160 [kvm_intel]                   
[    6.697147]  ? kvm_set_msr_common+0x938/0x1060 [kvm]                         
[    6.697187]  ? text_poke_bp+0x49/0x70                                        
[    6.697189]  ? __static_call_transform+0x7f/0x120                            
[    6.697191]  ? arch_static_call_transform+0x87/0xa0                          
[    6.697193]  ? kvm_set_msr_common+0x938/0x1060 [kvm]                         
[    6.697234]  ? __static_call_update+0x16e/0x220                              
[    6.697236]  ? vmx_set_efer.part.0+0x160/0x160 [kvm_intel]                   
[    6.697246]  ? kvm_arch_hardware_setup+0x423/0x17f0 [kvm]                    
[    6.697286]  ? __kmalloc_node+0x16c/0x380                                    
[    6.697290]  ? kvm_init+0xa2/0x400 [kvm]                                     
[    6.697326]  ? hardware_setup+0x7e2/0x8cc [kvm_intel]                        
[    6.697336]  ? vmx_init+0xf9/0x201 [kvm_intel]                               
[    6.697345]  ? hardware_setup+0x8cc/0x8cc [kvm_intel]                        
[    6.697353]  ? do_one_initcall+0x47/0x1e0                                    
[    6.697356]  ? kmem_cache_alloc_trace+0x16c/0x2b0                            
[    6.697359]  ? do_init_module+0x50/0x1f0                                     
[    6.697360]  ? load_module+0x21bd/0x25e0                                     
[    6.697362]  ? ima_post_read_file+0xd5/0x100                                 
[    6.697365]  ? kernel_read_file+0x23d/0x2e0                                  
[    6.697368]  ? __do_sys_finit_module+0xbd/0x130                              
[    6.697370]  ? __do_sys_finit_module+0xbd/0x130                              
[    6.697372]  ? __x64_sys_finit_module+0x18/0x20                              
[    6.697373]  ? do_syscall_64+0x58/0x80                                       
[    6.697376]  ? syscall_exit_to_user_mode+0x1b/0x40
[    6.697377]  ? do_syscall_64+0x67/0x80
[    6.697379]  ? switch_fpu_return+0x4e/0xc0
[    6.697382]  ? exit_to_user_mode_prepare+0x184/0x1e0
[    6.697384]  ? syscall_exit_to_user_mode+0x1b/0x40
[    6.697386]  ? do_syscall_64+0x67/0x80
[    6.697387]  ? do_syscall_64+0x67/0x80
[    6.697389]  ? do_syscall_64+0x67/0x80
[    6.697391]  ? sysvec_call_function+0x4b/0xa0
[    6.697393]  ? entry_SYSCALL_64_after_hwframe+0x63/0xcd
[    6.697397]  </TASK>

[    6.798781] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 4-... } 23 jiffies s: 53 root: 0x10/.
[    6.798787] rcu: blocking rcu_node structures (internal RCU debug):
[    6.798833] Task dump for CPU 4:
[    6.798952] task:systemd-udevd   state:R  running task     stack:    0 pid: 468 ppid:   454 flags:0x0000400a
[    6.798957] Call Trace:
[    6.798959]  <TASK>
[    6.798960]  ? cpumask_any_but+0x35/0x50
[    6.798967]  ? x2apic_send_IPI_allbutself+0x2f/0x40
[    6.798969]  ? do_sync_core+0x2a/0x30
[    6.800010]  ? cpumask_next+0x23/0x30
[    6.800014]  ? smp_call_function_many_cond+0xea/0x370
[    6.800017]  ? text_poke_memset+0x20/0x20
[    6.800019]  ? arch_unregister_cpu+0x50/0x50
[    6.800024]  ? __SCT__kvm_x86_set_rflags+0x8/0x8 [kvm]
[    6.800096]  ? vmx_get_rflags+0x130/0x130 [kvm_intel]
[    6.800109]  ? on_each_cpu_cond_mask+0x1d/0x30
[    6.800110]  ? text_poke_bp_batch+0xaf/0x210
[    6.800113]  ? vmx_get_rflags+0x130/0x130 [kvm_intel]
[    6.800121]  ? __SCT__kvm_x86_set_rflags+0x8/0x8 [kvm]
[    6.800172]  ? vmx_get_rflags+0x130/0x130 [kvm_intel]
[    6.800180]  ? text_poke_bp+0x49/0x70
[    6.800182]  ? __static_call_transform+0x7f/0x120
[    6.800183]  ? arch_static_call_transform+0x58/0xa0
[    6.800185]  ? __SCT__kvm_x86_set_rflags+0x8/0x8 [kvm]
[    6.800233]  ? __static_call_update+0x62/0x220
[    6.800235]  ? vmx_get_rflags+0x130/0x130 [kvm_intel]
[    6.800243]  ? kvm_arch_hardware_setup+0x581/0x17f0 [kvm]
[    6.800284]  ? __kmalloc_node+0x16c/0x380
[    6.800288]  ? kvm_init+0xa2/0x400 [kvm]
[    6.800324]  ? hardware_setup+0x7e2/0x8cc [kvm_intel]
[    6.800334]  ? vmx_init+0xf9/0x201 [kvm_intel]
[    6.800342]  ? hardware_setup+0x8cc/0x8cc [kvm_intel]
[    6.800350]  ? do_one_initcall+0x47/0x1e0
[    6.800352]  ? kmem_cache_alloc_trace+0x16c/0x2b0
[    6.800355]  ? do_init_module+0x50/0x1f0
[    6.800357]  ? load_module+0x21bd/0x25e0
[    6.800358]  ? ima_post_read_file+0xd5/0x100
[    6.800361]  ? kernel_read_file+0x23d/0x2e0
[    6.800364]  ? __do_sys_finit_module+0xbd/0x130
[    6.800365]  ? __do_sys_finit_module+0xbd/0x130
[    6.800368]  ? __x64_sys_finit_module+0x18/0x20
[    6.800369]  ? do_syscall_64+0x58/0x80
[    6.800371]  ? syscall_exit_to_user_mode+0x1b/0x40
[    6.800373]  ? do_syscall_64+0x67/0x80
[    6.800375]  ? switch_fpu_return+0x4e/0xc0
[    6.800377]  ? exit_to_user_mode_prepare+0x184/0x1e0
[    6.800379]  ? syscall_exit_to_user_mode+0x1b/0x40
[    6.800380]  ? do_syscall_64+0x67/0x80
[    6.800382]  ? do_syscall_64+0x67/0x80
[    6.800384]  ? do_syscall_64+0x67/0x80
[    6.800385]  ? sysvec_call_function+0x4b/0xa0
[    6.800387]  ? entry_SYSCALL_64_after_hwframe+0x63/0xcd
[    6.800391]  </TASK>

Are these related, or should I open a new ticket? These occurred right after boot.

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.


Thread overview: 27+ messages
2022-08-21  7:37 [Bug 216388] New: On Host, kernel errors in KVM, on guests, it shows CPU stalls bugzilla-daemon
2022-08-22 17:50 ` Sean Christopherson
2022-08-22 23:21   ` Zhenyu Wang
2022-08-22 17:50 ` [Bug 216388] " bugzilla-daemon
2022-08-22 23:46 ` bugzilla-daemon
2022-08-23  0:57 ` bugzilla-daemon
2022-08-27 19:42 ` bugzilla-daemon
2022-08-28 21:08 ` bugzilla-daemon
2022-09-01  6:09 ` bugzilla-daemon [this message]
2022-09-01 16:44   ` Sean Christopherson
2022-09-01 16:44 ` bugzilla-daemon
2022-09-01 19:46 ` bugzilla-daemon
2022-09-01 21:37 ` bugzilla-daemon
2022-09-02  5:46 ` bugzilla-daemon
2022-09-02  8:36 ` bugzilla-daemon
2022-09-03  1:37 ` bugzilla-daemon
2022-09-03  2:03 ` bugzilla-daemon
2022-09-03  5:31 ` bugzilla-daemon
2022-09-03  5:37 ` bugzilla-daemon
2022-09-06 15:52   ` Sean Christopherson
2022-09-04  4:17 ` bugzilla-daemon
2022-09-04  5:41 ` bugzilla-daemon
2022-09-05  4:06 ` bugzilla-daemon
2022-09-06 15:52 ` bugzilla-daemon
2022-09-06 21:44 ` bugzilla-daemon
2022-09-17 19:53 ` bugzilla-daemon
2022-09-17 20:23 ` bugzilla-daemon

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=bug-216388-28872-L9iQIQTrXh@https.bugzilla.kernel.org/ \
    --to=bugzilla-daemon@kernel.org \
    --cc=kvm@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).