* [PATCH] KVM: x86: Only print persistent reasons for kvm disabled once
@ 2019-08-26 18:23 Tony Luck
2019-08-27 6:27 ` Vitaly Kuznetsov
From: Tony Luck @ 2019-08-26 18:23 UTC (permalink / raw)
To: kvm
Cc: Tony Luck, Paolo Bonzini, Radim Krčmář,
Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
Joerg Roedel
When I boot my server I'm treated to a console log with:
[ 40.520510] kvm: disabled by bios
[ 40.551234] kvm: disabled by bios
[ 40.607987] kvm: disabled by bios
[ 40.659701] kvm: disabled by bios
[ 40.691224] kvm: disabled by bios
[ 40.718786] kvm: disabled by bios
[ 40.750122] kvm: disabled by bios
[ 40.797170] kvm: disabled by bios
[ 40.828408] kvm: disabled by bios
... many, many more lines, one for every logical CPU
Since it isn't likely that BIOS is going to suddenly enable
KVM between bringing up one CPU and the next, we might as
well just print this once.
Same for a few other unchanging reasons that might keep
kvm from being initialized.
Signed-off-by: Tony Luck <tony.luck@intel.com>
---
arch/x86/kvm/x86.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 93b0bd45ac73..56d4a43dd2db 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7007,18 +7007,18 @@ int kvm_arch_init(void *opaque)
struct kvm_x86_ops *ops = opaque;
if (kvm_x86_ops) {
- printk(KERN_ERR "kvm: already loaded the other module\n");
+ pr_err_once("kvm: already loaded the other module\n");
r = -EEXIST;
goto out;
}
if (!ops->cpu_has_kvm_support()) {
- printk(KERN_ERR "kvm: no hardware support\n");
+ pr_err_once("kvm: no hardware support\n");
r = -EOPNOTSUPP;
goto out;
}
if (ops->disabled_by_bios()) {
- printk(KERN_ERR "kvm: disabled by bios\n");
+ pr_err_once("kvm: disabled by bios\n");
r = -EOPNOTSUPP;
goto out;
}
@@ -7029,7 +7029,7 @@ int kvm_arch_init(void *opaque)
* vCPU's FPU state as a fxregs_state struct.
*/
if (!boot_cpu_has(X86_FEATURE_FPU) || !boot_cpu_has(X86_FEATURE_FXSR)) {
- printk(KERN_ERR "kvm: inadequate fpu\n");
+ pr_err_once("kvm: inadequate fpu\n");
r = -EOPNOTSUPP;
goto out;
}
--
2.20.1
* Re: [PATCH] KVM: x86: Only print persistent reasons for kvm disabled once
2019-08-26 18:23 [PATCH] KVM: x86: Only print persistent reasons for kvm disabled once Tony Luck
@ 2019-08-27 6:27 ` Vitaly Kuznetsov
2019-08-27 17:50 ` Luck, Tony
2019-08-27 19:08 ` Radim Krčmář
From: Vitaly Kuznetsov @ 2019-08-27 6:27 UTC (permalink / raw)
To: Tony Luck, kvm
Cc: Paolo Bonzini, Radim Krčmář,
Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel
Tony Luck <tony.luck@intel.com> writes:
> When I boot my server I'm treated to a console log with:
>
> [ 40.520510] kvm: disabled by bios
> [ 40.551234] kvm: disabled by bios
> [ 40.607987] kvm: disabled by bios
> [ 40.659701] kvm: disabled by bios
> [ 40.691224] kvm: disabled by bios
> [ 40.718786] kvm: disabled by bios
> [ 40.750122] kvm: disabled by bios
> [ 40.797170] kvm: disabled by bios
> [ 40.828408] kvm: disabled by bios
>
> ... many, many more lines, one for every logical CPU
(If I didn't miss anything) we have the following code:
__init vmx_init()
kvm_init();
kvm_arch_init()
and we bail on first error so there should be only 1 message per module
load attempt. The question I have is who (and why) is trying to load
kvm-intel (or kvm-amd which is not any different) for each CPU? Is it
udev? Can this be changed?
In particular, I'm worried about eVMCS enablement in vmx_init(): we will
also get a bunch of "KVM: vmx: using Hyper-V Enlightened VMCS" messages
if the subsequent kvm_arch_init() fails.
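One plausible answer to "who is loading the module per CPU" is udev coldplug: every logical CPU device exports a MODALIAS uevent, and kvm_intel declares a matching cpu-feature module alias via MODULE_DEVICE_TABLE(x86cpu, ...), so udev may run one modprobe per CPU. A hedged diagnostic sketch to check this on a real x86 host (paths are standard sysfs; guarded so it is harmless elsewhere):

```shell
# Sketch: is udev coldplug the per-CPU loader? Each logical CPU exports a
# MODALIAS, and kvm_intel registers a matching cpu-feature alias, so udev
# would run modprobe once per CPU. Diagnostic only; needs a real x86 host.
uevent=/sys/devices/system/cpu/cpu0/uevent
if [ -r "$uevent" ]; then
    alias=$(sed -n 's/^MODALIAS=//p' "$uevent")
    echo "cpu0 modalias: $alias"
    # Which modules claim this alias? kvm_intel shows up on VMX hardware.
    modprobe --resolve-alias "$alias" 2>/dev/null || true
fi
# The alias kvm_intel itself registers (from its MODULE_DEVICE_TABLE):
modinfo kvm_intel 2>/dev/null | grep '^alias' || true
```

If modprobe resolves the CPU alias to kvm_intel, each coldplugged CPU device triggers a load attempt, matching the one-message-per-CPU log above.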
>
> Since it isn't likely that BIOS is going to suddenly enable
> KVM between bringing up one CPU and the next, we might as
> well just print this once.
>
> Same for a few other unchanging reasons that might keep
> kvm from being initialized.
>
> Signed-off-by: Tony Luck <tony.luck@intel.com>
> ---
> arch/x86/kvm/x86.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 93b0bd45ac73..56d4a43dd2db 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7007,18 +7007,18 @@ int kvm_arch_init(void *opaque)
> struct kvm_x86_ops *ops = opaque;
>
> if (kvm_x86_ops) {
> - printk(KERN_ERR "kvm: already loaded the other module\n");
> + pr_err_once("kvm: already loaded the other module\n");
> r = -EEXIST;
> goto out;
> }
>
> if (!ops->cpu_has_kvm_support()) {
> - printk(KERN_ERR "kvm: no hardware support\n");
> + pr_err_once("kvm: no hardware support\n");
> r = -EOPNOTSUPP;
> goto out;
> }
> if (ops->disabled_by_bios()) {
> - printk(KERN_ERR "kvm: disabled by bios\n");
> + pr_err_once("kvm: disabled by bios\n");
> r = -EOPNOTSUPP;
> goto out;
> }
> @@ -7029,7 +7029,7 @@ int kvm_arch_init(void *opaque)
> * vCPU's FPU state as a fxregs_state struct.
> */
> if (!boot_cpu_has(X86_FEATURE_FPU) || !boot_cpu_has(X86_FEATURE_FXSR)) {
> - printk(KERN_ERR "kvm: inadequate fpu\n");
> + pr_err_once("kvm: inadequate fpu\n");
> r = -EOPNOTSUPP;
> goto out;
> }
The messages themselves may need some love but this is irrelevant to the
patch, so
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
--
Vitaly
* RE: [PATCH] KVM: x86: Only print persistent reasons for kvm disabled once
2019-08-27 6:27 ` Vitaly Kuznetsov
@ 2019-08-27 17:50 ` Luck, Tony
2019-08-27 19:08 ` Radim Krčmář
From: Luck, Tony @ 2019-08-27 17:50 UTC (permalink / raw)
To: Vitaly Kuznetsov, kvm
Cc: Paolo Bonzini, Radim Krčmář, Christopherson, Sean J,
Wanpeng Li, Jim Mattson, Joerg Roedel
> and we bail on first error so there should be only 1 message per module
> load attempt. The question I have is who (and why) is trying to load
> kvm-intel (or kvm-amd which is not any different) for each CPU? Is it
> udev? Can this be changed?
No idea where to look. The system has a RHEL8 user space install. Kernel
is latest from Linus (v5.3-rc6).
-Tony
* Re: [PATCH] KVM: x86: Only print persistent reasons for kvm disabled once
2019-08-27 6:27 ` Vitaly Kuznetsov
2019-08-27 17:50 ` Luck, Tony
@ 2019-08-27 19:08 ` Radim Krčmář
2019-08-27 19:24 ` Sean Christopherson
From: Radim Krčmář @ 2019-08-27 19:08 UTC (permalink / raw)
To: Vitaly Kuznetsov
Cc: Tony Luck, kvm, Paolo Bonzini, Sean Christopherson, Wanpeng Li,
Jim Mattson, Joerg Roedel
2019-08-27 08:27+0200, Vitaly Kuznetsov:
> Tony Luck <tony.luck@intel.com> writes:
>
> > When I boot my server I'm treated to a console log with:
> >
> > [ 40.520510] kvm: disabled by bios
> > [ 40.551234] kvm: disabled by bios
> > [ 40.607987] kvm: disabled by bios
> > [ 40.659701] kvm: disabled by bios
> > [ 40.691224] kvm: disabled by bios
> > [ 40.718786] kvm: disabled by bios
> > [ 40.750122] kvm: disabled by bios
> > [ 40.797170] kvm: disabled by bios
> > [ 40.828408] kvm: disabled by bios
> >
> > ... many, many more lines, one for every logical CPU
>
> (If I didn't miss anything) we have the following code:
>
> __init vmx_init()
> kvm_init();
> kvm_arch_init()
>
> and we bail on first error so there should be only 1 message per module
> load attempt. The question I have is who (and why) is trying to load
> kvm-intel (or kvm-amd which is not any different) for each CPU? Is it
> udev? Can this be changed?
I agree that this is a highly suspicious behavior.
It would be really helpful if we found out what is causing it.
So far, this patch seems to be working around a userspace bug.
> In particular, I'm worried about eVMCS enablement in vmx_init(), we will
> also get a bunch of "KVM: vmx: using Hyper-V Enlightened VMCS" messages
> if the consequent kvm_arch_init() fails.
And we can't get rid of this through the printk_once trick, because this
code lives in the kvm_intel module, which gets unloaded on every
failure.
I am also not inclined to apply the patch as we will likely merge the
kvm and kvm_{svm,intel} modules in the future to take full advantage of
link time optimizations and this patch would stop working after that.
Thanks.
* Re: [PATCH] KVM: x86: Only print persistent reasons for kvm disabled once
2019-08-27 19:08 ` Radim Krčmář
@ 2019-08-27 19:24 ` Sean Christopherson
2019-09-11 15:30 ` Paolo Bonzini
From: Sean Christopherson @ 2019-08-27 19:24 UTC (permalink / raw)
To: Radim Krčmář
Cc: Vitaly Kuznetsov, Tony Luck, kvm, Paolo Bonzini, Wanpeng Li,
Jim Mattson, Joerg Roedel
On Tue, Aug 27, 2019 at 09:08:10PM +0200, Radim Krčmář wrote:
> I am also not inclined to apply the patch as we will likely merge the
> kvm and kvm_{svm,intel} modules in the future to take full advantage of
> link time optimizations and this patch would stop working after that.
Any chance you can provide additional details on the plan for merging
modules? E.g. I assume there would still be kvm_intel and kvm_svm, just
no vanilla kvm?
* Re: [PATCH] KVM: x86: Only print persistent reasons for kvm disabled once
2019-08-27 19:24 ` Sean Christopherson
@ 2019-09-11 15:30 ` Paolo Bonzini
From: Paolo Bonzini @ 2019-09-11 15:30 UTC (permalink / raw)
To: Sean Christopherson, Radim Krčmář
Cc: Vitaly Kuznetsov, Tony Luck, kvm, Wanpeng Li, Jim Mattson, Joerg Roedel
On 27/08/19 21:24, Sean Christopherson wrote:
> On Tue, Aug 27, 2019 at 09:08:10PM +0200, Radim Krčmář wrote:
>> I am also not inclined to apply the patch as we will likely merge the
>> kvm and kvm_{svm,intel} modules in the future to take full advantage of
>> link time optimizations and this patch would stop working after that.
>
> Any chance you can provide additional details on the plan for merging
> modules? E.g. I assume there would still be kvm_intel and kvm_svm, just
> no vanilla kvm?
Yes, that is the idea. Basically trade disk space for performance,
since kvm+kvm_intel+kvm_amd are never loaded together and kvm never has
>1 user.
Paolo