* [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4
@ 2023-03-22 4:58 Binbin Wu
2023-03-22 4:58 ` [PATCH 1/4] KVM: x86: Add helpers to check bit set in CR0/CR4 and return in bool Binbin Wu
` (4 more replies)
0 siblings, 5 replies; 9+ messages in thread
From: Binbin Wu @ 2023-03-22 4:58 UTC (permalink / raw)
To: kvm, seanjc, pbonzini; +Cc: binbin.wu, robert.hu
Add two helpers
- kvm_is_cr0_bit_set()
- kvm_is_cr4_bit_set()
to check whether a specific bit is set in CR0/CR4 and return the result as a bool.
Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set() where applicable.
Also change the return type of is_pae(), is_pse(), is_paging() and is_long_mode()
to bool.
No functional change intended.
This patch series originated from the cleanup code of Robert Hu's KVM LAM
enabling patchset
(https://lore.kernel.org/all/20230227084547.404871-3-robert.hu@linux.intel.com),
and was suggested by Sean Christopherson during code review
(https://lore.kernel.org/all/ZAuRec2NkC3+4jvD@google.com).
Binbin Wu (4):
KVM: x86: Add helpers to check bit set in CR0/CR4 and return in bool
KVM: x86: Replace kvm_read_{cr0,cr4}_bits() with
kvm_is_{cr0,cr4}_bit_set()
KVM: SVM: Remove implicit cast from ulong to bool in
svm_can_emulate_instruction()
KVM: x86: Change return type of is_long_mode() to bool
arch/x86/kvm/cpuid.c | 4 ++--
arch/x86/kvm/kvm_cache_regs.h | 16 ++++++++++++++++
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/svm/svm.c | 9 +++------
arch/x86/kvm/vmx/nested.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 2 +-
arch/x86/kvm/x86.c | 20 ++++++++++----------
arch/x86/kvm/x86.h | 22 +++++++++++-----------
8 files changed, 45 insertions(+), 32 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH 1/4] KVM: x86: Add helpers to check bit set in CR0/CR4 and return in bool
2023-03-22 4:58 [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4 Binbin Wu
@ 2023-03-22 4:58 ` Binbin Wu
2023-03-22 4:58 ` [PATCH 2/4] KVM: x86: Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set() Binbin Wu
` (3 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Binbin Wu @ 2023-03-22 4:58 UTC (permalink / raw)
To: kvm, seanjc, pbonzini; +Cc: binbin.wu, robert.hu
Add helpers kvm_is_cr0_bit_set()/kvm_is_cr4_bit_set() to check whether a
specific bit is set in a vCPU's CR0/CR4 and return the result as a bool.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
arch/x86/kvm/kvm_cache_regs.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 4c91f626c058..f2bf930a3cd4 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -157,6 +157,14 @@ static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
return vcpu->arch.cr0 & mask;
}
+static __always_inline bool kvm_is_cr0_bit_set(struct kvm_vcpu *vcpu,
+ unsigned long cr0_bit)
+{
+ BUILD_BUG_ON(!is_power_of_2(cr0_bit));
+
+ return !!kvm_read_cr0_bits(vcpu, cr0_bit);
+}
+
static inline ulong kvm_read_cr0(struct kvm_vcpu *vcpu)
{
return kvm_read_cr0_bits(vcpu, ~0UL);
@@ -171,6 +179,14 @@ static inline ulong kvm_read_cr4_bits(struct kvm_vcpu *vcpu, ulong mask)
return vcpu->arch.cr4 & mask;
}
+static __always_inline bool kvm_is_cr4_bit_set(struct kvm_vcpu *vcpu,
+ unsigned long cr4_bit)
+{
+ BUILD_BUG_ON(!is_power_of_2(cr4_bit));
+
+ return !!kvm_read_cr4_bits(vcpu, cr4_bit);
+}
+
static inline ulong kvm_read_cr3(struct kvm_vcpu *vcpu)
{
if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
--
2.25.1
* [PATCH 2/4] KVM: x86: Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set()
2023-03-22 4:58 [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4 Binbin Wu
2023-03-22 4:58 ` [PATCH 1/4] KVM: x86: Add helpers to check bit set in CR0/CR4 and return in bool Binbin Wu
@ 2023-03-22 4:58 ` Binbin Wu
2023-03-22 17:21 ` Sean Christopherson
2023-03-22 4:58 ` [PATCH 3/4] KVM: SVM: Remove implicit cast from ulong to bool in svm_can_emulate_instruction() Binbin Wu
` (2 subsequent siblings)
4 siblings, 1 reply; 9+ messages in thread
From: Binbin Wu @ 2023-03-22 4:58 UTC (permalink / raw)
To: kvm, seanjc, pbonzini; +Cc: binbin.wu, robert.hu
Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set() where only a
single bit is checked and a bool return value is preferred.
Also change the return type of is_pae(), is_pse() and is_paging() from int to
bool.
vmx_set_cr0() uses kvm_read_cr0_bits() to read X86_CR0_PG into an unsigned
long; keep that usage as is.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
arch/x86/kvm/cpuid.c | 4 ++--
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/vmx/nested.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 2 +-
arch/x86/kvm/x86.c | 20 ++++++++++----------
arch/x86/kvm/x86.h | 16 ++++++++--------
6 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 9583a110cf5f..bc767b0fabea 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -266,7 +266,7 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
/* Update OSXSAVE bit */
if (boot_cpu_has(X86_FEATURE_XSAVE))
cpuid_entry_change(best, X86_FEATURE_OSXSAVE,
- kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE));
+ kvm_is_cr4_bit_set(vcpu, X86_CR4_OSXSAVE));
cpuid_entry_change(best, X86_FEATURE_APIC,
vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE);
@@ -275,7 +275,7 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
best = cpuid_entry2_find(entries, nent, 7, 0);
if (best && boot_cpu_has(X86_FEATURE_PKU) && best->function == 0x7)
cpuid_entry_change(best, X86_FEATURE_OSPKE,
- kvm_read_cr4_bits(vcpu, X86_CR4_PKE));
+ kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE));
best = cpuid_entry2_find(entries, nent, 0xD, 0);
if (best)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 168c46fd8dd1..89f532516a45 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -132,7 +132,7 @@ static inline unsigned long kvm_get_pcid(struct kvm_vcpu *vcpu, gpa_t cr3)
{
BUILD_BUG_ON((X86_CR3_PCID_MASK & PAGE_MASK) != 0);
- return kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE)
+ return kvm_is_cr4_bit_set(vcpu, X86_CR4_PCIDE)
? cr3 & X86_CR3_PCID_MASK
: 0;
}
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f63b28f46a71..7ac6bae6ba0c 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5154,7 +5154,7 @@ static int handle_vmxon(struct kvm_vcpu *vcpu)
* does force CR0.PE=1, but only to also force VM86 in order to emulate
* Real Mode, and so there's no need to check CR0.PE manually.
*/
- if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE)) {
+ if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_VMXE)) {
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d7bf14abdba1..2a29fd6edb30 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5180,7 +5180,7 @@ bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu)
if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
return true;
- return vmx_get_cpl(vcpu) == 3 && kvm_read_cr0_bits(vcpu, X86_CR0_AM) &&
+ return vmx_get_cpl(vcpu) == 3 && kvm_is_cr0_bit_set(vcpu, X86_CR0_AM) &&
(kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 237c483b1230..324fcf78a4b9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -841,7 +841,7 @@ bool kvm_require_cpl(struct kvm_vcpu *vcpu, int required_cpl)
bool kvm_require_dr(struct kvm_vcpu *vcpu, int dr)
{
- if ((dr != 4 && dr != 5) || !kvm_read_cr4_bits(vcpu, X86_CR4_DE))
+ if ((dr != 4 && dr != 5) || !kvm_is_cr4_bit_set(vcpu, X86_CR4_DE))
return true;
kvm_queue_exception(vcpu, UD_VECTOR);
@@ -965,7 +965,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
return 1;
if (!(cr0 & X86_CR0_PG) &&
- (is_64_bit_mode(vcpu) || kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE)))
+ (is_64_bit_mode(vcpu) || kvm_is_cr4_bit_set(vcpu, X86_CR4_PCIDE)))
return 1;
static_call(kvm_x86_set_cr0)(vcpu, cr0);
@@ -987,7 +987,7 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
if (vcpu->arch.guest_state_protected)
return;
- if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
+ if (kvm_is_cr4_bit_set(vcpu, X86_CR4_OSXSAVE)) {
if (vcpu->arch.xcr0 != host_xcr0)
xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
@@ -1001,7 +1001,7 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
if (static_cpu_has(X86_FEATURE_PKU) &&
vcpu->arch.pkru != vcpu->arch.host_pkru &&
((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
- kvm_read_cr4_bits(vcpu, X86_CR4_PKE)))
+ kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE)))
write_pkru(vcpu->arch.pkru);
#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
}
@@ -1015,14 +1015,14 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
if (static_cpu_has(X86_FEATURE_PKU) &&
((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
- kvm_read_cr4_bits(vcpu, X86_CR4_PKE))) {
+ kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE))) {
vcpu->arch.pkru = rdpkru();
if (vcpu->arch.pkru != vcpu->arch.host_pkru)
write_pkru(vcpu->arch.host_pkru);
}
#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
- if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
+ if (kvm_is_cr4_bit_set(vcpu, X86_CR4_OSXSAVE)) {
if (vcpu->arch.xcr0 != host_xcr0)
xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
@@ -1227,7 +1227,7 @@ static void kvm_invalidate_pcid(struct kvm_vcpu *vcpu, unsigned long pcid)
* PCIDs for them are also 0, because MOV to CR3 always flushes the TLB
* with PCIDE=0.
*/
- if (!kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
+ if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_PCIDE))
return;
for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
@@ -1242,7 +1242,7 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
bool skip_tlb_flush = false;
unsigned long pcid = 0;
#ifdef CONFIG_X86_64
- bool pcid_enabled = kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE);
+ bool pcid_enabled = kvm_is_cr4_bit_set(vcpu, X86_CR4_PCIDE);
if (pcid_enabled) {
skip_tlb_flush = cr3 & X86_CR3_PCID_NOFLUSH;
@@ -5033,7 +5033,7 @@ static int kvm_vcpu_ioctl_x86_set_mce(struct kvm_vcpu *vcpu,
return 0;
if (mce->status & MCI_STATUS_UC) {
if ((vcpu->arch.mcg_status & MCG_STATUS_MCIP) ||
- !kvm_read_cr4_bits(vcpu, X86_CR4_MCE)) {
+ !kvm_is_cr4_bit_set(vcpu, X86_CR4_MCE)) {
kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
return 0;
}
@@ -13236,7 +13236,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
return 1;
}
- pcid_enabled = kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE);
+ pcid_enabled = kvm_is_cr4_bit_set(vcpu, X86_CR4_PCIDE);
switch (type) {
case INVPCID_TYPE_INDIV_ADDR:
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index a8167b47b8c8..577b82358529 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -123,7 +123,7 @@ static inline bool kvm_exception_is_soft(unsigned int nr)
static inline bool is_protmode(struct kvm_vcpu *vcpu)
{
- return kvm_read_cr0_bits(vcpu, X86_CR0_PE);
+ return kvm_is_cr0_bit_set(vcpu, X86_CR0_PE);
}
static inline int is_long_mode(struct kvm_vcpu *vcpu)
@@ -171,19 +171,19 @@ static inline bool mmu_is_nested(struct kvm_vcpu *vcpu)
return vcpu->arch.walk_mmu == &vcpu->arch.nested_mmu;
}
-static inline int is_pae(struct kvm_vcpu *vcpu)
+static inline bool is_pae(struct kvm_vcpu *vcpu)
{
- return kvm_read_cr4_bits(vcpu, X86_CR4_PAE);
+ return kvm_is_cr4_bit_set(vcpu, X86_CR4_PAE);
}
-static inline int is_pse(struct kvm_vcpu *vcpu)
+static inline bool is_pse(struct kvm_vcpu *vcpu)
{
- return kvm_read_cr4_bits(vcpu, X86_CR4_PSE);
+ return kvm_is_cr4_bit_set(vcpu, X86_CR4_PSE);
}
-static inline int is_paging(struct kvm_vcpu *vcpu)
+static inline bool is_paging(struct kvm_vcpu *vcpu)
{
- return likely(kvm_read_cr0_bits(vcpu, X86_CR0_PG));
+ return likely(kvm_is_cr0_bit_set(vcpu, X86_CR0_PG));
}
static inline bool is_pae_paging(struct kvm_vcpu *vcpu)
@@ -193,7 +193,7 @@ static inline bool is_pae_paging(struct kvm_vcpu *vcpu)
static inline u8 vcpu_virt_addr_bits(struct kvm_vcpu *vcpu)
{
- return kvm_read_cr4_bits(vcpu, X86_CR4_LA57) ? 57 : 48;
+ return kvm_is_cr4_bit_set(vcpu, X86_CR4_LA57) ? 57 : 48;
}
static inline bool is_noncanonical_address(u64 la, struct kvm_vcpu *vcpu)
--
2.25.1
* [PATCH 3/4] KVM: SVM: Remove implicit cast from ulong to bool in svm_can_emulate_instruction()
2023-03-22 4:58 [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4 Binbin Wu
2023-03-22 4:58 ` [PATCH 1/4] KVM: x86: Add helpers to check bit set in CR0/CR4 and return in bool Binbin Wu
2023-03-22 4:58 ` [PATCH 2/4] KVM: x86: Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set() Binbin Wu
@ 2023-03-22 4:58 ` Binbin Wu
2023-03-22 4:58 ` [PATCH 4/4] KVM: x86: Change return type of is_long_mode() to bool Binbin Wu
2023-03-23 22:47 ` [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4 Sean Christopherson
4 siblings, 0 replies; 9+ messages in thread
From: Binbin Wu @ 2023-03-22 4:58 UTC (permalink / raw)
To: kvm, seanjc, pbonzini; +Cc: binbin.wu, robert.hu
Remove the implicit cast from ulong to bool in svm_can_emulate_instruction().
Drop the local vars smep and smap, which are each used only once, and call
kvm_is_cr4_bit_set() directly instead.
It is fine to call kvm_is_cr4_bit_set() twice, since X86_CR4_SMAP and
X86_CR4_SMEP are intercepted and their values are read from the register cache
rather than from the VMCB.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
arch/x86/kvm/svm/svm.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 70183d2271b5..a5b9278f0052 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4544,8 +4544,7 @@ static void svm_enable_smi_window(struct kvm_vcpu *vcpu)
static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
void *insn, int insn_len)
{
- bool smep, smap, is_user;
- unsigned long cr4;
+ bool is_user;
u64 error_code;
/* Emulation is always possible when KVM has access to all guest state. */
@@ -4637,11 +4636,9 @@ static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
if (error_code & (PFERR_GUEST_PAGE_MASK | PFERR_FETCH_MASK))
goto resume_guest;
- cr4 = kvm_read_cr4(vcpu);
- smep = cr4 & X86_CR4_SMEP;
- smap = cr4 & X86_CR4_SMAP;
is_user = svm_get_cpl(vcpu) == 3;
- if (smap && (!smep || is_user)) {
+ if (kvm_is_cr4_bit_set(vcpu, X86_CR4_SMAP) &&
+ (!kvm_is_cr4_bit_set(vcpu, X86_CR4_SMEP) || is_user)) {
pr_err_ratelimited("SEV Guest triggered AMD Erratum 1096\n");
/*
--
2.25.1
* [PATCH 4/4] KVM: x86: Change return type of is_long_mode() to bool
2023-03-22 4:58 [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4 Binbin Wu
` (2 preceding siblings ...)
2023-03-22 4:58 ` [PATCH 3/4] KVM: SVM: Remove implicit cast from ulong to bool in svm_can_emulate_instruction() Binbin Wu
@ 2023-03-22 4:58 ` Binbin Wu
2023-03-22 22:33 ` Huang, Kai
2023-03-23 22:47 ` [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4 Sean Christopherson
4 siblings, 1 reply; 9+ messages in thread
From: Binbin Wu @ 2023-03-22 4:58 UTC (permalink / raw)
To: kvm, seanjc, pbonzini; +Cc: binbin.wu, robert.hu
Change the return type of is_long_mode() to bool to avoid an implicit cast.
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
arch/x86/kvm/x86.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 577b82358529..203fb6640b5b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -126,12 +126,12 @@ static inline bool is_protmode(struct kvm_vcpu *vcpu)
return kvm_is_cr0_bit_set(vcpu, X86_CR0_PE);
}
-static inline int is_long_mode(struct kvm_vcpu *vcpu)
+static inline bool is_long_mode(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_X86_64
- return vcpu->arch.efer & EFER_LMA;
+ return !!(vcpu->arch.efer & EFER_LMA);
#else
- return 0;
+ return false;
#endif
}
--
2.25.1
* Re: [PATCH 2/4] KVM: x86: Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set()
2023-03-22 4:58 ` [PATCH 2/4] KVM: x86: Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set() Binbin Wu
@ 2023-03-22 17:21 ` Sean Christopherson
2023-03-23 1:09 ` Binbin Wu
0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2023-03-22 17:21 UTC (permalink / raw)
To: Binbin Wu; +Cc: kvm, pbonzini, robert.hu
On Wed, Mar 22, 2023, Binbin Wu wrote:
> Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set() when only
> one bit is checked and bool is preferred as return value type.
> Also change the return value type from int to bool of is_pae(), is_pse() and
> is_paging().
I'm going to squash the obvious/direct changes with the introduction of the helpers,
and isolate is_{pae,pse,paging}() as those are more risky due to the multiple
casts (ulong=>int=>bool), and because the end usage isn't visible in the patch.
Case in point, there is a benign bug in svm_set_cr0() that would be silently
fixed by converting is_paging() to return a bool:
bool old_paging = is_paging(vcpu);
...
vcpu->arch.cr0 = cr0;
if (!npt_enabled) {
hcr0 |= X86_CR0_PG | X86_CR0_WP;
if (old_paging != is_paging(vcpu))
The "old_paging != is_paging(vcpu)" compares a bool (1/0) against an int that
was an unsigned long (X86_CR0_PG/0), i.e. gets a false positive when paging is
enabled.
I'll post a fix and slot it in before this patch, both so that there's no silent
fixes and so that this changelog can reference the commit.
> ---
> arch/x86/kvm/cpuid.c | 4 ++--
> arch/x86/kvm/mmu.h | 2 +-
> arch/x86/kvm/vmx/nested.c | 2 +-
> arch/x86/kvm/vmx/vmx.c | 2 +-
> arch/x86/kvm/x86.c | 20 ++++++++++----------
> arch/x86/kvm/x86.h | 16 ++++++++--------
This misses a few conversions in kvm_pmu_rdpmc(), I'll fix those when applying too.
* Re: [PATCH 4/4] KVM: x86: Change return type of is_long_mode() to bool
2023-03-22 4:58 ` [PATCH 4/4] KVM: x86: Change return type of is_long_mode() to bool Binbin Wu
@ 2023-03-22 22:33 ` Huang, Kai
0 siblings, 0 replies; 9+ messages in thread
From: Huang, Kai @ 2023-03-22 22:33 UTC (permalink / raw)
To: kvm, pbonzini, Christopherson,, Sean, binbin.wu; +Cc: robert.hu
On Wed, 2023-03-22 at 12:58 +0800, Binbin Wu wrote:
> Change return type of is_long_mode() to bool to avoid implicit cast.
>
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> ---
> arch/x86/kvm/x86.h | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 577b82358529..203fb6640b5b 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -126,12 +126,12 @@ static inline bool is_protmode(struct kvm_vcpu *vcpu)
> return kvm_is_cr0_bit_set(vcpu, X86_CR0_PE);
> }
>
> -static inline int is_long_mode(struct kvm_vcpu *vcpu)
> +static inline bool is_long_mode(struct kvm_vcpu *vcpu)
> {
> #ifdef CONFIG_X86_64
> - return vcpu->arch.efer & EFER_LMA;
> + return !!(vcpu->arch.efer & EFER_LMA);
> #else
> - return 0;
> + return false;
> #endif
> }
>
Reviewed-by: Kai Huang <kai.huang@intel.com>
* Re: [PATCH 2/4] KVM: x86: Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set()
2023-03-22 17:21 ` Sean Christopherson
@ 2023-03-23 1:09 ` Binbin Wu
0 siblings, 0 replies; 9+ messages in thread
From: Binbin Wu @ 2023-03-23 1:09 UTC (permalink / raw)
To: Sean Christopherson; +Cc: kvm, pbonzini, robert.hu
On 3/23/2023 1:21 AM, Sean Christopherson wrote:
> On Wed, Mar 22, 2023, Binbin Wu wrote:
>> Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set() when only
>> one bit is checked and bool is preferred as return value type.
>> Also change the return value type from int to bool of is_pae(), is_pse() and
>> is_paging().
> I'm going to squash the obvious/direct changes with the introduction of the helpers,
> and isolate is_{pae,pse,paging}() as those are more risky due to the multiple
> casts (ulong=>int=>bool), and because the end usage isn't visible in the patch.
>
> Case in point, there is a benign bug in svm_set_cr0() that would be silently
> fixed by converting is_paging() to return a bool:
>
> bool old_paging = is_paging(vcpu);
>
> ...
>
> vcpu->arch.cr0 = cr0;
>
> if (!npt_enabled) {
> hcr0 |= X86_CR0_PG | X86_CR0_WP;
> if (old_paging != is_paging(vcpu))
>
> The "old_paging != is_paging(vcpu)" compares a bool (1/0) against an int that
> was an unsigned long (X86_CR0_PG/0), i.e. gets a false positive when paging is
> enabled.
>
> I'll post a fix and slot it in before this patch, both so that there's no silent
> fixes and so that this changelog can reference the commit.
OK, thanks.
>
>> ---
>> arch/x86/kvm/cpuid.c | 4 ++--
>> arch/x86/kvm/mmu.h | 2 +-
>> arch/x86/kvm/vmx/nested.c | 2 +-
>> arch/x86/kvm/vmx/vmx.c | 2 +-
>> arch/x86/kvm/x86.c | 20 ++++++++++----------
>> arch/x86/kvm/x86.h | 16 ++++++++--------
> This misses a few conversions in kvm_pmu_rdpmc(), I'll fix those when applying too.
* Re: [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4
2023-03-22 4:58 [PATCH 0/4] Add and use helpers to check bit set in CR0/CR4 Binbin Wu
` (3 preceding siblings ...)
2023-03-22 4:58 ` [PATCH 4/4] KVM: x86: Change return type of is_long_mode() to bool Binbin Wu
@ 2023-03-23 22:47 ` Sean Christopherson
4 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2023-03-23 22:47 UTC (permalink / raw)
To: Sean Christopherson, kvm, pbonzini, Binbin Wu; +Cc: robert.hu
On Wed, 22 Mar 2023 12:58:20 +0800, Binbin Wu wrote:
> Add two helpers
> - kvm_is_cr0_bit_set()
> - kvm_is_cr4_bit_set()
> to do CR0/CR4 check on one specific bit and return the value in bool.
> Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set() when applicable.
>
> Also change return type of is_pae(), is_pse(), is_paging() and is_long_mode()
> to bool.
>
> [...]
Applied to kvm-x86 misc, with a fair bit of massaging to the changelogs.
Thanks much for putting this together!
[1/4] KVM: x86: Add helpers to check bit set in CR0/CR4 and return in bool
https://github.com/kvm-x86/linux/commit/607475cfa0f7
[2/4] KVM: x86: Replace kvm_read_{cr0,cr4}_bits() with kvm_is_{cr0,cr4}_bit_set()
https://github.com/kvm-x86/linux/commit/bede6eb4db19
[3/4] KVM: SVM: Remove implicit cast from ulong to bool in svm_can_emulate_instruction()
https://github.com/kvm-x86/linux/commit/627778bfcfa1
[4/4] KVM: x86: Change return type of is_long_mode() to bool
https://github.com/kvm-x86/linux/commit/68f7c82ab1b8
--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes