* arch/x86/kvm/mmu/mmu.c:3141:2: warning: 4th function call argument is an uninitialized value [clang-analyzer-core.CallAndMessage]
@ 2021-08-08 6:37 kernel test robot
From: kernel test robot @ 2021-08-08 6:37 UTC
To: kbuild
CC: clang-built-linux@googlegroups.com
CC: kbuild-all@lists.01.org
CC: linux-kernel@vger.kernel.org
TO: Sean Christopherson <seanjc@google.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
head: 85a90500f9a1717c4e142ce92e6c1cb1a339ec78
commit: ec89e643867148ab4a2a856a38717d2e89692be7 KVM: x86/mmu: Bail from fast_page_fault() if SPTE is not shadow-present
date: 5 months ago
:::::: branch date: 13 hours ago
:::::: commit date: 5 months ago
config: x86_64-randconfig-c001-20210806 (attached as .config)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 42b9c2a17a0b63cccf3ac197a82f91b28e53e643)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install x86_64 cross compiling tool for clang build
# apt-get install binutils-x86-64-linux-gnu
# https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ec89e643867148ab4a2a856a38717d2e89692be7
git remote add linus https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
git fetch --no-tags linus master
git checkout ec89e643867148ab4a2a856a38717d2e89692be7
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 clang-analyzer
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
clang-analyzer warnings: (new ones prefixed by >>)
arch/x86/kvm/mmu/mmu.c:3759:2: note: Taking false branch
if (r)
^
arch/x86/kvm/mmu/mmu.c:3762:2: note: Taking false branch
if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
^
arch/x86/kvm/mmu/mmu.c:3766:7: note: Calling '__direct_map'
r = __direct_map(vcpu, gpa, error_code, map_writable, max_level, pfn,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2868:30: note: Assuming 'exec' is false
bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
^~~~
arch/x86/kvm/mmu/mmu.c:2868:35: note: Left side of '&&' is false
bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
^
arch/x86/kvm/mmu/mmu.c:2875:15: note: Assuming the condition is true
if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
^
arch/x86/include/asm/kvm_host.h:113:24: note: expanded from macro 'VALID_PAGE'
#define VALID_PAGE(x) ((x) != INVALID_PAGE)
^
include/asm-generic/bug.h:119:25: note: expanded from macro 'WARN_ON'
int __ret_warn_on = !!(condition); \
^~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2875:6: note: Taking false branch
if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
^
include/asm-generic/bug.h:120:2: note: expanded from macro 'WARN_ON'
if (unlikely(__ret_warn_on)) \
^
arch/x86/kvm/mmu/mmu.c:2875:2: note: Taking false branch
if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
^
arch/x86/kvm/mmu/mmu.c:2882:2: note: Calling 'shadow_walk_init'
for_each_shadow_entry(vcpu, gpa, it) {
^
arch/x86/kvm/mmu/mmu.c:160:7: note: expanded from macro 'for_each_shadow_entry'
for (shadow_walk_init(&(_walker), _vcpu, _addr); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2121:2: note: Calling 'shadow_walk_init_using_root'
shadow_walk_init_using_root(iterator, vcpu, vcpu->arch.mmu->root_hpa,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2097:6: note: Assuming field 'level' is not equal to PT64_ROOT_4LEVEL
if (iterator->level == PT64_ROOT_4LEVEL &&
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2097:42: note: Left side of '&&' is false
if (iterator->level == PT64_ROOT_4LEVEL &&
^
arch/x86/kvm/mmu/mmu.c:2102:6: note: Assuming field 'level' is not equal to PT32E_ROOT_LEVEL
if (iterator->level == PT32E_ROOT_LEVEL) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2102:2: note: Taking false branch
if (iterator->level == PT32E_ROOT_LEVEL) {
^
arch/x86/kvm/mmu/mmu.c:2116:1: note: Returning without writing to 'iterator->sptep'
}
^
arch/x86/kvm/mmu/mmu.c:2121:2: note: Returning from 'shadow_walk_init_using_root'
shadow_walk_init_using_root(iterator, vcpu, vcpu->arch.mmu->root_hpa,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2123:1: note: Returning without writing to 'iterator->sptep'
}
^
arch/x86/kvm/mmu/mmu.c:2882:2: note: Returning from 'shadow_walk_init'
for_each_shadow_entry(vcpu, gpa, it) {
^
arch/x86/kvm/mmu/mmu.c:160:7: note: expanded from macro 'for_each_shadow_entry'
for (shadow_walk_init(&(_walker), _vcpu, _addr); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2882:2: note: Calling 'shadow_walk_okay'
for_each_shadow_entry(vcpu, gpa, it) {
^
arch/x86/kvm/mmu/mmu.c:161:7: note: expanded from macro 'for_each_shadow_entry'
shadow_walk_okay(&(_walker)); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2127:6: note: Assuming field 'level' is < PG_LEVEL_4K
if (iterator->level < PG_LEVEL_4K)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2127:2: note: Taking true branch
if (iterator->level < PG_LEVEL_4K)
^
arch/x86/kvm/mmu/mmu.c:2128:3: note: Returning without writing to 'iterator->sptep'
return false;
^
arch/x86/kvm/mmu/mmu.c:2882:2: note: Returning from 'shadow_walk_okay'
for_each_shadow_entry(vcpu, gpa, it) {
^
arch/x86/kvm/mmu/mmu.c:161:7: note: expanded from macro 'for_each_shadow_entry'
shadow_walk_okay(&(_walker)); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2882:2: note: Loop condition is false. Execution continues on line 2907
for_each_shadow_entry(vcpu, gpa, it) {
^
arch/x86/kvm/mmu/mmu.c:160:2: note: expanded from macro 'for_each_shadow_entry'
for (shadow_walk_init(&(_walker), _vcpu, _addr); \
^
arch/x86/kvm/mmu/mmu.c:2907:8: note: 2nd function call argument is an uninitialized value
ret = mmu_set_spte(vcpu, it.sptep, ACC_ALL,
^ ~~~~~~~~
>> arch/x86/kvm/mmu/mmu.c:3141:2: warning: 4th function call argument is an uninitialized value [clang-analyzer-core.CallAndMessage]
trace_fast_page_fault(vcpu, cr2_or_gpa, error_code, iterator.sptep,
^
arch/x86/kvm/mmu/mmu.c:3826:2: note: Loop condition is true. Entering loop body
for (max_level = KVM_MAX_HUGEPAGE_LEVEL;
^
arch/x86/kvm/mmu/mmu.c:3832:7: note: Assuming the condition is true
if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:3832:3: note: Taking true branch
if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num))
^
arch/x86/kvm/mmu/mmu.c:3833:4: note: Execution continues on line 3836
break;
^
arch/x86/kvm/mmu/mmu.c:3836:9: note: Calling 'direct_page_fault'
return direct_page_fault(vcpu, gpa, error_code, prefault,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:3726:2: note: Taking false branch
if (page_fault_handle_page_track(vcpu, error_code, gfn))
^
arch/x86/kvm/mmu/mmu.c:3729:2: note: Taking true branch
if (!is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa)) {
^
arch/x86/kvm/mmu/mmu.c:3730:7: note: Calling 'fast_page_fault'
r = fast_page_fault(vcpu, gpa, error_code);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:3054:7: note: Calling 'page_fault_can_be_fast'
if (!page_fault_can_be_fast(error_code))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2964:2: note: Taking false branch
if (unlikely(error_code & PFERR_RSVD_MASK))
^
arch/x86/kvm/mmu/mmu.c:2968:16: note: Assuming the condition is false
if (unlikely(((error_code & (PFERR_FETCH_MASK | PFERR_PRESENT_MASK))
^
include/linux/compiler.h:78:42: note: expanded from macro 'unlikely'
# define unlikely(x) __builtin_expect(!!(x), 0)
^
arch/x86/kvm/mmu/mmu.c:2968:2: note: Taking false branch
if (unlikely(((error_code & (PFERR_FETCH_MASK | PFERR_PRESENT_MASK))
^
arch/x86/kvm/mmu/mmu.c:2986:9: note: Assuming 'shadow_acc_track_mask' is equal to 0
return shadow_acc_track_mask != 0 ||
^~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2986:9: note: Left side of '||' is false
arch/x86/kvm/mmu/mmu.c:2987:10: note: Assuming the condition is true
((error_code & (PFERR_WRITE_MASK | PFERR_PRESENT_MASK))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2986:2: note: Returning the value 1, which participates in a condition later
return shadow_acc_track_mask != 0 ||
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:3054:7: note: Returning from 'page_fault_can_be_fast'
if (!page_fault_can_be_fast(error_code))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:3054:2: note: Taking false branch
if (!page_fault_can_be_fast(error_code))
^
arch/x86/kvm/mmu/mmu.c:3062:3: note: Calling 'shadow_walk_init'
for_each_shadow_entry_lockless(vcpu, cr2_or_gpa, iterator, spte)
^
arch/x86/kvm/mmu/mmu.c:165:7: note: expanded from macro 'for_each_shadow_entry_lockless'
for (shadow_walk_init(&(_walker), _vcpu, _addr); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2121:2: note: Calling 'shadow_walk_init_using_root'
shadow_walk_init_using_root(iterator, vcpu, vcpu->arch.mmu->root_hpa,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2097:6: note: Assuming field 'level' is not equal to PT64_ROOT_4LEVEL
if (iterator->level == PT64_ROOT_4LEVEL &&
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2097:42: note: Left side of '&&' is false
if (iterator->level == PT64_ROOT_4LEVEL &&
^
arch/x86/kvm/mmu/mmu.c:2102:6: note: Assuming field 'level' is not equal to PT32E_ROOT_LEVEL
if (iterator->level == PT32E_ROOT_LEVEL) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2102:2: note: Taking false branch
if (iterator->level == PT32E_ROOT_LEVEL) {
^
arch/x86/kvm/mmu/mmu.c:2116:1: note: Returning without writing to 'iterator->sptep'
}
^
arch/x86/kvm/mmu/mmu.c:2121:2: note: Returning from 'shadow_walk_init_using_root'
shadow_walk_init_using_root(iterator, vcpu, vcpu->arch.mmu->root_hpa,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2123:1: note: Returning without writing to 'iterator->sptep'
}
^
arch/x86/kvm/mmu/mmu.c:3062:3: note: Returning from 'shadow_walk_init'
for_each_shadow_entry_lockless(vcpu, cr2_or_gpa, iterator, spte)
^
arch/x86/kvm/mmu/mmu.c:165:7: note: expanded from macro 'for_each_shadow_entry_lockless'
for (shadow_walk_init(&(_walker), _vcpu, _addr); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:3062:3: note: Calling 'shadow_walk_okay'
for_each_shadow_entry_lockless(vcpu, cr2_or_gpa, iterator, spte)
^
arch/x86/kvm/mmu/mmu.c:166:7: note: expanded from macro 'for_each_shadow_entry_lockless'
shadow_walk_okay(&(_walker)) && \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:2127:6: note: Assuming field 'level' is < PG_LEVEL_4K
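In short, the path the analyzer constructs is: shadow_walk_init_using_root() can return without ever writing iterator->sptep (both root-level special cases are skipped), shadow_walk_okay() then fails immediately because level < PG_LEVEL_4K, so the lockless loop body never assigns sptep before trace_fast_page_fault() reads it at line 3141. A minimal stand-alone sketch of that pattern (illustrative names and types only, not the real KVM structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Miniature of the flagged pattern: walk_init() mirrors
 * shadow_walk_init_using_root(), which can return before it ever
 * writes iterator->sptep; walk_okay() mirrors shadow_walk_okay(),
 * which rejects the walk when the level is below the minimum. */
struct walker {
	int level;
	unsigned long long *sptep;	/* unwritten on the early-out path */
};

static void walk_init(struct walker *it, int root_level)
{
	it->level = root_level;
	/* Analogous to both the 4-level and PT32E branches being
	 * skipped: sptep is left untouched here. */
}

static bool walk_okay(const struct walker *it)
{
	return it->level >= 1;	/* level < minimum => loop never runs */
}

/* Returns what a tracepoint would read after the walk. */
static unsigned long long *after_walk(int root_level)
{
	struct walker it;	/* no initializer, like 'iterator' */

	walk_init(&it, root_level);
	while (walk_okay(&it)) {
		it.sptep = NULL;	/* only the loop body writes sptep */
		it.level--;
	}
	return it.sptep;	/* uninitialized when the loop was skipped */
}
```

With a root level below the minimum, after_walk() returns an indeterminate pointer, which is exactly the read clang-analyzer reports.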
vim +3141 arch/x86/kvm/mmu/mmu.c
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3041
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3042 /*
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3043 * Returns one of RET_PF_INVALID, RET_PF_FIXED or RET_PF_SPURIOUS.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3044 */
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3045 static int fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3046 u32 error_code)
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3047 {
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3048 struct kvm_shadow_walk_iterator iterator;
92a476cbfc476c arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3049 struct kvm_mmu_page *sp;
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3050 int ret = RET_PF_INVALID;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3051 u64 spte = 0ull;
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3052 uint retry_count = 0;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3053
e5552fd252763c arch/x86/kvm/mmu.c Xiao Guangrong 2013-07-30 3054 if (!page_fault_can_be_fast(error_code))
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3055 return ret;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3056
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3057 walk_shadow_page_lockless_begin(vcpu);
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3058
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3059 do {
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3060 u64 new_spte;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3061
736c291c9f36b0 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2019-12-06 3062 for_each_shadow_entry_lockless(vcpu, cr2_or_gpa, iterator, spte)
f9fa2509e5ca82 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-01-08 3063 if (!is_shadow_present_pte(spte))
d162f30a7cebe9 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3064 break;
d162f30a7cebe9 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3065
ec89e643867148 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-02-25 3066 if (!is_shadow_present_pte(spte))
ec89e643867148 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-02-25 3067 break;
ec89e643867148 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-02-25 3068
573546820b792e arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-06-22 3069 sp = sptep_to_sp(iterator.sptep);
92a476cbfc476c arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3070 if (!is_last_spte(spte, sp->role.level))
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3071 break;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3072
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3073 /*
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3074 * Check whether the memory access that caused the fault would
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3075 * still cause it if it were to be performed right now. If not,
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3076 * then this is a spurious fault caused by TLB lazily flushed,
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3077 * or some other CPU has already fixed the PTE after the
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3078 * current CPU took the fault.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3079 *
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3080 * Need not check the access of upper level table entries since
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3081 * they are always ACC_ALL.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3082 */
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3083 if (is_access_allowed(error_code, spte)) {
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3084 ret = RET_PF_SPURIOUS;
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3085 break;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3086 }
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3087
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3088 new_spte = spte;
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3089
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3090 if (is_access_track_spte(spte))
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3091 new_spte = restore_acc_track_spte(new_spte);
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3092
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3093 /*
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3094 * Currently, to simplify the code, write-protection can
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3095 * be removed in the fast path only if the SPTE was
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3096 * write-protected for dirty-logging or access tracking.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3097 */
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3098 if ((error_code & PFERR_WRITE_MASK) &&
e630269841ab08 arch/x86/kvm/mmu/mmu.c Miaohe Lin 2020-02-15 3099 spte_can_locklessly_be_made_writable(spte)) {
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3100 new_spte |= PT_WRITABLE_MASK;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3101
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3102 /*
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3103 * Do not fix write-permission on the large spte. Since
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3104 * we only dirty the first page into the dirty-bitmap in
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3105 * fast_pf_fix_direct_spte(), other pages are missed
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3106 * if its slot has dirty logging enabled.
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3107 *
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3108 * Instead, we let the slow page fault path create a
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3109 * normal spte to fix the access.
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3110 *
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3111 * See the comments in kvm_arch_commit_memory_region().
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3112 */
3bae0459bcd559 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-04-27 3113 if (sp->role.level > PG_LEVEL_4K)
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3114 break;
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3115 }
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3116
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3117 /* Verify that the fault can be handled in the fast path */
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3118 if (new_spte == spte ||
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3119 !is_access_allowed(error_code, new_spte))
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3120 break;
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3121
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3122 /*
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3123 * Currently, fast page fault only works for direct mapping
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3124 * since the gfn is not stable for indirect shadow page. See
3ecad8c2c1ff33 arch/x86/kvm/mmu/mmu.c Mauro Carvalho Chehab 2020-04-14 3125 * Documentation/virt/kvm/locking.rst to get more detail.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3126 */
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3127 if (fast_pf_fix_direct_spte(vcpu, sp, iterator.sptep, spte,
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3128 new_spte)) {
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3129 ret = RET_PF_FIXED;
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3130 break;
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3131 }
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3132
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3133 if (++retry_count > 4) {
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3134 printk_once(KERN_WARNING
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3135 "kvm: Fast #PF retrying more than 4 times.\n");
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3136 break;
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3137 }
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3138
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3139 } while (true);
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3140
736c291c9f36b0 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2019-12-06 @3141 trace_fast_page_fault(vcpu, cr2_or_gpa, error_code, iterator.sptep,
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3142 spte, ret);
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3143 walk_shadow_page_lockless_end(vcpu);
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3144
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3145 return ret;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3146 }
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3147
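Whether this is a real uninitialized read or an analyzer false positive depends on page_fault_can_be_fast() and shadow_walk_okay() gating on state the analyzer treats as independent; either way, zero-initializing the iterator would make every path to the trace call read a defined value. A hedged sketch of that defensive pattern (illustrative names, not necessarily how upstream resolved it):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for kvm_shadow_walk_iterator: zero-initializing
 * it guarantees sptep is a defined NULL on paths where the lockless
 * walk bails before its first iteration. */
struct iter {
	unsigned long long *sptep;
	int level;
};

static unsigned long long dummy_spte;

static unsigned long long *fault_sptep(int root_level)
{
	struct iter it = { 0 };	/* sptep == NULL unless the walk runs */
	int level;

	for (level = root_level; level >= 1; level--)
		it.sptep = &dummy_spte;	/* only the walk body writes sptep */

	return it.sptep;	/* defined on every path: NULL or &dummy_spte */
}
```

The cost is one extra store per fault; the benefit is that the trace call can never observe garbage, regardless of which early-out path was taken.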
:::::: The code at line 3141 was first introduced by commit
:::::: 736c291c9f36b07f8889c61764c28edce20e715d KVM: x86: Use gpa_t for cr2/gpa to fix TDP support on 32-bit KVM
:::::: TO: Sean Christopherson <sean.j.christopherson@intel.com>
:::::: CC: Paolo Bonzini <pbonzini@redhat.com>
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 27860 bytes --]
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3060 u64 new_spte;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3061
736c291c9f36b0 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2019-12-06 3062 for_each_shadow_entry_lockless(vcpu, cr2_or_gpa, iterator, spte)
f9fa2509e5ca82 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-01-08 3063 if (!is_shadow_present_pte(spte))
d162f30a7cebe9 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3064 break;
d162f30a7cebe9 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3065
ec89e643867148 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-02-25 3066 if (!is_shadow_present_pte(spte))
ec89e643867148 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-02-25 3067 break;
ec89e643867148 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2021-02-25 3068
573546820b792e arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-06-22 3069 sp = sptep_to_sp(iterator.sptep);
92a476cbfc476c arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3070 if (!is_last_spte(spte, sp->role.level))
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3071 break;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3072
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3073 /*
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3074 * Check whether the memory access that caused the fault would
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3075 * still cause it if it were to be performed right now. If not,
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3076 * then this is a spurious fault caused by TLB lazily flushed,
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3077 * or some other CPU has already fixed the PTE after the
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3078 * current CPU took the fault.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3079 *
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3080 * Need not check the access of upper level table entries since
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3081 * they are always ACC_ALL.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3082 */
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3083 if (is_access_allowed(error_code, spte)) {
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3084 ret = RET_PF_SPURIOUS;
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3085 break;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3086 }
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3087
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3088 new_spte = spte;
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3089
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3090 if (is_access_track_spte(spte))
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3091 new_spte = restore_acc_track_spte(new_spte);
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3092
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3093 /*
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3094 * Currently, to simplify the code, write-protection can
f160c7b7bb322b arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3095 * be removed in the fast path only if the SPTE was
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3096 * write-protected for dirty-logging or access tracking.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3097 */
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3098 if ((error_code & PFERR_WRITE_MASK) &&
e630269841ab08 arch/x86/kvm/mmu/mmu.c Miaohe Lin 2020-02-15 3099 spte_can_locklessly_be_made_writable(spte)) {
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3100 new_spte |= PT_WRITABLE_MASK;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3101
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3102 /*
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3103 * Do not fix write-permission on the large spte. Since
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3104 * we only dirty the first page into the dirty-bitmap in
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3105 * fast_pf_fix_direct_spte(), other pages are missed
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3106 * if its slot has dirty logging enabled.
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3107 *
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3108 * Instead, we let the slow page fault path create a
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3109 * normal spte to fix the access.
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3110 *
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3111 * See the comments in kvm_arch_commit_memory_region().
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3112 */
3bae0459bcd559 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-04-27 3113 if (sp->role.level > PG_LEVEL_4K)
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3114 break;
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3115 }
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3116
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3117 /* Verify that the fault can be handled in the fast path */
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3118 if (new_spte == spte ||
d3e328f2cb01f6 arch/x86/kvm/mmu.c Junaid Shahid 2016-12-21 3119 !is_access_allowed(error_code, new_spte))
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3120 break;
c126d94f2c90ed arch/x86/kvm/mmu.c Xiao Guangrong 2014-04-17 3121
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3122 /*
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3123 * Currently, fast page fault only works for direct mapping
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3124 * since the gfn is not stable for indirect shadow page. See
3ecad8c2c1ff33 arch/x86/kvm/mmu/mmu.c Mauro Carvalho Chehab 2020-04-14 3125 * Documentation/virt/kvm/locking.rst to get more detail.
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3126 */
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3127 if (fast_pf_fix_direct_spte(vcpu, sp, iterator.sptep, spte,
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3128 new_spte)) {
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3129 ret = RET_PF_FIXED;
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3130 break;
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3131 }
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3132
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3133 if (++retry_count > 4) {
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3134 printk_once(KERN_WARNING
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3135 "kvm: Fast #PF retrying more than 4 times.\n");
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3136 break;
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3137 }
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3138
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3139 } while (true);
97dceba29a6acb arch/x86/kvm/mmu.c Junaid Shahid 2016-12-06 3140
736c291c9f36b0 arch/x86/kvm/mmu/mmu.c Sean Christopherson 2019-12-06 @3141 trace_fast_page_fault(vcpu, cr2_or_gpa, error_code, iterator.sptep,
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3142 spte, ret);
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3143 walk_shadow_page_lockless_end(vcpu);
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3144
c4371c2a682e0d arch/x86/kvm/mmu/mmu.c Sean Christopherson 2020-09-23 3145 return ret;
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3146 }
c7ba5b48cc8ddc arch/x86/kvm/mmu.c Xiao Guangrong 2012-06-20 3147
:::::: The code at line 3141 was first introduced by commit
:::::: 736c291c9f36b07f8889c61764c28edce20e715d KVM: x86: Use gpa_t for cr2/gpa to fix TDP support on 32-bit KVM
:::::: TO: Sean Christopherson <sean.j.christopherson@intel.com>
:::::: CC: Paolo Bonzini <pbonzini@redhat.com>
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org
[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 27861 bytes --]