* Re: [RFC PATCH v2 5/5] KVM: Unmap pages only when it's indeed protected for NUMA migration
[not found] <20230810090218.26244-1-yan.y.zhao@intel.com>
@ 2023-08-10 15:19 ` kernel test robot
0 siblings, 0 replies; 1 message in thread
From: kernel test robot @ 2023-08-10 15:19 UTC (permalink / raw)
To: Yan Zhao; +Cc: llvm, oe-kbuild-all
Hi Yan,
[This is a private test report for your RFC patch.]
kernel test robot noticed the following build warnings:
[auto build test WARNING on kvm/queue]
[also build test WARNING on mst-vhost/linux-next linus/master v6.5-rc5 next-20230809]
[cannot apply to akpm-mm/mm-everything kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Yan-Zhao/mm-mmu_notifier-introduce-a-new-mmu-notifier-flag-MMU_NOTIFIER_RANGE_NUMA/20230810-172955
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
patch link: https://lore.kernel.org/r/20230810090218.26244-1-yan.y.zhao%40intel.com
patch subject: [RFC PATCH v2 5/5] KVM: Unmap pages only when it's indeed protected for NUMA migration
config: riscv-randconfig-r023-20230810 (https://download.01.org/0day-ci/archive/20230810/202308102307.MQAoNjsq-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce: (https://download.01.org/0day-ci/archive/20230810/202308102307.MQAoNjsq-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308102307.MQAoNjsq-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> arch/riscv/kvm/../../../virt/kvm/kvm_main.c:767:23: warning: pointer type mismatch ('bool (*)(struct kvm *, struct kvm_gfn_range *)' (aka '_Bool (*)(struct kvm *, struct kvm_gfn_range *)') and 'void *') [-Wpointer-type-mismatch]
767 | .handler = !is_numa ? kvm_unmap_gfn_range :
| ^ ~~~~~~~~~~~~~~~~~~~
768 | (void *)kvm_null_fn,
| ~~~~~~~~~~~~~~~~~~~
>> arch/riscv/kvm/../../../virt/kvm/kvm_main.c:770:25: warning: pointer type mismatch ('void (*)(struct kvm *)' and 'void *') [-Wpointer-type-mismatch]
770 | .on_unlock = !is_numa ? kvm_arch_guest_memory_reclaimed :
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
771 | (void *)kvm_null_fn,
| ~~~~~~~~~~~~~~~~~~~
2 warnings generated.
vim +767 arch/riscv/kvm/../../../virt/kvm/kvm_main.c
756
757 static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
758 const struct mmu_notifier_range *range)
759 {
760 struct kvm *kvm = mmu_notifier_to_kvm(mn);
761 bool is_numa = (range->event == MMU_NOTIFY_PROTECTION_VMA) &&
762 (range->flags & MMU_NOTIFIER_RANGE_NUMA);
763 const struct kvm_hva_range hva_range = {
764 .start = range->start,
765 .end = range->end,
766 .pte = __pte(0),
> 767 .handler = !is_numa ? kvm_unmap_gfn_range :
768 (void *)kvm_null_fn,
769 .on_lock = kvm_mmu_invalidate_begin,
> 770 .on_unlock = !is_numa ? kvm_arch_guest_memory_reclaimed :
771 (void *)kvm_null_fn,
772 .flush_on_ret = !is_numa ? true : false,
773 .may_block = mmu_notifier_range_blockable(range),
774 };
775
776 trace_kvm_unmap_hva_range(range->start, range->end);
777
778 /*
779 * Prevent memslot modification between range_start() and range_end()
780 * so that conditionally locking provides the same result in both
781 * functions. Without that guarantee, the mmu_invalidate_in_progress
782 * adjustments will be imbalanced.
783 *
784 * Pairs with the decrement in range_end().
785 */
786 spin_lock(&kvm->mn_invalidate_lock);
787 kvm->mn_active_invalidate_count++;
788 spin_unlock(&kvm->mn_invalidate_lock);
789
790 /*
791 * Invalidate pfn caches _before_ invalidating the secondary MMUs, i.e.
792 * before acquiring mmu_lock, to avoid holding mmu_lock while acquiring
793 * each cache's lock. There are relatively few caches in existence at
794 * any given time, and the caches themselves can check for hva overlap,
795 * i.e. don't need to rely on memslot overlap checks for performance.
796 * Because this runs without holding mmu_lock, the pfn caches must use
797 * mn_active_invalidate_count (see above) instead of
798 * mmu_invalidate_in_progress.
799 */
800 gfn_to_pfn_cache_invalidate_start(kvm, range->start, range->end,
801 hva_range.may_block);
802
803 __kvm_handle_hva_range(kvm, &hva_range);
804
805 return 0;
806 }
807
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki