* [PATCH] KVM: remove unused guest_enter/exit
@ 2020-01-10  3:13 Alex Shi
  2020-01-10 23:10 ` kbuild test robot
  ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Alex Shi @ 2020-01-10  3:13 UTC (permalink / raw)
  Cc: Paolo Bonzini, Peter Zijlstra, Ingo Molnar, Frederic Weisbecker,
      linux-kernel

After commit 6edaa5307f3f ("KVM: remove kvm_guest_enter/exit wrappers"),
no one uses guest_enter/exit anymore, so remove them to simplify the
code and reduce object size a bit.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: linux-kernel@vger.kernel.org
---
 include/linux/context_tracking.h | 19 -------------------
 1 file changed, 19 deletions(-)

diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index 64ec82851aa3..238ada39ac2e 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -153,23 +153,4 @@ static inline void guest_exit_irqoff(void)
 	current->flags &= ~PF_VCPU;
 }
 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_GEN */
-
-static inline void guest_enter(void)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	guest_enter_irqoff();
-	local_irq_restore(flags);
-}
-
-static inline void guest_exit(void)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	guest_exit_irqoff();
-	local_irq_restore(flags);
-}
-
 #endif
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [PATCH] KVM: remove unused guest_enter/exit
  2020-01-10  3:13 [PATCH] KVM: remove unused guest_enter/exit Alex Shi
@ 2020-01-10 23:10 ` kbuild test robot
  2020-01-11  5:28 ` kbuild test robot
  2020-01-11 11:32 ` Paolo Bonzini
  2 siblings, 0 replies; 7+ messages in thread
From: kbuild test robot @ 2020-01-10 23:10 UTC (permalink / raw)
  To: Alex Shi
  Cc: kbuild-all, Paolo Bonzini, Peter Zijlstra, Ingo Molnar,
      Frederic Weisbecker, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 30101 bytes --]

Hi Alex,

I love your patch! Yet something to improve:

[auto build test ERROR on kvm/linux-next]
[also build test ERROR on linux/master linus/master v5.5-rc5 next-20200110]
[if your patch is applied to the wrong git tree, please drop us a note to
 help improve the system. BTW, we also suggest to use '--base' option to
 specify the base tree in git format-patch, please see
 https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Alex-Shi/KVM-remove-unused-guest_enter-exit/20200111-004903
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: powerpc-ppc64_defconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 7.5.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.5.0 make.cross ARCH=powerpc

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/powerpc/kvm/book3s_hv.c: In function 'kvmppc_run_core':
>> arch/powerpc/kvm/book3s_hv.c:3360:2: error: implicit declaration of function 'guest_exit'; did you mean 'user_exit'?
   [-Werror=implicit-function-declaration]
     guest_exit();
     ^~~~~~~~~~
     user_exit
   cc1: all warnings being treated as errors

vim +3360 arch/powerpc/kvm/book3s_hv.c

8b24e69fc47e43 Paul Mackerras 2017-06-26  3040  
371fefd6f2dc46 Paul Mackerras 2011-06-29  3041  /*
371fefd6f2dc46 Paul Mackerras 2011-06-29  3042   * Run a set of guest threads on a physical core.
371fefd6f2dc46 Paul Mackerras 2011-06-29  3043   * Called with vc->lock held.
371fefd6f2dc46 Paul Mackerras 2011-06-29  3044   */
66feed61cdf6ee Paul Mackerras 2015-03-28  3045  static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
371fefd6f2dc46 Paul Mackerras 2011-06-29  3046  {
7b5f8272c792d4 Suraj Jitindar Singh 2016-08-02  3047  	struct kvm_vcpu *vcpu;
d911f0beddc2a9 Paul Mackerras 2015-03-28  3048  	int i;
2c9097e4c13402 Paul Mackerras 2012-09-11  3049  	int srcu_idx;
ec257165082616 Paul Mackerras 2015-06-24  3050  	struct core_info core_info;
898b25b202f350 Paul Mackerras 2017-06-22  3051  	struct kvmppc_vcore *pvc;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3052  	struct kvm_split_mode split_info, *sip;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3053  	int split, subcore_size, active;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3054  	int sub;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3055  	bool thr0_done;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3056  	unsigned long cmd_bit, stat_bit;
ec257165082616 Paul Mackerras 2015-06-24  3057  	int pcpu, thr;
ec257165082616 Paul Mackerras 2015-06-24  3058  	int target_threads;
45c940ba490df2 Paul Mackerras 2016-11-18  3059  	int controlled_threads;
8b24e69fc47e43 Paul Mackerras 2017-06-26  3060  	int trap;
516f7898ae20d9 Paul Mackerras 2017-10-16  3061  	bool is_power8;
c01015091a7703 Paul Mackerras 2017-10-19  3062  	bool hpt_on_radix;
371fefd6f2dc46 Paul Mackerras 2011-06-29  3063  
d911f0beddc2a9 Paul Mackerras 2015-03-28  3064  	/*
d911f0beddc2a9 Paul Mackerras 2015-03-28  3065  	 * Remove from the list any threads that have a signal pending
d911f0beddc2a9 Paul Mackerras 2015-03-28  3066  	 * or need a VPA update done
d911f0beddc2a9 Paul Mackerras 2015-03-28  3067  	 */
d911f0beddc2a9 Paul Mackerras 2015-03-28  3068  	prepare_threads(vc);
d911f0beddc2a9 Paul Mackerras 2015-03-28  3069  
d911f0beddc2a9 Paul Mackerras 2015-03-28  3070  	/* if the runner is no longer runnable, let the caller pick a new one */
d911f0beddc2a9 Paul Mackerras 2015-03-28  3071  	if (vc->runner->arch.state != KVMPPC_VCPU_RUNNABLE)
913d3ff9a3c3a1 Paul Mackerras 2012-10-15  3072  		return;
081f323bd3cc3a Paul Mackerras 2012-06-01  3073  
081f323bd3cc3a Paul Mackerras 2012-06-01  3074  	/*
d911f0beddc2a9 Paul Mackerras 2015-03-28  3075  	 * Initialize *vc.
081f323bd3cc3a Paul Mackerras 2012-06-01  3076  	 */
898b25b202f350 Paul Mackerras 2017-06-22  3077  	init_vcore_to_run(vc);
2711e248a352d7 Paul Mackerras 2014-12-04  3078  	vc->preempt_tb = TB_NIL;
081f323bd3cc3a Paul Mackerras 2012-06-01  3079  
45c940ba490df2 Paul Mackerras 2016-11-18  3080  	/*
45c940ba490df2 Paul Mackerras 2016-11-18  3081  	 * Number of threads that we will be controlling: the same as
45c940ba490df2 Paul Mackerras 2016-11-18  3082  	 * the number of threads per subcore, except on POWER9,
45c940ba490df2 Paul Mackerras 2016-11-18  3083  	 * where it's 1 because the threads are (mostly) independent.
45c940ba490df2 Paul Mackerras 2016-11-18  3084  	 */
516f7898ae20d9 Paul Mackerras 2017-10-16  3085  	controlled_threads = threads_per_vcore(vc->kvm);
45c940ba490df2 Paul Mackerras 2016-11-18  3086  
7b444c6710c6c4 Paul Mackerras 2012-10-15  3087  	/*
3102f7843c7501 Michael Ellerman 2014-05-23  3088  	 * Make sure we are running on primary threads, and that secondary
3102f7843c7501 Michael Ellerman 2014-05-23  3089  	 * threads are offline. Also check if the number of threads in this
3102f7843c7501 Michael Ellerman 2014-05-23  3090  	 * guest are greater than the current system threads per guest.
c01015091a7703 Paul Mackerras 2017-10-19  3091  	 * On POWER9, we need to be not in independent-threads mode if
00608e1f007e4c Paul Mackerras 2018-01-11  3092  	 * this is a HPT guest on a radix host machine where the
00608e1f007e4c Paul Mackerras 2018-01-11  3093  	 * CPU threads may not be in different MMU modes.
7b444c6710c6c4 Paul Mackerras 2012-10-15  3094  	 */
00608e1f007e4c Paul Mackerras 2018-01-11  3095  	hpt_on_radix = no_mixing_hpt_and_radix && radix_enabled() &&
00608e1f007e4c Paul Mackerras 2018-01-11  3096  		!kvm_is_radix(vc->kvm);
c01015091a7703 Paul Mackerras 2017-10-19  3097  	if (((controlled_threads > 1) &&
c01015091a7703 Paul Mackerras 2017-10-19  3098  	     ((vc->num_threads > threads_per_subcore) || !on_primary_thread())) ||
c01015091a7703 Paul Mackerras 2017-10-19  3099  	    (hpt_on_radix && vc->kvm->arch.threads_indep)) {
7b5f8272c792d4 Suraj Jitindar Singh 2016-08-02  3100  		for_each_runnable_thread(i, vcpu, vc) {
7b444c6710c6c4 Paul Mackerras 2012-10-15  3101  			vcpu->arch.ret = -EBUSY;
25fedfca94cfbf Paul Mackerras 2015-03-28  3102  			kvmppc_remove_runnable(vc, vcpu);
25fedfca94cfbf Paul Mackerras 2015-03-28  3103  			wake_up(&vcpu->arch.cpu_run);
25fedfca94cfbf Paul Mackerras 2015-03-28  3104  		}
7b444c6710c6c4 Paul Mackerras 2012-10-15  3105  		goto out;
7b444c6710c6c4 Paul Mackerras 2012-10-15  3106  	}
7b444c6710c6c4 Paul Mackerras 2012-10-15  3107  
ec257165082616 Paul Mackerras 2015-06-24  3108  	/*
ec257165082616 Paul Mackerras 2015-06-24  3109  	 * See if we could run any other vcores on the physical core
ec257165082616 Paul Mackerras 2015-06-24  3110  	 * along with this one.
ec257165082616 Paul Mackerras 2015-06-24  3111  	 */
ec257165082616 Paul Mackerras 2015-06-24  3112  	init_core_info(&core_info, vc);
ec257165082616 Paul Mackerras 2015-06-24  3113  	pcpu = smp_processor_id();
45c940ba490df2 Paul Mackerras 2016-11-18  3114  	target_threads = controlled_threads;
ec257165082616 Paul Mackerras 2015-06-24  3115  	if (target_smt_mode && target_smt_mode < target_threads)
ec257165082616 Paul Mackerras 2015-06-24  3116  		target_threads = target_smt_mode;
ec257165082616 Paul Mackerras 2015-06-24  3117  	if (vc->num_threads < target_threads)
ec257165082616 Paul Mackerras 2015-06-24  3118  		collect_piggybacks(&core_info, target_threads);
3102f7843c7501 Michael Ellerman 2014-05-23  3119  
8b24e69fc47e43 Paul Mackerras 2017-06-26  3120  	/*
8b24e69fc47e43 Paul Mackerras 2017-06-26  3121  	 * On radix, arrange for TLB flushing if necessary.
8b24e69fc47e43 Paul Mackerras 2017-06-26  3122  	 * This has to be done before disabling interrupts since
8b24e69fc47e43 Paul Mackerras 2017-06-26  3123  	 * it uses smp_call_function().
8b24e69fc47e43 Paul Mackerras 2017-06-26  3124  	 */
8b24e69fc47e43 Paul Mackerras 2017-06-26  3125  	pcpu = smp_processor_id();
8b24e69fc47e43 Paul Mackerras 2017-06-26  3126  	if (kvm_is_radix(vc->kvm)) {
8b24e69fc47e43 Paul Mackerras 2017-06-26  3127  		for (sub = 0; sub < core_info.n_subcores; ++sub)
8b24e69fc47e43 Paul Mackerras 2017-06-26  3128  			for_each_runnable_thread(i, vcpu, core_info.vc[sub])
8b24e69fc47e43 Paul Mackerras 2017-06-26  3129  				kvmppc_prepare_radix_vcpu(vcpu, pcpu);
8b24e69fc47e43 Paul Mackerras 2017-06-26  3130  	}
8b24e69fc47e43 Paul Mackerras 2017-06-26  3131  
8b24e69fc47e43 Paul Mackerras 2017-06-26  3132  	/*
8b24e69fc47e43 Paul Mackerras 2017-06-26  3133  	 * Hard-disable interrupts, and check resched flag and signals.
8b24e69fc47e43 Paul Mackerras 2017-06-26  3134  	 * If we need to reschedule or deliver a signal, clean up
8b24e69fc47e43 Paul Mackerras 2017-06-26  3135  	 * and return without going into the guest(s).
072df8130c6b60 Paul Mackerras 2017-11-09  3136  	 * If the mmu_ready flag has been cleared, don't go into the
38c53af853069a Paul Mackerras 2017-11-08  3137  	 * guest because that means a HPT resize operation is in progress.
8b24e69fc47e43 Paul Mackerras 2017-06-26  3138  	 */
8b24e69fc47e43 Paul Mackerras 2017-06-26  3139  	local_irq_disable();
8b24e69fc47e43 Paul Mackerras 2017-06-26  3140  	hard_irq_disable();
8b24e69fc47e43 Paul Mackerras 2017-06-26  3141  	if (lazy_irq_pending() || need_resched() ||
d28eafc5a64045 Paul Mackerras 2019-08-27  3142  	    recheck_signals_and_mmu(&core_info)) {
8b24e69fc47e43 Paul Mackerras 2017-06-26  3143  		local_irq_enable();
8b24e69fc47e43 Paul Mackerras 2017-06-26  3144  		vc->vcore_state = VCORE_INACTIVE;
8b24e69fc47e43 Paul Mackerras 2017-06-26  3145  		/* Unlock all except the primary vcore */
8b24e69fc47e43 Paul Mackerras 2017-06-26  3146  		for (sub = 1; sub < core_info.n_subcores; ++sub) {
8b24e69fc47e43 Paul Mackerras 2017-06-26  3147  			pvc = core_info.vc[sub];
8b24e69fc47e43 Paul Mackerras 2017-06-26  3148  			/* Put back on to the preempted vcores list */
8b24e69fc47e43 Paul Mackerras 2017-06-26  3149  			kvmppc_vcore_preempt(pvc);
8b24e69fc47e43 Paul Mackerras 2017-06-26  3150  			spin_unlock(&pvc->lock);
8b24e69fc47e43 Paul Mackerras 2017-06-26  3151  		}
8b24e69fc47e43 Paul Mackerras 2017-06-26  3152  		for (i = 0; i < controlled_threads; ++i)
8b24e69fc47e43 Paul Mackerras 2017-06-26  3153  			kvmppc_release_hwthread(pcpu + i);
8b24e69fc47e43 Paul Mackerras 2017-06-26  3154  		return;
8b24e69fc47e43 Paul Mackerras 2017-06-26  3155  	}
8b24e69fc47e43 Paul Mackerras 2017-06-26  3156  
8b24e69fc47e43 Paul Mackerras 2017-06-26  3157  	kvmppc_clear_host_core(pcpu);
8b24e69fc47e43 Paul Mackerras 2017-06-26  3158  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3159  	/* Decide on micro-threading (split-core) mode */
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3160  	subcore_size = threads_per_subcore;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3161  	cmd_bit = stat_bit = 0;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3162  	split = core_info.n_subcores;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3163  	sip = NULL;
516f7898ae20d9 Paul Mackerras 2017-10-16  3164  	is_power8 = cpu_has_feature(CPU_FTR_ARCH_207S)
516f7898ae20d9 Paul Mackerras 2017-10-16  3165  		&& !cpu_has_feature(CPU_FTR_ARCH_300);
516f7898ae20d9 Paul Mackerras 2017-10-16  3166  
c01015091a7703 Paul Mackerras 2017-10-19  3167  	if (split > 1 || hpt_on_radix) {
516f7898ae20d9 Paul Mackerras 2017-10-16  3168  		sip = &split_info;
516f7898ae20d9 Paul Mackerras 2017-10-16  3169  		memset(&split_info, 0, sizeof(split_info));
516f7898ae20d9 Paul Mackerras 2017-10-16  3170  		for (sub = 0; sub < core_info.n_subcores; ++sub)
516f7898ae20d9 Paul Mackerras 2017-10-16  3171  			split_info.vc[sub] = core_info.vc[sub];
516f7898ae20d9 Paul Mackerras 2017-10-16  3172  
516f7898ae20d9 Paul Mackerras 2017-10-16  3173  		if (is_power8) {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3174  			if (split == 2 && (dynamic_mt_modes & 2)) {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3175  				cmd_bit = HID0_POWER8_1TO2LPAR;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3176  				stat_bit = HID0_POWER8_2LPARMODE;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3177  			} else {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3178  				split = 4;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3179  				cmd_bit = HID0_POWER8_1TO4LPAR;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3180  				stat_bit = HID0_POWER8_4LPARMODE;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3181  			}
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3182  			subcore_size = MAX_SMT_THREADS / split;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3183  			split_info.rpr = mfspr(SPRN_RPR);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3184  			split_info.pmmar = mfspr(SPRN_PMMAR);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3185  			split_info.ldbar = mfspr(SPRN_LDBAR);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3186  			split_info.subcore_size = subcore_size;
516f7898ae20d9 Paul Mackerras 2017-10-16  3187  		} else {
516f7898ae20d9 Paul Mackerras 2017-10-16  3188  			split_info.subcore_size = 1;
c01015091a7703 Paul Mackerras 2017-10-19  3189  			if (hpt_on_radix) {
c01015091a7703 Paul Mackerras 2017-10-19  3190  				/* Use the split_info for LPCR/LPIDR changes */
c01015091a7703 Paul Mackerras 2017-10-19  3191  				split_info.lpcr_req = vc->lpcr;
c01015091a7703 Paul Mackerras 2017-10-19  3192  				split_info.lpidr_req = vc->kvm->arch.lpid;
c01015091a7703 Paul Mackerras 2017-10-19  3193  				split_info.host_lpcr = vc->kvm->arch.host_lpcr;
c01015091a7703 Paul Mackerras 2017-10-19  3194  				split_info.do_set = 1;
c01015091a7703 Paul Mackerras 2017-10-19  3195  			}
516f7898ae20d9 Paul Mackerras 2017-10-16  3196  		}
516f7898ae20d9 Paul Mackerras 2017-10-16  3197  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3198  		/* order writes to split_info before kvm_split_mode pointer */
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3199  		smp_wmb();
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3200  	}
c01015091a7703 Paul Mackerras 2017-10-19  3201  
c01015091a7703 Paul Mackerras 2017-10-19  3202  	for (thr = 0; thr < controlled_threads; ++thr) {
d2e60075a3d442 Nicholas Piggin 2018-02-14  3203  		struct paca_struct *paca = paca_ptrs[pcpu + thr];
d2e60075a3d442 Nicholas Piggin 2018-02-14  3204  
d2e60075a3d442 Nicholas Piggin 2018-02-14  3205  		paca->kvm_hstate.tid = thr;
d2e60075a3d442 Nicholas Piggin 2018-02-14  3206  		paca->kvm_hstate.napping = 0;
d2e60075a3d442 Nicholas Piggin 2018-02-14  3207  		paca->kvm_hstate.kvm_split_mode = sip;
c01015091a7703 Paul Mackerras 2017-10-19  3208  	}
3102f7843c7501 Michael Ellerman 2014-05-23  3209  
516f7898ae20d9 Paul Mackerras 2017-10-16  3210  	/* Initiate micro-threading (split-core) on POWER8 if required */
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3211  	if (cmd_bit) {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3212  		unsigned long hid0 = mfspr(SPRN_HID0);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3213  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3214  		hid0 |= cmd_bit | HID0_POWER8_DYNLPARDIS;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3215  		mb();
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3216  		mtspr(SPRN_HID0, hid0);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3217  		isync();
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3218  		for (;;) {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3219  			hid0 = mfspr(SPRN_HID0);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3220  			if (hid0 & stat_bit)
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3221  				break;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3222  			cpu_relax();
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3223  		}
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3224  	}
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3225  
7aa15842c15f8a Paul Mackerras 2018-04-20  3226  	/*
7aa15842c15f8a Paul Mackerras 2018-04-20  3227  	 * On POWER8, set RWMR register.
7aa15842c15f8a Paul Mackerras 2018-04-20  3228  	 * Since it only affects PURR and SPURR, it doesn't affect
7aa15842c15f8a Paul Mackerras 2018-04-20  3229  	 * the host, so we don't save/restore the host value.
7aa15842c15f8a Paul Mackerras 2018-04-20  3230  	 */
7aa15842c15f8a Paul Mackerras 2018-04-20  3231  	if (is_power8) {
7aa15842c15f8a Paul Mackerras 2018-04-20  3232  		unsigned long rwmr_val = RWMR_RPA_P8_8THREAD;
7aa15842c15f8a Paul Mackerras 2018-04-20  3233  		int n_online = atomic_read(&vc->online_count);
7aa15842c15f8a Paul Mackerras 2018-04-20  3234  
7aa15842c15f8a Paul Mackerras 2018-04-20  3235  		/*
7aa15842c15f8a Paul Mackerras 2018-04-20  3236  		 * Use the 8-thread value if we're doing split-core
7aa15842c15f8a Paul Mackerras 2018-04-20  3237  		 * or if the vcore's online count looks bogus.
7aa15842c15f8a Paul Mackerras 2018-04-20  3238  		 */
7aa15842c15f8a Paul Mackerras 2018-04-20  3239  		if (split == 1 && threads_per_subcore == MAX_SMT_THREADS &&
7aa15842c15f8a Paul Mackerras 2018-04-20  3240  		    n_online >= 1 && n_online <= MAX_SMT_THREADS)
7aa15842c15f8a Paul Mackerras 2018-04-20  3241  			rwmr_val = p8_rwmr_values[n_online];
7aa15842c15f8a Paul Mackerras 2018-04-20  3242  		mtspr(SPRN_RWMR, rwmr_val);
7aa15842c15f8a Paul Mackerras 2018-04-20  3243  	}
7aa15842c15f8a Paul Mackerras 2018-04-20  3244  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3245  	/* Start all the threads */
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3246  	active = 0;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3247  	for (sub = 0; sub < core_info.n_subcores; ++sub) {
516f7898ae20d9 Paul Mackerras 2017-10-16  3248  		thr = is_power8 ? subcore_thread_map[sub] : sub;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3249  		thr0_done = false;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3250  		active |= 1 << thr;
898b25b202f350 Paul Mackerras 2017-06-22  3251  		pvc = core_info.vc[sub];
ec257165082616 Paul Mackerras 2015-06-24  3252  		pvc->pcpu = pcpu + thr;
7b5f8272c792d4 Suraj Jitindar Singh 2016-08-02  3253  		for_each_runnable_thread(i, vcpu, pvc) {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3254  			kvmppc_start_thread(vcpu, pvc);
ec257165082616 Paul Mackerras 2015-06-24  3255  			kvmppc_create_dtl_entry(vcpu, pvc);
3c78f78af95615 Suresh E. Warrier 2014-12-03  3256  			trace_kvm_guest_enter(vcpu);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3257  			if (!vcpu->arch.ptid)
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3258  				thr0_done = true;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3259  			active |= 1 << (thr + vcpu->arch.ptid);
2e25aa5f64b18a Paul Mackerras 2012-02-19  3260  		}
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3261  		/*
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3262  		 * We need to start the first thread of each subcore
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3263  		 * even if it doesn't have a vcpu.
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3264  		 */
898b25b202f350 Paul Mackerras 2017-06-22  3265  		if (!thr0_done)
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3266  			kvmppc_start_thread(NULL, pvc);
ec257165082616 Paul Mackerras 2015-06-24  3267  	}
de56a948b9182f Paul Mackerras 2011-06-29  3268  
7f23532866f931 Gautham R. Shenoy 2015-09-02  3269  	/*
7f23532866f931 Gautham R. Shenoy 2015-09-02  3270  	 * Ensure that split_info.do_nap is set after setting
7f23532866f931 Gautham R. Shenoy 2015-09-02  3271  	 * the vcore pointer in the PACA of the secondaries.
7f23532866f931 Gautham R. Shenoy 2015-09-02  3272  	 */
7f23532866f931 Gautham R. Shenoy 2015-09-02  3273  	smp_mb();
7f23532866f931 Gautham R. Shenoy 2015-09-02  3274  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3275  	/*
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3276  	 * When doing micro-threading, poke the inactive threads as well.
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3277  	 * This gets them to the nap instruction after kvm_do_nap,
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3278  	 * which reduces the time taken to unsplit later.
c01015091a7703 Paul Mackerras 2017-10-19  3279  	 * For POWER9 HPT guest on radix host, we need all the secondary
c01015091a7703 Paul Mackerras 2017-10-19  3280  	 * threads woken up so they can do the LPCR/LPIDR change.
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3281  	 */
c01015091a7703 Paul Mackerras 2017-10-19  3282  	if (cmd_bit || hpt_on_radix) {
516f7898ae20d9 Paul Mackerras 2017-10-16  3283  		split_info.do_nap = 1;	/* ask secondaries to nap when done */
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3284  		for (thr = 1; thr < threads_per_subcore; ++thr)
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3285  			if (!(active & (1 << thr)))
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3286  				kvmppc_ipi_thread(pcpu + thr);
516f7898ae20d9 Paul Mackerras 2017-10-16  3287  	}
e0b7ec058c0eb7 Paul Mackerras 2014-01-08  3288  
2f12f03436847e Paul Mackerras 2012-10-15  3289  	vc->vcore_state = VCORE_RUNNING;
19ccb76a1938ab Paul Mackerras 2011-07-23  3290  	preempt_disable();
3c78f78af95615 Suresh E. Warrier 2014-12-03  3291  
3c78f78af95615 Suresh E. Warrier 2014-12-03  3292  	trace_kvmppc_run_core(vc, 0);
3c78f78af95615 Suresh E. Warrier 2014-12-03  3293  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3294  	for (sub = 0; sub < core_info.n_subcores; ++sub)
898b25b202f350 Paul Mackerras 2017-06-22  3295  		spin_unlock(&core_info.vc[sub]->lock);
371fefd6f2dc46 Paul Mackerras 2011-06-29  3296  
61bd0f66ff92d5 Laurent Vivier 2018-03-02  3297  	guest_enter_irqoff();
2c9097e4c13402 Paul Mackerras 2012-09-11  3298  
e0b7ec058c0eb7 Paul Mackerras 2014-01-08  3299  	srcu_idx = srcu_read_lock(&vc->kvm->srcu);
2c9097e4c13402 Paul Mackerras 2012-09-11  3300  
a4bc64d305af40 Naveen N. Rao 2018-04-19  3301  	this_cpu_disable_ftrace();
a4bc64d305af40 Naveen N. Rao 2018-04-19  3302  
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29  3303  	/*
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29  3304  	 * Interrupts will be enabled once we get into the guest,
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29  3305  	 * so tell lockdep that we're about to enable interrupts.
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29  3306  	 */
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29  3307  	trace_hardirqs_on();
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29  3308  
8b24e69fc47e43 Paul Mackerras 2017-06-26  3309  	trap = __kvmppc_vcore_entry();
de56a948b9182f Paul Mackerras 2011-06-29  3310  
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29  3311  	trace_hardirqs_off();
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29  3312  
a4bc64d305af40 Naveen N. Rao 2018-04-19  3313  	this_cpu_enable_ftrace();
a4bc64d305af40 Naveen N. Rao 2018-04-19  3314  
ec257165082616 Paul Mackerras 2015-06-24  3315  	srcu_read_unlock(&vc->kvm->srcu, srcu_idx);
ec257165082616 Paul Mackerras 2015-06-24  3316  
8b24e69fc47e43 Paul Mackerras 2017-06-26  3317  	set_irq_happened(trap);
8b24e69fc47e43 Paul Mackerras 2017-06-26  3318  
ec257165082616 Paul Mackerras 2015-06-24  3319  	spin_lock(&vc->lock);
371fefd6f2dc46 Paul Mackerras 2011-06-29  3320  	/* prevent other vcpu threads from doing kvmppc_start_thread() now */
19ccb76a1938ab Paul Mackerras 2011-07-23  3321  	vc->vcore_state = VCORE_EXITING;
371fefd6f2dc46 Paul Mackerras 2011-06-29  3322  
19ccb76a1938ab Paul Mackerras 2011-07-23  3323  	/* wait for secondary threads to finish writing their state to memory */
516f7898ae20d9 Paul Mackerras 2017-10-16  3324  	kvmppc_wait_for_nap(controlled_threads);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3325  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3326  	/* Return to whole-core mode if we split the core earlier */
516f7898ae20d9 Paul Mackerras 2017-10-16  3327  	if (cmd_bit) {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3328  		unsigned long hid0 = mfspr(SPRN_HID0);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3329  		unsigned long loops = 0;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3330  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3331  		hid0 &= ~HID0_POWER8_DYNLPARDIS;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3332  		stat_bit = HID0_POWER8_2LPARMODE | HID0_POWER8_4LPARMODE;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3333  		mb();
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3334  		mtspr(SPRN_HID0, hid0);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3335  		isync();
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3336  		for (;;) {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3337  			hid0 = mfspr(SPRN_HID0);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3338  			if (!(hid0 & stat_bit))
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3339  				break;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3340  			cpu_relax();
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3341  			++loops;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3342  		}
c01015091a7703 Paul Mackerras 2017-10-19  3343  	} else if (hpt_on_radix) {
c01015091a7703 Paul Mackerras 2017-10-19  3344  		/* Wait for all threads to have seen final sync */
c01015091a7703 Paul Mackerras 2017-10-19  3345  		for (thr = 1; thr < controlled_threads; ++thr) {
d2e60075a3d442 Nicholas Piggin 2018-02-14  3346  			struct paca_struct *paca = paca_ptrs[pcpu + thr];
d2e60075a3d442 Nicholas Piggin 2018-02-14  3347  
d2e60075a3d442 Nicholas Piggin 2018-02-14  3348  			while (paca->kvm_hstate.kvm_split_mode) {
c01015091a7703 Paul Mackerras 2017-10-19  3349  				HMT_low();
c01015091a7703 Paul Mackerras 2017-10-19  3350  				barrier();
c01015091a7703 Paul Mackerras 2017-10-19  3351  			}
c01015091a7703 Paul Mackerras 2017-10-19  3352  			HMT_medium();
c01015091a7703 Paul Mackerras 2017-10-19  3353  		}
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3354  	}
c01015091a7703 Paul Mackerras 2017-10-19  3355  	split_info.do_nap = 0;
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3356  
8b24e69fc47e43 Paul Mackerras 2017-06-26  3357  	kvmppc_set_host_core(pcpu);
8b24e69fc47e43 Paul Mackerras 2017-06-26  3358  
8b24e69fc47e43 Paul Mackerras 2017-06-26  3359  	local_irq_enable();
61bd0f66ff92d5 Laurent Vivier 2018-03-02 @3360  	guest_exit();
8b24e69fc47e43 Paul Mackerras 2017-06-26  3361  
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3362  	/* Let secondaries go back to the offline loop */
45c940ba490df2 Paul Mackerras 2016-11-18  3363  	for (i = 0; i < controlled_threads; ++i) {
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3364  		kvmppc_release_hwthread(pcpu + i);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3365  		if (sip && sip->napped[i])
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3366  			kvmppc_ipi_thread(pcpu + i);
a29ebeaf5575d0 Paul Mackerras 2017-01-30  3367  		cpumask_clear_cpu(pcpu + i, &vc->kvm->arch.cpu_in_guest);
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3368  	}
b4deba5c41e9f6 Paul Mackerras 2015-07-02  3369  
371fefd6f2dc46 Paul Mackerras 2011-06-29  3370  	spin_unlock(&vc->lock);
2c9097e4c13402 Paul Mackerras 2012-09-11  3371  
371fefd6f2dc46 Paul Mackerras 2011-06-29  3372  	/* make sure updates to secondary vcpu structs are visible now */
371fefd6f2dc46 Paul Mackerras 2011-06-29  3373  	smp_mb();
de56a948b9182f Paul Mackerras 2011-06-29  3374  
36ee41d161c67a Paul Mackerras 2018-01-30  3375  	preempt_enable();
36ee41d161c67a Paul Mackerras 2018-01-30  3376  
898b25b202f350 Paul Mackerras 2017-06-22  3377  	for (sub = 0; sub < core_info.n_subcores; ++sub) {
898b25b202f350 Paul Mackerras 2017-06-22  3378  		pvc = core_info.vc[sub];
ec257165082616 Paul Mackerras 2015-06-24  3379  		post_guest_process(pvc, pvc == vc);
898b25b202f350 Paul Mackerras 2017-06-22  3380  	}
de56a948b9182f Paul Mackerras 2011-06-29  3381  
913d3ff9a3c3a1 Paul Mackerras 2012-10-15  3382  	spin_lock(&vc->lock);
de56a948b9182f Paul Mackerras 2011-06-29  3383  
de56a948b9182f Paul Mackerras 2011-06-29  3384   out:
19ccb76a1938ab Paul Mackerras 2011-07-23  3385  	vc->vcore_state = VCORE_INACTIVE;
3c78f78af95615 Suresh E. Warrier 2014-12-03  3386  	trace_kvmppc_run_core(vc, 1);
371fefd6f2dc46 Paul Mackerras 2011-06-29  3387  }
371fefd6f2dc46 Paul Mackerras 2011-06-29  3388  

:::::: The code at line 3360 was first introduced by commit
:::::: 61bd0f66ff92d5ce765ff9850fd3cbfec773c560 KVM: PPC: Book3S HV: Fix guest time accounting with VIRT_CPU_ACCOUNTING_GEN

:::::: TO: Laurent Vivier <lvivier@redhat.com>
:::::: CC: Paul Mackerras <paulus@ozlabs.org>

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 25639 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread
ec257165082616 Paul Mackerras 2015-06-24 3111 */ ec257165082616 Paul Mackerras 2015-06-24 3112 init_core_info(&core_info, vc); ec257165082616 Paul Mackerras 2015-06-24 3113 pcpu = smp_processor_id(); 45c940ba490df2 Paul Mackerras 2016-11-18 3114 target_threads = controlled_threads; ec257165082616 Paul Mackerras 2015-06-24 3115 if (target_smt_mode && target_smt_mode < target_threads) ec257165082616 Paul Mackerras 2015-06-24 3116 target_threads = target_smt_mode; ec257165082616 Paul Mackerras 2015-06-24 3117 if (vc->num_threads < target_threads) ec257165082616 Paul Mackerras 2015-06-24 3118 collect_piggybacks(&core_info, target_threads); 3102f7843c7501 Michael Ellerman 2014-05-23 3119 8b24e69fc47e43 Paul Mackerras 2017-06-26 3120 /* 8b24e69fc47e43 Paul Mackerras 2017-06-26 3121 * On radix, arrange for TLB flushing if necessary. 8b24e69fc47e43 Paul Mackerras 2017-06-26 3122 * This has to be done before disabling interrupts since 8b24e69fc47e43 Paul Mackerras 2017-06-26 3123 * it uses smp_call_function(). 8b24e69fc47e43 Paul Mackerras 2017-06-26 3124 */ 8b24e69fc47e43 Paul Mackerras 2017-06-26 3125 pcpu = smp_processor_id(); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3126 if (kvm_is_radix(vc->kvm)) { 8b24e69fc47e43 Paul Mackerras 2017-06-26 3127 for (sub = 0; sub < core_info.n_subcores; ++sub) 8b24e69fc47e43 Paul Mackerras 2017-06-26 3128 for_each_runnable_thread(i, vcpu, core_info.vc[sub]) 8b24e69fc47e43 Paul Mackerras 2017-06-26 3129 kvmppc_prepare_radix_vcpu(vcpu, pcpu); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3130 } 8b24e69fc47e43 Paul Mackerras 2017-06-26 3131 8b24e69fc47e43 Paul Mackerras 2017-06-26 3132 /* 8b24e69fc47e43 Paul Mackerras 2017-06-26 3133 * Hard-disable interrupts, and check resched flag and signals. 8b24e69fc47e43 Paul Mackerras 2017-06-26 3134 * If we need to reschedule or deliver a signal, clean up 8b24e69fc47e43 Paul Mackerras 2017-06-26 3135 * and return without going into the guest(s). 
072df8130c6b60 Paul Mackerras 2017-11-09 3136 * If the mmu_ready flag has been cleared, don't go into the 38c53af853069a Paul Mackerras 2017-11-08 3137 * guest because that means a HPT resize operation is in progress. 8b24e69fc47e43 Paul Mackerras 2017-06-26 3138 */ 8b24e69fc47e43 Paul Mackerras 2017-06-26 3139 local_irq_disable(); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3140 hard_irq_disable(); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3141 if (lazy_irq_pending() || need_resched() || d28eafc5a64045 Paul Mackerras 2019-08-27 3142 recheck_signals_and_mmu(&core_info)) { 8b24e69fc47e43 Paul Mackerras 2017-06-26 3143 local_irq_enable(); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3144 vc->vcore_state = VCORE_INACTIVE; 8b24e69fc47e43 Paul Mackerras 2017-06-26 3145 /* Unlock all except the primary vcore */ 8b24e69fc47e43 Paul Mackerras 2017-06-26 3146 for (sub = 1; sub < core_info.n_subcores; ++sub) { 8b24e69fc47e43 Paul Mackerras 2017-06-26 3147 pvc = core_info.vc[sub]; 8b24e69fc47e43 Paul Mackerras 2017-06-26 3148 /* Put back on to the preempted vcores list */ 8b24e69fc47e43 Paul Mackerras 2017-06-26 3149 kvmppc_vcore_preempt(pvc); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3150 spin_unlock(&pvc->lock); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3151 } 8b24e69fc47e43 Paul Mackerras 2017-06-26 3152 for (i = 0; i < controlled_threads; ++i) 8b24e69fc47e43 Paul Mackerras 2017-06-26 3153 kvmppc_release_hwthread(pcpu + i); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3154 return; 8b24e69fc47e43 Paul Mackerras 2017-06-26 3155 } 8b24e69fc47e43 Paul Mackerras 2017-06-26 3156 8b24e69fc47e43 Paul Mackerras 2017-06-26 3157 kvmppc_clear_host_core(pcpu); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3158 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3159 /* Decide on micro-threading (split-core) mode */ b4deba5c41e9f6 Paul Mackerras 2015-07-02 3160 subcore_size = threads_per_subcore; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3161 cmd_bit = stat_bit = 0; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3162 split 
= core_info.n_subcores; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3163 sip = NULL; 516f7898ae20d9 Paul Mackerras 2017-10-16 3164 is_power8 = cpu_has_feature(CPU_FTR_ARCH_207S) 516f7898ae20d9 Paul Mackerras 2017-10-16 3165 && !cpu_has_feature(CPU_FTR_ARCH_300); 516f7898ae20d9 Paul Mackerras 2017-10-16 3166 c01015091a7703 Paul Mackerras 2017-10-19 3167 if (split > 1 || hpt_on_radix) { 516f7898ae20d9 Paul Mackerras 2017-10-16 3168 sip = &split_info; 516f7898ae20d9 Paul Mackerras 2017-10-16 3169 memset(&split_info, 0, sizeof(split_info)); 516f7898ae20d9 Paul Mackerras 2017-10-16 3170 for (sub = 0; sub < core_info.n_subcores; ++sub) 516f7898ae20d9 Paul Mackerras 2017-10-16 3171 split_info.vc[sub] = core_info.vc[sub]; 516f7898ae20d9 Paul Mackerras 2017-10-16 3172 516f7898ae20d9 Paul Mackerras 2017-10-16 3173 if (is_power8) { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3174 if (split == 2 && (dynamic_mt_modes & 2)) { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3175 cmd_bit = HID0_POWER8_1TO2LPAR; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3176 stat_bit = HID0_POWER8_2LPARMODE; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3177 } else { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3178 split = 4; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3179 cmd_bit = HID0_POWER8_1TO4LPAR; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3180 stat_bit = HID0_POWER8_4LPARMODE; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3181 } b4deba5c41e9f6 Paul Mackerras 2015-07-02 3182 subcore_size = MAX_SMT_THREADS / split; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3183 split_info.rpr = mfspr(SPRN_RPR); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3184 split_info.pmmar = mfspr(SPRN_PMMAR); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3185 split_info.ldbar = mfspr(SPRN_LDBAR); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3186 split_info.subcore_size = subcore_size; 516f7898ae20d9 Paul Mackerras 2017-10-16 3187 } else { 516f7898ae20d9 Paul Mackerras 2017-10-16 3188 split_info.subcore_size = 1; c01015091a7703 Paul Mackerras 2017-10-19 3189 if 
(hpt_on_radix) { c01015091a7703 Paul Mackerras 2017-10-19 3190 /* Use the split_info for LPCR/LPIDR changes */ c01015091a7703 Paul Mackerras 2017-10-19 3191 split_info.lpcr_req = vc->lpcr; c01015091a7703 Paul Mackerras 2017-10-19 3192 split_info.lpidr_req = vc->kvm->arch.lpid; c01015091a7703 Paul Mackerras 2017-10-19 3193 split_info.host_lpcr = vc->kvm->arch.host_lpcr; c01015091a7703 Paul Mackerras 2017-10-19 3194 split_info.do_set = 1; c01015091a7703 Paul Mackerras 2017-10-19 3195 } 516f7898ae20d9 Paul Mackerras 2017-10-16 3196 } 516f7898ae20d9 Paul Mackerras 2017-10-16 3197 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3198 /* order writes to split_info before kvm_split_mode pointer */ b4deba5c41e9f6 Paul Mackerras 2015-07-02 3199 smp_wmb(); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3200 } c01015091a7703 Paul Mackerras 2017-10-19 3201 c01015091a7703 Paul Mackerras 2017-10-19 3202 for (thr = 0; thr < controlled_threads; ++thr) { d2e60075a3d442 Nicholas Piggin 2018-02-14 3203 struct paca_struct *paca = paca_ptrs[pcpu + thr]; d2e60075a3d442 Nicholas Piggin 2018-02-14 3204 d2e60075a3d442 Nicholas Piggin 2018-02-14 3205 paca->kvm_hstate.tid = thr; d2e60075a3d442 Nicholas Piggin 2018-02-14 3206 paca->kvm_hstate.napping = 0; d2e60075a3d442 Nicholas Piggin 2018-02-14 3207 paca->kvm_hstate.kvm_split_mode = sip; c01015091a7703 Paul Mackerras 2017-10-19 3208 } 3102f7843c7501 Michael Ellerman 2014-05-23 3209 516f7898ae20d9 Paul Mackerras 2017-10-16 3210 /* Initiate micro-threading (split-core) on POWER8 if required */ b4deba5c41e9f6 Paul Mackerras 2015-07-02 3211 if (cmd_bit) { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3212 unsigned long hid0 = mfspr(SPRN_HID0); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3213 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3214 hid0 |= cmd_bit | HID0_POWER8_DYNLPARDIS; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3215 mb(); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3216 mtspr(SPRN_HID0, hid0); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3217 isync(); b4deba5c41e9f6 
Paul Mackerras 2015-07-02 3218 for (;;) { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3219 hid0 = mfspr(SPRN_HID0); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3220 if (hid0 & stat_bit) b4deba5c41e9f6 Paul Mackerras 2015-07-02 3221 break; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3222 cpu_relax(); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3223 } b4deba5c41e9f6 Paul Mackerras 2015-07-02 3224 } b4deba5c41e9f6 Paul Mackerras 2015-07-02 3225 7aa15842c15f8a Paul Mackerras 2018-04-20 3226 /* 7aa15842c15f8a Paul Mackerras 2018-04-20 3227 * On POWER8, set RWMR register. 7aa15842c15f8a Paul Mackerras 2018-04-20 3228 * Since it only affects PURR and SPURR, it doesn't affect 7aa15842c15f8a Paul Mackerras 2018-04-20 3229 * the host, so we don't save/restore the host value. 7aa15842c15f8a Paul Mackerras 2018-04-20 3230 */ 7aa15842c15f8a Paul Mackerras 2018-04-20 3231 if (is_power8) { 7aa15842c15f8a Paul Mackerras 2018-04-20 3232 unsigned long rwmr_val = RWMR_RPA_P8_8THREAD; 7aa15842c15f8a Paul Mackerras 2018-04-20 3233 int n_online = atomic_read(&vc->online_count); 7aa15842c15f8a Paul Mackerras 2018-04-20 3234 7aa15842c15f8a Paul Mackerras 2018-04-20 3235 /* 7aa15842c15f8a Paul Mackerras 2018-04-20 3236 * Use the 8-thread value if we're doing split-core 7aa15842c15f8a Paul Mackerras 2018-04-20 3237 * or if the vcore's online count looks bogus. 
7aa15842c15f8a Paul Mackerras 2018-04-20 3238 */ 7aa15842c15f8a Paul Mackerras 2018-04-20 3239 if (split == 1 && threads_per_subcore == MAX_SMT_THREADS && 7aa15842c15f8a Paul Mackerras 2018-04-20 3240 n_online >= 1 && n_online <= MAX_SMT_THREADS) 7aa15842c15f8a Paul Mackerras 2018-04-20 3241 rwmr_val = p8_rwmr_values[n_online]; 7aa15842c15f8a Paul Mackerras 2018-04-20 3242 mtspr(SPRN_RWMR, rwmr_val); 7aa15842c15f8a Paul Mackerras 2018-04-20 3243 } 7aa15842c15f8a Paul Mackerras 2018-04-20 3244 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3245 /* Start all the threads */ b4deba5c41e9f6 Paul Mackerras 2015-07-02 3246 active = 0; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3247 for (sub = 0; sub < core_info.n_subcores; ++sub) { 516f7898ae20d9 Paul Mackerras 2017-10-16 3248 thr = is_power8 ? subcore_thread_map[sub] : sub; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3249 thr0_done = false; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3250 active |= 1 << thr; 898b25b202f350 Paul Mackerras 2017-06-22 3251 pvc = core_info.vc[sub]; ec257165082616 Paul Mackerras 2015-06-24 3252 pvc->pcpu = pcpu + thr; 7b5f8272c792d4 Suraj Jitindar Singh 2016-08-02 3253 for_each_runnable_thread(i, vcpu, pvc) { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3254 kvmppc_start_thread(vcpu, pvc); ec257165082616 Paul Mackerras 2015-06-24 3255 kvmppc_create_dtl_entry(vcpu, pvc); 3c78f78af95615 Suresh E. Warrier 2014-12-03 3256 trace_kvm_guest_enter(vcpu); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3257 if (!vcpu->arch.ptid) b4deba5c41e9f6 Paul Mackerras 2015-07-02 3258 thr0_done = true; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3259 active |= 1 << (thr + vcpu->arch.ptid); 2e25aa5f64b18a Paul Mackerras 2012-02-19 3260 } b4deba5c41e9f6 Paul Mackerras 2015-07-02 3261 /* b4deba5c41e9f6 Paul Mackerras 2015-07-02 3262 * We need to start the first thread of each subcore b4deba5c41e9f6 Paul Mackerras 2015-07-02 3263 * even if it doesn't have a vcpu. 
b4deba5c41e9f6 Paul Mackerras 2015-07-02 3264 */ 898b25b202f350 Paul Mackerras 2017-06-22 3265 if (!thr0_done) b4deba5c41e9f6 Paul Mackerras 2015-07-02 3266 kvmppc_start_thread(NULL, pvc); ec257165082616 Paul Mackerras 2015-06-24 3267 } de56a948b9182f Paul Mackerras 2011-06-29 3268 7f23532866f931 Gautham R. Shenoy 2015-09-02 3269 /* 7f23532866f931 Gautham R. Shenoy 2015-09-02 3270 * Ensure that split_info.do_nap is set after setting 7f23532866f931 Gautham R. Shenoy 2015-09-02 3271 * the vcore pointer in the PACA of the secondaries. 7f23532866f931 Gautham R. Shenoy 2015-09-02 3272 */ 7f23532866f931 Gautham R. Shenoy 2015-09-02 3273 smp_mb(); 7f23532866f931 Gautham R. Shenoy 2015-09-02 3274 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3275 /* b4deba5c41e9f6 Paul Mackerras 2015-07-02 3276 * When doing micro-threading, poke the inactive threads as well. b4deba5c41e9f6 Paul Mackerras 2015-07-02 3277 * This gets them to the nap instruction after kvm_do_nap, b4deba5c41e9f6 Paul Mackerras 2015-07-02 3278 * which reduces the time taken to unsplit later. c01015091a7703 Paul Mackerras 2017-10-19 3279 * For POWER9 HPT guest on radix host, we need all the secondary c01015091a7703 Paul Mackerras 2017-10-19 3280 * threads woken up so they can do the LPCR/LPIDR change. 
b4deba5c41e9f6 Paul Mackerras 2015-07-02 3281 */ c01015091a7703 Paul Mackerras 2017-10-19 3282 if (cmd_bit || hpt_on_radix) { 516f7898ae20d9 Paul Mackerras 2017-10-16 3283 split_info.do_nap = 1; /* ask secondaries to nap when done */ b4deba5c41e9f6 Paul Mackerras 2015-07-02 3284 for (thr = 1; thr < threads_per_subcore; ++thr) b4deba5c41e9f6 Paul Mackerras 2015-07-02 3285 if (!(active & (1 << thr))) b4deba5c41e9f6 Paul Mackerras 2015-07-02 3286 kvmppc_ipi_thread(pcpu + thr); 516f7898ae20d9 Paul Mackerras 2017-10-16 3287 } e0b7ec058c0eb7 Paul Mackerras 2014-01-08 3288 2f12f03436847e Paul Mackerras 2012-10-15 3289 vc->vcore_state = VCORE_RUNNING; 19ccb76a1938ab Paul Mackerras 2011-07-23 3290 preempt_disable(); 3c78f78af95615 Suresh E. Warrier 2014-12-03 3291 3c78f78af95615 Suresh E. Warrier 2014-12-03 3292 trace_kvmppc_run_core(vc, 0); 3c78f78af95615 Suresh E. Warrier 2014-12-03 3293 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3294 for (sub = 0; sub < core_info.n_subcores; ++sub) 898b25b202f350 Paul Mackerras 2017-06-22 3295 spin_unlock(&core_info.vc[sub]->lock); 371fefd6f2dc46 Paul Mackerras 2011-06-29 3296 61bd0f66ff92d5 Laurent Vivier 2018-03-02 3297 guest_enter_irqoff(); 2c9097e4c13402 Paul Mackerras 2012-09-11 3298 e0b7ec058c0eb7 Paul Mackerras 2014-01-08 3299 srcu_idx = srcu_read_lock(&vc->kvm->srcu); 2c9097e4c13402 Paul Mackerras 2012-09-11 3300 a4bc64d305af40 Naveen N. Rao 2018-04-19 3301 this_cpu_disable_ftrace(); a4bc64d305af40 Naveen N. Rao 2018-04-19 3302 3309bec85e60d6 Alexey Kardashevskiy 2019-03-29 3303 /* 3309bec85e60d6 Alexey Kardashevskiy 2019-03-29 3304 * Interrupts will be enabled once we get into the guest, 3309bec85e60d6 Alexey Kardashevskiy 2019-03-29 3305 * so tell lockdep that we're about to enable interrupts. 
3309bec85e60d6 Alexey Kardashevskiy 2019-03-29 3306 */ 3309bec85e60d6 Alexey Kardashevskiy 2019-03-29 3307 trace_hardirqs_on(); 3309bec85e60d6 Alexey Kardashevskiy 2019-03-29 3308 8b24e69fc47e43 Paul Mackerras 2017-06-26 3309 trap = __kvmppc_vcore_entry(); de56a948b9182f Paul Mackerras 2011-06-29 3310 3309bec85e60d6 Alexey Kardashevskiy 2019-03-29 3311 trace_hardirqs_off(); 3309bec85e60d6 Alexey Kardashevskiy 2019-03-29 3312 a4bc64d305af40 Naveen N. Rao 2018-04-19 3313 this_cpu_enable_ftrace(); a4bc64d305af40 Naveen N. Rao 2018-04-19 3314 ec257165082616 Paul Mackerras 2015-06-24 3315 srcu_read_unlock(&vc->kvm->srcu, srcu_idx); ec257165082616 Paul Mackerras 2015-06-24 3316 8b24e69fc47e43 Paul Mackerras 2017-06-26 3317 set_irq_happened(trap); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3318 ec257165082616 Paul Mackerras 2015-06-24 3319 spin_lock(&vc->lock); 371fefd6f2dc46 Paul Mackerras 2011-06-29 3320 /* prevent other vcpu threads from doing kvmppc_start_thread() now */ 19ccb76a1938ab Paul Mackerras 2011-07-23 3321 vc->vcore_state = VCORE_EXITING; 371fefd6f2dc46 Paul Mackerras 2011-06-29 3322 19ccb76a1938ab Paul Mackerras 2011-07-23 3323 /* wait for secondary threads to finish writing their state to memory */ 516f7898ae20d9 Paul Mackerras 2017-10-16 3324 kvmppc_wait_for_nap(controlled_threads); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3325 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3326 /* Return to whole-core mode if we split the core earlier */ 516f7898ae20d9 Paul Mackerras 2017-10-16 3327 if (cmd_bit) { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3328 unsigned long hid0 = mfspr(SPRN_HID0); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3329 unsigned long loops = 0; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3330 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3331 hid0 &= ~HID0_POWER8_DYNLPARDIS; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3332 stat_bit = HID0_POWER8_2LPARMODE | HID0_POWER8_4LPARMODE; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3333 mb(); b4deba5c41e9f6 Paul Mackerras 
2015-07-02 3334 mtspr(SPRN_HID0, hid0); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3335 isync(); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3336 for (;;) { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3337 hid0 = mfspr(SPRN_HID0); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3338 if (!(hid0 & stat_bit)) b4deba5c41e9f6 Paul Mackerras 2015-07-02 3339 break; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3340 cpu_relax(); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3341 ++loops; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3342 } c01015091a7703 Paul Mackerras 2017-10-19 3343 } else if (hpt_on_radix) { c01015091a7703 Paul Mackerras 2017-10-19 3344 /* Wait for all threads to have seen final sync */ c01015091a7703 Paul Mackerras 2017-10-19 3345 for (thr = 1; thr < controlled_threads; ++thr) { d2e60075a3d442 Nicholas Piggin 2018-02-14 3346 struct paca_struct *paca = paca_ptrs[pcpu + thr]; d2e60075a3d442 Nicholas Piggin 2018-02-14 3347 d2e60075a3d442 Nicholas Piggin 2018-02-14 3348 while (paca->kvm_hstate.kvm_split_mode) { c01015091a7703 Paul Mackerras 2017-10-19 3349 HMT_low(); c01015091a7703 Paul Mackerras 2017-10-19 3350 barrier(); c01015091a7703 Paul Mackerras 2017-10-19 3351 } c01015091a7703 Paul Mackerras 2017-10-19 3352 HMT_medium(); c01015091a7703 Paul Mackerras 2017-10-19 3353 } b4deba5c41e9f6 Paul Mackerras 2015-07-02 3354 } c01015091a7703 Paul Mackerras 2017-10-19 3355 split_info.do_nap = 0; b4deba5c41e9f6 Paul Mackerras 2015-07-02 3356 8b24e69fc47e43 Paul Mackerras 2017-06-26 3357 kvmppc_set_host_core(pcpu); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3358 8b24e69fc47e43 Paul Mackerras 2017-06-26 3359 local_irq_enable(); 61bd0f66ff92d5 Laurent Vivier 2018-03-02 @3360 guest_exit(); 8b24e69fc47e43 Paul Mackerras 2017-06-26 3361 b4deba5c41e9f6 Paul Mackerras 2015-07-02 3362 /* Let secondaries go back to the offline loop */ 45c940ba490df2 Paul Mackerras 2016-11-18 3363 for (i = 0; i < controlled_threads; ++i) { b4deba5c41e9f6 Paul Mackerras 2015-07-02 3364 kvmppc_release_hwthread(pcpu + i); 
b4deba5c41e9f6 Paul Mackerras 2015-07-02 3365 if (sip && sip->napped[i]) b4deba5c41e9f6 Paul Mackerras 2015-07-02 3366 kvmppc_ipi_thread(pcpu + i); a29ebeaf5575d0 Paul Mackerras 2017-01-30 3367 cpumask_clear_cpu(pcpu + i, &vc->kvm->arch.cpu_in_guest); b4deba5c41e9f6 Paul Mackerras 2015-07-02 3368 } b4deba5c41e9f6 Paul Mackerras 2015-07-02 3369 371fefd6f2dc46 Paul Mackerras 2011-06-29 3370 spin_unlock(&vc->lock); 2c9097e4c13402 Paul Mackerras 2012-09-11 3371 371fefd6f2dc46 Paul Mackerras 2011-06-29 3372 /* make sure updates to secondary vcpu structs are visible now */ 371fefd6f2dc46 Paul Mackerras 2011-06-29 3373 smp_mb(); de56a948b9182f Paul Mackerras 2011-06-29 3374 36ee41d161c67a Paul Mackerras 2018-01-30 3375 preempt_enable(); 36ee41d161c67a Paul Mackerras 2018-01-30 3376 898b25b202f350 Paul Mackerras 2017-06-22 3377 for (sub = 0; sub < core_info.n_subcores; ++sub) { 898b25b202f350 Paul Mackerras 2017-06-22 3378 pvc = core_info.vc[sub]; ec257165082616 Paul Mackerras 2015-06-24 3379 post_guest_process(pvc, pvc == vc); 898b25b202f350 Paul Mackerras 2017-06-22 3380 } de56a948b9182f Paul Mackerras 2011-06-29 3381 913d3ff9a3c3a1 Paul Mackerras 2012-10-15 3382 spin_lock(&vc->lock); de56a948b9182f Paul Mackerras 2011-06-29 3383 de56a948b9182f Paul Mackerras 2011-06-29 3384 out: 19ccb76a1938ab Paul Mackerras 2011-07-23 3385 vc->vcore_state = VCORE_INACTIVE; 3c78f78af95615 Suresh E. 
Warrier 2014-12-03 3386 trace_kvmppc_run_core(vc, 1); 371fefd6f2dc46 Paul Mackerras 2011-06-29 3387 } 371fefd6f2dc46 Paul Mackerras 2011-06-29 3388

:::::: The code at line 3360 was first introduced by commit
:::::: 61bd0f66ff92d5ce765ff9850fd3cbfec773c560 KVM: PPC: Book3S HV: Fix guest time accounting with VIRT_CPU_ACCOUNTING_GEN
:::::: TO: Laurent Vivier <lvivier@redhat.com>
:::::: CC: Paul Mackerras <paulus@ozlabs.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org                Intel Corporation

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 25639 bytes --]
* Re: [PATCH] KVM: remove unused guest_enter/exit
  2020-01-10 3:13 [PATCH] KVM: remove unused guest_enter/exit Alex Shi
@ 2020-01-11 5:28 ` kbuild test robot
  2020-01-10 23:10 ` kbuild test robot
  2020-01-11 11:32 ` Paolo Bonzini
  2 siblings, 0 replies; 7+ messages in thread
From: kbuild test robot @ 2020-01-11 5:28 UTC (permalink / raw)
  To: Alex Shi
  Cc: kbuild-all, Paolo Bonzini, Peter Zijlstra, Ingo Molnar,
	Frederic Weisbecker, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 20730 bytes --]

Hi Alex,

I love your patch! Yet something to improve:
[auto build test ERROR on kvm/linux-next]
[also build test ERROR on linux/master linus/master v5.5-rc5 next-20200109]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Alex-Shi/KVM-remove-unused-guest_enter-exit/20200111-004903
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: arm64-randconfig-a001-20200110 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 7.5.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.5.0 make.cross ARCH=arm64

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/arm64/kvm/../../../virt/kvm/arm/arm.c: In function 'kvm_arch_vcpu_ioctl_run':
>> arch/arm64/kvm/../../../virt/kvm/arm/arm.c:861:3: error: implicit declaration of function 'guest_exit'; did you mean 'user_exit'?
[-Werror=implicit-function-declaration] guest_exit(); ^~~~~~~~~~ user_exit cc1: some warnings being treated as errors vim +861 arch/arm64/kvm/../../../virt/kvm/arm/arm.c 0592c005622582 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 687 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 688 /** f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 689 * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 690 * @vcpu: The VCPU pointer f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 691 * @run: The kvm_run structure pointer used for userspace state exchange f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 692 * f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 693 * This function is called through the VCPU_RUN ioctl called from user space. It f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 694 * will execute VM code in a loop until the time slice for the process is used f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 695 * or some emulation is needed from user space in which case the function will f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 696 * return with return value 0 and with the kvm_run structure filled in with the f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 697 * required data for the requested emulation. 
f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 698 */ 749cf76c5a363e arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 699 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) 749cf76c5a363e arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 700 { f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 701 int ret; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 702 e8180dcaa8470c arch/arm/kvm/arm.c Andre Przywara 2013-05-09 703 if (unlikely(!kvm_vcpu_initialized(vcpu))) f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 704 return -ENOEXEC; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 705 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 706 ret = kvm_vcpu_first_run_init(vcpu); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 707 if (ret) 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 708 return ret; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 709 45e96ea6b36953 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 710 if (run->exit_reason == KVM_EXIT_MMIO) { 45e96ea6b36953 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 711 ret = kvm_handle_mmio_return(vcpu, vcpu->run); 45e96ea6b36953 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 712 if (ret) 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 713 return ret; accb757d798c9b virt/kvm/arm/arm.c Christoffer Dall 2017-12-04 714 } 1eb591288b956b virt/kvm/arm/arm.c Alex Bennée 2017-11-16 715 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 716 if (run->immediate_exit) 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 717 return -EINTR; 45e96ea6b36953 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 718 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 719 vcpu_load(vcpu); 460df4c1fc7c00 arch/arm/kvm/arm.c Paolo Bonzini 2017-02-08 720 20b7035c66bacc virt/kvm/arm/arm.c Jan H. 
Schönherr 2017-11-24 721 kvm_sigset_activate(vcpu); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 722 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 723 ret = 1; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 724 run->exit_reason = KVM_EXIT_UNKNOWN; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 725 while (ret > 0) { f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 726 /* f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 727 * Check conditions before entering the guest f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 728 */ f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 729 cond_resched(); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 730 e329fb75d519e3 virt/kvm/arm/arm.c Christoffer Dall 2018-12-11 731 update_vmid(&vcpu->kvm->arch.vmid); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 732 0592c005622582 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 733 check_vcpu_requests(vcpu); 0592c005622582 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 734 abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 735 /* abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 736 * Preparing the interrupts to be injected also abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 737 * involves poking the GIC, which must be done in a abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 738 * non-preemptible context. 
abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 739 */ 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 740 preempt_disable(); 328e5664794491 arch/arm/kvm/arm.c Christoffer Dall 2016-03-24 741 b02386eb7dac75 arch/arm/kvm/arm.c Shannon Zhao 2016-02-26 742 kvm_pmu_flush_hwstate(vcpu); 328e5664794491 arch/arm/kvm/arm.c Christoffer Dall 2016-03-24 743 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 744 local_irq_disable(); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 745 abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 746 kvm_vgic_flush_hwstate(vcpu); abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 747 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 748 /* 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 749 * Exit if we have a signal pending so that we can deliver the 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 750 * signal to user space. f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 751 */ 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 752 if (signal_pending(current)) { 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 753 ret = -EINTR; 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 754 run->exit_reason = KVM_EXIT_INTR; 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 755 } 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 756 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 757 /* 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 758 * If we're using a userspace irqchip, then check if we need 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 759 * to tell a userspace irqchip about timer or PMU level 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 760 * changes and if so, exit to userspace (the actual level 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 761 * state gets updated in kvm_timer_update_run 
and 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 762 * kvm_pmu_update_run below). 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 763 */ 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 764 if (static_branch_unlikely(&userspace_irqchip_in_use)) { 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 765 if (kvm_timer_should_notify_user(vcpu) || 3dbbdf78636e66 arch/arm/kvm/arm.c Christoffer Dall 2017-02-01 766 kvm_pmu_should_notify_user(vcpu)) { f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 767 ret = -EINTR; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 768 run->exit_reason = KVM_EXIT_INTR; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 769 } 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 770 } f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 771 6a6d73be12fbe4 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 772 /* 6a6d73be12fbe4 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 773 * Ensure we set mode to IN_GUEST_MODE after we disable 6a6d73be12fbe4 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 774 * interrupts and before the final VCPU requests check. 
6a6d73be12fbe4 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 775 * See the comment in kvm_vcpu_exiting_guest_mode() and 2f5947dfcaecb9 virt/kvm/arm/arm.c Christoph Hellwig 2019-07-24 776 * Documentation/virt/kvm/vcpu-requests.rst 6a6d73be12fbe4 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 777 */ 6a6d73be12fbe4 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 778 smp_store_mb(vcpu->mode, IN_GUEST_MODE); 6a6d73be12fbe4 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 779 e329fb75d519e3 virt/kvm/arm/arm.c Christoffer Dall 2018-12-11 780 if (ret <= 0 || need_new_vmid_gen(&vcpu->kvm->arch.vmid) || 424c989b1a664a virt/kvm/arm/arm.c Andrew Jones 2017-06-04 781 kvm_request_pending(vcpu)) { 6a6d73be12fbe4 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 782 vcpu->mode = OUTSIDE_GUEST_MODE; 771621b0e2f806 virt/kvm/arm/arm.c Christoffer Dall 2017-10-04 783 isb(); /* Ensure work in x_flush_hwstate is committed */ b02386eb7dac75 arch/arm/kvm/arm.c Shannon Zhao 2016-02-26 784 kvm_pmu_sync_hwstate(vcpu); 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 785 if (static_branch_unlikely(&userspace_irqchip_in_use)) 4b4b4512da2a84 arch/arm/kvm/arm.c Christoffer Dall 2015-08-30 786 kvm_timer_sync_hwstate(vcpu); 1a89dd9113badd arch/arm/kvm/arm.c Marc Zyngier 2013-01-21 787 kvm_vgic_sync_hwstate(vcpu); ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 788 local_irq_enable(); abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 789 preempt_enable(); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 790 continue; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 791 } f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 792 56c7f5e77f797f arch/arm/kvm/arm.c Alex Bennée 2015-07-07 793 kvm_arm_setup_debug(vcpu); 56c7f5e77f797f arch/arm/kvm/arm.c Alex Bennée 2015-07-07 794 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 795 /************************************************************** f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 
2013-01-20 796 * Enter the guest f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 797 */ f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 798 trace_kvm_entry(*vcpu_pc(vcpu)); 6edaa5307f3f51 arch/arm/kvm/arm.c Paolo Bonzini 2016-06-15 799 guest_enter_irqoff(); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 800 3f5c90b890acfa virt/kvm/arm/arm.c Christoffer Dall 2017-10-03 801 if (has_vhe()) { 3f5c90b890acfa virt/kvm/arm/arm.c Christoffer Dall 2017-10-03 802 kvm_arm_vhe_guest_enter(); 3f5c90b890acfa virt/kvm/arm/arm.c Christoffer Dall 2017-10-03 803 ret = kvm_vcpu_run_vhe(vcpu); 4f5abad9e826bd virt/kvm/arm/arm.c James Morse 2018-01-15 804 kvm_arm_vhe_guest_exit(); 3f5c90b890acfa virt/kvm/arm/arm.c Christoffer Dall 2017-10-03 805 } else { 7aa8d14641651a virt/kvm/arm/arm.c Marc Zyngier 2019-01-05 806 ret = kvm_call_hyp_ret(__kvm_vcpu_run_nvhe, vcpu); 3f5c90b890acfa virt/kvm/arm/arm.c Christoffer Dall 2017-10-03 807 } 3f5c90b890acfa virt/kvm/arm/arm.c Christoffer Dall 2017-10-03 808 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 809 vcpu->mode = OUTSIDE_GUEST_MODE; b19e6892a90e7c arch/arm/kvm/arm.c Amit Tomar 2015-11-26 810 vcpu->stat.exits++; 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 811 /* 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 812 * Back from guest 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 813 *************************************************************/ 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 814 56c7f5e77f797f arch/arm/kvm/arm.c Alex Bennée 2015-07-07 815 kvm_arm_clear_debug(vcpu); 56c7f5e77f797f arch/arm/kvm/arm.c Alex Bennée 2015-07-07 816 ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 817 /* b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 818 * We must sync the PMU state before the vgic state so ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 819 * that the vgic can properly sample 
the updated state of the ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 820 * interrupt line. ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 821 */ ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 822 kvm_pmu_sync_hwstate(vcpu); ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 823 b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 824 /* b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 825 * Sync the vgic state before syncing the timer state because b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 826 * the timer code needs to know if the virtual timer b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 827 * interrupts are active. b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 828 */ ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 829 kvm_vgic_sync_hwstate(vcpu); ee9bb9a1e3c6e4 virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 830 b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 831 /* b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 832 * Sync the timer hardware state before enabling interrupts as b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 833 * we don't want vtimer interrupts to race with syncing the b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 834 * timer virtual interrupt state. 
b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 835 */ 61bbe380273347 virt/kvm/arm/arm.c Christoffer Dall 2017-10-27 836 if (static_branch_unlikely(&userspace_irqchip_in_use)) b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 837 kvm_timer_sync_hwstate(vcpu); b103cc3f10c06f virt/kvm/arm/arm.c Christoffer Dall 2016-10-16 838 e6b673b741ea0d virt/kvm/arm/arm.c Dave Martin 2018-04-06 839 kvm_arch_vcpu_ctxsync_fp(vcpu); e6b673b741ea0d virt/kvm/arm/arm.c Dave Martin 2018-04-06 840 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 841 /* f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 842 * We may have taken a host interrupt in HYP mode (ie f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 843 * while executing the guest). This interrupt is still f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 844 * pending, as we haven't serviced it yet! f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 845 * f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 846 * We're now back in SVC mode, with interrupts f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 847 * disabled. Enabling the interrupts now will have f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 848 * the effect of taking the interrupt again, in SVC f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 849 * mode this time. 
f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 850 */ f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 851 local_irq_enable(); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 852 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 853 /* 6edaa5307f3f51 arch/arm/kvm/arm.c Paolo Bonzini 2016-06-15 854 * We do local_irq_enable() before calling guest_exit() so 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 855 * that if a timer interrupt hits while running the guest we 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 856 * account that tick as being spent in the guest. We enable 6edaa5307f3f51 arch/arm/kvm/arm.c Paolo Bonzini 2016-06-15 857 * preemption after calling guest_exit() so that if we get 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 858 * preempted we make sure ticks after that is not counted as 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 859 * guest time. 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 860 */ 6edaa5307f3f51 arch/arm/kvm/arm.c Paolo Bonzini 2016-06-15 @861 guest_exit(); b5905dc12ed425 arch/arm/kvm/arm.c Christoffer Dall 2015-08-30 862 trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu)); 1b3d546daf85ed arch/arm/kvm/arm.c Christoffer Dall 2015-05-28 863 3368bd809764d3 virt/kvm/arm/arm.c James Morse 2018-01-15 864 /* Exit types that need handling before we can be preempted */ 3368bd809764d3 virt/kvm/arm/arm.c James Morse 2018-01-15 865 handle_exit_early(vcpu, run, ret); 3368bd809764d3 virt/kvm/arm/arm.c James Morse 2018-01-15 866 abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 867 preempt_enable(); abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 868 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 869 ret = handle_exit(vcpu, run, ret); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 870 } f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 871 
d9e1397783765a arch/arm/kvm/arm.c Alexander Graf 2016-09-27 872 /* Tell userspace about in-kernel device output levels */ 3dbbdf78636e66 arch/arm/kvm/arm.c Christoffer Dall 2017-02-01 873 if (unlikely(!irqchip_in_kernel(vcpu->kvm))) { d9e1397783765a arch/arm/kvm/arm.c Alexander Graf 2016-09-27 874 kvm_timer_update_run(vcpu); 3dbbdf78636e66 arch/arm/kvm/arm.c Christoffer Dall 2017-02-01 875 kvm_pmu_update_run(vcpu); 3dbbdf78636e66 arch/arm/kvm/arm.c Christoffer Dall 2017-02-01 876 } d9e1397783765a arch/arm/kvm/arm.c Alexander Graf 2016-09-27 877 20b7035c66bacc virt/kvm/arm/arm.c Jan H. Schönherr 2017-11-24 878 kvm_sigset_deactivate(vcpu); 20b7035c66bacc virt/kvm/arm/arm.c Jan H. Schönherr 2017-11-24 879 accb757d798c9b virt/kvm/arm/arm.c Christoffer Dall 2017-12-04 880 vcpu_put(vcpu); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 881 return ret; 749cf76c5a363e arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 882 } 749cf76c5a363e arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 883 :::::: The code at line 861 was first introduced by commit :::::: 6edaa5307f3f51e4e56dc4c63f68a69d88c6ddf5 KVM: remove kvm_guest_enter/exit wrappers :::::: TO: Paolo Bonzini <pbonzini@redhat.com> :::::: CC: Paolo Bonzini <pbonzini@redhat.com> --- 0-DAY kernel test infrastructure Open Source Technology Center https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 32260 bytes --] ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH] KVM: remove unused guest_enter/exit @ 2020-01-11 5:28 ` kbuild test robot 0 siblings, 0 replies; 7+ messages in thread From: kbuild test robot @ 2020-01-11 5:28 UTC (permalink / raw) To: kbuild-all [-- Attachment #1: Type: text/plain, Size: 20981 bytes --] Hi Alex, I love your patch! Yet something to improve: [auto build test ERROR on kvm/linux-next] [also build test ERROR on linux/master linus/master v5.5-rc5 next-20200109] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system. BTW, we also suggest to use '--base' option to specify the base tree in git format-patch, please see https://stackoverflow.com/a/37406982] url: https://github.com/0day-ci/linux/commits/Alex-Shi/KVM-remove-unused-guest_enter-exit/20200111-004903 base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next config: arm64-randconfig-a001-20200110 (attached as .config) compiler: aarch64-linux-gcc (GCC) 7.5.0 reproduce: wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # save the attached .config to linux build tree GCC_VERSION=7.5.0 make.cross ARCH=arm64 If you fix the issue, kindly add following tag Reported-by: kbuild test robot <lkp@intel.com> All errors (new ones prefixed by >>): arch/arm64/kvm/../../../virt/kvm/arm/arm.c: In function 'kvm_arch_vcpu_ioctl_run': >> arch/arm64/kvm/../../../virt/kvm/arm/arm.c:861:3: error: implicit declaration of function 'guest_exit'; did you mean 'user_exit'? 
[-Werror=implicit-function-declaration] guest_exit(); ^~~~~~~~~~ user_exit cc1: some warnings being treated as errors vim +861 arch/arm64/kvm/../../../virt/kvm/arm/arm.c 0592c005622582 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 687 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 688 /** f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 689 * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 690 * @vcpu: The VCPU pointer f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 691 * @run: The kvm_run structure pointer used for userspace state exchange f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 692 * f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 693 * This function is called through the VCPU_RUN ioctl called from user space. It f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 694 * will execute VM code in a loop until the time slice for the process is used f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 695 * or some emulation is needed from user space in which case the function will f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 696 * return with return value 0 and with the kvm_run structure filled in with the f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 697 * required data for the requested emulation. 
f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 698 */ 749cf76c5a363e arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 699 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) 749cf76c5a363e arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 700 { f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 701 int ret; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 702 e8180dcaa8470c arch/arm/kvm/arm.c Andre Przywara 2013-05-09 703 if (unlikely(!kvm_vcpu_initialized(vcpu))) f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 704 return -ENOEXEC; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 705 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 706 ret = kvm_vcpu_first_run_init(vcpu); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 707 if (ret) 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 708 return ret; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 709 45e96ea6b36953 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 710 if (run->exit_reason == KVM_EXIT_MMIO) { 45e96ea6b36953 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 711 ret = kvm_handle_mmio_return(vcpu, vcpu->run); 45e96ea6b36953 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 712 if (ret) 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 713 return ret; accb757d798c9b virt/kvm/arm/arm.c Christoffer Dall 2017-12-04 714 } 1eb591288b956b virt/kvm/arm/arm.c Alex Bennée 2017-11-16 715 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 716 if (run->immediate_exit) 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 717 return -EINTR; 45e96ea6b36953 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 718 829a58635497d7 virt/kvm/arm/arm.c Christoffer Dall 2017-11-29 719 vcpu_load(vcpu); 460df4c1fc7c00 arch/arm/kvm/arm.c Paolo Bonzini 2017-02-08 720 20b7035c66bacc virt/kvm/arm/arm.c Jan H. 
Schönherr 2017-11-24 721 kvm_sigset_activate(vcpu); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 722 f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 723 ret = 1; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 724 run->exit_reason = KVM_EXIT_UNKNOWN; f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 725 while (ret > 0) { f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 726 /* f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 727 * Check conditions before entering the guest f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 728 */ f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 729 cond_resched(); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 730 e329fb75d519e3 virt/kvm/arm/arm.c Christoffer Dall 2018-12-11 731 update_vmid(&vcpu->kvm->arch.vmid); f7ed45be3ba524 arch/arm/kvm/arm.c Christoffer Dall 2013-01-20 732 0592c005622582 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 733 check_vcpu_requests(vcpu); 0592c005622582 virt/kvm/arm/arm.c Andrew Jones 2017-06-04 734 abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 735 /* abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 736 * Preparing the interrupts to be injected also abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 737 * involves poking the GIC, which must be done in a abdf58438356c7 arch/arm/kvm/arm.c Marc Zyngier 2015-06-08 738 * non-preemptible context. 
[identical vim context for arm.c lines 739-883 snipped; see the listing in the first robot report above] --- 0-DAY kernel test infrastructure Open Source Technology Center https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation [-- Attachment #2: config.gz --] [-- Type: application/gzip, Size: 32260 bytes --] ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH] KVM: remove unused guest_enter/exit 2020-01-10 3:13 [PATCH] KVM: remove unused guest_enter/exit Alex Shi 2020-01-10 23:10 ` kbuild test robot 2020-01-11 5:28 ` kbuild test robot @ 2020-01-11 11:32 ` Paolo Bonzini 2020-01-11 13:21 ` Alex Shi 2 siblings, 1 reply; 7+ messages in thread From: Paolo Bonzini @ 2020-01-11 11:32 UTC (permalink / raw) To: Alex Shi; +Cc: Peter Zijlstra, Ingo Molnar, Frederic Weisbecker, linux-kernel On 10/01/20 04:13, Alex Shi wrote: > After commit 6edaa5307f3f ("KVM: remove kvm_guest_enter/exit wrappers") > no one uses guest_enter/exit anymore. > > So better to remove them to simplify code and reduced a bit of object > size. There is no reduction in object size, since these are inlines. But PPC still uses them. Paolo ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH] KVM: remove unused guest_enter/exit 2020-01-11 11:32 ` Paolo Bonzini @ 2020-01-11 13:21 ` Alex Shi 0 siblings, 0 replies; 7+ messages in thread From: Alex Shi @ 2020-01-11 13:21 UTC (permalink / raw) To: Paolo Bonzini Cc: Peter Zijlstra, Ingo Molnar, Frederic Weisbecker, linux-kernel On 2020/1/11 7:32 PM, Paolo Bonzini wrote: > On 10/01/20 04:13, Alex Shi wrote: >> After commit 6edaa5307f3f ("KVM: remove kvm_guest_enter/exit wrappers") >> no one uses guest_enter/exit anymore. >> >> So better to remove them to simplify code and reduced a bit of object >> size. > > There is no reduction in object size, since these are inlines. But PPC > still uses them. > > Paolo > Thanks a lot, Paolo. It was my fault to omit checking for guest_exit. Yes, guest_exit is still in use, but guest_enter isn't; no one uses it. So how about this? Thanks, Alex --- From 5770c6b8b43adc1e26ecfe696488ccc01896ebfd Mon Sep 17 00:00:00 2001 From: Alex Shi <alex.shi@linux.alibaba.com> Date: Sat, 11 Jan 2020 20:25:45 +0800 Subject: [PATCH] KVM: remove unused guest_enter No one uses guest_enter anymore, so better to remove it.
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: linux-kernel@vger.kernel.org --- include/linux/context_tracking.h | 9 --------- 1 file changed, 9 deletions(-) diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index 64ec82851aa3..8150f5ac176c 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -154,15 +154,6 @@ static inline void guest_exit_irqoff(void) } #endif /* CONFIG_VIRT_CPU_ACCOUNTING_GEN */ -static inline void guest_enter(void) -{ - unsigned long flags; - - local_irq_save(flags); - guest_enter_irqoff(); - local_irq_restore(flags); -} - static inline void guest_exit(void) { unsigned long flags; -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 7+ messages in thread
end of thread, other threads:[~2020-01-11 13:22 UTC | newest] Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2020-01-10 3:13 [PATCH] KVM: remove unused guest_enter/exit Alex Shi 2020-01-10 23:10 ` kbuild test robot 2020-01-11 5:28 ` kbuild test robot 2020-01-11 11:32 ` Paolo Bonzini 2020-01-11 13:21 ` Alex Shi