Hi Peter,

[FYI, it's a private test report for your RFC patch.]
[auto build test ERROR on tip/sched/core]
[also build test ERROR on tip/auto-latest next-20200526]
[cannot apply to linus/master linux/master v5.7-rc7]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:      https://github.com/0day-ci/linux/commits/Peter-Zijlstra/Fix-the-scheduler-IPI-mess/20200527-010828
base:     https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 2ebb17717550607bcd85fb8cf7d24ac870e9d762
config:   i386-tinyconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-13) 9.3.0
reproduce (this is a W=1 build):
        # save the attached .config to linux build tree
        make W=1 ARCH=i386

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot

All errors (new ones prefixed by >>, old ones prefixed by <<):

   kernel/sched/idle.c: In function 'do_idle':
>> kernel/sched/idle.c:292:2: error: implicit declaration of function 'flush_smp_call_function_from_idle' [-Werror=implicit-function-declaration]
     292 |  flush_smp_call_function_from_idle();
         |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors

vim +/flush_smp_call_function_from_idle +292 kernel/sched/idle.c

   225	
   226	/*
   227	 * Generic idle loop implementation
   228	 *
   229	 * Called with polling cleared.
   230	 */
   231	static void do_idle(void)
   232	{
   233		int cpu = smp_processor_id();
   234		/*
   235		 * If the arch has a polling bit, we maintain an invariant:
   236		 *
   237		 * Our polling bit is clear if we're not scheduled (i.e. if rq->curr !=
   238		 * rq->idle). This means that, if rq->idle has the polling bit set,
   239		 * then setting need_resched is guaranteed to cause the CPU to
   240		 * reschedule.
   241		 */
   242	
   243		__current_set_polling();
   244		tick_nohz_idle_enter();
   245	
   246		while (!need_resched()) {
   247			rmb();
   248	
   249			local_irq_disable();
   250	
   251			if (cpu_is_offline(cpu)) {
   252				tick_nohz_idle_stop_tick();
   253				cpuhp_report_idle_dead();
   254				arch_cpu_idle_dead();
   255			}
   256	
   257			arch_cpu_idle_enter();
   258	
   259			/*
   260			 * In poll mode we reenable interrupts and spin. Also if we
   261			 * detected in the wakeup from idle path that the tick
   262			 * broadcast device expired for us, we don't want to go deep
   263			 * idle as we know that the IPI is going to arrive right away.
   264			 */
   265			if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
   266				tick_nohz_idle_restart_tick();
   267				cpu_idle_poll();
   268			} else {
   269				cpuidle_idle_call();
   270			}
   271			arch_cpu_idle_exit();
   272		}
   273	
   274		/*
   275		 * Since we fell out of the loop above, we know TIF_NEED_RESCHED must
   276		 * be set, propagate it into PREEMPT_NEED_RESCHED.
   277		 *
   278		 * This is required because for polling idle loops we will not have had
   279		 * an IPI to fold the state for us.
   280		 */
   281		preempt_set_need_resched();
   282		tick_nohz_idle_exit();
   283		__current_clr_polling();
   284	
   285		/*
   286		 * We promise to call sched_ttwu_pending() and reschedule if
   287		 * need_resched() is set while polling is set. That means that clearing
   288		 * polling needs to be visible before doing these things.
   289		 */
   290		smp_mb__after_atomic();
   291	
 > 292		flush_smp_call_function_from_idle();
   293		sched_ttwu_pending();
   294		schedule_idle();
   295	
   296		if (unlikely(klp_patch_pending(current)))
   297			klp_update_patch_state(current);
   298	}
   299	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
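
For context, i386-tinyconfig builds without CONFIG_SMP, so a declaration of
flush_smp_call_function_from_idle() that only exists for SMP builds is not
visible here, and the call added in do_idle() trips
-Werror=implicit-function-declaration. A minimal sketch of one way to make a
declaration reachable in both configurations, assuming the function itself is
only implemented for SMP (the header placement and the no-op stub below are
illustrative, not the actual patch):

	/* Sketch only: hypothetical declaration in a scheduler-internal header.
	 * Assumes the SMP implementation lives in kernel/smp.c; on !CONFIG_SMP
	 * there are no cross-CPU function calls to flush from the idle loop,
	 * so a static inline no-op keeps do_idle() building unchanged.
	 */
	#ifdef CONFIG_SMP
	extern void flush_smp_call_function_from_idle(void);
	#else
	static inline void flush_smp_call_function_from_idle(void) { }
	#endif

Whether the eventual fix adds such a stub or instead guards the call site is
up to the patch author; the point is only that kernel/sched/idle.c needs to
see a declaration for this config.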