* [peterz-queue:sched/wip2 9/15] kernel/sched/core.c:4689:3: error: implicit declaration of function 'migrate_disable_switch'
@ 2020-10-02 12:14 kernel test robot
From: kernel test robot @ 2020-10-02 12:14 UTC (permalink / raw)
To: kbuild-all
tree: https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/wip2
head: 954695e1b6f8a92e8f7e90dcf6c51a14142f33c4
commit: 5501990a0976cead69e63b63850fe84ba375b6f1 [9/15] sched: Add migrate_disable()
config: arm-randconfig-r002-20201002 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project bcd05599d0e53977a963799d6ee4f6e0bc21331b)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm cross compiling tool for clang build
# apt-get install binutils-arm-linux-gnueabi
# https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/commit/?id=5501990a0976cead69e63b63850fe84ba375b6f1
git remote add peterz-queue https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git
git fetch --no-tags peterz-queue sched/wip2
git checkout 5501990a0976cead69e63b63850fe84ba375b6f1
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
>> kernel/sched/core.c:4689:3: error: implicit declaration of function 'migrate_disable_switch' [-Werror,-Wimplicit-function-declaration]
migrate_disable_switch(rq, prev);
^
kernel/sched/core.c:4689:3: note: did you mean 'migrate_disable'?
include/linux/preempt.h:393:29: note: 'migrate_disable' declared here
static __always_inline void migrate_disable(void)
^
kernel/sched/core.c:4804:35: warning: no previous prototype for function 'schedule_user' [-Wmissing-prototypes]
asmlinkage __visible void __sched schedule_user(void)
^
kernel/sched/core.c:4804:22: note: declare 'static' if the function is not intended to be used outside of this translation unit
asmlinkage __visible void __sched schedule_user(void)
^
static
1 warning and 1 error generated.
vim +/migrate_disable_switch +4689 kernel/sched/core.c
4536
  4537	/*
  4538	 * __schedule() is the main scheduler function.
  4539	 *
  4540	 * The main means of driving the scheduler and thus entering this function are:
  4541	 *
  4542	 *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
  4543	 *
  4544	 *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
  4545	 *      paths. For example, see arch/x86/entry_64.S.
  4546	 *
  4547	 *      To drive preemption between tasks, the scheduler sets the flag in timer
  4548	 *      interrupt handler scheduler_tick().
  4549	 *
  4550	 *   3. Wakeups don't really cause entry into schedule(). They add a
  4551	 *      task to the run-queue and that's it.
  4552	 *
  4553	 *      Now, if the new task added to the run-queue preempts the current
  4554	 *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
  4555	 *      called on the nearest possible occasion:
  4556	 *
  4557	 *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
  4558	 *
  4559	 *         - in syscall or exception context, at the next outmost
  4560	 *           preempt_enable(). (this might be as soon as the wake_up()'s
  4561	 *           spin_unlock()!)
  4562	 *
  4563	 *         - in IRQ context, return from interrupt-handler to
  4564	 *           preemptible context
  4565	 *
  4566	 *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
  4567	 *         then at the next:
  4568	 *
  4569	 *          - cond_resched() call
  4570	 *          - explicit schedule() call
  4571	 *          - return from syscall or exception to user-space
  4572	 *          - return from interrupt-handler to user-space
  4573	 *
  4574	 * WARNING: must be called with preemption disabled!
  4575	 */
  4576	static void __sched notrace __schedule(bool preempt)
  4577	{
  4578		struct task_struct *prev, *next;
  4579		unsigned long *switch_count;
  4580		unsigned long prev_state;
  4581		struct rq_flags rf;
  4582		struct rq *rq;
  4583		int cpu;
  4584	
  4585		cpu = smp_processor_id();
  4586		rq = cpu_rq(cpu);
  4587		prev = rq->curr;
  4588	
  4589		schedule_debug(prev, preempt);
  4590	
  4591		if (sched_feat(HRTICK))
  4592			hrtick_clear(rq);
  4593	
  4594		local_irq_disable();
  4595		rcu_note_context_switch(preempt);
  4596	
  4597		/*
  4598		 * Make sure that signal_pending_state()->signal_pending() below
  4599		 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
  4600		 * done by the caller to avoid the race with signal_wake_up():
  4601		 *
  4602		 * __set_current_state(@state)		signal_wake_up()
  4603		 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
  4604		 *					  wake_up_state(p, state)
  4605		 *   LOCK rq->lock			    LOCK p->pi_state
  4606		 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
  4607		 *   if (signal_pending_state())	    if (p->state & @state)
  4608		 *
  4609		 * Also, the membarrier system call requires a full memory barrier
  4610		 * after coming from user-space, before storing to rq->curr.
  4611		 */
  4612		rq_lock(rq, &rf);
  4613		smp_mb__after_spinlock();
  4614	
  4615		/* Promote REQ to ACT */
  4616		rq->clock_update_flags <<= 1;
  4617		update_rq_clock(rq);
  4618	
  4619		switch_count = &prev->nivcsw;
  4620	
  4621		/*
  4622		 * We must load prev->state once (task_struct::state is volatile), such
  4623		 * that:
  4624		 *
  4625		 * - we form a control dependency vs deactivate_task() below.
  4626		 * - ptrace_{,un}freeze_traced() can change ->state underneath us.
  4627		 */
  4628		prev_state = prev->state;
  4629		if (!preempt && prev_state) {
  4630			if (signal_pending_state(prev_state, prev)) {
  4631				prev->state = TASK_RUNNING;
  4632			} else {
  4633				prev->sched_contributes_to_load =
  4634					(prev_state & TASK_UNINTERRUPTIBLE) &&
  4635					!(prev_state & TASK_NOLOAD) &&
  4636					!(prev->flags & PF_FROZEN);
  4637	
  4638				if (prev->sched_contributes_to_load)
  4639					rq->nr_uninterruptible++;
  4640	
  4641				/*
  4642				 * __schedule()			ttwu()
  4643				 *   prev_state = prev->state;	  if (p->on_rq && ...)
  4644				 *   if (prev_state)		    goto out;
  4645				 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
  4646				 *				  p->state = TASK_WAKING
  4647				 *
  4648				 * Where __schedule() and ttwu() have matching control dependencies.
  4649				 *
  4650				 * After this, schedule() must not care about p->state any more.
  4651				 */
  4652				deactivate_task(rq, prev, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
  4653	
  4654				if (prev->in_iowait) {
  4655					atomic_inc(&rq->nr_iowait);
  4656					delayacct_blkio_start();
  4657				}
  4658			}
  4659			switch_count = &prev->nvcsw;
  4660		}
  4661	
  4662		next = pick_next_task(rq, prev, &rf);
  4663		clear_tsk_need_resched(prev);
  4664		clear_preempt_need_resched();
  4665	
  4666		if (likely(prev != next)) {
  4667			rq->nr_switches++;
  4668			/*
  4669			 * RCU users of rcu_dereference(rq->curr) may not see
  4670			 * changes to task_struct made by pick_next_task().
  4671			 */
  4672			RCU_INIT_POINTER(rq->curr, next);
  4673			/*
  4674			 * The membarrier system call requires each architecture
  4675			 * to have a full memory barrier after updating
  4676			 * rq->curr, before returning to user-space.
  4677			 *
  4678			 * Here are the schemes providing that barrier on the
  4679			 * various architectures:
  4680			 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
  4681			 *   switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
  4682			 * - finish_lock_switch() for weakly-ordered
  4683			 *   architectures where spin_unlock is a full barrier,
  4684			 * - switch_to() for arm64 (weakly-ordered, spin_unlock
  4685			 *   is a RELEASE barrier),
  4686			 */
  4687			++*switch_count;
  4688	
> 4689			migrate_disable_switch(rq, prev);
  4690			psi_sched_switch(prev, next, !task_on_rq_queued(prev));
  4691	
  4692			trace_sched_switch(preempt, prev, next);
  4693	
  4694			/* Also unlocks the rq: */
  4695			rq = context_switch(rq, prev, next, &rf);
  4696		} else {
  4697			rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);
  4698	
  4699			rq_unpin_lock(rq, &rf);
  4700			__balance_callbacks(rq);
  4701			raw_spin_unlock_irq(&rq->lock);
  4702		}
  4703	}
4704
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 26819 bytes --]