* [tip:sched/core 2/5] kernel/sched/fair.c:6345:28: error: 'sched_smt_present' undeclared
@ 2021-04-09 13:58 ` kernel test robot
  0 siblings, 0 replies; 2+ messages in thread
From: kernel test robot @ 2021-04-09 13:58 UTC (permalink / raw)
  To: Rik van Riel; +Cc: kbuild-all, linux-kernel, x86, Peter Zijlstra, Mel Gorman

[-- Attachment #1: Type: text/plain, Size: 9969 bytes --]

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
head:   816969e4af7a56bfd284d2e0fa11511900ab93e3
commit: 6bcd3e21ba278098920d26d4888f5e6f4087c61d [2/5] sched/fair: Bring back select_idle_smt(), but differently
config: ia64-randconfig-r034-20210409 (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?id=6bcd3e21ba278098920d26d4888f5e6f4087c61d
        git remote add tip https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
        git fetch --no-tags tip sched/core
        git checkout 6bcd3e21ba278098920d26d4888f5e6f4087c61d
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=ia64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from arch/ia64/include/asm/pgtable.h:154,
                    from include/linux/pgtable.h:6,
                    from arch/ia64/include/asm/uaccess.h:40,
                    from include/linux/uaccess.h:11,
                    from include/linux/sched/task.h:11,
                    from include/linux/sched/signal.h:9,
                    from include/linux/sched/cputime.h:5,
                    from kernel/sched/sched.h:11,
                    from kernel/sched/fair.c:23:
   arch/ia64/include/asm/mmu_context.h: In function 'reload_context':
   arch/ia64/include/asm/mmu_context.h:127:41: warning: variable 'old_rr4' set but not used [-Wunused-but-set-variable]
     127 |  unsigned long rr0, rr1, rr2, rr3, rr4, old_rr4;
         |                                         ^~~~~~~
   kernel/sched/fair.c: At top level:
   kernel/sched/fair.c:5393:6: warning: no previous prototype for 'init_cfs_bandwidth' [-Wmissing-prototypes]
    5393 | void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b) {}
         |      ^~~~~~~~~~~~~~~~~~
   In file included from arch/ia64/include/uapi/asm/gcc_intrin.h:11,
                    from arch/ia64/include/asm/gcc_intrin.h:10,
                    from arch/ia64/include/uapi/asm/intrinsics.h:20,
                    from arch/ia64/include/asm/intrinsics.h:11,
                    from arch/ia64/include/asm/current.h:10,
                    from include/linux/sched.h:12,
                    from kernel/sched/sched.h:5,
                    from kernel/sched/fair.c:23:
   kernel/sched/fair.c: In function 'select_idle_sibling':
>> kernel/sched/fair.c:6345:28: error: 'sched_smt_present' undeclared (first use in this function)
    6345 |  if (static_branch_likely(&sched_smt_present)) {
         |                            ^~~~~~~~~~~~~~~~~
   include/linux/compiler.h:77:40: note: in definition of macro 'likely'
      77 | # define likely(x) __builtin_expect(!!(x), 1)
         |                                        ^
   include/linux/jump_label.h:480:34: note: in expansion of macro 'likely_notrace'
     480 | #define static_branch_likely(x)  likely_notrace(static_key_enabled(&(x)->key))
         |                                  ^~~~~~~~~~~~~~
   include/linux/jump_label.h:480:49: note: in expansion of macro 'static_key_enabled'
     480 | #define static_branch_likely(x)  likely_notrace(static_key_enabled(&(x)->key))
         |                                                 ^~~~~~~~~~~~~~~~~~
   kernel/sched/fair.c:6345:6: note: in expansion of macro 'static_branch_likely'
    6345 |  if (static_branch_likely(&sched_smt_present)) {
         |      ^~~~~~~~~~~~~~~~~~~~
   kernel/sched/fair.c:6345:28: note: each undeclared identifier is reported only once for each function it appears in
    6345 |  if (static_branch_likely(&sched_smt_present)) {
         |                            ^~~~~~~~~~~~~~~~~
   include/linux/compiler.h:77:40: note: in definition of macro 'likely'
      77 | # define likely(x) __builtin_expect(!!(x), 1)
         |                                        ^
   include/linux/jump_label.h:480:34: note: in expansion of macro 'likely_notrace'
     480 | #define static_branch_likely(x)  likely_notrace(static_key_enabled(&(x)->key))
         |                                  ^~~~~~~~~~~~~~
   include/linux/jump_label.h:480:49: note: in expansion of macro 'static_key_enabled'
     480 | #define static_branch_likely(x)  likely_notrace(static_key_enabled(&(x)->key))
         |                                                 ^~~~~~~~~~~~~~~~~~
   kernel/sched/fair.c:6345:6: note: in expansion of macro 'static_branch_likely'
    6345 |  if (static_branch_likely(&sched_smt_present)) {
         |      ^~~~~~~~~~~~~~~~~~~~
   kernel/sched/fair.c: At top level:
   kernel/sched/fair.c:11232:6: warning: no previous prototype for 'free_fair_sched_group' [-Wmissing-prototypes]
   11232 | void free_fair_sched_group(struct task_group *tg) { }
         |      ^~~~~~~~~~~~~~~~~~~~~
   kernel/sched/fair.c:11234:5: warning: no previous prototype for 'alloc_fair_sched_group' [-Wmissing-prototypes]
   11234 | int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
         |     ^~~~~~~~~~~~~~~~~~~~~~
   kernel/sched/fair.c:11239:6: warning: no previous prototype for 'online_fair_sched_group' [-Wmissing-prototypes]
   11239 | void online_fair_sched_group(struct task_group *tg) { }
         |      ^~~~~~~~~~~~~~~~~~~~~~~
   kernel/sched/fair.c:11241:6: warning: no previous prototype for 'unregister_fair_sched_group' [-Wmissing-prototypes]
   11241 | void unregister_fair_sched_group(struct task_group *tg) { }
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~
   kernel/sched/fair.c:8429:13: warning: 'update_nohz_stats' defined but not used [-Wunused-function]
    8429 | static bool update_nohz_stats(struct rq *rq)
         |             ^~~~~~~~~~~~~~~~~


vim +/sched_smt_present +6345 kernel/sched/fair.c

  6259	
  6260	/*
  6261	 * Try and locate an idle core/thread in the LLC cache domain.
  6262	 */
  6263	static int select_idle_sibling(struct task_struct *p, int prev, int target)
  6264	{
  6265		bool has_idle_core = false;
  6266		struct sched_domain *sd;
  6267		unsigned long task_util;
  6268		int i, recent_used_cpu;
  6269	
  6270		/*
  6271		 * On asymmetric system, update task utilization because we will check
  6272		 * that the task fits with cpu's capacity.
  6273		 */
  6274		if (static_branch_unlikely(&sched_asym_cpucapacity)) {
  6275			sync_entity_load_avg(&p->se);
  6276			task_util = uclamp_task_util(p);
  6277		}
  6278	
  6279		if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
  6280		    asym_fits_capacity(task_util, target))
  6281			return target;
  6282	
  6283		/*
  6284		 * If the previous CPU is cache affine and idle, don't be stupid:
  6285		 */
  6286		if (prev != target && cpus_share_cache(prev, target) &&
  6287		    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
  6288		    asym_fits_capacity(task_util, prev))
  6289			return prev;
  6290	
  6291		/*
  6292		 * Allow a per-cpu kthread to stack with the wakee if the
  6293		 * kworker thread and the tasks previous CPUs are the same.
  6294		 * The assumption is that the wakee queued work for the
  6295		 * per-cpu kthread that is now complete and the wakeup is
  6296		 * essentially a sync wakeup. An obvious example of this
  6297		 * pattern is IO completions.
  6298		 */
  6299		if (is_per_cpu_kthread(current) &&
  6300		    prev == smp_processor_id() &&
  6301		    this_rq()->nr_running <= 1) {
  6302			return prev;
  6303		}
  6304	
  6305		/* Check a recently used CPU as a potential idle candidate: */
  6306		recent_used_cpu = p->recent_used_cpu;
  6307		if (recent_used_cpu != prev &&
  6308		    recent_used_cpu != target &&
  6309		    cpus_share_cache(recent_used_cpu, target) &&
  6310		    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
  6311		    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
  6312		    asym_fits_capacity(task_util, recent_used_cpu)) {
  6313			/*
  6314			 * Replace recent_used_cpu with prev as it is a potential
  6315			 * candidate for the next wake:
  6316			 */
  6317			p->recent_used_cpu = prev;
  6318			return recent_used_cpu;
  6319		}
  6320	
  6321		/*
  6322		 * For asymmetric CPU capacity systems, our domain of interest is
  6323		 * sd_asym_cpucapacity rather than sd_llc.
  6324		 */
  6325		if (static_branch_unlikely(&sched_asym_cpucapacity)) {
  6326			sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
  6327			/*
  6328			 * On an asymmetric CPU capacity system where an exclusive
  6329			 * cpuset defines a symmetric island (i.e. one unique
  6330			 * capacity_orig value through the cpuset), the key will be set
  6331			 * but the CPUs within that cpuset will not have a domain with
  6332			 * SD_ASYM_CPUCAPACITY. These should follow the usual symmetric
  6333			 * capacity path.
  6334			 */
  6335			if (sd) {
  6336				i = select_idle_capacity(p, sd, target);
  6337				return ((unsigned)i < nr_cpumask_bits) ? i : target;
  6338			}
  6339		}
  6340	
  6341		sd = rcu_dereference(per_cpu(sd_llc, target));
  6342		if (!sd)
  6343			return target;
  6344	
> 6345		if (static_branch_likely(&sched_smt_present)) {
  6346			has_idle_core = test_idle_cores(target, false);
  6347	
  6348			if (!has_idle_core && cpus_share_cache(prev, target)) {
  6349				i = select_idle_smt(p, sd, prev);
  6350				if ((unsigned int)i < nr_cpumask_bits)
  6351					return i;
  6352			}
  6353		}
  6354	
  6355		i = select_idle_cpu(p, sd, has_idle_core, target);
  6356		if ((unsigned)i < nr_cpumask_bits)
  6357			return i;
  6358	
  6359		return target;
  6360	}
  6361	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 36396 bytes --]

