Subject: [bug report] sched/fair: Prefer prev cpu in asymmetric wakeup path
From: Dan Carpenter @ 2020-11-13  8:46 UTC
To: vincent.guittot
Cc: Peter Zijlstra, Valentin Schneider, linux-kernel

Hello Vincent Guittot,

The patch b4c9c9f15649: "sched/fair: Prefer prev cpu in asymmetric
wakeup path" from Oct 29, 2020, leads to the following static checker
warning:

	kernel/sched/fair.c:6249 select_idle_sibling()
	error: uninitialized symbol 'task_util'.

kernel/sched/fair.c
  6233  static int select_idle_sibling(struct task_struct *p, int prev, int target)
  6234  {
  6235          struct sched_domain *sd;
  6236          unsigned long task_util;
  6237          int i, recent_used_cpu;
  6238  
  6239          /*
  6240           * On asymmetric system, update task utilization because we will check
  6241           * that the task fits with cpu's capacity.
  6242           */

The original comment was a bit more clear...  Perhaps "On asymmetric
system[s], [record the] task utilization because we will check that the
task [can be done within] the cpu's capacity."
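
I.e. something like:

	/*
	 * On asymmetric systems, record the task utilization because we
	 * will check that the task can be done within the cpu's capacity.
	 */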

  6243          if (static_branch_unlikely(&sched_asym_cpucapacity)) {
  6244                  sync_entity_load_avg(&p->se);
  6245                  task_util = uclamp_task_util(p);
  6246          }

"task_util" is not initialized on the else path.

  6247  
  6248          if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
  6249              asym_fits_capacity(task_util, target))
                                       ^^^^^^^^^
Uninitialized variable warning.

  6250                  return target;
  6251  
  6252          /*
  6253           * If the previous CPU is cache affine and idle, don't be stupid:
  6254           */
  6255          if (prev != target && cpus_share_cache(prev, target) &&
  6256              (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
  6257              asym_fits_capacity(task_util, prev))
  6258                  return prev;
  6259  
  6260          /*
  6261           * Allow a per-cpu kthread to stack with the wakee if the

regards,
dan carpenter
