* [Scheduler] CFS - What happens to each task's slice if nr_running * min_granularity > sched_latency?
From: Evan T Mesterhazy @ 2020-04-03  2:10 UTC
  To: kernelnewbies



Hi everyone ~

I've read a few different references on CFS and have been looking through
the scheduler code in fair.c. One thing I don't completely understand is
what happens when so many processes are running that
nr_running * min_granularity > sched_latency.

I know that the scheduler will expand the period so that each process can
run for at least the min_granularity, *but how does that interact with
nice numbers*? Here's the code for expanding the period:

/*
 * The idea is to set a period in which each task runs once.
 *
 * When there are too many tasks (sched_nr_latency) we have to stretch
 * this period because otherwise the slices get too small.
 *
 * p = (nr <= nl) ? l : l*nr/nl
 */
static u64 __sched_period(unsigned long nr_running)
{
        if (unlikely(nr_running > sched_nr_latency))
                return nr_running * sysctl_sched_min_granularity;
        else
                return sysctl_sched_latency;
}

Here's the code for calculating an individual process's slice. It looks
like the weighting formula is used here regardless of whether the period
has been expanded.

   - If that's the case, doesn't that mean that some processes will still
   get a slice that's smaller than the min_granularity?

/*
 * We calculate the wall-time slice from the period by taking a part
 * proportional to the weight.
 *
 * s = p*P[w/rw]
 */
static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);

        for_each_sched_entity(se) {
                struct load_weight *load;
                struct load_weight lw;

                cfs_rq = cfs_rq_of(se);
                load = &cfs_rq->load;

                if (unlikely(!se->on_rq)) {
                        lw = cfs_rq->load;

                        update_load_add(&lw, se->load.weight);
                        load = &lw;
                }
                slice = __calc_delta(slice, se->load.weight, load);
        }
        return slice;
}

I ran a test by starting five busy processes with a nice level of -10.
Next, I launched ~40 busy processes with a nice level of 0 (all procs were
set to use the same CPU). I expected CFS to expand the period and assign
each process a slice equal to the min granularity. However, the 5 processes
with nice = -10 still used considerably more CPU than the other processes.
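
To put rough numbers on what I expected versus what the code above seems
to do, here is a small userspace sketch of the period and slice math for
that test (not kernel code; it assumes the unscaled defaults of
sched_latency = 6 ms, min_granularity = 0.75 ms, sched_nr_latency = 8,
and the weights 1024 for nice 0 and 9548 for nice -10 from the kernel's
sched_prio_to_weight[] table):

#include <stdio.h>

int main(void)
{
        /* unscaled scheduler defaults, in milliseconds */
        const double sched_latency   = 6.0;
        const double min_granularity = 0.75;
        const unsigned sched_nr_latency = 8;

        /* my test: 5 busy tasks at nice -10 plus 40 at nice 0, one CPU */
        const unsigned nr_hi = 5, nr_lo = 40;
        const unsigned nr_running = nr_hi + nr_lo;

        /* weights from sched_prio_to_weight[]: nice -10 -> 9548, nice 0 -> 1024 */
        const double w_hi = 9548.0, w_lo = 1024.0;
        const double rw = nr_hi * w_hi + nr_lo * w_lo;

        /* __sched_period(): stretch the period when there are many tasks */
        const double period = (nr_running > sched_nr_latency)
                ? nr_running * min_granularity
                : sched_latency;

        /* sched_slice(): each task gets a weight-proportional share */
        printf("period         %6.2f ms\n", period);
        printf("nice -10 slice %6.2f ms\n", period * w_hi / rw);
        printf("nice 0 slice   %6.2f ms\n", period * w_lo / rw);
        return 0;
}

If I've read the code right, that works out to a 33.75 ms period, roughly
3.6 ms per period for each nice -10 task and roughly 0.39 ms for each
nice 0 task, i.e. well below min_granularity for the low-priority tasks,
which would match what I measured.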

Is __calc_delta in the function above actually expanding the slice further
based on the nice weighting of the tasks? The __calc_delta function is a
bit difficult to follow, so I haven't quite figured out what it's doing.
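
My best guess from the "s = p*P[w/rw]" comment is that
__calc_delta(slice, se->load.weight, load) reduces to roughly
slice * se->load.weight / load->weight, with the real code apparently
using a precomputed inverse weight and shifts instead of a straight
division. Something like this simplified sketch (my interpretation, not
the actual implementation, and ignoring the overflow handling the real
code does):

static u64 calc_delta_simplified(u64 delta, unsigned long weight,
                                 struct load_weight *lw)
{
        /* slice scaled by this entity's share of the runqueue weight */
        return div64_u64(delta * weight, lw->weight);
}

but I'd appreciate confirmation that that's the right way to read it.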

tl;dr I know that CFS expands the period if lots of processes are running.
What I'm not sure about is how nice levels affect the slice each task gets
if the period has been expanded due to a high number of running tasks.

Thanks!


* Re: [Scheduler] CFS - What happens to each task's slice if nr_running * min_granularity > sched_latency?
From: Valdis Klētnieks @ 2020-04-03  5:50 UTC
  To: Evan T Mesterhazy; +Cc: kernelnewbies



On Thu, 02 Apr 2020 22:10:11 -0400, Evan T Mesterhazy said:

> I ran a test by starting five busy processes with a nice level of -10.
> Next, I launched ~40 busy processes with a nice level of 0 (all procs were
> set to use the same CPU). I expected CFS to expand the period and assign
> each process a slice equal to the min granularity. However, the 5 processes
> with nice = -10 still used considerably more CPU than the other processes.

Well, it's *expected* that if you set nice = -10 they'll get more CPU.

Do you have any evidence that CFS *didn't* give the nice==0 processes a
min_granularity slice once in a while?


* Re: [Scheduler] CFS - What happens to each task's slice if nr_running * min_granularity > sched_latency?
From: Rik van Riel @ 2020-04-07  1:41 UTC
  To: Evan T Mesterhazy, kernelnewbies



On Thu, 2020-04-02 at 22:10 -0400, Evan T Mesterhazy wrote:

> Here's the code for calculating an individual process's slice. It
> looks like the weighting formula is used here regardless of whether
> the period has been expanded.
> If that's the case, doesn't that mean that some processes will still
> get a slice that's smaller than the min_granularity?

That is exactly what will happen. You figured out what
the code does.

Generally this behavior is not a real problem, since
people expect low priority tasks to run slower.

-- 
All Rights Reversed.


* Re: [Scheduler] CFS - What happens to each task's slice if nr_running * min_granularity > sched_latency?
From: Evan T Mesterhazy @ 2020-04-07 15:02 UTC
  To: Valdis Klētnieks; +Cc: kernelnewbies



Interesting, thanks Rik. I assumed that couldn't be correct, since it means
lower-priority tasks can receive a slice smaller than
sysctl_sched_min_granularity, which I thought CFS explicitly aims to prevent.

Thanks for taking a look.

On Fri, Apr 3, 2020 at 1:51 AM Valdis Klētnieks <valdis.kletnieks@vt.edu>
wrote:

> On Thu, 02 Apr 2020 22:10:11 -0400, Evan T Mesterhazy said:
>
> > I ran a test by starting five busy processes with a nice level of -10.
> > Next, I launched ~40 busy processes with a nice level of 0 (all procs were
> > set to use the same CPU). I expected CFS to expand the period and assign
> > each process a slice equal to the min granularity. However, the 5 processes
> > with nice = -10 still used considerably more CPU than the other processes.
>
> Well, it's *expected* that if you set nice = -10 they'll get more CPU.
>
> Do you have any evidence that CFS *didn't* give the nice==0 processes a
> min_granularity slice once in a while?
>


-- 
Evan Mesterhazy
etm2131@columbia.edu

