On Tue, 7 Nov 2023 11:47:32 -0500 Steven Rostedt wrote:

> Let me see if I understand what you are asking. By pushing the execution of
> the CFS-server to the end of its period, if it was briefly blocked and
> was not able to consume all of its zerolax time, its bandwidth gets
> refreshed. Then it can run again, basically doubling its total time.
>
> But this is basically saying that it ran for its runtime at the end of
> one period and at the beginning of another, right?
>
> Is that an issue? The CFS-server is still just consuming its time per
> period. That means that an RT task was starving the system enough to
> push it forward that much anyway. I wonder if we just document this
> behavior, would that be enough?

I may have even captured this scenario.

I ran my migrate[1] program, which I use to test RT migration; it kicks off
a bunch of RT tasks. I like this test because, with the
/proc/sys/kernel/sched_rt_* options set, it shows very clearly where the
tasks get throttled. This time I disabled those and just kept the defaults:

 ~# cat /sys/kernel/debug/sched/rq/cpu0/fair_server_defer
 1
 ~# cat /sys/kernel/debug/sched/rq/cpu0/fair_server_period
 1000000000
 ~# cat /sys/kernel/debug/sched/rq/cpu0/fair_server_runtime
 50000000

I then ran my userspin[2] program and recorded it with:

 trace-cmd record -e sched_switch

The kernelshark output shows the delay from userspin taking up 0.1 seconds
(double the time it is usually given), with a little preemption in between.

-- Steve
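
For reference, here is a quick back-of-the-envelope sketch (my own arithmetic
on the values shown above, not anything from the patch set) of why those
defaults line up with the 0.1 second gap: a deferred server that is
replenished right at the period boundary can run its runtime at the tail of
one period and again at the head of the next.

  /* Standalone sketch: compute the worst-case back-to-back fair-server
   * window from the debugfs values shown above. */
  #include <stdio.h>

  int main(void)
  {
          /* Values read from /sys/kernel/debug/sched/rq/cpu0/ above */
          long long period_ns  = 1000000000LL;    /* fair_server_period  */
          long long runtime_ns =   50000000LL;    /* fair_server_runtime */

          /* Worst case: runtime consumed at the end of period N plus a
           * freshly replenished runtime at the start of period N+1. */
          long long back_to_back_ns = 2 * runtime_ns;

          printf("period  : %lld ms\n", period_ns / 1000000);
          printf("runtime : %lld ms\n", runtime_ns / 1000000);
          printf("worst-case back-to-back window: %lld ms\n",
                 back_to_back_ns / 1000000);

          return 0;
  }

Running it gives 100 ms, which matches the ~0.1 s of userspin delay seen in
the kernelshark output.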