Date: Wed, 31 May 2017 11:06:36 -0500
From: Rob Herring
To: Nicolas Pitre
Cc: Ingo Molnar, Peter Zijlstra, linux-kernel@vger.kernel.org
Subject: Re: [6/7] sched/rt: make it configurable
Message-ID: <20170531160636.4zbbzjjbhhcxep7w@rob-hp-laptop>
In-Reply-To: <20170529210302.26868-7-nicolas.pitre@linaro.org>

On Mon, May 29, 2017 at 05:03:01PM -0400, Nicolas Pitre wrote:
> On most small systems where user space is tightly controlled, the realtime
> scheduling class can often be dispensed with to reduce the kernel footprint.
> Let's make it configurable.
>
> Signed-off-by: Nicolas Pitre
> ---
>  static inline int rt_prio(int prio)
>  {
> -	if (unlikely(prio < MAX_RT_PRIO))
> +	if (IS_ENABLED(CONFIG_SCHED_RT) && unlikely(prio < MAX_RT_PRIO))
>  		return 1;
>  	return 0;
>  }
>
>  #ifdef CONFIG_PREEMPT_NOTIFIERS
>  	INIT_HLIST_HEAD(&p->preempt_notifiers);
>
> @@ -3716,13 +3720,18 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
>  		p->sched_class = &dl_sched_class;
>  	} else
>  #endif
> +#ifdef CONFIG_SCHED_RT
>  	if (rt_prio(prio)) {

This ifdef is not necessary since rt_prio() is already conditioned on
CONFIG_SCHED_RT.

>  		if (oldprio < prio)
>  			queue_flag |= ENQUEUE_HEAD;
>  		p->sched_class = &rt_sched_class;
> -	} else {
> +	} else
> +#endif
> +	{
> +#ifdef CONFIG_SCHED_RT
>  		if (rt_prio(oldprio))
>  			p->rt.timeout = 0;
> +#endif
>  		p->sched_class = &fair_sched_class;
>  	}
>
> @@ -3997,6 +4006,23 @@ static int __sched_setscheduler(struct task_struct *p,
>
>  	/* May grab non-irq protected spin_locks: */
>  	BUG_ON(in_interrupt());
> +
> +	/*
> +	 * When the RT scheduling class is disabled, let's make sure kernel
> +	 * threads wanting RT still get the lowest nice value to give them the
> +	 * highest available priority rather than simply returning an error.
> +	 * Obviously we can't test rt_policy() here as it is always false in
> +	 * that case.
> +	 */
> +	if (!IS_ENABLED(CONFIG_SCHED_RT) && !user &&
> +	    (policy == SCHED_FIFO || policy == SCHED_RR)) {
> +		static const struct sched_attr k_attr = {
> +			.sched_policy = SCHED_NORMAL,
> +			.sched_nice = MIN_NICE,
> +		};
> +		attr = &k_attr;
> +		policy = SCHED_NORMAL;
> +	}
> +
>  recheck:
>  	/* Double check policy once rq lock held: */
>  	if (policy < 0) {
>
> @@ -5726,7 +5752,9 @@ void __init sched_init_smp(void)
>  	sched_init_granularity();
>  	free_cpumask_var(non_isolated_cpus);
>
> +#ifdef CONFIG_SCHED_RT
>  	init_sched_rt_class();
> +#endif

You can provide an empty inline function for !CONFIG_SCHED_RT instead.

>  #ifdef CONFIG_SCHED_DL
>  	init_sched_dl_class();
>  #endif

And here in the earlier patch.

> @@ -5832,7 +5860,9 @@ void __init sched_init(void)
>  	}
>  #endif /* CONFIG_CPUMASK_OFFSTACK */
>
> +#ifdef CONFIG_SCHED_RT
>  	init_rt_bandwidth(&def_rt_bandwidth, global_rt_period(), global_rt_runtime());
> +#endif

And so on...

Rob
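
The empty-stub pattern suggested above would look something like this
(a rough sketch, not taken from the series; it assumes the
init_sched_rt_class() declaration sits in a shared header such as
kernel/sched/sched.h):

  #ifdef CONFIG_SCHED_RT
  extern void init_sched_rt_class(void);
  #else
  /* RT class compiled out: the call compiles away at every call site */
  static inline void init_sched_rt_class(void) { }
  #endif

With a stub like that, the call in sched_init_smp() can stay
unconditional and the #ifdef around it disappears; the same treatment
would presumably apply to init_sched_dl_class() in the earlier
CONFIG_SCHED_DL patch. This matches the conditional-compilation advice
in the kernel's coding-style document: keep #ifdefs in headers with
no-op stubs in the #else case, rather than scattering them through .c
files.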