From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chris Metcalf <cmetcalf@ezchip.com>
To: Gilad Ben Yossef, Steven Rostedt, Ingo Molnar, Peter Zijlstra,
	Andrew Morton, Rik van Riel, Tejun Heo, Frederic Weisbecker,
	Thomas Gleixner, "Paul E. McKenney", Christoph Lameter,
	Viresh Kumar
CC: Chris Metcalf <cmetcalf@ezchip.com>
Subject: [PATCH v3 1/5] nohz_full: add support for "cpu_isolated" mode
Date: Wed, 3 Jun 2015 11:29:21 -0400
Message-ID: <1433345365-29506-2-git-send-email-cmetcalf@ezchip.com>
X-Mailer: git-send-email 2.1.2
In-Reply-To: <1433345365-29506-1-git-send-email-cmetcalf@ezchip.com>
References: <1433345365-29506-1-git-send-email-cmetcalf@ezchip.com>
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

The existing nohz_full mode makes tradeoffs to minimize userspace
interruptions while still attempting to avoid overheads in the kernel
entry/exit path, to provide 100% kernel semantics, etc.
However, some applications require a stronger commitment from the
kernel to avoid interruptions, in particular userspace device-driver
style applications, such as high-speed networking code.

This change introduces a framework to allow applications to elect to
have the stronger semantics as needed, specifying
prctl(PR_SET_CPU_ISOLATED, PR_CPU_ISOLATED_ENABLE) to do so.
Subsequent commits will add additional flags and additional semantics.

The "cpu_isolated" state is indicated by setting a new task struct
field, cpu_isolated_flags, to the value passed by prctl().  When the
_ENABLE bit is set for a task, and it is returning to userspace on a
nohz_full core, it calls the new tick_nohz_cpu_isolated_enter()
routine to take additional actions to help the task avoid being
interrupted in the future.

Initially, there are only two actions taken.  First, the task calls
lru_add_drain() to prevent being interrupted by a subsequent
lru_add_drain_all() call on another core.  Then, the code checks for
pending timer interrupts and quiesces until they are no longer
pending.  As a result, syscalls (and page faults, etc.) can be
inordinately slow.  However, this quiescing guarantees that no
unexpected interrupts will occur, even if the application
intentionally calls into the kernel.
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
---
 arch/tile/kernel/process.c |  9 ++++++++
 include/linux/sched.h      |  3 +++
 include/linux/tick.h       | 10 ++++++++
 include/uapi/linux/prctl.h |  5 ++++
 kernel/context_tracking.c  |  3 +++
 kernel/sys.c               |  8 +++++++
 kernel/time/tick-sched.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 95 insertions(+)

diff --git a/arch/tile/kernel/process.c b/arch/tile/kernel/process.c
index e036c0aa9792..e20c3f4a6a82 100644
--- a/arch/tile/kernel/process.c
+++ b/arch/tile/kernel/process.c
@@ -70,6 +70,15 @@ void arch_cpu_idle(void)
 	_cpu_idle();
 }
 
+#ifdef CONFIG_NO_HZ_FULL
+void tick_nohz_cpu_isolated_wait(void)
+{
+	set_current_state(TASK_INTERRUPTIBLE);
+	_cpu_idle();
+	set_current_state(TASK_RUNNING);
+}
+#endif
+
 /*
  * Release a thread_info structure
  */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8222ae40ecb0..fb4ba400d7e1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1732,6 +1732,9 @@ struct task_struct {
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 	unsigned long task_state_change;
 #endif
+#ifdef CONFIG_NO_HZ_FULL
+	unsigned int cpu_isolated_flags;
+#endif
 };
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
diff --git a/include/linux/tick.h b/include/linux/tick.h
index f8492da57ad3..ec1953474a65 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/prctl.h>
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS
 extern void __init tick_init(void);
@@ -134,11 +135,18 @@ static inline bool tick_nohz_full_cpu(int cpu)
 	return cpumask_test_cpu(cpu, tick_nohz_full_mask);
 }
 
+static inline bool tick_nohz_is_cpu_isolated(void)
+{
+	return tick_nohz_full_cpu(smp_processor_id()) &&
+		(current->cpu_isolated_flags & PR_CPU_ISOLATED_ENABLE);
+}
+
 extern void __tick_nohz_full_check(void);
 extern void tick_nohz_full_kick(void);
 extern void tick_nohz_full_kick_cpu(int cpu);
 extern void tick_nohz_full_kick_all(void);
 extern void __tick_nohz_task_switch(struct task_struct *tsk);
+extern void tick_nohz_cpu_isolated_enter(void);
 #else
 static inline bool tick_nohz_full_enabled(void) { return false; }
 static inline bool tick_nohz_full_cpu(int cpu) { return false; }
@@ -147,6 +155,8 @@ static inline void tick_nohz_full_kick_cpu(int cpu) { }
 static inline void tick_nohz_full_kick(void) { }
 static inline void tick_nohz_full_kick_all(void) { }
 static inline void __tick_nohz_task_switch(struct task_struct *tsk) { }
+static inline bool tick_nohz_is_cpu_isolated(void) { return false; }
+static inline void tick_nohz_cpu_isolated_enter(void) { }
 #endif
 
 static inline bool is_housekeeping_cpu(int cpu)
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 31891d9535e2..edb40b6b84db 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -190,4 +190,9 @@ struct prctl_mm_map {
 # define PR_FP_MODE_FR		(1 << 0)	/* 64b FP registers */
 # define PR_FP_MODE_FRE		(1 << 1)	/* 32b compatibility */
 
+/* Enable/disable or query cpu_isolated mode for NO_HZ_FULL kernels. */
+#define PR_SET_CPU_ISOLATED	47
+#define PR_GET_CPU_ISOLATED	48
+# define PR_CPU_ISOLATED_ENABLE	(1 << 0)
+
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 72d59a1a6eb6..66739d7c1350 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include <linux/tick.h>
 
 #define CREATE_TRACE_POINTS
 #include
@@ -85,6 +86,8 @@ void context_tracking_enter(enum ctx_state state)
 	 * on the tick.
	 */
 	if (state == CONTEXT_USER) {
+		if (tick_nohz_is_cpu_isolated())
+			tick_nohz_cpu_isolated_enter();
 		trace_user_enter(0);
 		vtime_user_enter(current);
 	}
diff --git a/kernel/sys.c b/kernel/sys.c
index a4e372b798a5..3fd9e47f8fc8 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2243,6 +2243,14 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 	case PR_GET_FP_MODE:
 		error = GET_FP_MODE(me);
 		break;
+#ifdef CONFIG_NO_HZ_FULL
+	case PR_SET_CPU_ISOLATED:
+		me->cpu_isolated_flags = arg2;
+		break;
+	case PR_GET_CPU_ISOLATED:
+		error = me->cpu_isolated_flags;
+		break;
+#endif
 	default:
 		error = -EINVAL;
 		break;
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 914259128145..f6236b66788f 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include <linux/swap.h>
 
 #include
 
@@ -389,6 +390,62 @@ void __init tick_nohz_init(void)
 	pr_info("NO_HZ: Full dynticks CPUs: %*pbl.\n",
 		cpumask_pr_args(tick_nohz_full_mask));
 }
+
+/*
+ * Rather than continuously polling for the next_event in the
+ * tick_cpu_device, architectures can provide a method to save power
+ * by sleeping until an interrupt arrives.
+ */
+void __weak tick_nohz_cpu_isolated_wait(void)
+{
+	cpu_relax();
+}
+
+/*
+ * We normally return immediately to userspace.
+ *
+ * In "cpu_isolated" mode we wait until no more interrupts are
+ * pending.  Otherwise we nap with interrupts enabled and wait for the
+ * next interrupt to fire, then loop back and retry.
+ *
+ * Note that if you schedule two "cpu_isolated" processes on the same
+ * core, neither will ever leave the kernel, and one will have to be
+ * killed manually.  Otherwise in situations where another process is
+ * in the runqueue on this cpu, this task will just wait for that
+ * other task to go idle before returning to user space.
+ */
+void tick_nohz_cpu_isolated_enter(void)
+{
+	struct clock_event_device *dev =
+		__this_cpu_read(tick_cpu_device.evtdev);
+	struct task_struct *task = current;
+	unsigned long start = jiffies;
+	bool warned = false;
+
+	/* Drain the pagevecs to avoid unnecessary IPI flushes later. */
+	lru_add_drain();
+
+	while (READ_ONCE(dev->next_event.tv64) != KTIME_MAX) {
+		if (!warned && (jiffies - start) >= (5 * HZ)) {
+			pr_warn("%s/%d: cpu %d: cpu_isolated task blocked for %ld seconds\n",
+				task->comm, task->pid, smp_processor_id(),
+				(jiffies - start) / HZ);
+			warned = true;
+		}
+		if (should_resched())
+			schedule();
+		if (test_thread_flag(TIF_SIGPENDING))
+			break;
+		tick_nohz_cpu_isolated_wait();
+	}
+	if (warned) {
+		pr_warn("%s/%d: cpu %d: cpu_isolated task unblocked after %ld seconds\n",
+			task->comm, task->pid, smp_processor_id(),
+			(jiffies - start) / HZ);
+		dump_stack();
+	}
+}
+
 #endif /* CONFIG_NO_HZ_FULL */
-- 
2.1.2