Date: Wed, 25 Nov 2015 12:32:33 +0100
From: Frederic Weisbecker
To: Chris Metcalf
Cc: LKML, Peter Zijlstra, Thomas Gleixner, Luiz Capitulino, Christoph Lameter, Ingo Molnar, Viresh Kumar, Rik van Riel
Subject: Re: [PATCH 2/7] nohz: New tick dependency mask
Message-ID: <20151125113233.GC16609@lerouge>
References: <1447424529-13671-1-git-send-email-fweisbec@gmail.com> <1447424529-13671-3-git-send-email-fweisbec@gmail.com> <56548E03.2080704@ezchip.com>
In-Reply-To: <56548E03.2080704@ezchip.com>

On Tue, Nov 24, 2015 at 11:19:15AM -0500, Chris Metcalf wrote:
> On 11/13/2015 09:22 AM, Frederic Weisbecker wrote:
> >The tick dependency is evaluated on every IRQ. This is a batch of checks
> >which determine whether it is safe to stop the tick or not. These checks
> >are often split into many details: posix cpu timers, scheduler, sched clock,
> >perf events. Each of these is made of smaller details: posix cpu
> >timers involve checking process wide timers then thread wide timers. Perf
> >involves checking freq events then more per cpu details.
> >
> >Checking these details asynchronously every time we update the full
> >dynticks state brings avoidable overhead and a messy layout.
> >
> >Let's introduce tick dependency masks instead: one for system wide
> >dependencies (unstable sched clock), one for CPU wide dependencies (sched,
> >perf), and one for task/signal level dependencies. The subsystems are
> >responsible for setting and clearing their dependency through a set of APIs
> >that will take care of concurrent dependency mask modifications and kick
> >targets to restart the relevant CPU tick whenever needed.
> >
> >This new dependency engine stays beside the old one until all subsystems
> >having a tick dependency are converted to it.
> >
> >
> >+void tick_nohz_set_dep_cpu(enum tick_dependency_bit bit, int cpu)
> >+{
> >+	unsigned long prev;
> >+	struct tick_sched *ts;
> >+
> >+	ts = per_cpu_ptr(&tick_cpu_sched, cpu);
> >+
> >+	prev = fetch_or(&ts->tick_dependency, BIT_MASK(bit));
> >+	if (!prev) {
> >+		preempt_disable();
> >+		/* Perf needs local kick that is NMI safe */
> >+		if (cpu == smp_processor_id()) {
> >+			tick_nohz_full_kick();
> >+		} else {
> >+			/* Remote irq work not NMI-safe */
> >+			WARN_ON_ONCE(in_nmi());
>
> Better to say "if (!WARN_ON_ONCE(in_nmi()))" here instead so
> we don't actually try to kick if we are in an NMI?

Makes sense yeah. I'll fix that.

Thanks.
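
[Editor's note: for reference, a sketch of how tick_nohz_set_dep_cpu() might
look with Chris's suggestion folded in. The rest of the hunk is not quoted
above, so the remote kick via tick_nohz_full_kick_cpu() and the placement of
preempt_enable() are assumptions, not taken from the patch itself.]

	void tick_nohz_set_dep_cpu(enum tick_dependency_bit bit, int cpu)
	{
		unsigned long prev;
		struct tick_sched *ts;

		ts = per_cpu_ptr(&tick_cpu_sched, cpu);

		/* Only kick if this is the first dependency bit being set */
		prev = fetch_or(&ts->tick_dependency, BIT_MASK(bit));
		if (!prev) {
			preempt_disable();
			/* Perf needs a local kick that is NMI safe */
			if (cpu == smp_processor_id()) {
				tick_nohz_full_kick();
			} else {
				/* Remote irq work is not NMI-safe: warn and skip the kick */
				if (!WARN_ON_ONCE(in_nmi()))
					tick_nohz_full_kick_cpu(cpu);
			}
			preempt_enable();
		}
	}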