From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1946032AbbEVNdY (ORCPT );
	Fri, 22 May 2015 09:33:24 -0400
Received: from casper.infradead.org ([85.118.1.10]:49332 "EHLO
	casper.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1756579AbbEVNdS (ORCPT );
	Fri, 22 May 2015 09:33:18 -0400
Message-Id: <20150522133135.640514671@infradead.org>
User-Agent: quilt/0.61-1
Date: Fri, 22 May 2015 15:29:09 +0200
From: Peter Zijlstra 
To: mingo@kernel.org, peterz@infradead.org
Cc: vincent.weaver@maine.edu, eranian@google.com, jolsa@redhat.com,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/11] perf/x86: Use lockdep
References: <20150522132905.416122812@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline; filename=peterz-pmu-sched-3a.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Lockdep is very good at finding incorrect IRQ state while locking, and it
is far better at telling us whether we hold a lock than the _is_locked()
API. It also generates less code for !DEBUG kernels.

Signed-off-by: Peter Zijlstra (Intel) 
---
 arch/x86/kernel/cpu/perf_event_intel.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1926,7 +1926,6 @@ intel_start_scheduling(struct cpu_hw_eve
 	 * in stop_event_scheduling()
 	 * makes scheduling appear as a transaction
 	 */
-	WARN_ON_ONCE(!irqs_disabled());
 	raw_spin_lock(&excl_cntrs->lock);

 	/*
@@ -2198,7 +2197,7 @@ static void intel_commit_scheduling(stru

 	xl = &excl_cntrs->states[tid];

-	WARN_ON_ONCE(!raw_spin_is_locked(&excl_cntrs->lock));
+	lockdep_assert_held(&excl_cntrs->lock);

 	if (cntr >= 0) {
 		if (c->flags & PERF_X86_EVENT_EXCL)