From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754382AbcB2LNJ (ORCPT );
	Mon, 29 Feb 2016 06:13:09 -0500
Received: from torg.zytor.com ([198.137.202.12]:54048 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1752028AbcB2LNF (ORCPT );
	Mon, 29 Feb 2016 06:13:05 -0500
Date: Mon, 29 Feb 2016 03:12:03 -0800
From: tip-bot for Thomas Gleixner
Message-ID: 
Cc: tglx@linutronix.de, jolsa@redhat.com, vincent.weaver@maine.edu,
    harish.chegondi@intel.com, peterz@infradead.org, linux-kernel@vger.kernel.org,
    jacob.jun.pan@linux.intel.com, bp@alien8.de, kan.liang@intel.com,
    mingo@kernel.org, hpa@zytor.com, andi.kleen@intel.com,
    torvalds@linux-foundation.org, acme@redhat.com, eranian@google.com
Reply-To: hpa@zytor.com, mingo@kernel.org, kan.liang@intel.com, bp@alien8.de,
    jacob.jun.pan@linux.intel.com, linux-kernel@vger.kernel.org,
    eranian@google.com, andi.kleen@intel.com, acme@redhat.com,
    torvalds@linux-foundation.org, peterz@infradead.org,
    harish.chegondi@intel.com, vincent.weaver@maine.edu, jolsa@redhat.com,
    tglx@linutronix.de
In-Reply-To: <20160222221012.669411833@linutronix.de>
References: <20160222221012.669411833@linutronix.de>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:perf/core] perf/x86/intel/rapl: Make PMU lock raw
Git-Commit-ID: a208749c642618b6c106874153906b53f8e2ded9
X-Mailer: tip-git-log-daemon
Robot-ID: 
Robot-Unsubscribe: Contact  to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  a208749c642618b6c106874153906b53f8e2ded9
Gitweb:     http://git.kernel.org/tip/a208749c642618b6c106874153906b53f8e2ded9
Author:     Thomas Gleixner
AuthorDate: Mon, 22 Feb 2016 22:19:25 +0000
Committer:  Ingo Molnar
CommitDate: Mon, 29 Feb 2016 09:35:25 +0100

perf/x86/intel/rapl: Make PMU lock raw

This lock is taken in hard interrupt context even on Preempt-RT. Make it
raw so RT does not have to patch it.
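As a minimal, hypothetical sketch of the same pattern (the demo_pmu/demo_*
names below are illustrative only, not the RAPL driver code): on PREEMPT_RT a
plain spinlock_t becomes a sleeping lock, so per-PMU state touched from an
hrtimer callback, which runs in hard interrupt context, has to be protected by
a raw_spinlock_t instead:

	#include <linux/hrtimer.h>
	#include <linux/kernel.h>
	#include <linux/ktime.h>
	#include <linux/list.h>
	#include <linux/spinlock.h>

	/* Illustrative structure updated from an hrtimer callback. */
	struct demo_pmu {
		raw_spinlock_t	 lock;	/* raw: taken in hard IRQ context */
		int		 n_active;
		struct list_head active_list;
		ktime_t		 timer_interval;
		struct hrtimer	 hrtimer;
	};

	static enum hrtimer_restart demo_hrtimer_handle(struct hrtimer *hrtimer)
	{
		struct demo_pmu *pmu = container_of(hrtimer, struct demo_pmu, hrtimer);
		unsigned long flags;

		if (!pmu->n_active)
			return HRTIMER_NORESTART;

		/* Hard interrupt context: the raw variant stays a spinning lock on RT. */
		raw_spin_lock_irqsave(&pmu->lock, flags);
		/* ... walk pmu->active_list and update counters ... */
		raw_spin_unlock_irqrestore(&pmu->lock, flags);

		hrtimer_forward_now(hrtimer, pmu->timer_interval);
		return HRTIMER_RESTART;
	}

	static void demo_pmu_init(struct demo_pmu *pmu)
	{
		raw_spin_lock_init(&pmu->lock);
		INIT_LIST_HEAD(&pmu->active_list);
		pmu->n_active = 0;
		pmu->timer_interval = ms_to_ktime(100);
		hrtimer_init(&pmu->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
		pmu->hrtimer.function = demo_hrtimer_handle;
	}

The conversion below is purely mechanical: the lock type and the init/lock/
unlock calls change, the locking rules and critical sections stay the same.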
Signed-off-by: Thomas Gleixner
Signed-off-by: Peter Zijlstra (Intel)
Cc: Andi Kleen
Cc: Arnaldo Carvalho de Melo
Cc: Borislav Petkov
Cc: Harish Chegondi
Cc: Jacob Pan
Cc: Jiri Olsa
Cc: Kan Liang
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Stephane Eranian
Cc: Vince Weaver
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20160222221012.669411833@linutronix.de
Signed-off-by: Ingo Molnar
---
 arch/x86/events/intel/rapl.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/events/intel/rapl.c b/arch/x86/events/intel/rapl.c
index ba5043b..2934999 100644
--- a/arch/x86/events/intel/rapl.c
+++ b/arch/x86/events/intel/rapl.c
@@ -120,7 +120,7 @@ static struct perf_pmu_events_attr event_attr_##v = { \
 };
 
 struct rapl_pmu {
-	spinlock_t	 lock;
+	raw_spinlock_t	 lock;
 	int		 n_active;
 	struct list_head active_list;
 	struct pmu	*pmu;
@@ -210,12 +210,12 @@ static enum hrtimer_restart rapl_hrtimer_handle(struct hrtimer *hrtimer)
 	if (!pmu->n_active)
 		return HRTIMER_NORESTART;
 
-	spin_lock_irqsave(&pmu->lock, flags);
+	raw_spin_lock_irqsave(&pmu->lock, flags);
 
 	list_for_each_entry(event, &pmu->active_list, active_entry)
 		rapl_event_update(event);
 
-	spin_unlock_irqrestore(&pmu->lock, flags);
+	raw_spin_unlock_irqrestore(&pmu->lock, flags);
 
 	hrtimer_forward_now(hrtimer, pmu->timer_interval);
 
@@ -252,9 +252,9 @@ static void rapl_pmu_event_start(struct perf_event *event, int mode)
 	struct rapl_pmu *pmu = __this_cpu_read(rapl_pmu);
 	unsigned long flags;
 
-	spin_lock_irqsave(&pmu->lock, flags);
+	raw_spin_lock_irqsave(&pmu->lock, flags);
 	__rapl_pmu_event_start(pmu, event);
-	spin_unlock_irqrestore(&pmu->lock, flags);
+	raw_spin_unlock_irqrestore(&pmu->lock, flags);
 }
 
 static void rapl_pmu_event_stop(struct perf_event *event, int mode)
@@ -263,7 +263,7 @@ static void rapl_pmu_event_stop(struct perf_event *event, int mode)
 	struct hw_perf_event *hwc = &event->hw;
 	unsigned long flags;
 
-	spin_lock_irqsave(&pmu->lock, flags);
+	raw_spin_lock_irqsave(&pmu->lock, flags);
 
 	/* mark event as deactivated and stopped */
 	if (!(hwc->state & PERF_HES_STOPPED)) {
@@ -288,7 +288,7 @@ static void rapl_pmu_event_stop(struct perf_event *event, int mode)
 		hwc->state |= PERF_HES_UPTODATE;
 	}
 
-	spin_unlock_irqrestore(&pmu->lock, flags);
+	raw_spin_unlock_irqrestore(&pmu->lock, flags);
 }
 
 static int rapl_pmu_event_add(struct perf_event *event, int mode)
@@ -297,14 +297,14 @@ static int rapl_pmu_event_add(struct perf_event *event, int mode)
 	struct hw_perf_event *hwc = &event->hw;
 	unsigned long flags;
 
-	spin_lock_irqsave(&pmu->lock, flags);
+	raw_spin_lock_irqsave(&pmu->lock, flags);
 
 	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
 
 	if (mode & PERF_EF_START)
 		__rapl_pmu_event_start(pmu, event);
 
-	spin_unlock_irqrestore(&pmu->lock, flags);
+	raw_spin_unlock_irqrestore(&pmu->lock, flags);
 
 	return 0;
 }
@@ -567,7 +567,7 @@ static int rapl_cpu_prepare(int cpu)
 	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
 	if (!pmu)
 		return -1;
-	spin_lock_init(&pmu->lock);
+	raw_spin_lock_init(&pmu->lock);
 
 	INIT_LIST_HEAD(&pmu->active_list);