From: Rob Herring
To: Will Deacon, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Jiri Olsa
Cc: Mark Rutland, Ian Rogers, Alexander Shishkin,
	linux-kernel@vger.kernel.org, honnappa.nagarahalli@arm.com,
	Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 2/9] arm64: perf: Enable pmu counter direct access for perf event on armv8
Date: Thu, 1 Oct 2020 09:01:09 -0500
Message-Id: <20201001140116.651970-3-robh@kernel.org>
In-Reply-To: <20201001140116.651970-1-robh@kernel.org>
References: <20201001140116.651970-1-robh@kernel.org>

From: Raphael Gault

Keep track of events opened with direct access to the hardware
counters and modify permissions while they are open.

The strategy used here is the same one x86 uses: every time an event
is mapped, the permissions are set if required. The atomic field added
to the mm_context keeps track of the number of events mapped and
deactivates the permissions once all of them are unmapped.

We also need to update the permissions in the context switch code so
that tasks keep the right permissions.

Signed-off-by: Raphael Gault
Signed-off-by: Rob Herring
---
v2:
 - Move mapped/unmapped into arm64 code. Fixes arm32.
 - Rebase on cap_user_time_short changes

Changes from Raphael's v4:
 - Drop homogeneous check
 - Disable access for chained counters
 - Set pmc_width in user page
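Not part of the patch, but possibly useful for review: once
PMUSERENR_EL0.ER/CR are set, userspace can read the counters directly
with mrs. A rough sketch of what such accessors could look like
(aarch64 only; the function names are made up for illustration, and
the PMSELR_EL0/PMXEVCNTR_EL0 indirection used below is among the EL0
accesses that the ER bit un-traps):

#include <stdint.h>

static inline uint64_t read_cycle_counter(void)
{
	uint64_t val;

	/* PMUSERENR_EL0.CR must be set or this traps to EL1 */
	__asm__ __volatile__("mrs %0, pmccntr_el0" : "=r" (val));
	return val;
}

static inline uint64_t read_event_counter(uint32_t idx)
{
	uint64_t val;

	/* Select counter 'idx', then read it via PMXEVCNTR_EL0 */
	__asm__ __volatile__("msr pmselr_el0, %0" : : "r" ((uint64_t)idx));
	__asm__ __volatile__("isb" ::: "memory");
	__asm__ __volatile__("mrs %0, pmxevcntr_el0" : "=r" (val));
	return val;
}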
---
 arch/arm64/include/asm/mmu.h         |  5 ++++
 arch/arm64/include/asm/mmu_context.h |  2 ++
 arch/arm64/include/asm/perf_event.h  | 14 ++++++++++
 arch/arm64/kernel/perf_event.c       | 41 ++++++++++++++++++++++++++++
 4 files changed, 62 insertions(+)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index a7a5ecaa2e83..52cfdb676f06 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -19,6 +19,11 @@ typedef struct {
 	atomic64_t	id;
+	/*
+	 * non-zero if userspace has direct access to the hardware
+	 * counters.
+	 */
+	atomic_t	pmu_direct_access;
 #ifdef CONFIG_COMPAT
 	void		*sigpage;
 #endif
 	void		*vdso;
 	unsigned long	flags;
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index f2d7537d6f83..d24589ecb07a 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -21,6 +21,7 @@
 #include <asm/cpufeature.h>
 #include <asm/proc-fns.h>
 #include <asm-generic/mm_hooks.h>
+#include <asm/perf_event.h>
 #include <asm/cputype.h>
 #include <asm/sysreg.h>
 #include <asm/tlbflush.h>
@@ -224,6 +225,7 @@ static inline void __switch_mm(struct mm_struct *next)
 	}
 
 	check_and_switch_context(next);
+	perf_switch_user_access(next);
 }
 
 static inline void
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 2c2d7dbe8a02..a025d9595d51 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -8,6 +8,7 @@
 
 #include <asm/stack_pointer.h>
 #include <asm/ptrace.h>
+#include <linux/mm_types.h>
 
 #define ARMV8_PMU_MAX_COUNTERS	32
 #define ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
@@ -251,4 +252,17 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
 	(regs)->pstate = PSR_MODE_EL1h; \
 }
 
+static inline void perf_switch_user_access(struct mm_struct *mm)
+{
+	if (!IS_ENABLED(CONFIG_PERF_EVENTS))
+		return;
+
+	if (atomic_read(&mm->context.pmu_direct_access)) {
+		write_sysreg(ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR,
+			     pmuserenr_el0);
+	} else {
+		write_sysreg(0, pmuserenr_el0);
+	}
+}
+
 #endif
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index e14f360a7883..8344dc030823 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -834,6 +834,41 @@ static int armv8pmu_access_event_idx(struct perf_event *event)
 	return event->hw.idx;
 }
 
+static void refresh_pmuserenr(void *mm)
+{
+	perf_switch_user_access(mm);
+}
+
+static void armv8pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
+{
+	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
+		return;
+
+	/*
+	 * This function relies on not being called concurrently in two
+	 * tasks in the same mm. Otherwise one task could observe
+	 * pmu_direct_access > 1 and return all the way back to
+	 * userspace with user access disabled while another task is still
+	 * doing on_each_cpu_mask() to enable user access.
+	 *
+	 * For now, this can't happen because all callers hold mmap_lock
+	 * for write. If this changes, we'll need a different solution.
+	 */
+	lockdep_assert_held_write(&mm->mmap_lock);
+
+	if (atomic_inc_return(&mm->context.pmu_direct_access) == 1)
+		on_each_cpu(refresh_pmuserenr, mm, 1);
+}
+
+static void armv8pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
+{
+	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
+		return;
+
+	if (atomic_dec_and_test(&mm->context.pmu_direct_access))
+		on_each_cpu_mask(mm_cpumask(mm), refresh_pmuserenr, mm, 1);
+}
+
 /*
  * Add an event filter to a given event.
  */
@@ -1058,6 +1093,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->filter_match		= armv8pmu_filter_match;
 
 	cpu_pmu->pmu.event_idx		= armv8pmu_access_event_idx;
+	cpu_pmu->pmu.event_mapped	= armv8pmu_event_mapped;
+	cpu_pmu->pmu.event_unmapped	= armv8pmu_event_unmapped;
 
 	cpu_pmu->name			= name;
 	cpu_pmu->map_event		= map_event;
@@ -1218,6 +1255,10 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->cap_user_time = 0;
 	userpg->cap_user_time_zero = 0;
 	userpg->cap_user_time_short = 0;
+	userpg->cap_user_rdpmc = !!(event->hw.flags & ARMPMU_EL0_RD_CNTR);
+
+	if (userpg->cap_user_rdpmc)
+		userpg->pmc_width = armv8pmu_event_is_64bit(event) ? 64 : 32;
 
 	do {
 		rd = sched_clock_read_begin(&seq);
-- 
2.25.1
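For completeness, here is a rough sketch of the userspace side of the
cap_user_rdpmc/pmc_width protocol added above, following the lockless
read loop documented in include/uapi/linux/perf_event.h. None of this
is in the patch: read_event_counter() is the illustrative accessor
sketched earlier, and a reader racing with updates from another CPU
would want a real dmb rather than the compiler barrier used here.

#include <linux/perf_event.h>
#include <stdint.h>

#define barrier() __asm__ __volatile__("" ::: "memory")

/* Illustrative mrs-based accessor from the earlier sketch */
extern uint64_t read_event_counter(uint32_t idx);

static uint64_t read_event_count(struct perf_event_mmap_page *pc)
{
	uint32_t seq, idx, width;
	int64_t pmc;
	uint64_t count;

	do {
		seq = pc->lock;
		barrier();
		count = pc->offset;
		idx = pc->index;	/* 0 means no direct access */
		if (pc->cap_user_rdpmc && idx) {
			width = pc->pmc_width;	/* 32 or 64 on armv8 */
			pmc = read_event_counter(idx - 1);
			/* sign-extend the pmc_width-bit delta to 64 bits */
			pmc <<= 64 - width;
			pmc >>= 64 - width;
			count += pmc;
		}
		barrier();
	} while (pc->lock != seq);

	return count;
}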