From: Rob Herring
Date: Thu, 3 Jun 2021 11:40:44 -0500
Subject: Re: [PATCH v8 3/5] arm64: perf: Enable PMU counter userspace access for perf event
To: Mark Rutland, Will Deacon
Cc: Catalin Marinas, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers, Alexander Shishkin, Honnappa Nagarahalli, Zachary.Leaf@arm.com, Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama, linux-arm-kernel, linux-kernel@vger.kernel.org
References: <20210517195405.3079458-1-robh@kernel.org> <20210517195405.3079458-4-robh@kernel.org> <20210601135526.GA3326@C02TD0UTHF1T.local> <20210601171159.GD3326@C02TD0UTHF1T.local>
In-Reply-To: <20210601171159.GD3326@C02TD0UTHF1T.local>

On Tue, Jun 1, 2021 at 12:12 PM Mark Rutland wrote:
>
> On Tue, Jun 01, 2021 at 10:00:53AM -0500, Rob Herring wrote:
> > On Tue, Jun 1, 2021 at 8:55 AM Mark Rutland wrote:
> > > On Mon, May 17, 2021 at 02:54:03PM -0500, Rob Herring wrote:
> > > > +static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> > > > +{
> > > > +	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> > > > +
> > > > +	if (!bitmap_empty(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS)) {
> > > > +		int i;
> > > > +		/* Don't need to clear assigned counters. */
> > > > +		bitmap_xor(cpuc->dirty_mask, cpuc->dirty_mask, cpuc->used_mask, ARMPMU_MAX_HWEVENTS);
> > > > +
> > > > +		for_each_set_bit(i, cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS) {
> > > > +			if (i == ARMV8_IDX_CYCLE_COUNTER)
> > > > +				write_sysreg(0, pmccntr_el0);
> > > > +			else
> > > > +				armv8pmu_write_evcntr(i, 0);
> > > > +		}
> > > > +		bitmap_zero(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS);
> > > > +	}
> > > > +
> > > > +	write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
> > > > +}
> > >
> > > This still leaks the values of CPU-bound events, or task-bound events
> > > owned by others, right?
> >
> > For CPU-bound events, yes. There's not any way to prevent that without
> > per-counter access controls.
> >
> > It is clearing others' task-bound events.
>
> Sorry, I misspoke. When I said "task-bound events owned by others", I
> had meant events bound to this task, but owned by someone else (e.g.
> the system administrator).

Ah yeah, those would still be exposed, but that would only tell the
task someone is watching them. Though I guess the task could write
those counters and corrupt the data.

> Thanks for confirming!
>
> > > > +static void armv8pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
> > > > +{
> > > > +	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR) || (atomic_read(&event->mmap_count) != 1))
> > > > +		return;
> > > > +
> > > > +	if (atomic_inc_return(&event->ctx->nr_user) == 1) {
> > > > +		unsigned long flags;
> > > > +		atomic_inc(&event->pmu->sched_cb_usage);
> > > > +		local_irq_save(flags);
> > > > +		armv8pmu_enable_user_access(to_arm_pmu(event->pmu));
> > > > +		local_irq_restore(flags);
> > > > +	}
> > > > +}
> > > > +
> > > > +static void armv8pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
> > > > +{
> > > > +	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR) || (atomic_read(&event->mmap_count) != 1))
> > > > +		return;
> > > > +
> > > > +	if (atomic_dec_and_test(&event->ctx->nr_user)) {
> > > > +		atomic_dec(&event->pmu->sched_cb_usage);
> > > > +		armv8pmu_disable_user_access();
> > > > +	}
> > > > +}
> > >
> > > We can open an event for task A, but call mmap()/munmap() for that event
> > > from task B, which will do the enable/disable on task B rather than task
> > > A. The core doesn't enforce that the mmap is performed on the same core,
> > > so I don't think this is quite right, unfortunately.
> >
> > Why do we care and who wants to do that? It all seems like a
> > convoluted scenario that isn't really going to happen. I prefer to
> > not support that until someone asks for it. Maybe we should check
> > for the condition (event->ctx->task != current) though.
>
> My reason for caring is that it means our accounting structures aren't
> aligned with the actual CPU state, and it's going to be very easy for
> this to go wrong as things get reworked in future.
>
> If we're saying someone shouldn't do this, then we should enforce that
> they can't. If they can, then I'd prefer to have deterministic
> behaviour that isn't going to cause us issues in future.
>
> I think that we should treat this like changing other event properties
> (e.g. the period, or enabling/disabling the event), with an
> event_function_call, since that will do the right thing regardless. I
> also expect that people will want to do setup/teardown in a single
> thread, and this would make that work too.

Looks like that will take some work in the core to get
event_function_call to be called in this case.

A much simpler solution is simply not enabling/disabling user access
on mmap/munmap. That was added along the way for x86, but I don't
think that buys us much given the departure from the x86
implementation. We already have to enable the feature via sysctl (as
Will wants it default off), and the open has to request user access.
If a task succeeded in opening the event, is there any scenario where
it can't mmap its event? If so, could that condition not also be
checked in open?

Rob

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel