Date: Tue, 13 Jul 2021 18:48:05 +0100
From: Will Deacon
To: Peter Collingbourne
Cc: Catalin Marinas, Vincenzo Frascino, Evgenii Stepanov, Szabolcs Nagy,
	Tejas Belagod, linux-arm-kernel@lists.infradead.org, Greg Kroah-Hartman
Subject: Re: [PATCH v9 3/4] arm64: mte: introduce a per-CPU tag checking mode preference
Message-ID: <20210713174805.GC30304@willie-the-truck>
References: <20210702194110.2045282-1-pcc@google.com>
 <20210702194110.2045282-4-pcc@google.com>
 <20210707111501.GB21926@willie-the-truck>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: 
User-Agent: Mutt/1.10.1 (2018-07-13)
On Mon, Jul 12, 2021 at 11:59:39AM -0700, Peter Collingbourne wrote:
> On Wed, Jul 7, 2021 at 4:15 AM Will Deacon wrote:
> >
> > On Fri, Jul 02, 2021 at 12:41:09PM -0700, Peter Collingbourne wrote:
> > > Add a per-CPU sysfs node, mte_tcf_preferred, that allows the preferred
> > > tag checking mode to be configured. The current possible values are
> > > async and sync.
> > >
> > > Link: https://linux-review.googlesource.com/id/I7493dcd533a2785a1437b16c3f6b50919f840854
> > > Signed-off-by: Peter Collingbourne
> > > Reviewed-by: Catalin Marinas
> > > ---
> > >  arch/arm64/kernel/mte.c | 77 +++++++++++++++++++++++++++++++++++++++--
> > >  1 file changed, 75 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> > > index 53d89915029d..48f218e070cc 100644
> > > --- a/arch/arm64/kernel/mte.c
> > > +++ b/arch/arm64/kernel/mte.c
> > > @@ -4,6 +4,7 @@
> > >   */
> > >
> > >  #include <linux/bitops.h>
> > > +#include <linux/cpu.h>
> > >  #include <linux/kernel.h>
> > >  #include <linux/mm.h>
> > >  #include <linux/prctl.h>
> > > @@ -26,6 +27,8 @@ u64 gcr_kernel_excl __ro_after_init;
> > >
> > >  static bool report_fault_once = true;
> > >
> > > +static DEFINE_PER_CPU_READ_MOSTLY(u64, mte_tcf_preferred);
> > > +
> > >  #ifdef CONFIG_KASAN_HW_TAGS
> > >  /* Whether the MTE asynchronous mode is enabled. */
> > >  DEFINE_STATIC_KEY_FALSE(mte_async_mode);
> > > @@ -199,17 +202,24 @@ static void update_gcr_el1_excl(u64 excl)
> > >
> > >  static void mte_update_sctlr_user(struct task_struct *task)
> > >  {
> > > +	/*
> > > +	 * This can only be called on the current or next task since the CPU
> > > +	 * must match where the thread is going to run.
> > > +	 */
> > >  	unsigned long sctlr = task->thread.sctlr_user;
> > > -	unsigned long pref = MTE_CTRL_TCF_ASYNC;
> > >  	unsigned long mte_ctrl = task->thread.mte_ctrl;
> > > -	unsigned long resolved_mte_tcf = (mte_ctrl & pref) ? pref : mte_ctrl;
> > > +	unsigned long pref, resolved_mte_tcf;
> > >
> > > +	preempt_disable();
> > > +	pref = __this_cpu_read(mte_tcf_preferred);
> > > +	resolved_mte_tcf = (mte_ctrl & pref) ? pref : mte_ctrl;
> > >  	sctlr &= ~SCTLR_EL1_TCF0_MASK;
> > >  	if (resolved_mte_tcf & MTE_CTRL_TCF_ASYNC)
> > >  		sctlr |= SCTLR_EL1_TCF0_ASYNC;
> > >  	else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC)
> > >  		sctlr |= SCTLR_EL1_TCF0_SYNC;
> > >  	task->thread.sctlr_user = sctlr;
> > > +	preempt_enable();
> > >  }
> > >
> > >  void mte_thread_init_user(void)
> > > @@ -441,3 +451,66 @@ int mte_ptrace_copy_tags(struct task_struct *child, long request,
> > >
> > >  	return ret;
> > >  }
> > > +
> > > +static ssize_t mte_tcf_preferred_show(struct device *dev,
> > > +				      struct device_attribute *attr, char *buf)
> > > +{
> > > +	switch (per_cpu(mte_tcf_preferred, dev->id)) {
> > > +	case MTE_CTRL_TCF_ASYNC:
> > > +		return sysfs_emit(buf, "async\n");
> > > +	case MTE_CTRL_TCF_SYNC:
> > > +		return sysfs_emit(buf, "sync\n");
> > > +	default:
> > > +		return sysfs_emit(buf, "???\n");
> > > +	}
> > > +}
> > > +
> > > +static void sync_sctlr(void *arg)
> > > +{
> > > +	mte_update_sctlr_user(current);
> > > +	set_task_sctlr_el1(current->thread.sctlr_user);
> > > +}
> > > +
> > > +static ssize_t mte_tcf_preferred_store(struct device *dev,
> > > +				       struct device_attribute *attr,
> > > +				       const char *buf, size_t count)
> > > +{
> > > +	ssize_t ret = 0;
> > > +	u64 tcf;
> > > +
> > > +	if (sysfs_streq(buf, "async"))
> > > +		tcf = MTE_CTRL_TCF_ASYNC;
> > > +	else if (sysfs_streq(buf, "sync"))
> > > +		tcf = MTE_CTRL_TCF_SYNC;
> > > +	else
> > > +		return -EINVAL;
> > > +
> > > +	device_lock(dev);
> > > +	per_cpu(mte_tcf_preferred, dev->id) = tcf;
> > > +
> > > +	if (cpu_online(dev->id))
> > > +		ret = smp_call_function_single(dev->id, sync_sctlr, NULL, 0);
> >
> > Hmm, so this call could land right in the middle of a concurrent prctl().
> > What happens in that case?
> 
> I don't think anything can go wrong. When the prctl sets mte_ctrl we
> then need to call mte_update_sctlr_user followed by set_task_sctlr_el1
> and it doesn't matter if it happens multiple times (either
> mte_update_sctlr_user/set_task_sctlr_el1/mte_update_sctlr_user/set_task_sctlr_el1
> or even the less likely
> mte_update_sctlr_user/mte_update_sctlr_user/set_task_sctlr_el1/set_task_sctlr_el1).

I think you're still (at least) relying on the compiler not to tear or
cache accesses to ->thread.sctlr_user, no?

It seems like it would be a lot simpler just to leave the change until the
next context switch and not send the IPI at all. I think that's a reasonable
behaviour because the write to sysfs is racy anyway, so we can just
document this.

> Actually, I'm not sure we need any code in the sync_sctlr function at
> all. Simply scheduling an empty function onto the CPU will cause
> mte_update_sctlr_user and then update_sctlr_el1 to be called when the
> task that was originally running on the CPU gets rescheduled onto it,
> as a result of the change to mte_thread_switch.

I'm not sure I agree here -- I thought smp_call_function_single() ran the
target function directly from the IPI handler in interrupt context,
without the need for a context-switch.

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel