Subject: Re: [PATCH] arm64: mte: allow async MTE to be upgraded to sync on a per-CPU basis
From: Vincenzo Frascino
To: Peter Collingbourne, Catalin Marinas, Will Deacon
Cc: Evgenii Stepanov, linux-arm-kernel@lists.infradead.org
Date: Thu, 3 Jun 2021 14:02:01 +0100
Message-ID: <5ee9d9a1-5b13-ea21-67df-e713c76fc163@arm.com>
In-Reply-To: <20210602232445.3829248-1-pcc@google.com>
References: <20210602232445.3829248-1-pcc@google.com>

Hi Peter,

On 6/3/21 12:24 AM, Peter Collingbourne wrote:
> On some CPUs the performance of MTE in synchronous mode is the same
> as that of asynchronous mode. This makes it worthwhile to enable
> synchronous mode on those CPUs when asynchronous mode is requested,
> in order to gain the error detection benefits of synchronous mode
> without the performance downsides. Therefore, make it possible for CPUs
> to opt into upgrading to synchronous mode via a new mte-prefer-sync
> device tree attribute.
>

I had a look at your patch and I think there are a few points worth
mentioning:

1) The approach you are using is per-CPU, hence we might end up with a
system that has some PEs configured as sync and others configured as
async. We currently support only a system-wide setting.

2) async and sync have slightly different semantics (e.g. in sync mode
the faulting access does not take place and has to be emulated), which
means that a mixed configuration affects the ABI.

3) In your patch you use the DT to enforce sync mode on a CPU; it is
probably better to have an MIDR-based scheme to mark these CPUs.
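
To expand on (2): a task that requested asynchronous tag checking via
prctl() would see different fault reporting depending on which CPU it
happens to run on, since an upgraded CPU raises a synchronous tag check
fault (SEGV_MTESERR) while the others report an asynchronous one
(SEGV_MTEAERR). A rough userspace illustration (only a sketch, error
handling omitted; the fallback #defines mirror the prctl() ABI values):

#include <sys/prctl.h>

/* Fallbacks in case the libc headers predate MTE support. */
#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#endif
#ifndef PR_MTE_TCF_ASYNC
#define PR_MTE_TCF_ASYNC	(1UL << 2)
#define PR_MTE_TAG_SHIFT	3
#endif

/*
 * Request asynchronous tag checking for the current task, allowing all
 * tags except 0 in the randomly generated set.  On a mixed sync/async
 * system a subsequent tag check fault is reported as SEGV_MTESERR on an
 * "upgraded" CPU and as SEGV_MTEAERR elsewhere, even though the task
 * asked for the same mode everywhere.
 */
static int request_async_mte(void)
{
	return prctl(PR_SET_TAGGED_ADDR_CTRL,
		     PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC |
		     (0xfffeUL << PR_MTE_TAG_SHIFT),
		     0, 0, 0);
}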
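
For (3) I was thinking of something along the lines of the below
(untested sketch: the CPU list is an empty placeholder and the helper
name is only illustrative; it could sit next to the other MIDR range
lists, e.g. in cpufeature.c):

#include <asm/cputype.h>

/*
 * Placeholder: CPUs whose synchronous MTE performance matches the
 * asynchronous mode would be listed here, e.g. via MIDR_ALL_VERSIONS().
 */
static const struct midr_range mte_prefer_sync_cpus[] = {
	{},
};

static bool cpu_prefers_sync_mte(void)
{
	return is_midr_in_range_list(read_cpuid_id(), mte_prefer_sync_cpus);
}

update_sctlr_el1() could then consult something like this instead of the
per-CPU mte-prefer-sync DT flag, keeping the policy inside the kernel
rather than in the device tree.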

> Signed-off-by: Peter Collingbourne
> Link: https://linux-review.googlesource.com/id/Id6f95b71fde6e701dd30b5e108126af7286147e8
> ---
>  arch/arm64/kernel/process.c |  8 ++++++++
>  arch/arm64/kernel/smp.c     | 22 ++++++++++++++++++++++
>  2 files changed, 30 insertions(+)
> 
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index b4bb67f17a2c..ba6ed0c1390c 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -527,8 +527,16 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
>  	write_sysreg(val, cntkctl_el1);
>  }
>  
> +DECLARE_PER_CPU_READ_MOSTLY(bool, mte_prefer_sync);
> +
>  static void update_sctlr_el1(u64 sctlr)
>  {
> +	if ((sctlr & SCTLR_EL1_TCF0_MASK) == SCTLR_EL1_TCF0_ASYNC &&
> +	    __this_cpu_read(mte_prefer_sync)) {
> +		sctlr &= ~SCTLR_EL1_TCF0_MASK;
> +		sctlr |= SCTLR_EL1_TCF0_SYNC;
> +	}
> +
>  	/*
>  	 * EnIA must not be cleared while in the kernel as this is necessary for
>  	 * in-kernel PAC. It will be cleared on kernel exit if needed.
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index dcd7041b2b07..3a475722f768 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -56,6 +56,8 @@
>  
>  DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);
>  EXPORT_PER_CPU_SYMBOL(cpu_number);
> +DEFINE_PER_CPU_READ_MOSTLY(bool, mte_prefer_sync);
> +EXPORT_PER_CPU_SYMBOL(mte_prefer_sync);
>  
>  /*
>   * as from 2.5, kernels no longer have an init_tasks structure
> @@ -649,6 +651,16 @@ static void __init acpi_parse_and_init_cpus(void)
>  #define acpi_parse_and_init_cpus(...) do { } while (0)
>  #endif
>  
> +/*
> + * Read per-CPU properties from the device tree and store them in per-CPU
> + * variables for efficient access later.
> + */
> +static void __init of_read_cpu_properties(int cpu, struct device_node *dn)
> +{
> +	per_cpu(mte_prefer_sync, cpu) =
> +		of_property_read_bool(dn, "mte-prefer-sync");
> +}
> +
>  /*
>   * Enumerate the possible CPU set from the device tree and build the
>   * cpu logical map array containing MPIDR values related to logical
> @@ -789,6 +801,16 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
>  		set_cpu_present(cpu, true);
>  		numa_store_cpu_info(cpu);
>  	}
> +
> +	if (acpi_disabled) {
> +		struct device_node *dn;
> +		int cpu = 0;
> +
> +		for_each_of_cpu_node(dn) {
> +			of_read_cpu_properties(cpu, dn);
> +			cpu++;
> +		}
> +	}
>  }
>  
>  static const char *ipi_types[NR_IPI] __tracepoint_string = {
> -- 

Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel