From: Andrey Konovalov
Date: Thu, 7 Oct 2021 01:06:32 +0200
Subject: Re: [PATCH v3 4/5] arm64: mte: Add asymmetric mode support
To: Vincenzo Frascino
Cc: Linux ARM, LKML, kasan-dev, Andrew Morton, Catalin Marinas,
 Will Deacon, Dmitry Vyukov, Andrey Ryabinin, Alexander Potapenko,
 Marco Elver, Evgenii Stepanov, Branislav Rankov, Lorenzo Pieralisi
In-Reply-To: <20211006154751.4463-5-vincenzo.frascino@arm.com>
References: <20211006154751.4463-1-vincenzo.frascino@arm.com>
 <20211006154751.4463-5-vincenzo.frascino@arm.com>
List-ID: linux-kernel@vger.kernel.org

On Wed, Oct 6, 2021 at 5:48 PM Vincenzo Frascino wrote:
>
> MTE provides an asymmetric mode for detecting tag exceptions. In
> particular, when such a mode is present, the CPU triggers a fault
> on a tag mismatch during a load operation and asynchronously updates
> a register when a tag mismatch is detected during a store operation.
>
> Add support for MTE asymmetric mode.
>
> Note: If the CPU does not support MTE asymmetric mode the kernel falls
> back on synchronous mode which is the default for kasan=on.
>
> Cc: Will Deacon
> Cc: Catalin Marinas
> Cc: Andrey Konovalov
> Signed-off-by: Vincenzo Frascino
> Reviewed-by: Catalin Marinas
> ---
>  arch/arm64/include/asm/memory.h    |  1 +
>  arch/arm64/include/asm/mte-kasan.h |  5 ++++
>  arch/arm64/include/asm/mte.h       |  8 +++---
>  arch/arm64/include/asm/uaccess.h   |  4 +--
>  arch/arm64/kernel/mte.c            | 43 +++++++++++++++++++++++++-----
>  5 files changed, 49 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index f1745a843414..1b9a1e242612 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -243,6 +243,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>  #ifdef CONFIG_KASAN_HW_TAGS
>  #define arch_enable_tagging_sync()		mte_enable_kernel_sync()
>  #define arch_enable_tagging_async()		mte_enable_kernel_async()
> +#define arch_enable_tagging_asymm()		mte_enable_kernel_asymm()
>  #define arch_force_async_tag_fault()		mte_check_tfsr_exit()
>  #define arch_get_random_tag()			mte_get_random_tag()
>  #define arch_get_mem_tag(addr)			mte_get_mem_tag(addr)
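For context (an illustrative sketch only, not part of this patch): assuming
the follow-up patch wires a kasan.mode=asymm option into the KASAN HW_TAGS
core, the arch_enable_tagging_asymm() hook added above would presumably be
driven from the mode selection in mm/kasan/hw_tags.c along these lines; the
KASAN_ARG_MODE_ASYMM value and the switch below are my assumptions, not the
actual mm/kasan code:

	/*
	 * Hypothetical mode selection in kasan_init_hw_tags(); only the
	 * ASYMM case would be new on top of the existing sync/async
	 * handling, and it maps onto arch_enable_tagging_asymm().
	 */
	switch (kasan_arg_mode) {
	case KASAN_ARG_MODE_SYNC:
		hw_enable_tagging_sync();
		break;
	case KASAN_ARG_MODE_ASYNC:
		hw_enable_tagging_async();
		break;
	case KASAN_ARG_MODE_ASYMM:
		/* Falls back to sync internally when FEAT_MTE3 is absent. */
		hw_enable_tagging_asymm();
		break;
	}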
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> index 22420e1f8c03..478b9bcf69ad 100644
> --- a/arch/arm64/include/asm/mte-kasan.h
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -130,6 +130,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
>
>  void mte_enable_kernel_sync(void);
>  void mte_enable_kernel_async(void);
> +void mte_enable_kernel_asymm(void);
>
>  #else /* CONFIG_ARM64_MTE */
>
> @@ -161,6 +162,10 @@ static inline void mte_enable_kernel_async(void)
>  {
>  }
>
> +static inline void mte_enable_kernel_asymm(void)
> +{
> +}
> +
>  #endif /* CONFIG_ARM64_MTE */
>
>  #endif /* __ASSEMBLY__ */
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 02511650cffe..075539f5f1c8 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -88,11 +88,11 @@ static inline int mte_ptrace_copy_tags(struct task_struct *child,
>
>  #ifdef CONFIG_KASAN_HW_TAGS
>  /* Whether the MTE asynchronous mode is enabled. */
> -DECLARE_STATIC_KEY_FALSE(mte_async_mode);
> +DECLARE_STATIC_KEY_FALSE(mte_async_or_asymm_mode);
>
> -static inline bool system_uses_mte_async_mode(void)
> +static inline bool system_uses_mte_async_or_asymm_mode(void)
>  {
> -	return static_branch_unlikely(&mte_async_mode);
> +	return static_branch_unlikely(&mte_async_or_asymm_mode);
>  }
>
>  void mte_check_tfsr_el1(void);
> @@ -121,7 +121,7 @@ static inline void mte_check_tfsr_exit(void)
>  	mte_check_tfsr_el1();
>  }
>  #else
> -static inline bool system_uses_mte_async_mode(void)
> +static inline bool system_uses_mte_async_or_asymm_mode(void)
>  {
>  	return false;
>  }
> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index 190b494e22ab..315354047d69 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -196,13 +196,13 @@ static inline void __uaccess_enable_tco(void)
>   */
>  static inline void __uaccess_disable_tco_async(void)
>  {
> -	if (system_uses_mte_async_mode())
> +	if (system_uses_mte_async_or_asymm_mode())
>  		__uaccess_disable_tco();
>  }
>
>  static inline void __uaccess_enable_tco_async(void)
>  {
> -	if (system_uses_mte_async_mode())
> +	if (system_uses_mte_async_or_asymm_mode())
>  		__uaccess_enable_tco();
>  }
>
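A side note on why the TCO helpers above only need to track the store-side
(async-like) behaviour: asymmetric mode is simply the remaining TCF encoding,
synchronous reporting for loads and asynchronous reporting for stores, so for
uaccess purposes it behaves exactly like async mode. For reference, the
SCTLR_ELx.TCF encodings (ASYMM being the value introduced earlier in this
series; quoted from memory, so please double-check against sysreg.h):

	#define SCTLR_ELx_TCF_SHIFT	40
	#define SCTLR_ELx_TCF_NONE	(UL(0x0) << SCTLR_ELx_TCF_SHIFT)
	#define SCTLR_ELx_TCF_SYNC	(UL(0x1) << SCTLR_ELx_TCF_SHIFT)
	#define SCTLR_ELx_TCF_ASYNC	(UL(0x2) << SCTLR_ELx_TCF_SHIFT)
	#define SCTLR_ELx_TCF_ASYMM	(UL(0x3) << SCTLR_ELx_TCF_SHIFT)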
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index e5e801bc5312..d7da4e3924c4 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -26,9 +26,14 @@
>  static DEFINE_PER_CPU_READ_MOSTLY(u64, mte_tcf_preferred);
>
>  #ifdef CONFIG_KASAN_HW_TAGS
> -/* Whether the MTE asynchronous mode is enabled. */
> -DEFINE_STATIC_KEY_FALSE(mte_async_mode);
> -EXPORT_SYMBOL_GPL(mte_async_mode);
> +/*
> + * The MTE asynchronous and asymmetric mode have the same
> + * behavior for the store operations.
> + *
> + * Whether the MTE asynchronous or asymmetric mode is enabled.

Nit: The asynchronous and asymmetric MTE modes have the same behavior
for store operations. This flag is set when either of these modes is
enabled.

> + */
> +DEFINE_STATIC_KEY_FALSE(mte_async_or_asymm_mode);
> +EXPORT_SYMBOL_GPL(mte_async_or_asymm_mode);
>  #endif
>
>  static void mte_sync_page_tags(struct page *page, pte_t old_pte,
> @@ -116,7 +121,7 @@ void mte_enable_kernel_sync(void)
>  	 * Make sure we enter this function when no PE has set
>  	 * async mode previously.
>  	 */
> -	WARN_ONCE(system_uses_mte_async_mode(),
> +	WARN_ONCE(system_uses_mte_async_or_asymm_mode(),
>  		"MTE async mode enabled system wide!");
>
>  	__mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
> @@ -134,8 +139,34 @@ void mte_enable_kernel_async(void)
>  	 * mode in between sync and async, this strategy needs
>  	 * to be reviewed.
>  	 */
> -	if (!system_uses_mte_async_mode())
> -		static_branch_enable(&mte_async_mode);
> +	if (!system_uses_mte_async_or_asymm_mode())
> +		static_branch_enable(&mte_async_or_asymm_mode);
> +}
> +
> +void mte_enable_kernel_asymm(void)
> +{
> +	if (cpus_have_cap(ARM64_MTE_ASYMM)) {
> +		__mte_enable_kernel("asymmetric", SCTLR_ELx_TCF_ASYMM);
> +
> +		/*
> +		 * MTE asymm mode behaves as async mode for store
> +		 * operations. The mode is set system wide by the
> +		 * first PE that executes this function.
> +		 *
> +		 * Note: If in future KASAN acquires a runtime switching
> +		 * mode in between sync and async, this strategy needs
> +		 * to be reviewed.
> +		 */
> +		if (!system_uses_mte_async_or_asymm_mode())
> +			static_branch_enable(&mte_async_or_asymm_mode);
> +	} else {
> +		/*
> +		 * If the CPU does not support MTE asymmetric mode the
> +		 * kernel falls back on synchronous mode which is the
> +		 * default for kasan=on.
> +		 */
> +		mte_enable_kernel_sync();
> +	}
>  }
>  #endif
>
> --
> 2.33.0
>

Acked-by: Andrey Konovalov
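(Usage note, under the assumption that the final patch in this series adds
the matching KASAN mode: with the whole series applied, the new mode would be
selected at boot with something like

	kasan.mode=asymm

on the kernel command line, and on CPUs without FEAT_MTE3 the kernel would
transparently end up in synchronous mode via the fallback above.)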