Date: Fri, 18 Dec 2020 14:03:39 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com,
 Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com,
 elver@google.com, eugenis@google.com, glider@google.com, gor@linux.ibm.com,
 kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org, vincenzo.frascino@arm.com, will.deacon@arm.com
Subject: [patch 41/78] arm64: mte: convert gcr_user into an exclude mask
Message-ID: <20201218220339.DTCIzvVAQ%akpm@linux-foundation.org>
In-Reply-To: <20201218140046.497484741326828e5b5d46ec@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Vincenzo Frascino <vincenzo.frascino@arm.com>
Subject: arm64: mte: convert gcr_user into an exclude mask

The gcr_user mask is a per-thread mask that represents the tags that are
excluded from random generation when the Memory Tagging Extension is
present and an 'irg' instruction is invoked.  gcr_user affects the
behavior on EL0 only.

Currently that mask is an include mask, and it is controlled by the user
via prctl(), while GCR_EL1 accepts an exclude mask.  Convert the include
mask into an exclude one to simplify the register setting.

Note: this change will affect gcr_kernel (for EL1), introduced by a
future patch.
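[ Illustration, not part of the patch: the include/exclude conversion
  that set_mte_ctrl() and get_mte_ctrl() perform below can be sketched
  as a standalone program.  The constants and the helper names
  (incl_to_excl, excl_to_incl) are local stand-ins mirroring the
  kernel's SYS_GCR_EL1_EXCL_MASK and PR_MTE_TAG_* definitions, so the
  sketch builds on its own. ]

#include <stdint.h>
#include <stdio.h>

#define GCR_EXCL_MASK  0xffffUL                 /* GCR_EL1.Exclude, bits [15:0] */
#define TAG_SHIFT      3                        /* mirrors PR_MTE_TAG_SHIFT */
#define TAG_MASK       (0xffffUL << TAG_SHIFT)  /* mirrors PR_MTE_TAG_MASK */

/* prctl() hands the kernel an include mask; GCR_EL1 wants an exclude mask. */
static uint64_t incl_to_excl(uint64_t prctl_arg)
{
	return ~((prctl_arg & TAG_MASK) >> TAG_SHIFT) & GCR_EXCL_MASK;
}

/* The reverse conversion: stored exclude mask back to the user's include mask. */
static uint64_t excl_to_incl(uint64_t excl)
{
	return (~excl & GCR_EXCL_MASK) << TAG_SHIFT;
}

int main(void)
{
	uint64_t arg  = 0x3UL << TAG_SHIFT;	/* include tags 0 and 1 only */
	uint64_t excl = incl_to_excl(arg);

	printf("exclude mask: 0x%04lx\n", (unsigned long)excl);	/* 0xfffc */
	printf("round trip:   0x%05lx\n",
	       (unsigned long)excl_to_incl(excl));		/* 0x00018 == arg */
	return 0;
}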
Link: https://lkml.kernel.org/r/946dd31be833b660334c4f93410acf6d6c4cf3c4.1606161801.git.andreyknvl@google.com
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Branislav Rankov <Branislav.Rankov@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Marco Elver <elver@google.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 arch/arm64/include/asm/processor.h |    2 -
 arch/arm64/kernel/mte.c            |   29 +++++++++++++--------------
 2 files changed, 16 insertions(+), 15 deletions(-)

--- a/arch/arm64/include/asm/processor.h~arm64-mte-convert-gcr_user-into-an-exclude-mask
+++ a/arch/arm64/include/asm/processor.h
@@ -152,7 +152,7 @@ struct thread_struct {
 #endif
 #ifdef CONFIG_ARM64_MTE
 	u64			sctlr_tcf0;
-	u64			gcr_user_incl;
+	u64			gcr_user_excl;
 #endif
 };
 
--- a/arch/arm64/kernel/mte.c~arm64-mte-convert-gcr_user-into-an-exclude-mask
+++ a/arch/arm64/kernel/mte.c
@@ -156,23 +156,22 @@ static void set_sctlr_el1_tcf0(u64 tcf0)
 	preempt_enable();
 }
 
-static void update_gcr_el1_excl(u64 incl)
+static void update_gcr_el1_excl(u64 excl)
 {
-	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
 
 	/*
-	 * Note that 'incl' is an include mask (controlled by the user via
-	 * prctl()) while GCR_EL1 accepts an exclude mask.
+	 * Note that the mask controlled by the user via prctl() is an
+	 * include while GCR_EL1 accepts an exclude mask.
 	 * No need for ISB since this only affects EL0 currently, implicit
 	 * with ERET.
 	 */
 	sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, excl);
 }
 
-static void set_gcr_el1_excl(u64 incl)
+static void set_gcr_el1_excl(u64 excl)
 {
-	current->thread.gcr_user_incl = incl;
-	update_gcr_el1_excl(incl);
+	current->thread.gcr_user_excl = excl;
+	update_gcr_el1_excl(excl);
 }
 
 void flush_mte_state(void)
@@ -187,7 +186,7 @@ void flush_mte_state(void)
 	/* disable tag checking */
 	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
 	/* reset tag generation mask */
-	set_gcr_el1_excl(0);
+	set_gcr_el1_excl(SYS_GCR_EL1_EXCL_MASK);
 }
 
 void mte_thread_switch(struct task_struct *next)
@@ -198,7 +197,7 @@ void mte_thread_switch(struct task_struc
 	/* avoid expensive SCTLR_EL1 accesses if no change */
 	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
 		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
-	update_gcr_el1_excl(next->thread.gcr_user_incl);
+	update_gcr_el1_excl(next->thread.gcr_user_excl);
 }
 
 void mte_suspend_exit(void)
@@ -206,13 +205,14 @@ void mte_suspend_exit(void)
 	if (!system_supports_mte())
 		return;
 
-	update_gcr_el1_excl(current->thread.gcr_user_incl);
+	update_gcr_el1_excl(current->thread.gcr_user_excl);
 }
 
 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 {
 	u64 tcf0;
-	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
+	u64 gcr_excl = ~((arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT) &
+		       SYS_GCR_EL1_EXCL_MASK;
 
 	if (!system_supports_mte())
 		return 0;
@@ -233,10 +233,10 @@ long set_mte_ctrl(struct task_struct *ta
 
 	if (task != current) {
 		task->thread.sctlr_tcf0 = tcf0;
-		task->thread.gcr_user_incl = gcr_incl;
+		task->thread.gcr_user_excl = gcr_excl;
 	} else {
 		set_sctlr_el1_tcf0(tcf0);
-		set_gcr_el1_excl(gcr_incl);
+		set_gcr_el1_excl(gcr_excl);
 	}
 
 	return 0;
@@ -245,11 +245,12 @@ long set_mte_ctrl(struct task_struct *ta
 long get_mte_ctrl(struct task_struct *task)
 {
 	unsigned long ret;
+	u64 incl = ~task->thread.gcr_user_excl & SYS_GCR_EL1_EXCL_MASK;
 
 	if (!system_supports_mte())
 		return 0;
 
-	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
+	ret = incl << PR_MTE_TAG_SHIFT;
 
 	switch (task->thread.sctlr_tcf0) {
 	case SCTLR_EL1_TCF0_NONE:
_
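
[ For context, a userspace-side sketch, also not part of the patch: the
  prctl() ABI keeps its include-mask semantics after this change; only
  the kernel-internal storage flips to an exclude mask.  The fallback
  defines mirror include/uapi/linux/prctl.h for toolchains whose headers
  predate the MTE uapi. ]

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_GET_TAGGED_ADDR_CTRL	56
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#endif
#ifndef PR_MTE_TAG_SHIFT
#define PR_MTE_TCF_SYNC		(1UL << 1)
#define PR_MTE_TAG_SHIFT	3
#endif

int main(void)
{
	unsigned long incl = 0x3;	/* let irg generate tags 0 and 1 only */
	int ctrl;

	/* Hand the kernel an include mask; set_mte_ctrl() inverts and stores it. */
	if (prctl(PR_SET_TAGGED_ADDR_CTRL,
		  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
		  (incl << PR_MTE_TAG_SHIFT), 0, 0, 0))
		perror("PR_SET_TAGGED_ADDR_CTRL");	/* EINVAL without MTE */

	/* get_mte_ctrl() converts the stored exclude mask back to an include mask. */
	ctrl = prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0);
	if (ctrl >= 0)
		printf("include mask read back: 0x%lx\n",
		       ((unsigned long)ctrl >> PR_MTE_TAG_SHIFT) & 0xffff);
	return 0;
}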