From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, Will Deacon,
	Dave P Martin, Vincenzo Frascino, Szabolcs Nagy, Kevin Brodsky,
	Andrey Konovalov, Peter Collingbourne, Andrew Morton, Steven Price
Subject: [PATCH v5 07/25] arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE
Date: Wed, 24 Jun 2020 18:52:26 +0100
Message-Id: <20200624175244.25837-8-catalin.marinas@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200624175244.25837-1-catalin.marinas@arm.com>
References: <20200624175244.25837-1-catalin.marinas@arm.com>

Pages allocated by the kernel are not guaranteed to have the tags
zeroed, especially as the kernel does not (yet) use MTE itself. To
ensure the user can still access such pages when mapped into its
address space, clear the tags via set_pte_at(). A new page flag -
PG_mte_tagged (PG_arch_2) - is used to track pages with valid
allocation tags.

Since the zero page is mapped as pte_special(), it won't be covered by
the above set_pte_at() mechanism. Clear its tags during early MTE
initialisation.

Co-developed-by: Steven Price
Signed-off-by: Steven Price
Signed-off-by: Catalin Marinas
Cc: Will Deacon
---

Notes:
    v5:
    - Fix the handling of compound pages. Previously, set_pte_at() could
      have erased already valid tags if the first page in a compound one
      did not have the PG_mte_tagged flag set.
    - Move the multi_tag_transfer_size macro from assembler.h to mte.S.
    - Ignore pte_special() mappings and clear the tags in the zero page
      separately (since it's mapped as a special pte).
    - Clearing the tags of the zero page was moved to this patch from an
      earlier one since mte_clear_page_tags() was not available.

    New in v4. Replacing a previous patch zeroing the tags in clear_page().
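
As additional context (illustration only, not part of the commit): the
sketch below shows the user-visible guarantee this patch provides. It
assumes the PROT_MTE definition and the prctl()-based tag-check
enablement added elsewhere in this series; the fallback PROT_MTE value
is an assumption. Because set_pte_at() clears the allocation tags when
the page is first mapped, an access through an untagged (tag 0) pointer
into a fresh PROT_MTE mapping does not raise a tag check fault.

/*
 * User-space sketch (hypothetical, not part of this patch): map an
 * anonymous page with PROT_MTE and touch it through an untagged
 * pointer. The kernel clears the page's allocation tags via
 * set_pte_at() when the page is faulted in, so logical tag 0 matches
 * allocation tag 0 and the accesses below do not fault.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE	0x20	/* assumed value from the series' uapi changes */
#endif

int main(void)
{
	unsigned char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	memset(p, 0xa5, 4096);		/* first write faults the page in */
	printf("first byte: 0x%x\n", p[0]);

	munmap(p, 4096);
	return 0;
}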

 arch/arm64/include/asm/mte.h     | 16 ++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  7 +++++++
 arch/arm64/kernel/cpufeature.c   |  7 +++++++
 arch/arm64/kernel/mte.c          | 14 ++++++++++++++
 arch/arm64/lib/Makefile          |  2 ++
 arch/arm64/lib/mte.S             | 34 ++++++++++++++++++++++++++++++++
 6 files changed, 80 insertions(+)
 create mode 100644 arch/arm64/lib/mte.S

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index a0bf310da74b..1716b3d02489 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -7,12 +7,28 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/page-flags.h>
+
+#include <asm/pgtable-types.h>
+
+void mte_clear_page_tags(void *addr);
+
 #ifdef CONFIG_ARM64_MTE
 
+/* track which pages have valid allocation tags */
+#define PG_mte_tagged	PG_arch_2
+
+void mte_sync_tags(pte_t *ptep, pte_t pte);
 void flush_mte_state(void);
 
 #else
 
+/* unused if !CONFIG_ARM64_MTE, silence the compiler */
+#define PG_mte_tagged	0
+
+static inline void mte_sync_tags(pte_t *ptep, pte_t pte)
+{
+}
 static inline void flush_mte_state(void)
 {
 }
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 758e2d1577d0..f9401a3205a8 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -9,6 +9,7 @@
 #include
 
 #include
+#include <asm/mte.h>
 #include
 #include
 #include
@@ -80,6 +81,8 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
 #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
 #define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
+#define pte_tagged(pte)		((pte_val(pte) & PTE_ATTRINDX_MASK) == \
+				 PTE_ATTRINDX(MT_NORMAL_TAGGED))
 
 #define pte_cont_addr_end(addr, end)						\
 ({	unsigned long __boundary = ((addr) + CONT_PTE_SIZE) & CONT_PTE_MASK;	\
@@ -274,6 +277,10 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
 		__sync_icache_dcache(pte);
 
+	if (system_supports_mte() &&
+	    pte_present(pte) && pte_tagged(pte) && !pte_special(pte))
+		mte_sync_tags(ptep, pte);
+
 	__check_racy_pte_update(mm, ptep, pte);
 
 	set_pte(ptep, pte);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index aeb9cb9c57f7..110b168eac1f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -75,6 +75,7 @@
 #include
 #include
 #include
+#include <asm/mte.h>
 #include
 #include
 #include
@@ -1688,6 +1689,12 @@ static int __init system_enable_mte(void)
 	/* Ensure the TLB does not have stale MAIR attributes */
 	flush_tlb_all();
 
+	/*
+	 * Clear the tags in the zero page. This needs to be done via the
+	 * linear map which has the Tagged attribute.
+	 */
+	mte_clear_page_tags(lm_alias(empty_zero_page));
+
 	return 0;
 }
 core_initcall(system_enable_mte);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 032016823957..5bf9bbed5a25 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -3,12 +3,26 @@
  * Copyright (C) 2020 ARM Ltd.
  */
 
+#include <linux/bitops.h>
+#include <linux/mm.h>
 #include
 
 #include
 #include
 #include
 
+void mte_sync_tags(pte_t *ptep, pte_t pte)
+{
+	struct page *page = pte_page(pte);
+	long i, nr_pages = compound_nr(page);
+
+	/* if PG_mte_tagged is set, tags have already been initialised */
+	for (i = 0; i < nr_pages; i++, page++) {
+		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+			mte_clear_page_tags(page_address(page));
+	}
+}
+
 void flush_mte_state(void)
 {
 	if (!system_supports_mte())
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 2fc253466dbf..d31e1169d9b8 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -16,3 +16,5 @@ lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 obj-$(CONFIG_CRC32)		+= crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION)	+= error-inject.o
+
+obj-$(CONFIG_ARM64_MTE)	+= mte.o
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
new file mode 100644
index 000000000000..a36705640086
--- /dev/null
+++ b/arch/arm64/lib/mte.S
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 ARM Ltd.
+ */
+#include <linux/linkage.h>
+
+#include <asm/assembler.h>
+#include <asm/sysreg.h>
+
+	.arch	armv8.5-a+memtag
+
+/*
+ * multitag_transfer_size - set \reg to the block size that is accessed by the
+ * LDGM/STGM instructions.
+ */
+	.macro	multitag_transfer_size, reg, tmp
+	mrs_s	\reg, SYS_GMID_EL1
+	ubfx	\reg, \reg, #SYS_GMID_EL1_BS_SHIFT, #SYS_GMID_EL1_BS_SIZE
+	mov	\tmp, #4
+	lsl	\reg, \tmp, \reg
+	.endm
+
+/*
+ * Clear the tags in a page
+ *   x0 - address of the page to be cleared
+ */
+SYM_FUNC_START(mte_clear_page_tags)
+	multitag_transfer_size x1, x2
+1:	stgm	xzr, [x0]
+	add	x0, x0, x1
+	tst	x0, #(PAGE_SIZE - 1)
+	b.ne	1b
+	ret
+SYM_FUNC_END(mte_clear_page_tags)
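
A final illustration (not part of the patch): the C sketch below mirrors
the mte_clear_page_tags() loop above to make the block-size arithmetic
explicit. multitag_transfer_size reads GMID_EL1.BS and computes the
number of bytes whose tags a single LDGM/STGM covers as 4 << BS, and the
STGM loop then walks the page one block at a time. gmid_bs and
stgm_zero() are hypothetical stand-ins for the MRS of SYS_GMID_EL1 and
the STGM instruction; the BS value of 4 is only an example.

#include <stdio.h>

#define PAGE_SIZE	4096UL

/* hypothetical stand-ins for reading GMID_EL1 and issuing STGM */
static unsigned int gmid_bs = 4;	/* example BS value, not real hardware data */
static unsigned long stgm_count;

static void stgm_zero(void *addr)
{
	(void)addr;			/* a real STGM writes zero tags for one block */
	stgm_count++;
}

static void clear_page_tags_sketch(char *page)
{
	/* multitag_transfer_size: bytes covered by one LDGM/STGM = 4 << BS */
	unsigned long block = 4UL << gmid_bs;
	unsigned long off;

	for (off = 0; off < PAGE_SIZE; off += block)
		stgm_zero(page + off);
}

int main(void)
{
	static char page[PAGE_SIZE];

	clear_page_tags_sketch(page);
	printf("block = %lu bytes, %lu STGMs per %lu-byte page\n",
	       4UL << gmid_bs, stgm_count, PAGE_SIZE);
	return 0;
}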