Subject: Re: [PATCH v3 19/23] arm64: mte: Add PTRACE_{PEEK,POKE}MTETAGS support
From: Kevin Brodsky
To: Catalin Marinas, linux-arm-kernel@lists.infradead.org
Cc: Will Deacon, Vincenzo Frascino, Szabolcs Nagy, Richard Earnshaw,
 Andrey Konovalov, Peter Collingbourne, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, Alan Hayward, Luis Machado, Omair Javaid
Date: Wed, 29 Apr 2020 11:27:10 +0100
In-Reply-To: <20200421142603.3894-20-catalin.marinas@arm.com>
References: <20200421142603.3894-1-catalin.marinas@arm.com>
 <20200421142603.3894-20-catalin.marinas@arm.com>

On 21/04/2020 15:25, Catalin Marinas wrote:
> Add support for bulk setting/getting of the MTE tags in a tracee's
> address space at 'addr' in the ptrace() syscall prototype. 'data' points
> to a struct iovec in the tracer's address space with iov_base
> representing the address of a tracer's buffer of length iov_len. The
> tags to be copied to/from the tracer's buffer are stored as one tag per
> byte.
>
> On successfully copying at least one tag, ptrace() returns 0 and updates
> the tracer's iov_len with the number of tags copied. In case of error,
> either -EIO or -EFAULT is returned, trying to follow the ptrace() man
> page.
>
> Note that the tag copying functions are not performance critical,
> therefore they lack optimisations found in typical memory copy routines.
>
> Signed-off-by: Catalin Marinas
> Cc: Will Deacon
> Cc: Alan Hayward
> Cc: Luis Machado
> Cc: Omair Javaid
> ---
>
> Notes:
>     New in v3.
>
>  arch/arm64/include/asm/mte.h         |  17 ++++
>  arch/arm64/include/uapi/asm/ptrace.h |   3 +
>  arch/arm64/kernel/mte.c              | 127 +++++++++++++++++++++++++++
>  arch/arm64/kernel/ptrace.c           |  15 +++-
>  arch/arm64/lib/mte.S                 |  50 +++++++++++
>  5 files changed, 211 insertions(+), 1 deletion(-)
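Before the comments below, here is roughly how I would expect a tracer to
drive the new requests, going by the description above (my own
illustration, not part of the patch; the request value is taken from the
uapi header further down and error handling is minimal):

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef PTRACE_PEEKMTETAGS
#define PTRACE_PEEKMTETAGS 33           /* from the uapi ptrace.h hunk below */
#endif

/*
 * Read the allocation tags covering one 4K page of a stopped tracee.
 * One tag per buffer byte, one tag per 16-byte granule: 4096 / 16 = 256.
 */
static int peek_page_tags(pid_t pid, unsigned long addr, unsigned char *tags)
{
        struct iovec iov = { .iov_base = tags, .iov_len = 256 };

        if (ptrace(PTRACE_PEEKMTETAGS, pid, (void *)addr, &iov))
                return -1;              /* errno is EIO or EFAULT */

        /* iov.iov_len now holds the number of tags actually copied */
        printf("copied %zu tags\n", iov.iov_len);
        return 0;
}

Since iov_len is updated in place and a successful call may copy fewer
tags than requested, a tracer that needs a whole range has to loop; there
is a sketch of that further down, after the remote access helper.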
>
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 22eb3e06f311..0ca2aaff07a1 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -2,12 +2,21 @@
>  #ifndef __ASM_MTE_H
>  #define __ASM_MTE_H
>  
> +#define MTE_ALLOC_SIZE          UL(16)
> +#define MTE_ALLOC_MASK          (~(MTE_ALLOC_SIZE - 1))

Nit: maybe MTE_GRANULE_* would be clearer than MTE_ALLOC_*?

> +#define MTE_TAG_SHIFT           (56)
> +#define MTE_TAG_SIZE            (4)
> +
>  #ifndef __ASSEMBLY__
>  
>  #include
>  
>  /* Memory Tagging API */
>  int mte_memcmp_pages(const void *page1_addr, const void *page2_addr);
> +unsigned long mte_copy_tags_from_user(void *to, const void __user *from,
> +                                      unsigned long n);
> +unsigned long mte_copy_tags_to_user(void __user *to, void *from,
> +                                    unsigned long n);
>  
>  #ifdef CONFIG_ARM64_MTE
>  void flush_mte_state(void);
> @@ -15,6 +24,8 @@ void mte_thread_switch(struct task_struct *next);
>  void mte_suspend_exit(void);
>  long set_mte_ctrl(unsigned long arg);
>  long get_mte_ctrl(void);
> +int mte_ptrace_copy_tags(struct task_struct *child, long request,
> +                         unsigned long addr, unsigned long data);
>  #else
>  static inline void flush_mte_state(void)
>  {
> @@ -33,6 +44,12 @@ static inline long get_mte_ctrl(void)
>  {
>  	return 0;
>  }
> +static inline int mte_ptrace_copy_tags(struct task_struct *child,
> +                                       long request, unsigned long addr,
> +                                       unsigned long data)
> +{
> +	return -EIO;
> +}
>  #endif
>  
>  #endif /* __ASSEMBLY__ */
> diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
> index 1daf6dda8af0..cd2a4a164de3 100644
> --- a/arch/arm64/include/uapi/asm/ptrace.h
> +++ b/arch/arm64/include/uapi/asm/ptrace.h
> @@ -67,6 +67,9 @@
>  /* syscall emulation path in ptrace */
>  #define PTRACE_SYSEMU            31
>  #define PTRACE_SYSEMU_SINGLESTEP 32
> +/* MTE allocation tag access */
> +#define PTRACE_PEEKMTETAGS       33
> +#define PTRACE_POKEMTETAGS       34
>  
>  #ifndef __ASSEMBLY__
>  
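Not a problem with the patch, but to spell out how these constants fit
together for other reviewers: each allocation tag is 4 bits, sits in bits
[59:56] of a tagged address, and covers one 16-byte granule, so a buffer
of N tags describes N * 16 bytes of the tracee. In userspace-style C
(illustration only, with GRANULE_*/TAG_* standing in for the MTE_ALLOC_*
and MTE_TAG_* constants above):

#include <stdint.h>

#define GRANULE_SIZE   16UL                     /* MTE_ALLOC_SIZE */
#define GRANULE_MASK   (~(GRANULE_SIZE - 1))    /* MTE_ALLOC_MASK */
#define TAG_SHIFT      56                       /* MTE_TAG_SHIFT */
#define TAG_SIZE       4                        /* MTE_TAG_SIZE, in bits */

/* Logical tag carried in bits [59:56] of a tagged pointer. */
static inline uint8_t addr_tag(uint64_t addr)
{
        return (addr >> TAG_SHIFT) & ((1UL << TAG_SIZE) - 1);
}

/* Tags (== tracer buffer bytes) needed to cover [addr, addr + len), len > 0. */
static inline uint64_t tags_for_range(uint64_t addr, uint64_t len)
{
        uint64_t first = addr & GRANULE_MASK;
        uint64_t last  = (addr + len - 1) & GRANULE_MASK;

        return (last - first) / GRANULE_SIZE + 1;
}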
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index fa4a4196b248..0cb496ed9bf9 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -3,12 +3,17 @@
>   * Copyright (C) 2020 ARM Ltd.
>   */
>  
> +#include
> +#include
>  #include
>  #include
> +#include
>  #include
> +#include
>  
>  #include
>  #include
> +#include
>  #include
>  
>  static void update_sctlr_el1_tcf0(u64 tcf0)
> @@ -133,3 +138,125 @@ long get_mte_ctrl(void)
>  
>  	return ret;
>  }
> +
> +/*
> + * Access MTE tags in another process' address space as given in mm. Update
> + * the number of tags copied. Return 0 if any tags copied, error otherwise.
> + * Inspired by __access_remote_vm().
> + */
> +static int __access_remote_tags(struct task_struct *tsk, struct mm_struct *mm,
> +                                unsigned long addr, struct iovec *kiov,
> +                                unsigned int gup_flags)
> +{
> +	struct vm_area_struct *vma;
> +	void __user *buf = kiov->iov_base;
> +	size_t len = kiov->iov_len;
> +	int ret;
> +	int write = gup_flags & FOLL_WRITE;
> +
> +	if (down_read_killable(&mm->mmap_sem))
> +		return -EIO;
> +
> +	if (!access_ok(buf, len))
> +		return -EFAULT;
> +
> +	while (len) {
> +		unsigned long tags, offset;
> +		void *maddr;
> +		struct page *page = NULL;
> +
> +		ret = get_user_pages_remote(tsk, mm, addr, 1, gup_flags,
> +					    &page, &vma, NULL);
> +		if (ret <= 0)
> +			break;
> +
> +		/* limit access to the end of the page */
> +		offset = offset_in_page(addr);
> +		tags = min(len, (PAGE_SIZE - offset) / MTE_ALLOC_SIZE);
> +
> +		maddr = page_address(page);
> +		if (write) {
> +			tags = mte_copy_tags_from_user(maddr + offset, buf, tags);
> +			set_page_dirty_lock(page);
> +		} else {
> +			tags = mte_copy_tags_to_user(buf, maddr + offset, tags);
> +		}
> +		put_page(page);
> +
> +		/* error accessing the tracer's buffer */
> +		if (!tags)
> +			break;
> +
> +		len -= tags;
> +		buf += tags;
> +		addr += tags * MTE_ALLOC_SIZE;
> +	}
> +	up_read(&mm->mmap_sem);
> +
> +	/* return an error if no tags copied */
> +	kiov->iov_len = buf - kiov->iov_base;
> +	if (!kiov->iov_len) {
> +		/* check for error accessing the tracee's address space */
> +		if (ret <= 0)
> +			return -EIO;
> +		else
> +			return -EFAULT;
> +	}
> +
> +	return 0;
> +}
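One consequence of the above worth noting: a successful call can
legitimately copy fewer tags than requested, since the loop gives up at
the first page it cannot access (and the copy routines stop at the first
faulting byte of the tracer's buffer). A tracer wanting a full range
therefore has to loop on the updated iov_len, along these lines
(illustration only, same includes and PTRACE_PEEKMTETAGS fallback as the
snippet at the top of this mail):

static int peek_tags_all(pid_t pid, unsigned long addr,
                         unsigned char *buf, size_t len)
{
        while (len) {
                struct iovec iov = { .iov_base = buf, .iov_len = len };

                if (ptrace(PTRACE_PEEKMTETAGS, pid, (void *)addr, &iov))
                        return -1;      /* EIO or EFAULT */

                /* at least one tag was copied, each covers a 16-byte granule */
                addr += iov.iov_len * 16;
                buf  += iov.iov_len;
                len  -= iov.iov_len;
        }
        return 0;
}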
> +
> +/*
> + * Copy MTE tags in another process' address space at 'addr' to/from tracer's
> + * iovec buffer. Return 0 on success. Inspired by ptrace_access_vm().
> + */
> +static int access_remote_tags(struct task_struct *tsk, unsigned long addr,
> +                              struct iovec *kiov, unsigned int gup_flags)
> +{
> +	struct mm_struct *mm;
> +	int ret;
> +
> +	mm = get_task_mm(tsk);
> +	if (!mm)
> +		return -EPERM;
> +
> +	if (!tsk->ptrace || (current != tsk->parent) ||
> +	    ((get_dumpable(mm) != SUID_DUMP_USER) &&
> +	     !ptracer_capable(tsk, mm->user_ns))) {
> +		mmput(mm);
> +		return -EPERM;
> +	}
> +
> +	ret = __access_remote_tags(tsk, mm, addr, kiov, gup_flags);
> +	mmput(mm);
> +
> +	return ret;
> +}
> +
> +int mte_ptrace_copy_tags(struct task_struct *child, long request,
> +                         unsigned long addr, unsigned long data)
> +{
> +	int ret;
> +	struct iovec kiov;
> +	struct iovec __user *uiov = (void __user *)data;
> +	unsigned int gup_flags = FOLL_FORCE;
> +
> +	if (!system_supports_mte())
> +		return -EIO;
> +
> +	if (get_user(kiov.iov_base, &uiov->iov_base) ||
> +	    get_user(kiov.iov_len, &uiov->iov_len))
> +		return -EFAULT;
> +
> +	if (request == PTRACE_POKEMTETAGS)
> +		gup_flags |= FOLL_WRITE;
> +
> +	/* align addr to the MTE tag granule */
> +	addr &= MTE_ALLOC_MASK;
> +
> +	ret = access_remote_tags(child, addr, &kiov, gup_flags);
> +	if (!ret)
> +		ret = __put_user(kiov.iov_len, &uiov->iov_len);
> +
> +	return ret;
> +}
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index 077e352495eb..1fdb841ad536 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -34,6 +34,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -1797,7 +1798,19 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
>  long arch_ptrace(struct task_struct *child, long request,
>  		 unsigned long addr, unsigned long data)
>  {
> -	return ptrace_request(child, request, addr, data);
> +	int ret;
> +
> +	switch (request) {
> +	case PTRACE_PEEKMTETAGS:
> +	case PTRACE_POKEMTETAGS:
> +		ret = mte_ptrace_copy_tags(child, request, addr, data);
> +		break;
> +	default:
> +		ret = ptrace_request(child, request, addr, data);
> +		break;
> +	}
> +
> +	return ret;
>  }
>  
>  enum ptrace_syscall_dir {
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index bd51ea7e2fcb..45be04a8c73c 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -5,6 +5,7 @@
>  #include
>  
>  #include
> +#include
>  
>  /*
>   * Compare tags of two pages
> @@ -44,3 +45,52 @@ SYM_FUNC_START(mte_memcmp_pages)
>  
>  	ret
>  SYM_FUNC_END(mte_memcmp_pages)
> +
> +/*
> + * Read tags from a user buffer (one tag per byte) and set the corresponding
> + * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
> + *   x0 - kernel address (to)
> + *   x1 - user buffer (from)
> + *   x2 - number of tags/bytes (n)
> + * Returns:
> + *   x0 - number of tags read/set
> + */
> +SYM_FUNC_START(mte_copy_tags_from_user)
> +	mov	x3, x1
> +1:
> +USER(2f, ldtrb w4, [x1])

Here we are making either of the following assumptions:
1. The __user pointer (here `from`) actually points to user memory, not
   kernel memory (and we have set_fs(USER_DS) in place).
2. CONFIG_ARM64_UAO is enabled and the hardware implements UAO.

1. is currently true because these functions are only used for the new
ptrace requests, which indeed pass pointers to user memory. However,
future users of these functions may not know about this requirement.
2. is not necessarily true because ARM64_MTE does not depend on ARM64_UAO.

It is unlikely that future users of these functions actually need to pass
__user pointers to kernel memory, so adding a comment spelling out the
first assumption is probably fine.
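For instance, something along these lines above the two copy routines
(wording purely illustrative):

/*
 * Note: the user buffer is accessed with unprivileged LDTRB/STTRB
 * instructions, so 'to'/'from' must point to genuine user memory.
 * Passing a kernel address (e.g. under set_fs(KERNEL_DS)) only works
 * if CONFIG_ARM64_UAO is enabled and the CPU implements UAO.
 */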
Kevin

> +	lsl	x4, x4, #MTE_TAG_SHIFT
> +	stg	x4, [x0], #MTE_ALLOC_SIZE
> +	add	x1, x1, #1
> +	subs	x2, x2, #1
> +	b.ne	1b
> +
> +	// exception handling and function return
> +2:	sub	x0, x1, x3		// update the number of tags set
> +	ret
> +SYM_FUNC_END(mte_copy_tags_from_user)
> +
> +/*
> + * Get the tags from a kernel address range and write the tag values to the
> + * given user buffer (one tag per byte). Used by PTRACE_PEEKMTETAGS.
> + *   x0 - user buffer (to)
> + *   x1 - kernel address (from)
> + *   x2 - number of tags/bytes (n)
> + * Returns:
> + *   x0 - number of tags read/set
> + */
> +SYM_FUNC_START(mte_copy_tags_to_user)
> +	mov	x3, x0
> +1:
> +	ldg	x4, [x1]
> +	ubfx	x4, x4, #MTE_TAG_SHIFT, #MTE_TAG_SIZE
> +USER(2f, sttrb w4, [x0])
> +	add	x0, x0, #1
> +	add	x1, x1, #MTE_ALLOC_SIZE
> +	subs	x2, x2, #1
> +	b.ne	1b
> +
> +	// exception handling and function return
> +2:	sub	x0, x0, x3		// update the number of tags copied
> +	ret
> +SYM_FUNC_END(mte_copy_tags_to_user)
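For reviewers who find C easier to read than assembly, mte_copy_tags_to_user
above boils down to the following (pseudo-C sketch; read_alloc_tag() and
put_user_byte_nofault() are made-up stand-ins for the LDG/UBFX sequence and
for the USER()-annotated STTRB with its fault fixup):

/* Hypothetical helpers standing in for the instructions/fixup above. */
extern unsigned char read_alloc_tag(const void *kaddr);        /* LDG + UBFX */
extern int put_user_byte_nofault(unsigned char v, unsigned char __user *p);

static unsigned long copy_tags_to_user_sketch(unsigned char __user *to,
                                              void *from, unsigned long n)
{
        unsigned long copied;

        for (copied = 0; copied < n; copied++) {
                unsigned char tag = read_alloc_tag(from);

                if (put_user_byte_nofault(tag, to))
                        break;                  /* faulted: stop early */
                to++;
                from += MTE_ALLOC_SIZE;         /* next 16-byte granule */
        }

        return copied;                          /* number of tags copied */
}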