Date: Wed, 10 Apr 2019 19:10:17 +0100
From: Will Deacon
To: Vincenzo Frascino
Cc: Mark Rutland, Catalin Marinas, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v3 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
Message-ID: <20190410181017.GA25618@fuggles.cambridge.arm.com>
References: <20190402162757.13491-1-vincenzo.frascino@arm.com> <20190402162757.13491-2-vincenzo.frascino@arm.com>
In-Reply-To: <20190402162757.13491-2-vincenzo.frascino@arm.com>
On Tue, Apr 02, 2019 at 05:27:54PM +0100, Vincenzo Frascino wrote:
> In the current implementation AArch32 installs a special page called
> "[vectors]" that contains sigreturn trampolines and kuser helpers,

Doesn't make sense. How about: "For AArch32 tasks, we install a special
"[vectors]" page that contains the sigreturn trampolines and kuser helpers."

> and this is done at fixed address specified by the kuser helpers ABI.

which is mapped at a fixed address...

> Having sigreturn trampolines and kuser helpers in the same page, makes
> difficult to maintain compatibility with arm because it makes not
> possible to disable kuser helpers.

Replace with: "Having the sigreturn trampolines in the same page as the
kuser helpers makes it impossible to disable the kuser helpers
independently."

> Address the problem creating separate pages for vectors and sigpage in
> a similar fashion to what happens today on arm.

"Follow the Arm implementation, by moving the signal trampolines out of
the "[vectors]" page and into their own "[sigpage]"".

> Change as well the meaning of mm->context.vdso for AArch32 compat since
> it now points to sigpage and not to vectors anymore in order to make
> simpler the implementation of the signal handling (the address of
> sigpage is randomized).

This is an implementation detail and can be dropped.
> Cc: Catalin Marinas
> Cc: Will Deacon
> Signed-off-by: Vincenzo Frascino
> Reviewed-by: Catalin Marinas
> ---
>  arch/arm64/include/asm/elf.h       |   6 +-
>  arch/arm64/include/asm/processor.h |   4 +-
>  arch/arm64/include/asm/signal32.h  |   2 -
>  arch/arm64/kernel/signal32.c       |   5 +-
>  arch/arm64/kernel/vdso.c           | 121 ++++++++++++++++++++++-------
>  5 files changed, 102 insertions(+), 36 deletions(-)
>
> diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
> index 6adc1a90e7e6..355d120b78cb 100644
> --- a/arch/arm64/include/asm/elf.h
> +++ b/arch/arm64/include/asm/elf.h
> @@ -214,10 +214,10 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
>  	set_thread_flag(TIF_32BIT);			\
>  })
>  #define COMPAT_ARCH_DLINFO
> -extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
> -				      int uses_interp);
> +extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
> +					  int uses_interp);
>  #define compat_arch_setup_additional_pages \
> -	aarch32_setup_vectors_page
> +	aarch32_setup_additional_pages
>
>  #endif /* CONFIG_COMPAT */
>
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 5d9ce62bdebd..07c873fce961 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -78,9 +78,9 @@
>  #endif /* CONFIG_ARM64_FORCE_52BIT */
>
>  #ifdef CONFIG_COMPAT
> -#define AARCH32_VECTORS_BASE	0xffff0000
> +#define AARCH32_KUSER_BASE	0xffff0000
>  #define STACK_TOP		(test_thread_flag(TIF_32BIT) ?	\
> -				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
> +				AARCH32_KUSER_BASE : STACK_TOP_MAX)
>  #else
>  #define STACK_TOP		STACK_TOP_MAX
>  #endif /* CONFIG_COMPAT */
>
> diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
> index 81abea0b7650..58e288aaf0ba 100644
> --- a/arch/arm64/include/asm/signal32.h
> +++ b/arch/arm64/include/asm/signal32.h
> @@ -20,8 +20,6 @@
>  #ifdef CONFIG_COMPAT
>  #include
>
> -#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
> -
>  int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
>  		       struct pt_regs *regs);
>  int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index cb7800acd19f..3846a1b710b5 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -379,6 +379,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>  	compat_ulong_t retcode;
>  	compat_ulong_t spsr = regs->pstate & ~(PSR_f | PSR_AA32_E_BIT);
>  	int thumb;
> +	void *sigreturn_base;
>
>  	/* Check if the handler is written for ARM or Thumb */
>  	thumb = handler & 1;
> @@ -399,12 +400,12 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>  	} else {
>  		/* Set up sigreturn pointer */
>  		unsigned int idx = thumb << 1;
> +		sigreturn_base = current->mm->context.vdso;
>
>  		if (ka->sa.sa_flags & SA_SIGINFO)
>  			idx += 3;
>
> -		retcode = AARCH32_VECTORS_BASE +
> -			  AARCH32_KERN_SIGRET_CODE_OFFSET +
> +		retcode = ptr_to_compat(sigreturn_base) +
>  			  (idx << 2) + thumb;
>  	}
>
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index 2d419006ad43..16f8fce5c501 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -1,5 +1,7 @@
>  /*
> - * VDSO implementation for AArch64 and vector page setup for AArch32.
> + * VDSO implementation for AArch64 and for AArch32:
> + * AArch64: vDSO implementation contains pages setup and data page update.
> + * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
>   *
>   * Copyright (C) 2012 ARM Limited
>   *
> @@ -53,61 +55,126 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
>  /*
>   * Create and map the vectors page for AArch32 tasks.
>   */
> -static struct page *vectors_page[1] __ro_after_init;
> +/*
> + * aarch32_vdso_pages:
> + * 0 - kuser helpers
> + * 1 - sigreturn code
> + */
> +#define C_VECTORS	0

C_KUSER might make more sense, and is consistent with AARCH32_KUSER_BASE.

> +#define C_SIGPAGE	1
> +#define C_PAGES		(C_SIGPAGE + 1)
> +static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
> +static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
> +	{
> +		/* Must be named [vectors] for compatibility with arm. */
> +		.name	= "[vectors]",
> +		.pages	= &aarch32_vdso_pages[C_VECTORS],
> +	},
> +	{
> +		/* Must be named [sigpage] for compatibility with arm. */
> +		.name	= "[sigpage]",
> +		.pages	= &aarch32_vdso_pages[C_SIGPAGE],
> +	},
> +};
>
> -static int __init alloc_vectors_page(void)
> +static int __init aarch32_alloc_vdso_pages(void)

Premature renaming of the function?

> {
>  	extern char __kuser_helper_start[], __kuser_helper_end[];
>  	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
>
>  	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
>  	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
> -	unsigned long vpage;
> +	unsigned long vdso_pages[C_PAGES];
>
> -	vpage = get_zeroed_page(GFP_ATOMIC);
> +	vdso_pages[C_VECTORS] = get_zeroed_page(GFP_ATOMIC);
> +	if (!vdso_pages[C_VECTORS])
> +		return -ENOMEM;
>
> -	if (!vpage)
> +	vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
> +	if (!vdso_pages[C_SIGPAGE])
>  		return -ENOMEM;

You leak the kuser page if this fails.
>  	/* kuser helpers */
> -	memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
> -	       kuser_sz);
> +	memcpy((void *)(vdso_pages[C_VECTORS] + 0x1000 - kuser_sz),
> +	       __kuser_helper_start,
> +	       kuser_sz);
>
>  	/* sigreturn code */
> -	memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
> -	       __aarch32_sigret_code_start, sigret_sz);
> +	memcpy((void *)vdso_pages[C_SIGPAGE],
> +	       __aarch32_sigret_code_start,
> +	       sigret_sz);
>
> -	flush_icache_range(vpage, vpage + PAGE_SIZE);
> -	vectors_page[0] = virt_to_page(vpage);
> +	flush_icache_range(vdso_pages[C_VECTORS],
> +			   vdso_pages[C_VECTORS] + PAGE_SIZE);
> +	flush_icache_range(vdso_pages[C_SIGPAGE],
> +			   vdso_pages[C_SIGPAGE] + PAGE_SIZE);
> +
> +	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_pages[C_VECTORS]);
> +	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_pages[C_SIGPAGE]);
>
>  	return 0;
> }
> -arch_initcall(alloc_vectors_page);
> +arch_initcall(aarch32_alloc_vdso_pages);
>
> -int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
> +static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
> {
> -	struct mm_struct *mm = current->mm;
> -	unsigned long addr = AARCH32_VECTORS_BASE;
> -	static const struct vm_special_mapping spec = {
> -		.name	= "[vectors]",
> -		.pages	= vectors_page,
> +	void *ret;
> +
> +	/* The kuser helpers must be mapped at the ABI-defined high address */
> +	ret = _install_special_mapping(mm, AARCH32_KUSER_BASE, PAGE_SIZE,
> +				       VM_READ | VM_EXEC |
> +				       VM_MAYREAD | VM_MAYEXEC,

How come you don't need VM_MAYWRITE here...

> +	/*
> +	 * VM_MAYWRITE is required to allow gdb to Copy-on-Write and
> +	 * set breakpoints.
> +	 */
>  	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
> -				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
> -				       &spec);
> +				       VM_READ | VM_EXEC | VM_MAYREAD |
> +				       VM_MAYWRITE | VM_MAYEXEC,
> +				       &aarch32_vdso_spec[C_SIGPAGE]);

... but you introduce it here? Also, shouldn't this be a separate change so
it can be treated as a fix?
Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel