From: Ard Biesheuvel <ard.biesheuvel@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: marc.zyngier@arm.com, james.morse@arm.com, will.deacon@arm.com,
    guillaume.gardet@arm.com, mark.rutland@arm.com, mingo@kernel.org,
    jeyu@kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    arnd@arndb.de, x86@kernel.org, Ard Biesheuvel <ard.biesheuvel@arm.com>
Subject: [PATCH] module/ksymtab: use 64-bit relative reference for target symbol
Date: Wed, 22 May 2019 16:02:39 +0100
Message-Id: <20190522150239.19314-1-ard.biesheuvel@arm.com>
X-Mailer: git-send-email 2.17.1

Commit 7290d5809571 ("module: use relative references for __ksymtab
entries") updated the ksymtab handling of some KASLR-capable
architectures so that ksymtab entries are emitted as pairs of 32-bit
relative references. This reduces the size of the entries, but more
importantly, it gets rid of statically assigned absolute addresses,
which require fixing up at boot time if the kernel is self-relocating
(which takes a 24-byte RELA entry for each member of the ksymtab
struct).

Since ksymtab entries are always part of the same module as the symbol
they export (or of the core kernel), it was assumed at the time that a
32-bit relative reference is always sufficient to capture the offset
between a ksymtab entry and its target symbol.

Unfortunately, this is not always true: in the case of per-CPU
variables, a per-CPU variable's base address (which usually differs
from the actual address of any of its per-CPU copies) could be at an
arbitrary offset from the ksymtab entry, and so it may be out of range
for a 32-bit relative reference.

To make matters worse, we identified an issue in the arm64 module
loader, where the overflow check applied to 32-bit place-relative
relocations uses the range that is specified in the AArch64 psABI,
which is documented as having a 'blind spot' unless you explicitly
narrow the range to match the signed vs unsigned interpretation of the
relocation target [0].
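(Illustration only, not part of the patch: a minimal sketch of that
ambiguity, assuming the psABI-documented bound for a 32-bit
place-relative result is -2^31 <= X < 2^32; the helper names below are
made up for this example and do not exist in the kernel.)

#include <stdbool.h>
#include <stdint.h>

/* Range the documented (psABI) overflow check accepts: wide enough for
 * both a signed and an unsigned reading of the 32-bit field. */
static bool prel32_in_documented_range(int64_t x)
{
	return x >= -(INT64_C(1) << 31) && x < (INT64_C(1) << 32);
}

/* Range a consumer that sign-extends the field (as the ksymtab code
 * does) can actually represent. */
static bool prel32_fits_signed(int64_t x)
{
	return x >= -(INT64_C(1) << 31) && x < (INT64_C(1) << 31);
}

/* Offsets in [2^31, 2^32) are the 'blind spot': they pass the wider
 * check, but alias to negative offsets once the field is sign
 * extended. */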
This means that, in some cases, code importing those per-CPU variables
from other modules may obtain a bogus reference and corrupt unrelated
data.

So let's fix this issue by switching to a 64-bit place-relative
reference on 64-bit architectures for the ksymtab entry's target
symbol. This uses a bit more memory in the entry itself, which is
unfortunate, but it preserves the original intent, which was to make
the value invariant under runtime relocation of the core kernel.

[0] https://lore.kernel.org/linux-arm-kernel/20190521125707.6115-1-ard.biesheuvel@arm.com

Cc: Jessica Yu <jeyu@kernel.org>
Cc: <stable@vger.kernel.org> # v4.19+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
---
Note that the name 'CONFIG_HAVE_ARCH_PREL32_RELOCATIONS' is no longer
entirely accurate after this patch, so I will follow up with a patch to
rename it to CONFIG_HAVE_ARCH_PREL_RELOCATIONS, but that doesn't
require a backport to -stable so I have omitted it here.

Also note that for x86, this patch depends on b40a142b12b5 ("x86: Add
support for 64-bit place relative relocations"), which will need to be
backported to v4.19 (from v4.20) if this patch is applied to -stable.

 include/asm-generic/export.h |  9 +++++++--
 include/linux/compiler.h     |  9 +++++++++
 include/linux/export.h       | 14 ++++++++++----
 kernel/module.c              |  2 +-
 4 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/include/asm-generic/export.h b/include/asm-generic/export.h
index 294d6ae785d4..4d658b1e4707 100644
--- a/include/asm-generic/export.h
+++ b/include/asm-generic/export.h
@@ -4,7 +4,7 @@
 #ifndef KSYM_FUNC
 #define KSYM_FUNC(x) x
 #endif
-#ifdef CONFIG_64BIT
+#if defined(CONFIG_64BIT) && !defined(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 8
 #endif
@@ -19,7 +19,12 @@
 
 .macro __put, val, name
 #ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
-	.long	\val - ., \name - .
+#ifdef CONFIG_64BIT
+	.quad	\val - .
+#else
+	.long	\val - .
+#endif
+	.long	\name - .
 #elif defined(CONFIG_64BIT)
 	.quad	\val, \name
 #else
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 8aaf7cd026b0..33c65ebb7cfe 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -305,6 +305,15 @@ static inline void *offset_to_ptr(const int *off)
 	return (void *)((unsigned long)off + *off);
 }
 
+/**
+ * loffset_to_ptr - convert a relative memory offset to an absolute pointer
+ * @off:	the address of the signed long offset value
+ */
+static inline void *loffset_to_ptr(const long *off)
+{
+	return (void *)((unsigned long)off + *off);
+}
+
 #endif /* __ASSEMBLY__ */
 
 /* Compile time object size, -1 for unknown */
diff --git a/include/linux/export.h b/include/linux/export.h
index fd8711ed9ac4..8f805b9f1c25 100644
--- a/include/linux/export.h
+++ b/include/linux/export.h
@@ -43,6 +43,12 @@ extern struct module __this_module;
 
 #ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
 #include <linux/compiler.h>
+#ifdef CONFIG_64BIT
+#define __KSYMTAB_REL	".quad "
+#else
+#define __KSYMTAB_REL	".long "
+#endif
+
 /*
  * Emit the ksymtab entry as a pair of relative references: this reduces
  * the size by half on 64-bit architectures, and eliminates the need for
@@ -52,16 +58,16 @@ extern struct module __this_module;
 #define __KSYMTAB_ENTRY(sym, sec)					\
 	__ADDRESSABLE(sym)						\
 	asm("	.section \"___ksymtab" sec "+" #sym "\", \"a\"	\n"	\
-	    "	.balign	8					\n"	\
+	    "	.balign	4					\n"	\
 	    "__ksymtab_" #sym ":				\n"	\
-	    "	.long	" #sym "- .				\n"	\
+	    __KSYMTAB_REL #sym "- .				\n"	\
 	    "	.long	__kstrtab_" #sym "- .			\n"	\
	    "	.previous					\n")
 
 struct kernel_symbol {
-	int value_offset;
+	long value_offset;
 	int name_offset;
-};
+} __packed;
 #else
 #define __KSYMTAB_ENTRY(sym, sec)	\
 	static const struct kernel_symbol __ksymtab_##sym	\
diff --git a/kernel/module.c b/kernel/module.c
index 6e6712b3aaf5..43efd46feeee 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -541,7 +541,7 @@ static bool check_exported_symbol(const struct symsearch *syms,
 static unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
 {
 #ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
-	return (unsigned long)offset_to_ptr(&sym->value_offset);
+	return (unsigned long)loffset_to_ptr(&sym->value_offset);
 #else
 	return sym->value;
 #endif
-- 
2.17.1
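Not part of the patch, but for readers who want to see the resolution
scheme outside the kernel: the sketch below mimics offset_to_ptr() and
loffset_to_ptr() in plain userspace C. The helper and variable names
and the 5 GiB figure are made up for this example.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel helpers: a place-relative
 * reference is resolved by adding the stored offset to the address of
 * the field that holds it. */
static void *offset_to_ptr32(const int32_t *off)
{
	return (void *)((uintptr_t)off + *off);
}

static void *offset_to_ptr64(const int64_t *off)
{
	return (void *)((uintptr_t)off + *off);
}

static long fake_symbol;	/* stands in for an exported symbol */
static int32_t rel32;		/* old-style 32-bit ksymtab field */
static int64_t rel64;		/* new-style 64-bit ksymtab field */

int main(void)
{
	/* Here the distances are tiny, so both encodings round-trip. */
	rel32 = (int32_t)((uintptr_t)&fake_symbol - (uintptr_t)&rel32);
	rel64 = (int64_t)((uintptr_t)&fake_symbol - (uintptr_t)&rel64);
	printf("32-bit field resolves to %p, 64-bit field to %p (symbol at %p)\n",
	       offset_to_ptr32(&rel32), offset_to_ptr64(&rel64),
	       (void *)&fake_symbol);

	/* A per-CPU base address, however, can sit several GiB away from
	 * the ksymtab entry (made-up figure), which a 32-bit field cannot
	 * express. */
	int64_t percpu_delta = -INT64_C(5) * 1024 * 1024 * 1024;
	printf("delta %" PRId64 " fits in 32 bits: %s\n", percpu_delta,
	       (percpu_delta >= INT32_MIN && percpu_delta <= INT32_MAX) ?
	       "yes" : "no");
	return 0;
}

The point is simply that the stored value is relative to the field that
holds it, so a 32-bit field caps the reachable distance at roughly
+/-2 GiB, which a per-CPU base address can easily exceed.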