From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Long <dave.long@linaro.org>
To: stable@vger.kernel.org, Russell King - ARM Linux, Florian Fainelli,
	Tony Lindgren, Marc Zyngier, Mark Rutland
Cc: Greg KH, Mark Brown
Subject: [PATCH 4.14 24/24] ARM: spectre-v1: mitigate user accesses
Date: Mon, 15 Oct 2018 11:32:18 -0400
Message-Id: <1539617538-22328-25-git-send-email-dave.long@linaro.org>
In-Reply-To: <1539617538-22328-1-git-send-email-dave.long@linaro.org>
References: <1539617538-22328-1-git-send-email-dave.long@linaro.org>
Sender: stable-owner@vger.kernel.org

From: Russell King

Commit a3c0f84765bb429ba0fd23de1c57b5e1591c9389 upstream.

Spectre variant 1 attacks are about this sequence of pseudo-code:

	index = load(user-manipulated pointer);
	access(base + index * stride);

In order for the cache side-channel to work, the access() must be made
to memory where userspace can detect whether cache lines have been
loaded.  On 32-bit ARM, this must be either user accessible memory, or
a kernel mapping of that same user accessible memory.

The problem occurs when the load() speculatively loads privileged data,
and the subsequent access() is made to user accessible memory.

Any load() which makes use of a user-manipulated pointer is a potential
problem if the data it has loaded is used in a subsequent access.  This
also applies to the access() if the data loaded by that access is used
by a subsequent access.

Harden the get_user() accessors against Spectre attacks by forcing
out-of-bounds addresses to a NULL pointer.  This prevents get_user()
from being used as the load() step above.  As a side effect, put_user()
will also be affected even though it isn't implicated.

Also harden copy_from_user() by redoing the bounds check within the
arm_copy_from_user() code, and NULLing the pointer if out of bounds.

Acked-by: Mark Rutland
Signed-off-by: Russell King
Signed-off-by: David A. Long
---
 arch/arm/include/asm/assembler.h | 4 ++++
 arch/arm/lib/copy_from_user.S    | 9 +++++++++
 2 files changed, 13 insertions(+)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 0cd4dcc..b17ee03 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -460,6 +460,10 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 	adds	\tmp, \addr, #\size - 1
 	sbcccs	\tmp, \tmp, \limit
 	bcs	\bad
+#ifdef CONFIG_CPU_SPECTRE
+	movcs	\addr, #0
+	csdb
+#endif
 #endif
 	.endm
 
diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S
index 7a4b060..a826df3 100644
--- a/arch/arm/lib/copy_from_user.S
+++ b/arch/arm/lib/copy_from_user.S
@@ -90,6 +90,15 @@
 	.text
 
 ENTRY(arm_copy_from_user)
+#ifdef CONFIG_CPU_SPECTRE
+	get_thread_info r3
+	ldr	r3, [r3, #TI_ADDR_LIMIT]
+	adds	ip, r1, r2	@ ip=addr+size
+	sub	r3, r3, #1	@ addr_limit - 1
+	cmpcc	ip, r3		@ if (addr+size > addr_limit - 1)
+	movcs	r1, #0		@ addr = NULL
+	csdb
+#endif
 
 #include "copy_template.S"
-- 
2.5.0
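
Editorial illustration (not part of the patch, and not kernel code): a minimal,
self-contained C sketch of the two-step pattern the commit message describes.
The names lookup_table, probe_array and user_index are hypothetical.  Note that
the C-level bounds check below is exactly what the CPU can speculate past; the
mitigation in the patch is therefore done in assembly, by conditionally forcing
the address to 0 (movcs) and issuing the csdb speculation barrier, rather than
by relying on an ordinary branch.

#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 256

static uint8_t lookup_table[TABLE_SIZE];     /* indexed by a user-supplied value        */
static uint8_t probe_array[TABLE_SIZE * 64]; /* cache state observable from userspace   */

uint8_t spectre_v1_shape(size_t user_index)
{
	uint8_t value = 0;

	if (user_index < TABLE_SIZE) {	/* bounds check: may be speculated past         */
		uint8_t secret = lookup_table[user_index];	/* load(user-manipulated pointer) */
		value = probe_array[(size_t)secret * 64];	/* access(base + index * stride)  */
	}
	return value;
}

After a mispredicted branch, the probe_array cache line touched by the
speculative access() reveals the speculatively loaded value to a userspace
timing measurement, which is why the patch prevents the out-of-bounds load
from happening at all.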