From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com [209.85.221.71])
	by kanga.kvack.org (Postfix) with ESMTP id EE93A8E0001
	for ; Mon, 10 Dec 2018 07:51:15 -0500 (EST)
Received: by mail-wr1-f71.google.com with SMTP id q7so3393984wrw.8
	for ; Mon, 10 Dec 2018 04:51:15 -0800 (PST)
Received: from mail-sor-f65.google.com (mail-sor-f65.google.com. [209.85.220.65])
	by mx.google.com with SMTPS id b17sor7133162wrt.48.2018.12.10.04.51.14
	for (Google Transport Security);
	Mon, 10 Dec 2018 04:51:14 -0800 (PST)
From: Andrey Konovalov 
Subject: [PATCH v9 3/8] arm64: untag user addresses in access_ok and __uaccess_mask_ptr
Date: Mon, 10 Dec 2018 13:51:00 +0100
Message-Id: <674252952827b57f4259876cd4ddf802f3539356.1544445454.git.andreyknvl@google.com>
In-Reply-To: 
References: 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: owner-linux-mm@kvack.org
List-ID: 
To: Catalin Marinas , Will Deacon , Mark Rutland ,
	Robin Murphy , Kees Cook , Kate Stewart ,
	Greg Kroah-Hartman , Andrew Morton , Ingo Molnar ,
	"Kirill A . Shutemov" , Shuah Khan ,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Dmitry Vyukov , Kostya Serebryany , Evgeniy Stepanov ,
	Lee Smith , Ramana Radhakrishnan , Jacob Bramley ,
	Ruben Ayrapetyan , Chintan Pandya , Luc Van Oostenryck ,
	Andrey Konovalov 

copy_from_user (and a few other similar functions) are used to copy data
from user memory into kernel memory or vice versa. Since a user can
provide a tagged pointer to one of the syscalls that use copy_from_user,
we need to correctly handle such pointers.

Do this by untagging user pointers in access_ok and in __uaccess_mask_ptr,
before performing access validity checks.

Reviewed-by: Catalin Marinas 
Signed-off-by: Andrey Konovalov 
---
 arch/arm64/include/asm/uaccess.h | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 3c3864ba3cc1..d28c3b1314ce 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -104,7 +104,8 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si
 #define untagged_addr(addr)		\
 	((__typeof__(addr))sign_extend64((u64)(addr), 55))
 
-#define access_ok(type, addr, size)	__range_ok(addr, size)
+#define access_ok(type, addr, size)	\
+	__range_ok(untagged_addr(addr), size)
 #define user_addr_max			get_fs
 
 #define _ASM_EXTABLE(from, to)						\
@@ -236,7 +237,8 @@ static inline void uaccess_enable_not_uao(void)
 
 /*
  * Sanitise a uaccess pointer such that it becomes NULL if above the
- * current addr_limit.
+ * current addr_limit. In case the pointer is tagged (has the top byte set),
+ * untag the pointer before checking.
  */
 #define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr)
 static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
@@ -244,10 +246,11 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	void __user *safe_ptr;
 
 	asm volatile(
-	"	bics	xzr, %1, %2\n"
+	"	bics	xzr, %3, %2\n"
 	"	csel	%0, %1, xzr, eq\n"
 	: "=&r" (safe_ptr)
-	: "r" (ptr), "r" (current_thread_info()->addr_limit)
+	: "r" (ptr), "r" (current_thread_info()->addr_limit),
+	  "r" (untagged_addr(ptr))
 	: "cc");
 
 	csdb();
-- 
2.20.0.rc2.403.gdbc3b29805-goog
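
For context, below is a minimal standalone userspace sketch (illustration only, not
part of the patch) of what the untagged_addr() macro reused here evaluates to. It
assumes sign_extend64() behaves like the helper in include/linux/bitops.h, and the
example address and tag value are made up. Sign-extending from bit 55 clears bits
63:56 of a userspace (TTBR0) address, stripping the top-byte tag while leaving the
lower 56 bits intact:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the kernel's sign_extend64() helper from include/linux/bitops.h. */
static inline int64_t sign_extend64(uint64_t value, int index)
{
	uint8_t shift = 63 - index;

	return (int64_t)(value << shift) >> shift;
}

/* Same expression as untagged_addr() in arch/arm64/include/asm/uaccess.h. */
#define untagged_addr(addr) ((uint64_t)sign_extend64((uint64_t)(addr), 55))

int main(void)
{
	/* Hypothetical user pointer carrying tag 0x5a in bits 63:56. */
	uint64_t tagged = 0x5a00ffffabcd1234ULL;

	printf("tagged:   0x%016" PRIx64 "\n", tagged);
	printf("untagged: 0x%016" PRIx64 "\n", untagged_addr(tagged));

	return 0;
}

For a kernel (TTBR1) address, bit 55 is set, so the same operation fills the top
byte with ones instead, which keeps the subsequent addr_limit comparison meaningful
for kernel pointers.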