Date: Thu, 25 Feb 2021 17:18:57 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com,
	bp@alien8.de, catalin.marinas@arm.com, cl@linux.com, corbet@lwn.net,
	dave.hansen@linux.intel.com, dvyukov@google.com, edumazet@google.com,
	elver@google.com, glider@google.com, gregkh@linuxfoundation.org,
	hdanton@sina.com, hpa@zytor.com, iamjoonsoo.kim@lge.com, jannh@google.com,
	joern@purestorage.com, keescook@chromium.org, linux-mm@kvack.org,
	luto@kernel.org, mark.rutland@arm.com, mingo@redhat.com,
	mm-commits@vger.kernel.org, paulmck@kernel.org, penberg@kernel.org,
	peterz@infradead.org, rientjes@google.com, sjpark@amazon.de,
	tglx@linutronix.de, torvalds@linux-foundation.org, vbabka@suse.cz,
	will@kernel.org
Subject: [patch 057/118] x86, kfence: enable KFENCE for x86
Message-ID: <20210226011857.hLG7WTkrz%akpm@linux-foundation.org>
In-Reply-To: <20210225171452.713967e96554bb6a53e44a19@linux-foundation.org>

From: Alexander Potapenko
Subject: x86, kfence: enable KFENCE for x86

Add architecture specific implementation details for KFENCE and enable
KFENCE for the x86 architecture.  In particular, this implements the
required interface in <asm/kfence.h> for setting up the pool and
providing helper functions for protecting and unprotecting pages.

For x86, we need to ensure that the pool uses 4K pages, which is done
using the set_memory_4k() helper function.
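[Aside for readers, not part of the change: KFENCE guards objects by
making the pages around them non-present, so any out-of-bounds or
use-after-free access faults immediately.  The stand-alone userspace
sketch below illustrates that guard-page idea, using mprotect() in
place of the direct PTE manipulation that kfence_protect_page() does
in the diff; all names in the sketch are illustrative only.]

/*
 * Userspace analogue of the KFENCE guard-page technique: toggle a
 * page between accessible and inaccessible, the way KFENCE toggles
 * the PTE present bit.  Illustrative only; not kernel code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	/* One "object" page; KFENCE surrounds objects with such pages. */
	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return EXIT_FAILURE;

	p[0] = 'x';			/* Allowed: page is "present". */

	/* Analogue of kfence_protect_page(addr, true): revoke access. */
	if (mprotect(p, page, PROT_NONE))
		return EXIT_FAILURE;

	/* Touching p[0] here would now fault, like a KFENCE OOB access. */

	/* Analogue of kfence_protect_page(addr, false): restore access. */
	if (mprotect(p, page, PROT_READ | PROT_WRITE))
		return EXIT_FAILURE;

	printf("page toggled, object still readable: %c\n", p[0]);
	munmap(p, page);
	return EXIT_SUCCESS;
}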
[elver@google.com: add missing copyright and description header]
  Link: https://lkml.kernel.org/r/20210118092159.145934-2-elver@google.com
Link: https://lkml.kernel.org/r/20201103175841.3495947-3-elver@google.com
Signed-off-by: Marco Elver
Signed-off-by: Alexander Potapenko
Reviewed-by: Dmitry Vyukov
Co-developed-by: Marco Elver
Reviewed-by: Jann Horn
Cc: Andrey Konovalov
Cc: Andrey Ryabinin
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christopher Lameter
Cc: Dave Hansen
Cc: David Rientjes
Cc: Eric Dumazet
Cc: Greg Kroah-Hartman
Cc: Hillf Danton
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Joern Engel
Cc: Jonathan Corbet
Cc: Joonsoo Kim
Cc: Kees Cook
Cc: Mark Rutland
Cc: Paul E. McKenney
Cc: Pekka Enberg
Cc: Peter Zijlstra
Cc: SeongJae Park
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 arch/x86/Kconfig              |    1 
 arch/x86/include/asm/kfence.h |   70 ++++++++++++++++++++++++++++++++
 arch/x86/mm/fault.c           |    5 ++
 3 files changed, 76 insertions(+)

--- /dev/null
+++ a/arch/x86/include/asm/kfence.h
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * x86 KFENCE support.
+ *
+ * Copyright (C) 2020, Google LLC.
+ */
+
+#ifndef _ASM_X86_KFENCE_H
+#define _ASM_X86_KFENCE_H
+
+#include <linux/bug.h>
+#include <linux/kfence.h>
+
+#include <asm/pgalloc.h>
+#include <asm/pgtable_types.h>
+#include <asm/set_memory.h>
+#include <asm/tlbflush.h>
+
+/*
+ * The page fault handler entry function, up to which the stack trace is
+ * truncated in reports.
+ */
+#define KFENCE_SKIP_ARCH_FAULT_HANDLER "asm_exc_page_fault"
+
+/* Force 4K pages for __kfence_pool. */
+static inline bool arch_kfence_init_pool(void)
+{
+	unsigned long addr;
+
+	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+	     addr += PAGE_SIZE) {
+		unsigned int level;
+
+		if (!lookup_address(addr, &level))
+			return false;
+
+		if (level != PG_LEVEL_4K)
+			set_memory_4k(addr, 1);
+	}
+
+	return true;
+}
+
+/* Protect the given page and flush TLB. */
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	unsigned int level;
+	pte_t *pte = lookup_address(addr, &level);
+
+	if (WARN_ON(!pte || level != PG_LEVEL_4K))
+		return false;
+
+	/*
+	 * We need to avoid IPIs, as we may get KFENCE allocations or faults
+	 * with interrupts disabled. Therefore, the below is best-effort, and
+	 * does not flush TLBs on all CPUs. We can tolerate some inaccuracy;
+	 * lazy fault handling takes care of faults after the page is PRESENT.
+	 */
+
+	if (protect)
+		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
+	else
+		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
+
+	/* Flush this CPU's TLB. */
+	flush_tlb_one_kernel(addr);
+	return true;
+}
+
+#endif /* _ASM_X86_KFENCE_H */
--- a/arch/x86/Kconfig~x86-kfence-enable-kfence-for-x86
+++ a/arch/x86/Kconfig
@@ -151,6 +151,7 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
 	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
+	select HAVE_ARCH_KFENCE
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
--- a/arch/x86/mm/fault.c~x86-kfence-enable-kfence-for-x86
+++ a/arch/x86/mm/fault.c
@@ -9,6 +9,7 @@
 #include <linux/kdebug.h>		/* oops_begin/end, ...		*/
 #include <linux/extable.h>		/* search_exception_tables	*/
 #include <linux/memblock.h>		/* max_low_pfn			*/
+#include <linux/kfence.h>		/* kfence_handle_page_fault	*/
 #include <linux/kprobes.h>		/* NOKPROBE_SYMBOL, ...		*/
 #include <linux/mmiotrace.h>		/* kmmio_handler, ...		*/
 #include <linux/perf_event.h>		/* perf_sw_event		*/
@@ -680,6 +681,10 @@ page_fault_oops(struct pt_regs *regs, un
 	if (IS_ENABLED(CONFIG_EFI))
 		efi_crash_gracefully_on_page_fault(address);
 
+	/* Only not-present faults should be handled by KFENCE. */
+	if (!(error_code & X86_PF_PROT) && kfence_handle_page_fault(address))
+		return;
+
 oops:
 	/*
 	 * Oops. The kernel tried to access some bad page. We'll have to
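
[Aside for readers, not part of the change: on x86, bit 0 of the page
fault error code (X86_PF_PROT) is clear when the faulting page was
not present and set for a protection violation.  A tripped KFENCE
guard page is a not-present page, which is why the hunk above gates
kfence_handle_page_fault() on !(error_code & X86_PF_PROT).  A minimal
stand-alone sketch of that decision follows; should_try_kfence() is a
hypothetical name, not kernel code.]

#include <stdbool.h>
#include <stdio.h>

#define X86_PF_PROT (1UL << 0)	/* clear: not present; set: protection */

/* Illustrates the gate in page_fault_oops() above. */
static bool should_try_kfence(unsigned long error_code)
{
	return !(error_code & X86_PF_PROT);
}

int main(void)
{
	printf("not-present fault -> try KFENCE? %d\n", should_try_kfence(0x0)); /* 1 */
	printf("protection fault  -> try KFENCE? %d\n", should_try_kfence(0x1)); /* 0 */
	return 0;
}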