Date: Fri, 20 Dec 2019 19:49:24 +0100
In-Reply-To: <20191220184955.223741-1-glider@google.com>
Message-Id: <20191220184955.223741-12-glider@google.com>
References: <20191220184955.223741-1-glider@google.com>
Subject: [PATCH RFC v4 11/42] kmsan: add KMSAN hooks for kernel subsystems
From: glider@google.com
To: Jens Axboe, Andy Lutomirski, Wolfram Sang, Christoph Hellwig,
	Vegard Nossum, Dmitry Vyukov, Marco Elver, Andrey Konovalov,
	linux-mm@kvack.org
Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca,
	akpm@linux-foundation.org, aryabinin@virtuozzo.com,
	ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org,
	darrick.wong@oracle.com, davem@davemloft.net,
	dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com,
	ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com,
	herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu,
	jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com,
	martin.petersen@oracle.com, schwidefsky@de.ibm.com,
	willy@infradead.org, mst@redhat.com, mhocko@suse.com,
	monstr@monstr.eu, pmladek@suse.com, cai@lca.pw,
	rdunlap@infradead.org, robin.murphy@arm.com,
	sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com,
	tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com

This patch provides hooks that subsystems use to notify KMSAN about
changes in the kernel state. Such changes include:
 - page operations (allocation, deletion, splitting, mapping);
 - memory allocation and deallocation;
 - entering and leaving IRQ/NMI/softirq contexts;
 - copying data between kernel, userspace and hardware.

This patch has been split away from the rest of KMSAN runtime to
simplify the review process.

Signed-off-by: Alexander Potapenko <glider@google.com>
To: Alexander Potapenko <glider@google.com>
Cc: Jens Axboe
Cc: Andy Lutomirski
Cc: Wolfram Sang
Cc: Christoph Hellwig
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org
---
v4:
 - fix a lot of comments by Marco Elver and Andrey Konovalov:
   - clean up headers and #defines, remove debugging code
   - simplified KMSAN entry hooks
   - fixed kmsan_check_skb()

Change-Id: I99d1f34f26bef122897cb840dac8d5b34d2b6a80
---
 arch/x86/include/asm/kmsan.h |  93 ++++++++
 mm/kmsan/kmsan_entry.c       |  38 ++++
 mm/kmsan/kmsan_hooks.c       | 416 +++++++++++++++++++++++++++++++++++
 3 files changed, 547 insertions(+)
 create mode 100644 arch/x86/include/asm/kmsan.h
 create mode 100644 mm/kmsan/kmsan_entry.c
 create mode 100644 mm/kmsan/kmsan_hooks.c

diff --git a/arch/x86/include/asm/kmsan.h b/arch/x86/include/asm/kmsan.h
new file mode 100644
index 000000000000..f924f29f90f9
--- /dev/null
+++ b/arch/x86/include/asm/kmsan.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Assembly bits to safely invoke KMSAN hooks from .S files.
+ *
+ * Copyright (C) 2017-2019 Google LLC
+ * Author: Alexander Potapenko <glider@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#ifndef _ASM_X86_KMSAN_H
+#define _ASM_X86_KMSAN_H
+
+#ifdef CONFIG_KMSAN
+
+#ifdef __ASSEMBLY__
+.macro KMSAN_PUSH_REGS
+	pushq	%rax
+	pushq	%rcx
+	pushq	%rdx
+	pushq	%rdi
+	pushq	%rsi
+	pushq	%r8
+	pushq	%r9
+	pushq	%r10
+	pushq	%r11
+.endm
+
+.macro KMSAN_POP_REGS
+	popq	%r11
+	popq	%r10
+	popq	%r9
+	popq	%r8
+	popq	%rsi
+	popq	%rdi
+	popq	%rdx
+	popq	%rcx
+	popq	%rax
+
+.endm
+
+.macro KMSAN_CALL_HOOK fname
+	KMSAN_PUSH_REGS
+	call	\fname
+	KMSAN_POP_REGS
+.endm
+
+.macro KMSAN_CONTEXT_ENTER
+	KMSAN_CALL_HOOK kmsan_context_enter
+.endm
+
+.macro KMSAN_CONTEXT_EXIT
+	KMSAN_CALL_HOOK kmsan_context_exit
+.endm
+
+#define KMSAN_INTERRUPT_ENTER KMSAN_CONTEXT_ENTER
+#define KMSAN_INTERRUPT_EXIT KMSAN_CONTEXT_EXIT
+
+#define KMSAN_SOFTIRQ_ENTER KMSAN_CONTEXT_ENTER
+#define KMSAN_SOFTIRQ_EXIT KMSAN_CONTEXT_EXIT
+
+#define KMSAN_NMI_ENTER KMSAN_CONTEXT_ENTER
+#define KMSAN_NMI_EXIT KMSAN_CONTEXT_EXIT
+
+#define KMSAN_IST_ENTER(shift_ist) KMSAN_CONTEXT_ENTER
+#define KMSAN_IST_EXIT(shift_ist) KMSAN_CONTEXT_EXIT
+
+.macro KMSAN_UNPOISON_PT_REGS
+	KMSAN_CALL_HOOK kmsan_unpoison_pt_regs
+.endm
+
+#else
+#error this header must be included into an assembly file
+#endif
+
+#else /* ifdef CONFIG_KMSAN */
+
+#define KMSAN_INTERRUPT_ENTER
+#define KMSAN_INTERRUPT_EXIT
+#define KMSAN_SOFTIRQ_ENTER
+#define KMSAN_SOFTIRQ_EXIT
+#define KMSAN_NMI_ENTER
+#define KMSAN_NMI_EXIT
+#define KMSAN_SYSCALL_ENTER
+#define KMSAN_SYSCALL_EXIT
+#define KMSAN_IST_ENTER(shift_ist)
+#define KMSAN_IST_EXIT(shift_ist)
+#define KMSAN_UNPOISON_PT_REGS
+
+#endif /* ifdef CONFIG_KMSAN */
+#endif /* ifndef _ASM_X86_KMSAN_H */
diff --git a/mm/kmsan/kmsan_entry.c b/mm/kmsan/kmsan_entry.c
new file mode 100644
index 000000000000..7af31642cd45
--- /dev/null
+++ b/mm/kmsan/kmsan_entry.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KMSAN hooks for entry_64.S
+ *
+ * Copyright (C) 2018-2019 Google LLC
+ * Author: Alexander Potapenko <glider@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include "kmsan.h"
+
+void kmsan_context_enter(void)
+{
+	int level = this_cpu_inc_return(kmsan_context_level);
+
+	BUG_ON(level >= KMSAN_NESTED_CONTEXT_MAX);
+}
+EXPORT_SYMBOL(kmsan_context_enter);
+
+void kmsan_context_exit(void)
+{
+	int level = this_cpu_dec_return(kmsan_context_level);
+
+	BUG_ON(level < 0);
+}
+EXPORT_SYMBOL(kmsan_context_exit);
+
+void kmsan_unpoison_pt_regs(struct pt_regs *regs)
+{
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+	kmsan_internal_unpoison_shadow(regs, sizeof(*regs), /*checked*/true);
+}
+EXPORT_SYMBOL(kmsan_unpoison_pt_regs);
diff --git a/mm/kmsan/kmsan_hooks.c b/mm/kmsan/kmsan_hooks.c
new file mode 100644
index 000000000000..8ddfd91b1d11
--- /dev/null
+++ b/mm/kmsan/kmsan_hooks.c
@@ -0,0 +1,416 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KMSAN hooks for kernel subsystems.
+ *
+ * These functions handle creation of KMSAN metadata for memory allocations.
+ *
+ * Copyright (C) 2018-2019 Google LLC
+ * Author: Alexander Potapenko <glider@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "../slab.h"
+#include "kmsan.h"
+
+/*
+ * The functions may call back to instrumented code, which, in turn, may call
+ * these hooks again. To avoid re-entrancy, we use __GFP_NO_KMSAN_SHADOW.
+ * Instrumented functions shouldn't be called under
+ * kmsan_enter_runtime()/kmsan_leave_runtime(), because this will lead to
+ * skipping effects of functions like memset() inside instrumented code.
+ */
+
+/* Called from kernel/kthread.c, kernel/fork.c */
+void kmsan_task_create(struct task_struct *task)
+{
+	unsigned long irq_flags;
+
+	if (!task)
+		return;
+	irq_flags = kmsan_enter_runtime();
+	kmsan_internal_task_create(task);
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_task_create);
+
+/* Called from kernel/exit.c */
+void kmsan_task_exit(struct task_struct *task)
+{
+	struct kmsan_task_state *state = &task->kmsan;
+
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+
+	state->allow_reporting = false;
+}
+EXPORT_SYMBOL(kmsan_task_exit);
+
+/* Called from mm/slub.c */
+void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags)
+{
+	unsigned long irq_flags;
+
+	if (unlikely(object == NULL))
+		return;
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+	/*
+	 * There's a ctor or this is an RCU cache - do nothing. The memory
+	 * status hasn't changed since last use.
+	 */
+	if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU))
+		return;
+
+	irq_flags = kmsan_enter_runtime();
+	if (flags & __GFP_ZERO)
+		kmsan_internal_unpoison_shadow(object, s->object_size,
+					       KMSAN_POISON_CHECK);
+	else
+		kmsan_internal_poison_shadow(object, s->object_size, flags,
+					     KMSAN_POISON_CHECK);
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_slab_alloc);
+
+/* Called from mm/slub.c */
+void kmsan_slab_free(struct kmem_cache *s, void *object)
+{
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)))
+		return;
+	/*
+	 * If there's a constructor, freed memory must remain in the same state
+	 * till the next allocation. We cannot save its state to detect
+	 * use-after-free bugs, instead we just keep it unpoisoned.
+	 */
+	if (s->ctor)
+		return;
+	irq_flags = kmsan_enter_runtime();
+	kmsan_internal_poison_shadow(object, s->object_size,
+				     GFP_KERNEL,
+				     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_slab_free);
+
+/* Called from mm/slub.c */
+void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+{
+	unsigned long irq_flags;
+
+	if (unlikely(ptr == NULL))
+		return;
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+	irq_flags = kmsan_enter_runtime();
+	if (flags & __GFP_ZERO)
+		kmsan_internal_unpoison_shadow((void *)ptr, size,
+					       /*checked*/true);
+	else
+		kmsan_internal_poison_shadow((void *)ptr, size, flags,
+					     KMSAN_POISON_CHECK);
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_kmalloc_large);
+
+/* Called from mm/slub.c */
+void kmsan_kfree_large(const void *ptr)
+{
+	struct page *page;
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+	irq_flags = kmsan_enter_runtime();
+	page = virt_to_head_page((void *)ptr);
+	BUG_ON(ptr != page_address(page));
+	kmsan_internal_poison_shadow(
+		(void *)ptr, PAGE_SIZE << compound_order(page), GFP_KERNEL,
+		KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_kfree_large);
+
+static unsigned long vmalloc_shadow(unsigned long addr)
+{
+	return (unsigned long)kmsan_get_metadata((void *)addr, 1, META_SHADOW);
+}
+
+static unsigned long vmalloc_origin(unsigned long addr)
+{
+	return (unsigned long)kmsan_get_metadata((void *)addr, 1, META_ORIGIN);
+}
+
+/* Called from mm/vmalloc.c */
+void kmsan_vunmap_page_range(unsigned long start, unsigned long end)
+{
+	__vunmap_page_range(vmalloc_shadow(start), vmalloc_shadow(end));
+	__vunmap_page_range(vmalloc_origin(start), vmalloc_origin(end));
+}
+EXPORT_SYMBOL(kmsan_vunmap_page_range);
+
+/* Called from lib/ioremap.c */
+/*
+ * This function creates new shadow/origin pages for the physical pages mapped
+ * into the virtual memory. If those physical pages already had shadow/origin,
+ * those are ignored.
+ */
+void kmsan_ioremap_page_range(unsigned long start, unsigned long end,
+			      phys_addr_t phys_addr, pgprot_t prot)
+{
+	unsigned long irq_flags;
+	struct page *shadow, *origin;
+	int i, nr;
+	unsigned long off = 0;
+	gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO | __GFP_NO_KMSAN_SHADOW;
+
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+
+	nr = (end - start) / PAGE_SIZE;
+	irq_flags = kmsan_enter_runtime();
+	for (i = 0; i < nr; i++, off += PAGE_SIZE) {
+		shadow = alloc_pages(gfp_mask, 1);
+		origin = alloc_pages(gfp_mask, 1);
+		__vmap_page_range_noflush(vmalloc_shadow(start + off),
+					  vmalloc_shadow(start + off + PAGE_SIZE),
+					  prot, &shadow);
+		__vmap_page_range_noflush(vmalloc_origin(start + off),
+					  vmalloc_origin(start + off + PAGE_SIZE),
+					  prot, &origin);
+	}
+	flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end));
+	flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end));
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_ioremap_page_range);
+
+void kmsan_iounmap_page_range(unsigned long start, unsigned long end)
+{
+	int i, nr;
+	struct page *shadow, *origin;
+	unsigned long v_shadow, v_origin;
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+
+	nr = (end - start) / PAGE_SIZE;
+	irq_flags = kmsan_enter_runtime();
+	v_shadow = (unsigned long)vmalloc_shadow(start);
+	v_origin = (unsigned long)vmalloc_origin(start);
+	for (i = 0; i < nr; i++, v_shadow += PAGE_SIZE, v_origin += PAGE_SIZE) {
+		shadow = vmalloc_to_page_or_null((void *)v_shadow);
+		origin = vmalloc_to_page_or_null((void *)v_origin);
+		__vunmap_page_range(v_shadow, v_shadow + PAGE_SIZE);
+		__vunmap_page_range(v_origin, v_origin + PAGE_SIZE);
+		if (shadow)
+			__free_pages(shadow, 1);
+		if (origin)
+			__free_pages(origin, 1);
+	}
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_iounmap_page_range);
+
+/* Called from include/linux/uaccess.h */
+void kmsan_copy_to_user(const void *to, const void *from,
+			size_t to_copy, size_t left)
+{
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+	/*
+	 * At this point we've copied the memory already. It's hard to check it
+	 * before copying, as the size of actually copied buffer is unknown.
+	 */
+
+	/* copy_to_user() may copy zero bytes. No need to check. */
+	if (!to_copy)
+		return;
+	/* Or maybe copy_to_user() failed to copy anything. */
+	if (to_copy == left)
+		return;
+	if ((u64)to < TASK_SIZE) {
+		/* This is a user memory access, check it. */
+		kmsan_internal_check_memory((void *)from, to_copy - left, to,
+					    REASON_COPY_TO_USER);
+		return;
+	}
+	/* Otherwise this is a kernel memory access. This happens when a compat
+	 * syscall passes an argument allocated on the kernel stack to a real
+	 * syscall.
+	 * Don't check anything, just copy the shadow of the copied bytes.
+	 */
+	kmsan_memcpy_metadata((void *)to, (void *)from, to_copy - left);
+}
+EXPORT_SYMBOL(kmsan_copy_to_user);
+
+void kmsan_gup_pgd_range(struct page **pages, int nr)
+{
+	int i;
+	void *page_addr;
+
+	/*
+	 * gup_pgd_range() has just created a number of new pages that KMSAN
+	 * treats as uninitialized. In the case they belong to the userspace
+	 * memory, unpoison the corresponding kernel pages.
+	 */
+	for (i = 0; i < nr; i++) {
+		page_addr = page_address(pages[i]);
+		if (((u64)page_addr < TASK_SIZE) &&
+		    ((u64)page_addr + PAGE_SIZE < TASK_SIZE))
+			kmsan_unpoison_shadow(page_addr, PAGE_SIZE);
+	}
+
+}
+EXPORT_SYMBOL(kmsan_gup_pgd_range);
+
+/* Helper function to check an SKB. */
+void kmsan_check_skb(const struct sk_buff *skb)
+{
+	struct sk_buff *frag_iter;
+	int i;
+	skb_frag_t *f;
+	u32 p_off, p_len, copied;
+	struct page *p;
+	u8 *vaddr;
+
+	if (!skb || !skb->len)
+		return;
+
+	kmsan_internal_check_memory(skb->data, skb_headlen(skb), 0, REASON_ANY);
+	if (skb_is_nonlinear(skb)) {
+		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+			f = &skb_shinfo(skb)->frags[i];
+
+			skb_frag_foreach_page(f, skb_frag_off(f),
+					      skb_frag_size(f),
+					      p, p_off, p_len, copied) {
+
+				vaddr = kmap_atomic(p);
+				kmsan_internal_check_memory(vaddr + p_off,
+							    p_len, /*user_addr*/ 0,
+							    REASON_ANY);
+				kunmap_atomic(vaddr);
+			}
+		}
+	}
+	skb_walk_frags(skb, frag_iter)
+		kmsan_check_skb(frag_iter);
+}
+EXPORT_SYMBOL(kmsan_check_skb);
+
+/* Helper function to check an URB. */
+void kmsan_handle_urb(const struct urb *urb, bool is_out)
+{
+	if (!urb)
+		return;
+	if (is_out)
+		kmsan_internal_check_memory(urb->transfer_buffer,
+					    urb->transfer_buffer_length,
+					    /*user_addr*/ 0, REASON_SUBMIT_URB);
+	else
+		kmsan_internal_unpoison_shadow(urb->transfer_buffer,
+					       urb->transfer_buffer_length,
+					       /*checked*/false);
+}
+EXPORT_SYMBOL(kmsan_handle_urb);
+
+static void kmsan_handle_dma_page(const void *addr, size_t size,
+				  enum dma_data_direction dir)
+{
+	switch (dir) {
+	case DMA_BIDIRECTIONAL:
+		kmsan_internal_check_memory((void *)addr, size, /*user_addr*/0,
+					    REASON_ANY);
+		kmsan_internal_unpoison_shadow((void *)addr, size,
+					       /*checked*/false);
+		break;
+	case DMA_TO_DEVICE:
+		kmsan_internal_check_memory((void *)addr, size, /*user_addr*/0,
+					    REASON_ANY);
+		break;
+	case DMA_FROM_DEVICE:
+		kmsan_internal_unpoison_shadow((void *)addr, size,
+					       /*checked*/false);
+		break;
+	case DMA_NONE:
+		break;
+	}
+}
+
+/* Helper function to handle DMA data transfers. */
+void kmsan_handle_dma(const void *addr, size_t size,
+		      enum dma_data_direction dir)
+{
+	u64 page_offset, to_go, uaddr = (u64)addr;
+
+	/*
+	 * The kernel may occasionally give us adjacent DMA pages not belonging
+	 * to the same allocation. Process them separately to avoid triggering
+	 * internal KMSAN checks.
+	 */
+	while (size > 0) {
+		page_offset = uaddr % PAGE_SIZE;
+		to_go = min(PAGE_SIZE - page_offset, (u64)size);
+		kmsan_handle_dma_page((void *)uaddr, to_go, dir);
+		uaddr += to_go;
+		size -= to_go;
+	}
+}
+EXPORT_SYMBOL(kmsan_handle_dma);
+
+/* Functions from kmsan-checks.h follow. */
+void kmsan_poison_shadow(const void *address, size_t size, gfp_t flags)
+{
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+	irq_flags = kmsan_enter_runtime();
+	/* The users may want to poison/unpoison random memory. */
+	kmsan_internal_poison_shadow((void *)address, size, flags,
+				     KMSAN_POISON_NOCHECK);
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_poison_shadow);
+
+void kmsan_unpoison_shadow(const void *address, size_t size)
+{
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || kmsan_in_runtime())
+		return;
+
+	irq_flags = kmsan_enter_runtime();
+	/* The users may want to poison/unpoison random memory. */
+	kmsan_internal_unpoison_shadow((void *)address, size,
+				       KMSAN_POISON_NOCHECK);
+	kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_unpoison_shadow);
+
+void kmsan_check_memory(const void *addr, size_t size)
+{
+	return kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0,
+					   REASON_ANY);
+}
+EXPORT_SYMBOL(kmsan_check_memory);
--
2.24.1.735.g03f4e72817-goog
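
For readers looking at this patch in isolation: below is a minimal caller-side
sketch of the two annotations implemented at the end of kmsan_hooks.c
(kmsan_check_memory() and kmsan_unpoison_shadow()). It is illustrative only and
not part of the patch; the function name example_hw_io(), its parameters, and
the <linux/kmsan-checks.h> include path are assumptions made for the example.

#include <linux/kmsan-checks.h>
#include <linux/types.h>

/*
 * Hypothetical driver path moving data to and from a device whose writes
 * KMSAN cannot observe.
 */
static void example_hw_io(u8 *to_dev, u8 *from_dev, size_t len)
{
	/* Report if any byte about to reach the device is uninitialized. */
	kmsan_check_memory(to_dev, len);

	/* ... the device consumes to_dev and fills from_dev ... */

	/* Mark the device-written bytes as initialized for later uses. */
	kmsan_unpoison_shadow(from_dev, len);
}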