References: <20191122112621.204798-1-glider@google.com> <20191122112621.204798-24-glider@google.com>
In-Reply-To: <20191122112621.204798-24-glider@google.com>
From: Andrey Konovalov
Date: Fri, 29 Nov 2019 17:21:48 +0100
Subject: Re: [PATCH RFC v3 23/36] kmsan: call KMSAN hooks where needed
To: Alexander Potapenko
Cc: Andrew Morton, Greg Kroah-Hartman, Eric Dumazet, Wolfram Sang, Petr Mladek,
 Vegard Nossum, Dmitry Vyukov, Linux Memory Management List, Alexander Viro,
 Andreas Dilger, Andrey Ryabinin, Andy Lutomirski, Ard Biesheuvel,
 Arnd Bergmann, Christoph Hellwig, darrick.wong@oracle.com,
 "David S. Miller", Dmitry Torokhov, Eric Biggers, ericvh@gmail.com,
 harry.wentland@amd.com, Herbert Xu, iii@linux.ibm.com, mingo@elte.hu,
 Jason Wang, Jens Axboe, Marek Szyprowski, Marco Elver, Mark Rutland,
 "Martin K. Petersen", Martin Schwidefsky, Matthew Wilcox,
 "Michael S. Tsirkin", Michal Simek, Qian Cai, Randy Dunlap, Robin Murphy,
 sergey.senozhatsky@gmail.com, Steven Rostedt, Takashi Iwai,
 "Theodore Ts'o", Thomas Gleixner, gor@linux.ibm.com

On Fri, Nov 22, 2019 at 12:27 PM <glider@google.com> wrote:
>
> Insert KMSAN hooks that check for potential memory errors and/or make
> necessary bookkeeping changes:

I think it makes sense to split this patch into 2+ parts. The first one
would add the hooks for internal KMSAN bookkeeping:

> - allocate/split/deallocate metadata pages in
>   alloc_pages()/split_page()/free_page();
> - clear page shadow and origins in clear_page(), copy_user_highpage();
> - copy page metadata in copy_highpage(), wp_page_copy();
> - handle vmap()/vunmap()/iounmap();
> - handle task creation and deletion;
> - call softirq entry/exit hooks in kernel/softirq.c;

And the other one(s) would add the hooks that check or initialize data:

> - initialize result of vscnprintf() in vprintk_store();
> - check/initialize memory sent to/read from USB, I2C, and network
>
> Signed-off-by: Alexander Potapenko
> To: Alexander Potapenko
> Cc: Andrew Morton
> Cc: Greg Kroah-Hartman
> Cc: Eric Dumazet
> Cc: Wolfram Sang
> Cc: Petr Mladek
> Cc: Vegard Nossum
> Cc: Dmitry Vyukov
> Cc: linux-mm@kvack.org
> ---
>
> v2:
>  - dropped call to kmsan_handle_vprintk, updated comment in printk.c
>
> v3:
>  - put KMSAN_INIT_VALUE on a separate line in vprintk_store()
>  - dropped call to kmsan_handle_i2c_transfer()
>
> Change-Id: I1250a928d9263bf71fdaa067a070bdee686ef47b
> ---
>  arch/x86/include/asm/page_64.h | 13 +++++++++++++
>  arch/x86/mm/ioremap.c          |  3 +++
>  drivers/usb/core/urb.c         |  2 ++
>  include/linux/highmem.h        |  4 ++++
>  kernel/exit.c                  |  2 ++
>  kernel/fork.c                  |  2 ++
>  kernel/kthread.c               |  2 ++
>  kernel/printk/printk.c         |  6 ++++++
>  kernel/softirq.c               |  5 +++++
>  lib/ioremap.c                  |  5 +++++
>  mm/compaction.c                |  9 +++++++++
>  mm/gup.c                       |  3 +++
>  mm/memory.c                    |  2 ++
>  mm/page_alloc.c                | 16 ++++++++++++++++
>  mm/vmalloc.c                   | 23 +++++++++++++++++++++--
>  net/sched/sch_generic.c        |  2 ++
>  16 files changed, 97 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
> index 939b1cff4a7b..0ba43d93414f 100644
> --- a/arch/x86/include/asm/page_64.h
> +++ b/arch/x86/include/asm/page_64.h
> @@ -44,14 +44,27 @@ void clear_page_orig(void *page);
>  void clear_page_rep(void *page);
>  void clear_page_erms(void *page);
>
> +/* This is an assembly header, avoid including too much of kmsan.h */
> +#ifdef CONFIG_KMSAN
> +void kmsan_clear_page(void *page_addr);
> +#endif
> +__no_sanitize_memory
>  static inline void clear_page(void *page)
>  {
> +#ifdef CONFIG_KMSAN
> +        /* alternative_call_2() changes |page|. */
> +        void *page_copy = page;
> +#endif
>          alternative_call_2(clear_page_orig,
>                             clear_page_rep, X86_FEATURE_REP_GOOD,
>                             clear_page_erms, X86_FEATURE_ERMS,
>                             "=D" (page),
>                             "0" (page)
>                             : "cc", "memory", "rax", "rcx");
> +#ifdef CONFIG_KMSAN
> +        /* Clear KMSAN shadow for the pages that have it. */
> +        kmsan_clear_page(page_copy);
> +#endif
>  }
>
>  void copy_page(void *to, void *from);
> diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
> index a39dcdb5ae34..fdb2abc11a82 100644
> --- a/arch/x86/mm/ioremap.c
> +++ b/arch/x86/mm/ioremap.c
> @@ -7,6 +7,7 @@
>   * (C) Copyright 1995 1996 Linus Torvalds
>   */
>
> +#include
>  #include
>  #include
>  #include
> @@ -451,6 +452,8 @@ void iounmap(volatile void __iomem *addr)
>                  return;
>          }
>
> +        kmsan_iounmap_page_range((unsigned long)addr,
> +                                 (unsigned long)addr + get_vm_area_size(p));
>          free_memtype(p->phys_addr, p->phys_addr + get_vm_area_size(p));
>
>          /* Finally remove it */
> diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
> index 0eab79f82ce4..5bdb54d71c2e 100644
> --- a/drivers/usb/core/urb.c
> +++ b/drivers/usb/core/urb.c
> @@ -8,6 +8,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -401,6 +402,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
>                                  URB_SETUP_MAP_SINGLE | URB_SETUP_MAP_LOCAL |
>                                  URB_DMA_SG_COMBINED);
>          urb->transfer_flags |= (is_out ? URB_DIR_OUT : URB_DIR_IN);
> +        kmsan_handle_urb(urb, is_out);
>
>          if (xfertype != USB_ENDPOINT_XFER_CONTROL &&
>              dev->state < USB_STATE_CONFIGURED)
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index ea5cdbd8c2c3..623b56f48685 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -5,6 +5,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -255,6 +256,8 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
>          vfrom = kmap_atomic(from);
>          vto = kmap_atomic(to);
>          copy_user_page(vto, vfrom, vaddr, to);
> +        /* User pages don't have shadow, just clear the destination. */
> +        kmsan_clear_page(page_address(to));
>          kunmap_atomic(vto);
>          kunmap_atomic(vfrom);
>  }
> @@ -270,6 +273,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
>          vfrom = kmap_atomic(from);
>          vto = kmap_atomic(to);
>          copy_page(vto, vfrom);
> +        kmsan_copy_page_meta(to, from);
>          kunmap_atomic(vto);
>          kunmap_atomic(vfrom);
>  }
> diff --git a/kernel/exit.c b/kernel/exit.c
> index a46a50d67002..9e3ce929110b 100644
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -60,6 +60,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -719,6 +720,7 @@ void __noreturn do_exit(long code)
>
>          profile_task_exit(tsk);
>          kcov_task_exit(tsk);
> +        kmsan_task_exit(tsk);
>
>          WARN_ON(blk_needs_flush_plug(tsk));
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index bcdf53125210..0f08952a42dc 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -37,6 +37,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -931,6 +932,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
>          account_kernel_stack(tsk, 1);
>
>          kcov_task_init(tsk);
> +        kmsan_task_create(tsk);
>
>  #ifdef CONFIG_FAULT_INJECTION
>          tsk->fail_nth = 0;
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index b262f47046ca..33ca743ca8b5 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -17,6 +17,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -350,6 +351,7 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
>                  set_cpus_allowed_ptr(task, cpu_all_mask);
>          }
>          kfree(create);
> +        kmsan_task_create(task);
>          return task;
>  }
>
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index ca65327a6de8..c9ef7fb0906f 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -1915,6 +1915,12 @@ int vprintk_store(int facility, int level,
>           * prefix which might be passed-in as a parameter.
>           */
>          text_len = vscnprintf(text, sizeof(textbuf), fmt, args);
> +        /*
> +         * If any of vscnprintf() arguments is uninitialized, KMSAN will report
> +         * one or more errors and also probably mark text_len as uninitialized.
> +         * Initialize |text_len| to prevent the errors from spreading further.
> +         */
> +        text_len = KMSAN_INIT_VALUE(text_len);
>
>          /* mark and strip a trailing newline */
>          if (text_len && text[text_len-1] == '\n') {
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index 0427a86743a4..6d566dd68b35 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -11,6 +11,7 @@
>
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -370,7 +371,9 @@ static inline void invoke_softirq(void)
>                   * it is the irq stack, because it should be near empty
>                   * at this stage.
>                   */
> +                kmsan_softirq_enter();
>                  __do_softirq();
> +                kmsan_softirq_exit();
>  #else
>                  /*
>                   * Otherwise, irq_exit() is called on the task stack that can
> @@ -600,7 +603,9 @@ static void run_ksoftirqd(unsigned int cpu)
>                   * We can safely run softirq on inline stack, as we are not deep
>                   * in the task stack here.
>                   */
> +                kmsan_softirq_enter();
>                  __do_softirq();
> +                kmsan_softirq_exit();
>                  local_irq_enable();
>                  cond_resched();
>                  return;
> diff --git a/lib/ioremap.c b/lib/ioremap.c
> index 0a2ffadc6d71..5f830cee5bfc 100644
> --- a/lib/ioremap.c
> +++ b/lib/ioremap.c
> @@ -6,6 +6,7 @@
>   *
>   * (C) Copyright 1995 1996 Linus Torvalds
>   */
> +#include
>  #include
>  #include
>  #include
> @@ -214,6 +215,8 @@ int ioremap_page_range(unsigned long addr,
>          unsigned long start;
>          unsigned long next;
>          int err;
> +        unsigned long old_addr = addr;
> +        phys_addr_t old_phys_addr = phys_addr;
>
>          might_sleep();
>          BUG_ON(addr >= end);
> @@ -228,6 +231,8 @@
>          } while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
>
>          flush_cache_vmap(start, end);
> +        if (!err)
> +                kmsan_ioremap_page_range(old_addr, end, old_phys_addr, prot);
>
>          return err;
>  }
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 672d3c78c6ab..720a8a4dafec 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -84,6 +84,15 @@ static void split_map_pages(struct list_head *list)
>
>                  for (i = 0; i < nr_pages; i++) {
>                          list_add(&page->lru, &tmp_list);
> +#ifdef CONFIG_KMSAN
> +                        /*
> +                         * TODO(glider): we may lose the metadata when copying
> +                         * something to these pages. Need to allocate shadow
> +                         * and origin pages here instead.
> +                         */
> +                        page->shadow = NULL;
> +                        page->origin = NULL;
> +#endif
>                          page++;
>                  }
>          }
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a335ae9..8f5f99772278 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -4,6 +4,7 @@
>  #include
>  #include
>
> +#include
>  #include
>  #include
>  #include
> @@ -2349,6 +2350,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
>              gup_fast_permitted(start, end)) {
>                  local_irq_save(flags);
>                  gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr);
> +                kmsan_gup_pgd_range(pages, nr);
>                  local_irq_restore(flags);
>          }
>
> @@ -2418,6 +2420,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>              gup_fast_permitted(start, end)) {
>                  local_irq_disable();
>                  gup_pgd_range(addr, end, gup_flags, pages, &nr);
> +                kmsan_gup_pgd_range(pages, nr);
>                  local_irq_enable();
>                  ret = nr;
>          }
> diff --git a/mm/memory.c b/mm/memory.c
> index b1ca51a079f2..48ceacc06e2d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -51,6 +51,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -2328,6 +2329,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
>                  if (!new_page)
>                          goto oom;
>                  cow_user_page(new_page, old_page, vmf->address, vma);
> +                kmsan_copy_page_meta(new_page, old_page);
>          }
>
>          if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ecc3dbad606b..c98e4441c7c0 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -26,6 +26,8 @@
>  #include
>  #include
>  #include
> +#include
> +#include
>  #include
>  #include
>  #include
> @@ -1133,6 +1135,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>          VM_BUG_ON_PAGE(PageTail(page), page);
>
>          trace_mm_page_free(page, order);
> +        kmsan_free_page(page, order);
>
>          /*
>           * Check tail pages before head page information is cleared to
> @@ -3121,6 +3124,7 @@ void split_page(struct page *page, unsigned int order)
>          VM_BUG_ON_PAGE(PageCompound(page), page);
>          VM_BUG_ON_PAGE(!page_count(page), page);
>
> +        kmsan_split_page(page, order);
>          for (i = 1; i < (1 << order); i++)
>                  set_page_refcounted(page + i);
>          split_page_owner(page, order);
> @@ -3253,6 +3257,13 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
>  /*
>   * Allocate a page from the given zone. Use pcplists for order-0 allocations.
>   */
> +/*
> + * TODO(glider): rmqueue() may call __msan_poison_alloca() through a call to
> + * set_pfnblock_flags_mask(). If __msan_poison_alloca() attempts to allocate
> + * pages for the stack depot, it may call rmqueue() again, which will result
> + * in a deadlock.
> + */
> +__no_sanitize_memory
>  static inline
>  struct page *rmqueue(struct zone *preferred_zone,
>                          struct zone *zone, unsigned int order,
> @@ -4781,6 +4792,11 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
>
>          trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);
>
> +        if (page)
> +                if (kmsan_alloc_page(page, order, gfp_mask)) {
> +                        __free_pages(page, order);
> +                        page = NULL;
> +                }
>          return page;
>  }
>  EXPORT_SYMBOL(__alloc_pages_nodemask);
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a3c70e275f4e..bdf66ffcf02c 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -29,6 +29,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -119,7 +120,8 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end)
>          } while (p4d++, addr = next, addr != end);
>  }
>
> -static void vunmap_page_range(unsigned long addr, unsigned long end)
> +/* Exported for KMSAN, visible in mm/kmsan/kmsan.h only. */
> +void __vunmap_page_range(unsigned long addr, unsigned long end)
>  {
>          pgd_t *pgd;
>          unsigned long next;
> @@ -133,6 +135,12 @@ static void vunmap_page_range(unsigned long addr, unsigned long end)
>                  vunmap_p4d_range(pgd, addr, next);
>          } while (pgd++, addr = next, addr != end);
>  }
> +EXPORT_SYMBOL(__vunmap_page_range);
> +static void vunmap_page_range(unsigned long addr, unsigned long end)
> +{
> +        kmsan_vunmap_page_range(addr, end);
> +        __vunmap_page_range(addr, end);
> +}
>
>  static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
>                  unsigned long end, pgprot_t prot, struct page **pages, int *nr)
> @@ -216,8 +224,11 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
>   * will have pfns corresponding to the "pages" array.
>   *
>   * Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages[N]
> + *
> + * This function is exported for use in KMSAN, but is only declared in KMSAN
> + * headers.
>   */
> -static int vmap_page_range_noflush(unsigned long start, unsigned long end,
> +int __vmap_page_range_noflush(unsigned long start, unsigned long end,
>                                     pgprot_t prot, struct page **pages)
>  {
>          pgd_t *pgd;
> @@ -237,6 +248,14 @@ static int vmap_page_range_noflush(unsigned long start, unsigned long end,
>
>          return nr;
>  }
> +EXPORT_SYMBOL(__vmap_page_range_noflush);
> +
> +static int vmap_page_range_noflush(unsigned long start, unsigned long end,
> +                                   pgprot_t prot, struct page **pages)
> +{
> +        kmsan_vmap_page_range_noflush(start, end, prot, pages);
> +        return __vmap_page_range_noflush(start, end, prot, pages);
> +}
>
>  static int vmap_page_range(unsigned long start, unsigned long end,
>                             pgprot_t prot, struct page **pages)
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index 17bd8f539bc7..fd22c4a4ba42 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -11,6 +11,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -659,6 +660,7 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
>          } else {
>                  qdisc->empty = true;
>          }
> +        kmsan_check_skb(skb);
>
>          return skb;
>  }
> --
> 2.24.0.432.g9d3f5f5b63-goog
>
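
A side note on the !CONFIG_KMSAN case: the hooks above are called unconditionally
at their call sites, so a build without KMSAN has to rely on no-op static inline
stubs in the KMSAN header. The fragment below is an illustrative sketch only and
is not taken from this series; the stub names are those used in the hunks above,
but the signatures are inferred from the call sites and may differ from the real
<linux/kmsan.h>. Only a representative subset of hooks is shown.

/*
 * Illustrative !CONFIG_KMSAN fallbacks (header-style sketch, not from the
 * patch). Signatures are assumptions inferred from the quoted call sites.
 */
#include <linux/gfp.h>          /* gfp_t */

struct page;
struct sk_buff;
struct task_struct;
struct urb;

#ifndef CONFIG_KMSAN
static inline void kmsan_task_create(struct task_struct *task) {}
static inline void kmsan_task_exit(struct task_struct *task) {}
static inline void kmsan_clear_page(void *page_addr) {}
static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) {}
static inline void kmsan_split_page(struct page *page, unsigned int order) {}
static inline void kmsan_free_page(struct page *page, unsigned int order) {}
/* Must return 0 on success: __alloc_pages_nodemask() frees the page otherwise. */
static inline int kmsan_alloc_page(struct page *page, unsigned int order,
                                   gfp_t flags)
{
        return 0;
}
static inline void kmsan_handle_urb(const struct urb *urb, int is_out) {}
static inline void kmsan_check_skb(const struct sk_buff *skb) {}
static inline void kmsan_softirq_enter(void) {}
static inline void kmsan_softirq_exit(void) {}
#endif /* CONFIG_KMSAN */

With stubs of this shape the hook calls compile away entirely when KMSAN is
disabled, which is the same pattern the kcov_task_init()/kcov_task_exit() calls
already follow in the files touched by this patch.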