From: Alexander Potapenko
Date: Fri, 4 Mar 2016 15:52:21 +0100
Subject: Re: [PATCH v4 5/7] mm, kasan: Stackdepot implementation. Enable stackdepot for SLAB
To: Andrey Ryabinin
Cc: Dmitry Vyukov, Andrey Konovalov, Christoph Lameter, Andrew Morton,
 Steven Rostedt, Joonsoo Kim, JoonSoo Kim, Kostya Serebryany,
 kasan-dev, LKML, linux-mm@kvack.org
In-Reply-To: <56D58398.2010708@gmail.com>
References: <00e9fa7d4adeac2d37a42cf613837e74850d929a.1456504662.git.glider@google.com>
 <56D471F5.3010202@gmail.com> <56D58398.2010708@gmail.com>

On Tue, Mar 1, 2016 at 12:57 PM, Andrey Ryabinin wrote:
>
> On 02/29/2016 08:12 PM, Dmitry Vyukov wrote:
>
>>>> diff --git a/lib/Makefile b/lib/Makefile
>>>> index a7c26a4..10a4ae3 100644
>>>> --- a/lib/Makefile
>>>> +++ b/lib/Makefile
>>>> @@ -167,6 +167,13 @@ obj-$(CONFIG_SG_SPLIT) += sg_split.o
>>>>  obj-$(CONFIG_STMP_DEVICE) += stmp_device.o
>>>>  obj-$(CONFIG_IRQ_POLL) += irq_poll.o
>>>>
>>>> +ifeq ($(CONFIG_KASAN),y)
>>>> +ifeq ($(CONFIG_SLAB),y)
>>>
>>> Just try to imagine that another subsystem wants to use stackdepot. How is this going to look?
>>>
>>> We have Kconfig to describe dependencies, so this should be under CONFIG_STACKDEPOT.
>>> Then any user of this feature can simply do 'select STACKDEPOT' in Kconfig.
>>>
>>>> +  obj-y += stackdepot.o
>>>> +  KASAN_SANITIZE_slub.o := n
>
> _stackdepot.o
>
>>>
>>>> +
>>>> +	stack->hash = hash;
>>>> +	stack->size = size;
>>>> +	stack->handle.slabindex = depot_index;
>>>> +	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
>>>> +	__memcpy(stack->entries, entries, size * sizeof(unsigned long));
>>>
>>> s/__memcpy/memcpy/
>>
>> memcpy should be instrumented by asan/tsan, and we would like to avoid
>> that instrumentation here.
>
> KASAN_SANITIZE_* := n already takes care of this.
> __memcpy() is a special thing solely for KASAN internals and some assembly code,
> and it's not generally available.

As far as I can see, KASAN_SANITIZE_*:=n does not guarantee that. It only
removes the KASAN flags from the GCC command line; it does not replace
memcpy() calls with some kind of non-instrumented memcpy().

We see two possible ways to deal with this problem:
1. Define "memcpy" to "__memcpy" in lib/stackdepot.c under CONFIG_KASAN.
2. Create an mm/kasan/kasan_stackdepot.c stub which includes lib/stackdepot.c
   and defines "memcpy" to "__memcpy" in that file. This way we'll be able to
   instrument the original stackdepot.c and won't miss reports from it if
   someone starts using it somewhere else.
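To make option 2 concrete, here is an untested sketch of what the stub could
look like (the file name and comments are only illustrative, and __memcpy()
is assumed to be available whenever CONFIG_KASAN is enabled, which, as you
note, is not true on every architecture):

  /* mm/kasan/kasan_stackdepot.c (hypothetical)
   *
   * Build a second, non-instrumented copy of stackdepot for KASAN's own use.
   * Redirecting memcpy() to KASAN's non-instrumented __memcpy() avoids
   * running instrumented code while a stack trace is being saved from
   * inside the KASAN runtime.
   */
  #define memcpy __memcpy
  #include "../../lib/stackdepot.c"

Option 1 would instead put the same #define directly into lib/stackdepot.c
under #ifdef CONFIG_KASAN, at the cost of never instrumenting stackdepot for
its other potential users.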
>>>> +	if (unlikely(!smp_load_acquire(&next_slab_inited))) {
>>>> +		if (!preempt_count() && !in_irq()) {
>>>
>>> If you are trying to detect atomic context here, then this doesn't work. E.g. you
>>> can't know about held spinlocks in a non-preemptible kernel.
>>> And I'm not sure why you need this: you know the gfp flags here, so allocation in
>>> atomic context shouldn't be a problem.
>>
>> We don't have gfp flags for kfree.
>> I wonder how CONFIG_DEBUG_ATOMIC_SLEEP handles this. Maybe it has the answer.
>
> It hasn't. It doesn't guarantee that atomic context will always be detected.
>
>> Alternatively, we can always assume that we are in atomic context in kfree.
>
> Or do this allocation in a separate context, e.g. put it in a work queue.
>
>>>> +	alloc_flags &= (__GFP_RECLAIM | __GFP_IO | __GFP_FS |
>>>> +			__GFP_NOWARN | __GFP_NORETRY |
>>>> +			__GFP_NOMEMALLOC | __GFP_DIRECT_RECLAIM);
>>>
>>> I think a blacklist approach would be better here.
>>>
>>>> +	page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
>>>
>>> STACK_ALLOC_ORDER = 4 - that's a lot. Do you really need that much?
>>
>> Part of the issue is the atomic context above. When we can't allocate memory
>> we still want to save the stack trace. When we have less than an
>> order-STACK_ALLOC_ORDER chunk of space left, we try to preallocate another
>> such chunk in advance. So in the worst case we have one order-STACK_ALLOC_ORDER
>> chunk in reserve, and that should be enough to handle all kmalloc/kfree calls
>> in atomic context. One page does not look like enough. I think Alex did some
>> measurements of the failure rate (when we are out of memory and can't
>> allocate more).
>
> A lot of order-4 pages will lead to high fragmentation. You don't need physically
> contiguous memory here, so try to use vmalloc(). It is slower, but fragmentation
> won't be a problem.
>
> And one more thing: take a look at mempool, because it's generally used to solve
> the problem you have here (guaranteed allocation in atomic context).

--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Managing Directors: Matthew Scott Sucherman, Paul Terence Manicle
Registration court and number: Hamburg, HRB 86891
Registered office: Hamburg