From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v1] mm/vmalloc: fix exact allocations with an alignment > 1
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Uladzislau Rezki
Cc: LKML, Ping Fang, Andrew Morton, Roman Gushchin, Michal Hocko, Oscar Salvador, Linux Memory Management List
Date: Thu, 23 Sep 2021 19:42:55 +0200
Message-ID: <437ff5c9-1b36-8ef7-1ce6-b3125e42de93@redhat.com>
In-Reply-To: <20210922104141.GA27011@pc638.lan>
References: <20210908132727.16165-1-david@redhat.com> <20210916193403.GA1940@pc638.lan> <221e38c1-4b8a-8608-455a-6bde544adaf0@redhat.com> <20210921221337.GA60191@pc638.lan> <7f62d710-ca85-7d33-332a-25ff88b5452f@redhat.com> <20210922104141.GA27011@pc638.lan>

On 22.09.21 12:41, Uladzislau Rezki wrote:
> On Wed, Sep 22, 2021 at 10:34:55AM +0200, David Hildenbrand wrote:
>>>> No, that's leaking implementation details to the caller. And no,
>>>> increasing the range and eventually allocating something bigger (e.g.,
>>>> placing a huge page where it might not have been possible) is not
>>>> acceptable for KASAN.
>>>>
>>>> If you're terribly unhappy with this patch,
>>> Sorry to say, but it simply does not make sense.
>>>
>>
>> Let's agree to disagree.
>>
>> find_vmap_lowest_match() is imprecise now, and that's an issue for exact
>> allocations. We can either make it fully precise again (eventually
>> degrading allocation performance) or just special-case exact allocations
>> to fix the regression.
>>
>> I decided to go the easy path and do the latter; I do agree that making
>> find_vmap_lowest_match() fully precise again might be preferable -- we
>> could have other allocations failing right now even though there are
>> still suitable holes.
>>
>> I briefly thought about performing the search in find_vmap_lowest_match()
>> twice: first, start the search without an extended range, and fall back
>> to the extended range if that search fails. Unfortunately, I think that
>> still won't make the function completely precise, due to the way we might
>> miss searching some suitable subtrees.
>>
>>>> please suggest something reasonable to fix exact allocations:
>>>> a) Fixes the KASAN use case.
>>>> b) Allows for automatic placement of huge pages for exact allocations.
>>>> c) Doesn't leak implementation details into the caller.
>>>>
>>> I am looking at it.
>>
> I am testing this:
>
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index dcf23d16a308..cdf3bda6313d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1161,18 +1161,14 @@ find_vmap_lowest_match(unsigned long size,
>  {
>  	struct vmap_area *va;
>  	struct rb_node *node;
> -	unsigned long length;
>  
>  	/* Start from the root. */
>  	node = free_vmap_area_root.rb_node;
>  
> -	/* Adjust the search size for alignment overhead. */
> -	length = size + align - 1;
> -
>  	while (node) {
>  		va = rb_entry(node, struct vmap_area, rb_node);
>  
> -		if (get_subtree_max_size(node->rb_left) >= length &&
> +		if (get_subtree_max_size(node->rb_left) >= size &&
>  				vstart < va->va_start) {
>  			node = node->rb_left;
>  		} else {
> @@ -1182,9 +1178,9 @@ find_vmap_lowest_match(unsigned long size,
>  			/*
>  			 * Does not make sense to go deeper towards the right
>  			 * sub-tree if it does not have a free block that is
> -			 * equal or bigger to the requested search length.
> +			 * equal or bigger to the requested search size.
>  			 */
> -			if (get_subtree_max_size(node->rb_right) >= length) {
> +			if (get_subtree_max_size(node->rb_right) >= size) {
>  				node = node->rb_right;
>  				continue;
>  			}
> @@ -1192,16 +1188,30 @@ find_vmap_lowest_match(unsigned long size,
>  			/*
>  			 * OK. We roll back and find the first right sub-tree,
>  			 * that will satisfy the search criteria. It can happen
> -			 * only once due to "vstart" restriction.
> +			 * due to "vstart" restriction or an alignment overhead.
>  			 */
>  			while ((node = rb_parent(node))) {
>  				va = rb_entry(node, struct vmap_area, rb_node);
>  				if (is_within_this_va(va, size, align, vstart))
>  					return va;
>  
> -				if (get_subtree_max_size(node->rb_right) >= length &&
> +				if (get_subtree_max_size(node->rb_right) >= size &&
>  						vstart <= va->va_start) {
> +					/*
> +					 * Shift the vstart forward, so we do not loop over same
> +					 * sub-tree force and back. The aim is to continue tree
> +					 * scanning toward higher addresses cutting off previous
> +					 * ones.
> +					 *
> +					 * Please note we update vstart with parent's start address
> +					 * adding "1" because we do not want to enter same sub-tree
> +					 * one more time after it has already been inspected and no
> +					 * suitable free block found there.
> +					 */
> +					vstart = va->va_start + 1;
>  					node = node->rb_right;
> +
> +					/* Scan a sub-tree rooted at "node". */
>  					break;
>  				}
>  			}
>
>
> so it handles any alignment and is accurate when it comes to searching for
> the lowest free block when the user wants to allocate with a special
> alignment value.
>
> Could you please help and test the KASAN use case?

Sure, I'll give it a spin tomorrow! Thanks!

-- 
Thanks,

David / dhildenb