From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Uladzislau Rezki
Cc: LKML, Ping Fang, Andrew Morton, Roman Gushchin, Michal Hocko,
 Oscar Salvador, Linux Memory Management List
Subject: Re: [PATCH v1] mm/vmalloc: fix exact allocations with an alignment > 1
Date: Wed, 29 Sep 2021 17:05:08 +0200
Message-ID: <689b7c24-623d-c01e-6c0f-ad430f1fa3ae@redhat.com>

On 29.09.21 16:49, Uladzislau Rezki wrote:
> On Wed, Sep 29, 2021 at 4:40 PM David Hildenbrand wrote:
>>
>> On 29.09.21 16:30, Uladzislau Rezki wrote:
>>>>
>>>> So the idea is that once we run into a dead end because we took a left
>>>> subtree, we roll back to the next possible right subtree and try again.
>>>> If we run into another dead end, we repeat ... thus, this can now happen
>>>> more than once.
>>>>
>>>> I assume the only implication is that this can now be slower in some
>>>> corner cases with larger alignments, because it might take longer to find
>>>> something suitable. Fair enough.
>>>>
>>> Yep, your understanding of the tree traversal is correct. If no
>>> suitable block is found in the left sub-tree, we roll back and check
>>> the right one. So the scanning can happen more than one time.
>>>
>>> I did some performance analysis using the vmalloc test suite to figure
>>> out the performance loss for allocations with a specific alignment. On
>>> that synthetic test I see approx. 30% degradation:
>>
>> How realistic is that test case? I assume most alignments we're dealing
>> with are:
>> * 1/PAGE_SIZE
>> * huge page size (for automatic huge page placing)
>>
> Well, that is a synthetic test. Most of the alignments are 1 or PAGE_SIZE.
> There are users which use the internal API, where you can specify the
> alignment you want, but those are mainly KASAN, module alloc, etc.
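Just to make sure we mean the same thing by "rollback", I read the
proposal as roughly the following. This is a deliberately simplified,
self-contained sketch with invented names (struct free_block,
lowest_match(), ...) -- not the actual find_vmap_lowest_match() code,
which additionally honors vstart and works on the augmented rbtree:

#include <stddef.h>

/*
 * A BST of free blocks ordered by start address; each node caches the
 * largest block size anywhere in its subtree, itself included
 * (subtree_max). Note that subtree_max knows nothing about alignment.
 */
struct free_block {
	unsigned long start, size;	/* free range [start, start + size) */
	unsigned long subtree_max;	/* largest size in this subtree */
	struct free_block *left, *right, *parent;
};

/* Does an allocation of @size fit into @b? (@align: a power of two) */
static int fits(const struct free_block *b, unsigned long size,
		unsigned long align)
{
	unsigned long aligned = (b->start + align - 1) & ~(align - 1);

	return aligned + size <= b->start + b->size;
}

static struct free_block *lowest_match(struct free_block *node,
				       unsigned long size,
				       unsigned long align)
{
	while (node) {
		/* Prefer lower addresses: descend left while the left
		 * subtree looks big enough by size alone. */
		if (node->left && node->left->subtree_max >= size) {
			node = node->left;
			continue;
		}

		if (fits(node, size, align))
			return node;

		if (node->right && node->right->subtree_max >= size) {
			node = node->right;
			continue;
		}

		/*
		 * Dead end: size alone suggested a fit, but the alignment
		 * ruled out every candidate down here. Roll back to the
		 * first ancestor we entered from the left and retry with
		 * that ancestor and its right subtree.
		 */
		struct free_block *prev = node;

		for (node = node->parent; node;
		     prev = node, node = node->parent) {
			if (node->left != prev)
				continue;	/* already covered */
			if (fits(node, size, align))
				return node;
			if (node->right &&
			    node->right->subtree_max >= size) {
				node = node->right;
				break;
			}
		}
	}

	return NULL;	/* no suitable free block */
}

The 30% you measured on the synthetic test would then be the price of
that rollback walk: with align > PAGE_SIZE, the subtree_max pruning can
no longer rule out dead ends reliably, so several subtrees may get
visited.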
>>>
>>> 2.225 microseconds vs 1.496 microseconds. That time includes both the
>>> vmalloc() and vfree() calls. I do not consider it a big degradation,
>>> but on the other hand we can still adjust the search length for
>>> alignments > one page:
>>>
>>> # add it on top of the previous proposal and search length instead of size
>>> length = align > PAGE_SIZE ? size + align : size;
>>
>> That will not allow placing huge pages in the case of KASAN. And I
>> consider that more important than optimizing a synthetic test :) My 2 cents.
>>
> Could you please be more specific? I mean, how is it connected with huge
> page mappings? Huge pages are those which have an order > 0. Or do you
> mean that special alignments are needed for mapping huge pages?

Let me try to clarify:

KASAN does an exact allocation when onlining a memory block;
__vmalloc_node_range() will try placing huge pages first, increasing the
alignment to, e.g., "1 << PMD_SHIFT".

If we increase the search length in find_vmap_lowest_match(), that
search will fail if the exact allocation is surrounded by other
allocations. In that case, we won't place a huge page although we could
-- because find_vmap_lowest_match() would be imprecise for alignments >
PAGE_SIZE.

Memory blocks we online/offline on x86 are at least 128MB. The KASAN
"overhead" we have to allocate is 1/8 of that -- 16MB, so essentially 8
huge pages.

__vmalloc_node_range() will increase the alignment to 2MB to try placing
huge pages first. find_vmap_lowest_match() will then search, within the
given exact 16MB area, for an 18MB area (size + align), which won't
work. So __vmalloc_node_range() will fall back to the original PAGE_SIZE
alignment and shift=PAGE_SHIFT.

__vmalloc_area_node() will consequently set the page order (via
set_vm_area_page_order()) to 0 -- small pages.

Does that make sense or am I missing something?

-- 
Thanks,

David / dhildenb
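P.S.: For the archive, the arithmetic above as a tiny standalone
calculation. The constants are illustrative only (the x86 example with a
128 MiB memory block and PMD-sized huge pages); this is not kernel code:

#include <stdio.h>

int main(void)
{
	unsigned long block  = 128UL << 20;	/* memory block: 128 MiB  */
	unsigned long size   = block / 8;	/* KASAN shadow:  16 MiB  */
	unsigned long align  = 2UL << 20;	/* PMD-sized:      2 MiB  */
	unsigned long length = size + align;	/* enlarged search length */

	/* The exact hole reserved for the shadow is only `size` bytes,
	 * so an enlarged search can never succeed inside it: */
	printf("need %lu MiB, hole is %lu MiB -> %s\n",
	       length >> 20, size >> 20,
	       length <= size ? "huge pages" : "fallback to 4k pages");
	return 0;
}

This prints "need 18 MiB, hole is 16 MiB -> fallback to 4k pages", which
is exactly the silent huge-page regression described above.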