Date: Mon, 30 Sep 2019 12:32:33 +0300
From: "Kirill A. Shutemov"
To: Michal Hocko
Cc: Vlastimil Babka, Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter, Pekka Enberg, David Rientjes, Ming Lei, Dave Chinner, Matthew Wilcox, "Darrick J. Wong", Christoph Hellwig, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, James Bottomley, linux-btrfs@vger.kernel.org, Roman Gushchin, Johannes Weiner
Subject: Re: [PATCH v2 2/2] mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)
Message-ID: <20190930093233.jlypzgmkf4pplgso@box.shutemov.name>
References: <20190826111627.7505-1-vbabka@suse.cz> <20190826111627.7505-3-vbabka@suse.cz> <20190930092334.GA25306@dhcp22.suse.cz>
In-Reply-To: <20190930092334.GA25306@dhcp22.suse.cz>

On Mon, Sep 30, 2019 at 11:23:34AM +0200, Michal Hocko wrote:
> On Mon 23-09-19 18:36:32, Vlastimil Babka wrote:
> > On 8/26/19 1:16 PM, Vlastimil Babka wrote:
> > > In most configurations, kmalloc() happens to return naturally aligned
> > > (i.e. aligned to the block size itself) blocks for power-of-two sizes.
> > > That means some kmalloc() users might unknowingly rely on that
> > > alignment, until stuff breaks when the kernel is built with e.g.
> > > CONFIG_SLUB_DEBUG or CONFIG_SLOB, and blocks stop being aligned.
> > > Then developers have to devise workarounds such as their own kmem
> > > caches with specified alignment [1], which is not always practical,
> > > as recently evidenced in [2].
> > >
> > > The topic was discussed at LSF/MM 2019 [3]. Adding a
> > > 'kmalloc_aligned()' variant would not help code that unknowingly
> > > relies on the implicit alignment. For slab implementations it would
> > > require either creating more kmalloc caches, or allocating a larger
> > > size and giving back only part of it. That would be wasteful,
> > > especially with a generic alignment parameter (in contrast to a fixed
> > > alignment to size).
> > >
> > > Ideally we should provide mm users what they need without difficult
> > > workarounds or their own reimplementations, so let's make the
> > > kmalloc() alignment to size explicitly guaranteed for power-of-two
> > > sizes under all configurations. What does this mean for the three
> > > available allocators?
> > >
> > > * SLAB object layout happens to be mostly unchanged by the patch.
> > >   The implicitly provided alignment could be compromised with
> > >   CONFIG_DEBUG_SLAB due to redzoning; however, SLAB disables
> > >   redzoning for caches with alignment larger than unsigned long long.
> > >   In practice, on at least x86 this includes kmalloc caches, as they
> > >   use cache-line alignment, which is larger than that. Still, this
> > >   patch ensures alignment on all arches and cache sizes.
> > >
> > > * SLUB layout is also unchanged unless redzoning is enabled through
> > >   CONFIG_SLUB_DEBUG and a boot parameter for the particular kmalloc
> > >   cache. With this patch, explicit alignment is guaranteed with
> > >   redzoning as well. This will result in more memory being wasted,
> > >   but that should be acceptable in a debugging scenario.
> > >
> > > * SLOB has no implicit alignment, so this patch adds it explicitly
> > >   for kmalloc(). The potential downside is increased fragmentation.
> > > While pathological allocation scenarios are certainly possible, in
> > > my testing, after booting an x86_64 kernel+userspace with virtme,
> > > around 16MB of memory was consumed by slab pages both before and
> > > after the patch, with the difference in the noise.
> > >
> > > [1] https://lore.kernel.org/linux-btrfs/c3157c8e8e0e7588312b40c853f65c02fe6c957a.1566399731.git.christophe.leroy@c-s.fr/
> > > [2] https://lore.kernel.org/linux-fsdevel/20190225040904.5557-1-ming.lei@redhat.com/
> > > [3] https://lwn.net/Articles/787740/
> > >
> > > Signed-off-by: Vlastimil Babka
> >
> > So if anyone thinks this is a good idea, please express it (preferably
> > in a formal way such as Acked-by), otherwise it seems the patch will
> > be dropped (due to a private NACK, apparently).
>
> Sigh.
>
> Existing code working around the lack of an alignment guarantee just
> shows that this is necessary. And there wasn't any real technical
> argument against, except for a highly theoretical optimization/new
> allocator that would be constrained by the guarantee.
>
> Therefore
> Acked-by: Michal Hocko

Agreed.

Acked-by: Kirill A. Shutemov

-- 
 Kirill A. Shutemov