Date: Fri, 30 Aug 2019 10:29:35 +1000
From: Dave Chinner <david@fromorbit.com>
To: Vlastimil Babka
Cc: Matthew Wilcox, Christopher Lameter, Andrew Morton,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Pekka Enberg,
	David Rientjes, Ming Lei, "Darrick J. Wong",
Wong" , Christoph Hellwig , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, James Bottomley , linux-btrfs@vger.kernel.org Subject: Re: [PATCH v2 2/2] mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two) Message-ID: <20190830002935.GX1119@dread.disaster.area> References: <20190826111627.7505-1-vbabka@suse.cz> <20190826111627.7505-3-vbabka@suse.cz> <0100016cd98bb2c1-a2af7539-706f-47ba-a68e-5f6a91f2f495-000000@email.amazonses.com> <20190828194607.GB6590@bombadil.infradead.org> <20190828222422.GL1119@dread.disaster.area> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.10.1 (2018-07-13) X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=FNpr/6gs c=1 sm=1 tr=0 a=YO9NNpcXwc8z/SaoS+iAiA==:117 a=YO9NNpcXwc8z/SaoS+iAiA==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=kj9zAlcOel0A:10 a=FmdZ9Uzk2mMA:10 a=7-415B0cAAAA:8 a=3jipp7y0vy340d0hbX8A:9 a=CjuIK1q_8ugA:10 a=biEYGPWJfzWAr4FL6Ov7:22 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Thu, Aug 29, 2019 at 09:56:13AM +0200, Vlastimil Babka wrote: > On 8/29/19 12:24 AM, Dave Chinner wrote: > > On Wed, Aug 28, 2019 at 12:46:08PM -0700, Matthew Wilcox wrote: > >> On Wed, Aug 28, 2019 at 06:45:07PM +0000, Christopher Lameter wrote: > >>> I still think implicit exceptions to alignments are a bad idea. Those need > >>> to be explicity specified and that is possible using kmem_cache_create(). > >> > >> I swear we covered this last time the topic came up, but XFS would need > >> to create special slab caches for each size between 512 and PAGE_SIZE. > >> Potentially larger, depending on whether the MM developers are willing to > >> guarantee that kmalloc(PAGE_SIZE * 2, GFP_KERNEL) will return a PAGE_SIZE > >> aligned block of memory indefinitely. > > > > Page size alignment of multi-page heap allocations is ncessary. The > > current behaviour w/ KASAN is to offset so a 8KB allocation spans 3 > > pages and is not page aligned. That causes just as much in way > > of alignment problems as unaligned objects in multi-object-per-page > > slabs. > > Ugh, multi-page (power of two) allocations *at the page allocator level* > simply have to be aligned, as that's how the buddy allocator has always > worked, and it would be madness to try relax that guarantee and require > an explicit flag at this point. The kmalloc wrapper with SLUB will pass > everything above 8KB directly to the page allocator, so that's fine too. > 4k and 8k are the only (multi-)page sizes still managed as SLUB objects. On a 4kB page size box, yes. On a 64kB page size system, 4/8kB allocations are still sub-page objects and will have alignment issues. Hence right now we can't assume a 4/8/16/32kB allocation will be page size aligned anywhere, because they are heap allocations on 64kB page sized machines. > I would say that these sizes are the most striking example that it's > wrong not to align them without extra flags or special API variant. Yup, just pointing out that they aren't guaranteed alignment right now on x86-64. > > As I said in the lastest discussion of this problem on XFS (pmem > > devices w/ KASAN enabled), all we -need- is a GFP flag that tells the > > slab allocator to give us naturally aligned object or fail if it > > can't. I don't care how that gets implemented (e.g. 
> > As I said in the latest discussion of this problem on XFS (pmem
> > devices w/ KASAN enabled), all we -need- is a GFP flag that tells
> > the slab allocator to give us a naturally aligned object or fail if
> > it can't. I don't care how that gets implemented (e.g. another set
> > of heap slabs like the -rcl slabs), I just don't want every high
> > level
>
> Given alignment is orthogonal to -rcl and dma-, would that be another
> three sets? Or do we assume that dma- would want it always, and
> complicate the rules further? Funnily enough, SLOB would be the
> simplest case here.

Not my problem. :)

All I'm pointing out is that the minimum functionality we require is
specifying individual allocations as needing alignment. I've just
implemented that API in XFS, so whatever happens in the allocation
infrastructure from this point onwards is really just implementation
optimisation for us now....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
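
For context, a per-allocation alignment guarantee of the kind described
above boils down to an allocate, verify, fall-back pattern. The sketch
below illustrates that pattern only; the function name and flag choices
are assumptions, not the actual XFS code:

	#include <linux/slab.h>
	#include <linux/kernel.h>
	#include <linux/vmalloc.h>

	/*
	 * Hypothetical sketch: try the heap first, verify natural
	 * alignment of the returned object (size is assumed to be a
	 * power of two), and fall back to vmalloc(), whose result is
	 * always page aligned.  Callers free the buffer with kvfree(),
	 * which handles both cases.
	 */
	static void *alloc_aligned_io_buf(size_t size)
	{
		void *p = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

		if (p) {
			if (IS_ALIGNED((unsigned long)p, size))
				return p;
			kfree(p);
		}
		return vmalloc(size);
	}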