Subject: Re: block: DMA alignment of IO buffer allocated from slab
From: Bart Van Assche
To: Andrey Ryabinin, Ming Lei, Vitaly Kuznetsov
Cc: Christoph Hellwig, Ming Lei, linux-block, linux-mm, Linux FS Devel,
 "open list:XFS FILESYSTEM", Dave Chinner, Linux Kernel Mailing List,
 Jens Axboe, Christoph Lameter, Linus Torvalds, Greg Kroah-Hartman
Date: Mon, 24 Sep 2018 08:08:26 -0700
Message-ID: <1537801706.195115.7.camel@acm.org>
In-Reply-To: <12eee877-affa-c822-c9d5-fda3aa0a50da@virtuozzo.com>
References: <20180920063129.GB12913@lst.de>
 <87h8ij0zot.fsf@vitty.brq.redhat.com>
 <20180923224206.GA13618@ming.t460p>
 <38c03920-0fd0-0a39-2a6e-70cd8cb4ef34@virtuozzo.com>
 <20a20568-5089-541d-3cee-546e549a0bc8@acm.org>
 <12eee877-affa-c822-c9d5-fda3aa0a50da@virtuozzo.com>

On Mon, 2018-09-24 at 17:43 +0300, Andrey Ryabinin wrote:
> 
> On 09/24/2018 05:19 PM, Bart Van Assche wrote:
> > On 9/24/18 2:46 AM, Andrey Ryabinin wrote:
> > > On 09/24/2018 01:42 AM, Ming Lei wrote:
> > > > On Fri, Sep 21, 2018 at 03:04:18PM +0200, Vitaly Kuznetsov wrote:
> > > > > Christoph Hellwig <hch@lst.de> writes:
> > > > > 
> > > > > > On Wed, Sep 19, 2018 at 05:15:43PM +0800, Ming Lei wrote:
> > > > > > > 1) does the kmalloc-N slab guarantee to return N-byte aligned
> > > > > > > buffers? If yes, is it a stable rule?
> > > > > > 
> > > > > > This is the assumption in a lot of the kernel, so I think if
> > > > > > something breaks this we are in a lot of pain.
> > > 
> > > This assumption is not correct, and it has not been correct since at
> > > least the beginning of the git era, which predates the SLUB allocator.
> > > With CONFIG_DEBUG_SLAB=y, just as with CONFIG_SLUB_DEBUG_ON=y,
> > > kmalloc() returns 'unaligned' objects. The guaranteed
> > > arch-and-config-independent alignment of the kmalloc() result is
> > > sizeof(void *).
> 
> Correction: sizeof(unsigned long long), so an 8-byte alignment guarantee.
> 
> > > 
> > > If objects have a higher alignment requirement, they can be allocated
> > > from a specifically created kmem_cache.
> > 
> > Hello Andrey,
> > 
> > The above confuses me. Can you explain to me why the following comment
> > is present in include/linux/slab.h?
> > 
> > /*
> >  * kmalloc and friends return ARCH_KMALLOC_MINALIGN aligned
> >  * pointers. kmem_cache_alloc and friends return ARCH_SLAB_MINALIGN
> >  * aligned pointers.
> >  */
> 
> ARCH_KMALLOC_MINALIGN - the guaranteed alignment of the kmalloc() result.
> ARCH_SLAB_MINALIGN - the guaranteed alignment of the kmem_cache_alloc()
> result.
> 
> If the 'align' argument passed into kmem_cache_create() is bigger than
> ARCH_SLAB_MINALIGN, then kmem_cache_alloc() from that cache should return
> 'align'-aligned pointers.

Hello Andrey,

Do you realize that the comment from <linux/slab.h> quoted above contradicts
what you wrote about kmalloc() if ARCH_KMALLOC_MINALIGN >
sizeof(unsigned long long)?
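As an aside, to make sure we are talking about the same mechanism: below is a
minimal, untested sketch of the approach you described, namely creating a
dedicated kmem_cache when an object needs stronger alignment than kmalloc()
guarantees. The struct foo type, the "foo_cache" name and the 512-byte
alignment are made up for illustration:

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct foo {
            char buf[512];
    };

    static struct kmem_cache *foo_cache;

    static int foo_cache_init(void)
    {
            /*
             * Request 512-byte alignment explicitly via the 'align'
             * argument instead of relying on the kmalloc-512 slab
             * returning 512-byte aligned objects.
             */
            foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
                                          512, 0, NULL);
            return foo_cache ? 0 : -ENOMEM;
    }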
Additionally, shouldn't CONFIG_DEBUG_SLAB=y and CONFIG_SLUB_DEBUG_ON=y
provide the same guarantees as with debugging disabled, namely that kmalloc()
buffers are aligned on ARCH_KMALLOC_MINALIGN boundaries? Since buffers
allocated with kmalloc() are often used for DMA, how else is DMA expected to
work?

Thanks,

Bart.
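P.S. A minimal, untested sketch of the kind of check I have in mind before a
kmalloc() buffer is handed to the DMA API (the helper name alloc_dma_buf()
is made up for illustration):

    #include <linux/kernel.h>
    #include <linux/slab.h>

    static void *alloc_dma_buf(size_t size)
    {
            void *buf = kmalloc(size, GFP_KERNEL);

            /*
             * If, as claimed earlier in this thread, CONFIG_DEBUG_SLAB=y
             * or CONFIG_SLUB_DEBUG_ON=y can yield pointers that are less
             * aligned than ARCH_KMALLOC_MINALIGN, warn here so that the
             * misalignment is noticed before the buffer is mapped for DMA.
             */
            if (buf)
                    WARN_ON(!IS_ALIGNED((unsigned long)buf,
                                        ARCH_KMALLOC_MINALIGN));
            return buf;
    }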