Subject: Re: block: DMA alignment of IO buffer allocated from slab
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
To: Ming Lei, Vitaly Kuznetsov
Cc: Christoph Hellwig, Ming Lei, linux-block, linux-mm, Linux FS Devel, "open list:XFS FILESYSTEM", Dave Chinner, Linux Kernel Mailing List, Jens Axboe, Christoph Lameter, Linus Torvalds, Greg Kroah-Hartman
Date: Mon, 24 Sep 2018 12:46:27 +0300
Message-ID: <38c03920-0fd0-0a39-2a6e-70cd8cb4ef34@virtuozzo.com>
In-Reply-To: <20180923224206.GA13618@ming.t460p>
References: <20180920063129.GB12913@lst.de> <87h8ij0zot.fsf@vitty.brq.redhat.com> <20180923224206.GA13618@ming.t460p>

On 09/24/2018 01:42 AM, Ming Lei wrote:
> On Fri, Sep 21, 2018 at 03:04:18PM +0200, Vitaly Kuznetsov wrote:
>> Christoph Hellwig writes:
>>
>>> On Wed, Sep 19, 2018 at 05:15:43PM +0800, Ming Lei wrote:
>>>> 1) does kmalloc-N slab guarantee to return N-byte aligned buffer? If
>>>> yes, is it a stable rule?
>>>
>>> This is the assumption in a lot of the kernel, so I think if somethings
>>> breaks this we are in a lot of pain.

This assumption is not correct, and it has not been correct since at least the
beginning of the git era, which predates even the SLUB allocator.
With CONFIG_DEBUG_SLAB=y, just as with CONFIG_SLUB_DEBUG_ON=y, kmalloc() returns
'unaligned' objects.
The only arch- and config-independent alignment guarantee for a kmalloc() result
is sizeof(void *). Objects with a higher alignment requirement can be allocated
from a specifically created kmem_cache (a minimal sketch is appended below).

>
> Even some of buffer address is _not_ L1 cache size aligned, this way is
> totally broken wrt. DMA to/from this buffer.
>
> So this issue has to be fixed in slab debug side.
>

Well, this would definitely increase memory consumption. Many (probably most)
kmalloc() users don't need such alignment, so why should they pay the cost?
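
P.S. For illustration only (this sketch was not part of the original exchange),
here is roughly what the dedicated-cache approach looks like. "foo_io_buf" and
the surrounding names are made up, and the 512-byte size is an arbitrary
example:

#include <linux/cache.h>
#include <linux/module.h>
#include <linux/slab.h>

/* Hypothetical 512-byte IO buffer that must be L1-cache-line aligned for DMA. */
struct foo_io_buf {
	char data[512];
};

static struct kmem_cache *foo_io_buf_cache;

static int __init foo_init(void)
{
	/*
	 * Ask the slab allocator for the required alignment explicitly.
	 * Unlike plain kmalloc(), whose only universal guarantee is
	 * sizeof(void *), an explicit 'align' argument is honoured even
	 * when slab debugging (CONFIG_DEBUG_SLAB / CONFIG_SLUB_DEBUG_ON)
	 * adds red zones and other metadata around objects.
	 */
	foo_io_buf_cache = kmem_cache_create("foo_io_buf",
					     sizeof(struct foo_io_buf),
					     L1_CACHE_BYTES,	/* align */
					     0,			/* flags */
					     NULL);		/* ctor */
	if (!foo_io_buf_cache)
		return -ENOMEM;
	return 0;
}

static void __exit foo_exit(void)
{
	kmem_cache_destroy(foo_io_buf_cache);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");

A buffer would then come from kmem_cache_alloc(foo_io_buf_cache, GFP_KERNEL)
rather than kmalloc(512, GFP_KERNEL), so only the users that actually need the
stricter alignment pay the padding cost.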