From: Ard Biesheuvel
Date: Thu, 21 Apr 2022 10:05:49 +0200
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
To: Christoph Hellwig
Cc: Arnd Bergmann, Catalin Marinas, Herbert Xu, Will Deacon, Marc Zyngier,
 Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
 Linux Memory Management List, Linux ARM, Linux Kernel Mailing List,
 "David S. Miller"
List-ID: linux-kernel@vger.kernel.org

On Thu, 21 Apr 2022 at 09:20, Christoph Hellwig wrote:
>
> Btw, there is another option: Most real systems already require having
> swiotlb to bounce buffer in some cases. We could simply force bounce
> buffering in the dma mapping code for too small or not properly aligned
> transfers and just decrease the dma alignment.

Strongly agree.

As I pointed out before, we'd only need to do this for misaligned,
non-cache coherent inbound DMA, and we'd only have to worry about
performance regressions, not data corruption issues. And given the
natural alignment of block I/O, and the fact that network drivers
typically allocate and map their own RX buffers (which means they could
reasonably be fixed if a performance bottleneck pops up), I think the
risk for showstopper performance regressions is likely to be acceptable.