From: Toke Høiland-Jørgensen
To: mbizon@freebox.fr, Linus Torvalds
Cc: Robin Murphy, Christoph Hellwig, Oleksandr Natalenko, Halil Pasic,
 Marek Szyprowski, Kalle Valo, "David S. Miller", Jakub Kicinski,
 Paolo Abeni, Olha Cherevyk, iommu, linux-wireless, Netdev,
 Linux Kernel Mailing List, Greg Kroah-Hartman, stable
Subject: Re: [REGRESSION] Recent swiotlb DMA_FROM_DEVICE fixes break ath9k-based AP
Date: Fri, 25 Mar 2022 17:25:21 +0100
Message-ID: <87a6de80em.fsf@toke.dk>
In-Reply-To: <31434708dcad126a8334c99ee056dcce93e507f1.camel@freebox.fr>

Maxime Bizon writes:

> On Thu, 2022-03-24 at 12:26 -0700, Linus Torvalds wrote:
>
>> It's actually very natural in that situation to flush the caches from
>> the CPU side again. And so dma_sync_single_for_device() is a fairly
>> reasonable thing to do in that situation.
>
> In the non-cache-coherent scenario, and assuming dma_map() did an
> initial cache invalidation, you can write this:
>
>     rx_buffer_complete_1(buf)
>     {
>         invalidate_cache(buf, size)
>         if (!is_ready(buf))
>             return;
>
>     }
>
> or
>
>     rx_buffer_complete_2(buf)
>     {
>         if (!is_ready(buf)) {
>             invalidate_cache(buf, size)
>             return;
>         }
>
>     }
>
> The latter is preferred for performance because dma_map() did the
> initial invalidate.
>
> Of course you could write:
>
>     rx_buffer_complete_3(buf)
>     {
>         invalidate_cache(buf, size)
>         if (!is_ready(buf)) {
>             invalidate_cache(buf, size)
>             return;
>         }
>
>     }
>
> but it's a waste of CPU cycles.
>
> So I'd be very cautious assuming sync_for_cpu() and sync_for_device()
> are both doing invalidation in existing implementations of arch DMA
> ops; implementers may have taken some liberty around the DMA API to
> avoid unnecessary cache operations (not to blame them).

I sense an implicit "and the driver can't (or shouldn't) influence this"
here, right?

> For example, looking at arch/arm/mm/dma-mapping.c, for DMA_FROM_DEVICE:
>
>     sync_single_for_device()
>       => __dma_page_cpu_to_dev()
>         => dma_cache_maint_page(op=dmac_map_area)
>           => cpu_cache.dma_map_area()
>
>     sync_single_for_cpu()
>       => __dma_page_dev_to_cpu()
>         => dma_cache_maint_page(op=dmac_unmap_area)
>           => cpu_cache.dma_unmap_area()
>
> dma_map_area() always does a cache invalidate.
>
> But for a couple of CPU variants, dma_unmap_area() is a no-op, so
> sync_for_cpu() does nothing.
>
> Toke's patch will break ath9k on those platforms (mostly silent
> breakage: RX corruption leading to bad performance).

Okay, so that would be bad, obviously. So if I'm reading you correctly
(cf my question above), we can't fix this properly from the driver side,
and we should go with the partial SWIOTLB revert instead?

-Toke