Date: Mon, 2 Aug 2021 14:50:37 +0100
From: Will Deacon
To: David Stevens
Cc: Robin Murphy, Joerg Roedel, Lu Baolu, Tom Murphy,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/4] dma-iommu: fix arch_sync_dma for map with swiotlb
Message-ID: <20210802135037.GD28547@willie-the-truck>
References: <20210709033502.3545820-1-stevensd@google.com>
 <20210709033502.3545820-3-stevensd@google.com>
 <20210802134059.GC28547@willie-the-truck>
In-Reply-To: <20210802134059.GC28547@willie-the-truck>

On Mon, Aug 02, 2021 at 02:40:59PM +0100, Will Deacon wrote:
> On Fri, Jul 09, 2021 at 12:35:00PM +0900, David Stevens wrote:
> > From: David Stevens
> >
> > When calling arch_sync_dma, we need to pass it the memory that's
> > actually being used for dma. When using swiotlb bounce buffers, this is
> > the bounce buffer. Move arch_sync_dma into the __iommu_dma_map_swiotlb
> > helper, so it can use the bounce buffer address if necessary. This also
> > means it is no longer necessary to call iommu_dma_sync_sg_for_device in
> > iommu_dma_map_sg for untrusted devices.
> >
> > Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
> > Signed-off-by: David Stevens
> > ---
> >  drivers/iommu/dma-iommu.c | 16 +++++++---------
> >  1 file changed, 7 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index eac65302439e..e79e274d2dc5 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -574,6 +574,9 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
> >  		memset(padding_start, 0, padding_size);
> >  	}
> >
> > +	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> > +		arch_sync_dma_for_device(phys, org_size, dir);
>
> I think this relies on the swiotlb buffers residing in the linear mapping
> (i.e. where phys_to_virt() is reliable), which doesn't look like a safe
> assumption to me.

No, sorry, ignore me here. I misread swiotlb_bounce(), so I think your
change is good.

As an aside, it strikes me that we'd probably be better off using
uncacheable bounce buffers for non-coherent devices so we could avoid all
this maintenance entirely.

Will
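
For illustration, a minimal userspace sketch in plain C (not the kernel
code; map_with_bounce() and sync_for_device() are invented stand-ins for
__iommu_dma_map_swiotlb() and arch_sync_dma_for_device()) modelling the
point of the fix: when a bounce buffer is in use, the device only ever
sees the bounce copy, so the per-buffer maintenance has to be applied to
the bounce buffer's address rather than the caller's original buffer.

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 64

static char orig[BUF_SIZE];       /* caller's buffer ("phys" in the patch)  */
static char bounce[BUF_SIZE];     /* stand-in for the swiotlb slot          */
static const void *last_synced;   /* which address was "synced" for device  */

/* Stand-in for arch_sync_dma_for_device(): just record what was synced. */
static void sync_for_device(const void *addr, size_t size)
{
	(void)size;
	last_synced = addr;
}

/* Map for DMA_TO_DEVICE through the bounce buffer, then sync. */
static void *map_with_bounce(const void *src, size_t size, bool sync_bounce)
{
	memcpy(bounce, src, size);                      /* swiotlb copy-in  */
	sync_for_device(sync_bounce ? (void *)bounce : src, size);
	return bounce;                                  /* what device sees */
}

int main(void)
{
	memset(orig, 0xaa, sizeof(orig));

	/* Old behaviour: maintenance applied to the original buffer. */
	void *dev_addr = map_with_bounce(orig, BUF_SIZE, false);
	printf("old: device reads %p, but %p was synced\n",
	       dev_addr, (void *)last_synced);
	assert(last_synced != dev_addr);

	/* Fixed behaviour: maintenance applied to the bounce buffer. */
	dev_addr = map_with_bounce(orig, BUF_SIZE, true);
	printf("new: device reads %p, and %p was synced\n",
	       dev_addr, (void *)last_synced);
	assert(last_synced == dev_addr);

	return 0;
}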