From: Christoph Hellwig
To: Claire Chang
Cc: Christoph Hellwig, Rob Herring, mpe@ellerman.id.au, Joerg Roedel,
    Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
    boris.ostrovsky@oracle.com, jgross@suse.com, Marek Szyprowski,
    benh@kernel.crashing.org, paulus@samba.org,
    "list@263.net:IOMMU DRIVERS", sstabellini@kernel.org, Robin Murphy,
    grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding,
    mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
    Greg KH, Saravana Kannan, "Rafael J. Wysocki",
    heikki.krogerus@linux.intel.com, Andy Shevchenko, Randy Dunlap,
    Dan Williams, Bartosz Golaszewski, linux-devicetree, lkml,
    linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
    Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs@redhat.com,
    Bjorn Helgaas, chris@chris-wilson.co.uk, Daniel Vetter,
    airlied@linux.ie, dri-devel@lists.freedesktop.org,
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
    Jianxiong Gao, joonas.lahtinen@linux.intel.com,
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
    matthew.auld@intel.com, rodrigo.vivi@intel.com,
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v11 09/12] swiotlb: Add restricted DMA alloc/free support
Date: Wed, 16 Jun 2021 07:18:39 +0200
Message-ID: <20210616051839.GA27982@lst.de>
References: <20210616035240.840463-1-tientzu@chromium.org>
 <20210616035240.840463-10-tientzu@chromium.org>
 <20210616045918.GA27537@lst.de>

On Wed, Jun 16, 2021 at 01:10:02PM +0800, Claire Chang wrote:
> On Wed, Jun 16, 2021 at 12:59 PM Christoph Hellwig wrote:
> >
> > On Wed, Jun 16, 2021 at 12:04:16PM +0800, Claire Chang wrote:
> > > Just noticed that after propagating swiotlb_force setting into
> > > io_tlb_default_mem->force, the memory allocation behavior for
> > > swiotlb_force will change (i.e. always skipping arch_dma_alloc and
> > > dma_direct_alloc_from_pool).
> >
> > Yes, I think we need to split a "use_for_alloc" flag from the force flag.
>
> How about splitting is_dev_swiotlb_force into is_swiotlb_force_bounce
> (io_tlb_mem->force_bounce) and is_swiotlb_force_alloc
> (io_tlb_mem->force_alloc)?

Yes, something like that.
I'd probably not use force for the alloc side given that we otherwise never allocate from the swiotlb buffer.
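
FWIW, a rough sketch of how that split could look (the helper and field
names below are just assumptions following the naming discussed above,
not the actual patch):

	/*
	 * Sketch only: assumes struct io_tlb_mem grows a force_bounce and a
	 * for_alloc flag, and that restricted-DMA devices carry a
	 * dev->dma_io_tlb_mem pointer as in this series.
	 */
	static inline bool is_swiotlb_force_bounce(struct device *dev)
	{
		struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

		/* bounce all DMA mappings through the restricted pool */
		return mem && mem->force_bounce;
	}

	static inline bool is_swiotlb_for_alloc(struct device *dev)
	{
		struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

		/* route dma_alloc_coherent() through the pool; no "force" */
		return mem && mem->for_alloc;
	}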