From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20210422081508.3942748-1-tientzu@chromium.org>
 <20210422081508.3942748-15-tientzu@chromium.org>
 <70b895c2-4a39-bbbd-a719-5c8b6b922026@arm.com>
In-Reply-To: <70b895c2-4a39-bbbd-a719-5c8b6b922026@arm.com>
From: Claire Chang
Date: Mon, 3 May 2021 22:26:00 +0800
Subject: Re: [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available
To: Robin Murphy
Cc: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
 boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig,
 Marek Szyprowski, benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS", sstabellini@kernel.org, grant.likely@arm.com,
 xypron.glpk@gmx.de, Thierry Reding, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH, Saravana Kannan, "Rafael J. Wysocki",
 heikki.krogerus@linux.intel.com, Andy Shevchenko, Randy Dunlap,
 Dan Williams, Bartosz Golaszewski, linux-devicetree, lkml,
 linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
 Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs@redhat.com,
 Bjorn Helgaas, chris@chris-wilson.co.uk, Daniel Vetter, airlied@linux.ie,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 jani.nikula@linux.intel.com, Jianxiong Gao, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, nouveau@lists.freedesktop.org,
 rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Apr 23, 2021 at 9:46 PM Robin Murphy wrote:
>
> On 2021-04-22 09:15, Claire Chang wrote:
> > The restricted DMA pool is preferred if available.
> >
> > The restricted DMA pools provide a basic level of protection against the
> > DMA overwriting buffer contents at unexpected times. However, to protect
> > against general data leakage and system memory corruption, the system
> > needs to provide a way to lock down the memory access, e.g., MPU.
> >
> > Signed-off-by: Claire Chang
> > ---
> >  kernel/dma/direct.c | 35 ++++++++++++++++++++++++++---------
> >  1 file changed, 26 insertions(+), 9 deletions(-)
> >
> > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> > index 7a27f0510fcc..29523d2a9845 100644
> > --- a/kernel/dma/direct.c
> > +++ b/kernel/dma/direct.c
> > @@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
> >  static void __dma_direct_free_pages(struct device *dev, struct page *page,
> >  		size_t size)
> >  {
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +	if (swiotlb_free(dev, page, size))
> > +		return;
> > +#endif
> >  	dma_free_contiguous(dev, page, size);
> >  }
> >
> > @@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
> >
> >  	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
> >  					   &phys_limit);
> > -	page = dma_alloc_contiguous(dev, size, gfp);
> > +
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +	page = swiotlb_alloc(dev, size);
> > +	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> > +		__dma_direct_free_pages(dev, page, size);
> > +		page = NULL;
> > +	}
> > +#endif
> > +
> > +	if (!page)
> > +		page = dma_alloc_contiguous(dev, size, gfp);
> >  	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> >  		dma_free_contiguous(dev, page, size);
> >  		page = NULL;
> > @@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >  		gfp |= __GFP_NOWARN;
> >
> >  	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> > -	    !force_dma_unencrypted(dev)) {
> > +	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
> >  		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
> >  		if (!page)
> >  			return NULL;
> > @@ -161,8 +175,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >  	}
> >
> >  	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> > -	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > -	    !dev_is_dma_coherent(dev))
> > +	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> > +	    !is_dev_swiotlb_force(dev))
> >  		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
> >
> >  	/*
> > @@ -172,7 +186,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >  	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> >  	    !gfpflags_allow_blocking(gfp) &&
> >  	    (force_dma_unencrypted(dev) ||
> > -	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
> > +	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > +	      !dev_is_dma_coherent(dev))) &&
> > +	    !is_dev_swiotlb_force(dev))
> >  		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
> >
> >  	/* we always manually zero the memory once we are done */
> > @@ -253,15 +269,15 @@ void dma_direct_free(struct device *dev, size_t size,
> >  	unsigned int page_order = get_order(size);
> >
> >  	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> > -	    !force_dma_unencrypted(dev)) {
> > +	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
> >  		/* cpu_addr is a struct page cookie, not a kernel address */
> >  		dma_free_contiguous(dev, cpu_addr, size);
> >  		return;
> >  	}
> >
> >  	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> > -	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > -	    !dev_is_dma_coherent(dev)) {
> > +	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> > +	    !is_dev_swiotlb_force(dev)) {
> >  		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
> >  		return;
> >  	}
> > @@ -289,7 +305,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
> >  	void *ret;
> >
> >  	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> > -	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
> > +	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
> > +	    !is_dev_swiotlb_force(dev))
> >  		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
> Wait, this seems broken for non-coherent devices - in that case we need
> to return a non-cacheable address, but we can't simply fall through into
> the remapping path below in GFP_ATOMIC context. That's why we need the
> atomic pool concept in the first place :/

Sorry for the late reply. I'm not very familiar with this. I wonder if
the memory returned here must be coherent. If yes, could we say that, for
this case, one must set up another device coherent pool (shared-dma-pool)
and go with dma_alloc_from_dev_coherent()[1]?

[1] https://elixir.bootlin.com/linux/v5.12/source/kernel/dma/mapping.c#L435

> Unless I've overlooked something, we're still using the regular
> cacheable linear map address of the dma_io_tlb_mem buffer, no?
>
> Robin.
>
> >
> >  	page = __dma_direct_alloc_pages(dev, size, gfp);
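To make the shared-dma-pool idea above a bit more concrete, here is a
minimal, untested sketch (not part of the patch). The foo_probe() driver,
the carveout address, and the sizes are made up for illustration; the
APIs themselves -- dma_declare_coherent_memory() in kernel/dma/coherent.c
and the dma_alloc_from_dev_coherent() fast path that dma_alloc_attrs()
tries first (see [1]) -- are existing kernel interfaces:

/*
 * Hypothetical example: give the device its own coherent pool so that
 * atomic allocations are served from pre-mapped memory instead of
 * reaching the dma-direct remapping path. Requires
 * CONFIG_DMA_DECLARE_COHERENT; the address and sizes are placeholders.
 */
#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

static int foo_probe(struct platform_device *pdev)
{
	dma_addr_t dma_handle;
	void *vaddr;
	int rc;

	/* Carve out 1 MiB at a made-up bus address for this device. */
	rc = dma_declare_coherent_memory(&pdev->dev, 0x90000000,
					 0x90000000, SZ_1M);
	if (rc)
		return rc;

	/*
	 * dma_alloc_attrs() consults dma_alloc_from_dev_coherent() before
	 * dma-direct, so this is satisfied from the per-device pool's
	 * bitmap allocator even with GFP_ATOMIC, with no remapping done
	 * at allocation time.
	 */
	vaddr = dma_alloc_coherent(&pdev->dev, SZ_4K, &dma_handle,
				   GFP_ATOMIC);
	if (!vaddr)
		return -ENOMEM;

	/* ... program dma_handle into the device, use vaddr ... */

	dma_free_coherent(&pdev->dev, SZ_4K, vaddr, dma_handle);
	return 0;
}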
(PDT) Received: from mail-il1-f182.google.com (mail-il1-f182.google.com. [209.85.166.182]) by smtp.gmail.com with ESMTPSA id z25sm5614971iob.26.2021.05.03.07.26.22 for (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Mon, 03 May 2021 07:26:22 -0700 (PDT) Received: by mail-il1-f182.google.com with SMTP id p15so3814262iln.3 for ; Mon, 03 May 2021 07:26:22 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 1b67027b-9310-414c-b710-5a85310b8e63 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=ELcNibbwxopbe4wnzShPTb7xRMLVvYpqQ8un28QQp9U=; b=nOyrfh2thpmQagEkzKEHJkMTgRy+3FJJdT9Oo6MZndwooAl8msGQ2IZHFMeS6aJ6Md +OeRGM111zl+0t7XmNgYC8WpP2ka9OwZM9VN//prTwlXpUXb+Muztthx/dt4YfUeTqwD ene0+k2RSvchnHpwNfAnGfkLV6l7GmkYIbRvE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; bh=ELcNibbwxopbe4wnzShPTb7xRMLVvYpqQ8un28QQp9U=; b=UOhNSxIFI4CPa00XUq803vxMg2VjRi+Js/VOcw0oJYonAWOV+fn4L4w6hvHxrDQsAc S3Z34MUbu29Jn5K8v4bp8zNOu8Q0WragyJDh8lsiX8MbAOl3IdwCbtF3DvGBeZeiX/7u qpR7iRRPNccGwypVjBrKD8RNGrSL6t4Z2Mcmw0/CyBuLH1E+myot/jookHgm/Fl6oJ5T bFVK7U6Jpy6MXOR7FspMw+iSlRdgahoMHACosBS+aaKWum2qoDPeCi+/N09lFcDPFNAT nXxTz7R8l9f+efoOenwTz2yeMJR81jmQROp+eJMrvNuiOG3aKAWpqi5kslxiFtG+QUXr LrvQ== X-Gm-Message-State: AOAM532mb3um9NnV4hEVE7Mv8ZQSX/JPFDDeh+BMn+rG0yuX79WyRxwJ y5xzYYm0ehho+hZjgdmS4Jfo/zGz7E8ZSQ== X-Google-Smtp-Source: ABdhPJyb25KahPJML9LRmuSA0bTQPqmpTuPaaB0/CjLKnl05wCrQa/ybeGSbW8ptyK9nLIhnzcHosQ== X-Received: by 2002:a92:c0cf:: with SMTP id t15mr16572406ilf.117.1620051983158; Mon, 03 May 2021 07:26:23 -0700 (PDT) X-Received: by 2002:a05:6e02:f4e:: with SMTP id y14mr3397094ilj.18.1620051971892; Mon, 03 May 2021 07:26:11 -0700 (PDT) MIME-Version: 1.0 References: <20210422081508.3942748-1-tientzu@chromium.org> <20210422081508.3942748-15-tientzu@chromium.org> <70b895c2-4a39-bbbd-a719-5c8b6b922026@arm.com> In-Reply-To: <70b895c2-4a39-bbbd-a719-5c8b6b922026@arm.com> From: Claire Chang Date: Mon, 3 May 2021 22:26:00 +0800 X-Gmail-Original-Message-ID: Message-ID: Subject: Re: [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available To: Robin Murphy Cc: Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski , benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . 
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , Tomasz Figa , bskeggs@redhat.com, Bjorn Helgaas , chris@chris-wilson.co.uk, Daniel Vetter , airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, Jianxiong Gao , joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, nouveau@lists.freedesktop.org, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Content-Type: text/plain; charset="UTF-8" On Fri, Apr 23, 2021 at 9:46 PM Robin Murphy wrote: > > On 2021-04-22 09:15, Claire Chang wrote: > > The restricted DMA pool is preferred if available. > > > > The restricted DMA pools provide a basic level of protection against the > > DMA overwriting buffer contents at unexpected times. However, to protect > > against general data leakage and system memory corruption, the system > > needs to provide a way to lock down the memory access, e.g., MPU. > > > > Signed-off-by: Claire Chang > > --- > > kernel/dma/direct.c | 35 ++++++++++++++++++++++++++--------- > > 1 file changed, 26 insertions(+), 9 deletions(-) > > > > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c > > index 7a27f0510fcc..29523d2a9845 100644 > > --- a/kernel/dma/direct.c > > +++ b/kernel/dma/direct.c > > @@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size) > > static void __dma_direct_free_pages(struct device *dev, struct page *page, > > size_t size) > > { > > +#ifdef CONFIG_DMA_RESTRICTED_POOL > > + if (swiotlb_free(dev, page, size)) > > + return; > > +#endif > > dma_free_contiguous(dev, page, size); > > } > > > > @@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size, > > > > gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask, > > &phys_limit); > > - page = dma_alloc_contiguous(dev, size, gfp); > > + > > +#ifdef CONFIG_DMA_RESTRICTED_POOL > > + page = swiotlb_alloc(dev, size); > > + if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) { > > + __dma_direct_free_pages(dev, page, size); > > + page = NULL; > > + } > > +#endif > > + > > + if (!page) > > + page = dma_alloc_contiguous(dev, size, gfp); > > if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) { > > dma_free_contiguous(dev, page, size); > > page = NULL; > > @@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, > > gfp |= __GFP_NOWARN; > > > > if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && > > - !force_dma_unencrypted(dev)) { > > + !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) { > > page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO); > > if (!page) > > return NULL; > > @@ -161,8 +175,8 @@ void *dma_direct_alloc(struct device *dev, size_t size, > > } > > > > if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) && > > - !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && > > - !dev_is_dma_coherent(dev)) > > + !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) && > > + !is_dev_swiotlb_force(dev)) > > return arch_dma_alloc(dev, size, dma_handle, gfp, attrs); > > > > /* > > @@ -172,7 +186,9 @@ void *dma_direct_alloc(struct device *dev, size_t size, > > if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) && > > !gfpflags_allow_blocking(gfp) && > > (force_dma_unencrypted(dev) || > > - (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && 
!dev_is_dma_coherent(dev)))) > > + (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && > > + !dev_is_dma_coherent(dev))) && > > + !is_dev_swiotlb_force(dev)) > > return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp); > > > > /* we always manually zero the memory once we are done */ > > @@ -253,15 +269,15 @@ void dma_direct_free(struct device *dev, size_t size, > > unsigned int page_order = get_order(size); > > > > if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && > > - !force_dma_unencrypted(dev)) { > > + !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) { > > /* cpu_addr is a struct page cookie, not a kernel address */ > > dma_free_contiguous(dev, cpu_addr, size); > > return; > > } > > > > if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) && > > - !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && > > - !dev_is_dma_coherent(dev)) { > > + !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) && > > + !is_dev_swiotlb_force(dev)) { > > arch_dma_free(dev, size, cpu_addr, dma_addr, attrs); > > return; > > } > > @@ -289,7 +305,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size, > > void *ret; > > > > if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) && > > - force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp)) > > + force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) && > > + !is_dev_swiotlb_force(dev)) > > return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp); > > Wait, this seems broken for non-coherent devices - in that case we need > to return a non-cacheable address, but we can't simply fall through into > the remapping path below in GFP_ATOMIC context. That's why we need the > atomic pool concept in the first place :/ Sorry for the late reply. I'm not very familiar with this. I wonder if the memory returned here must be coherent. If yes, could we say for this case, one must set up another device coherent pool (shared-dma-pool) and go with dma_alloc_from_dev_coherent()[1]? [1] https://elixir.bootlin.com/linux/v5.12/source/kernel/dma/mapping.c#L435 > > Unless I've overlooked something, we're still using the regular > cacheable linear map address of the dma_io_tlb_mem buffer, no? > > Robin. > > > > > page = __dma_direct_alloc_pages(dev, size, gfp); > >