From: Christoph Hellwig <hch@lst.de>
To: Jonas Bonn, Stefan Kristiansson, Stafford Horne
Cc: Marek Szyprowski, Robin Murphy, Will Deacon, Mark Rutland,
	openrisc@lists.librecores.org, iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] dma-mapping: support setting memory uncached in place
Date: Thu, 7 Nov 2019 18:40:34 +0100
Message-Id: <20191107174035.13783-2-hch@lst.de>
In-Reply-To: <20191107174035.13783-1-hch@lst.de>
References: <20191107174035.13783-1-hch@lst.de>

We currently only support remapping memory as uncached through vmap
or a magic uncached segment provided by some architectures.  But there
is a simpler and much better way available on some architectures where
we can just remap the memory in place.
The advantages are:

 1) no aliasing is possible, which prevents speculating into the
    cached alias
 2) there is no need to allocate new ptes and thus no need for a
    special pre-allocated pool of memory that can be used with
    GFP_ATOMIC DMA allocations

The downside is that architectures must provide a way to set arbitrary
pages uncached in the kernel mapping, which might not be possible on
architectures that have a special implicit kernel mapping, and requires
splitting of huge page kernel mappings where they exist.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/dma-noncoherent.h |  3 +++
 kernel/dma/Kconfig              |  8 ++++++++
 kernel/dma/direct.c             | 28 ++++++++++++++++++----------
 3 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h
index e30fca1f1b12..c4be9697279a 100644
--- a/include/linux/dma-noncoherent.h
+++ b/include/linux/dma-noncoherent.h
@@ -111,4 +111,7 @@ static inline void arch_dma_prep_coherent(struct page *page, size_t size)
 void *uncached_kernel_address(void *addr);
 void *cached_kernel_address(void *addr);
 
+int arch_dma_set_uncached(void *cpu_addr, size_t size);
+void arch_dma_clear_uncached(void *cpu_addr, size_t size);
+
 #endif /* _LINUX_DMA_NONCOHERENT_H */
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 4c103a24e380..7bc0b77f1243 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -83,6 +83,14 @@ config DMA_DIRECT_REMAP
 	bool
 	select DMA_REMAP
 
+#
+# Should be selected if the architecture can remap memory from the page
+# allocator and CMA as uncached and provides the arch_dma_set_uncached and
+# arch_dma_clear_uncached helpers
+#
+config ARCH_HAS_DMA_SET_UNCACHED
+	bool
+
 config DMA_CMA
 	bool "DMA Contiguous Memory Allocator"
 	depends on HAVE_DMA_CONTIGUOUS && CMA
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index e37e7ab6e2ee..e2b46001c1b3 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -171,11 +171,8 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 		ret = dma_common_contiguous_remap(page, PAGE_ALIGN(size),
 				dma_pgprot(dev, PAGE_KERNEL, attrs),
 				__builtin_return_address(0));
-		if (!ret) {
-			dma_free_contiguous(dev, page, size);
-			return ret;
-		}
-
+		if (!ret)
+			goto out_free_pages;
 		memset(ret, 0, size);
 		goto done;
 	}
@@ -188,8 +185,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 		 * so log an error and fail.
 		 */
 		dev_info(dev, "Rejecting highmem page from CMA.\n");
-		dma_free_contiguous(dev, page, size);
-		return NULL;
+		goto out_free_pages;
 	}
 
 	ret = page_address(page);
@@ -198,10 +194,15 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	memset(ret, 0, size);
 
-	if (IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
-	    dma_alloc_need_uncached(dev, attrs)) {
+	if (dma_alloc_need_uncached(dev, attrs)) {
 		arch_dma_prep_coherent(page, size);
-		ret = uncached_kernel_address(ret);
+
+		if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED)) {
+			if (!arch_dma_set_uncached(ret, size))
+				goto out_free_pages;
+		} else if (IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT)) {
+			ret = uncached_kernel_address(ret);
+		}
 	}
 done:
 	if (force_dma_unencrypted(dev))
@@ -209,6 +210,9 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	else
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 	return ret;
+out_free_pages:
+	dma_free_contiguous(dev, page, size);
+	return NULL;
 }
 
 void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
@@ -232,6 +236,8 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 
 	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
 		vunmap(cpu_addr);
+	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED))
+		arch_dma_clear_uncached(cpu_addr, size);
 
 	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
 }
@@ -240,6 +246,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
+	    !IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
 	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
 	    dma_alloc_need_uncached(dev, attrs))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
@@ -250,6 +257,7 @@ void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
+	    !IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
 	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
 	    dma_alloc_need_uncached(dev, attrs))
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
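
For reference, here is a rough sketch of what the architecture-side
helpers could look like.  This is illustrative only and not part of the
patch: it assumes an architecture whose kernel direct mapping supports
x86-style set_memory_uc()/set_memory_wb() attribute helpers (which split
huge page mappings as needed), and it follows the calling convention
used above, where arch_dma_set_uncached() returns nonzero on success.
Such an architecture would also select ARCH_HAS_DMA_SET_UNCACHED from
its Kconfig.

#include <linux/mm.h>
#include <asm/set_memory.h>

/*
 * Sketch only: flip the kernel direct mapping of the allocated range
 * to uncached in place.  Returns nonzero on success and 0 on failure,
 * matching the !arch_dma_set_uncached() check in
 * dma_direct_alloc_pages() above.
 */
int arch_dma_set_uncached(void *cpu_addr, size_t size)
{
	int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;

	return set_memory_uc((unsigned long)cpu_addr, nr_pages) == 0;
}

/* Undo arch_dma_set_uncached() when the buffer is freed. */
void arch_dma_clear_uncached(void *cpu_addr, size_t size)
{
	int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;

	set_memory_wb((unsigned long)cpu_addr, nr_pages);
}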
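From the consumer side nothing changes: drivers keep using the normal
coherent DMA API, and on a non-cache-coherent platform that selects
ARCH_HAS_DMA_SET_UNCACHED the buffer now comes straight from the page
allocator or CMA and is flipped to uncached in place instead of being
remapped through vmap.  A minimal sketch (the helper name, device
pointer, and 4K size are arbitrary examples, not from this patch):

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

static int example_alloc_ring(struct device *dev)
{
	dma_addr_t dma_handle;
	void *ring;

	/*
	 * On a non-coherent device this goes through
	 * dma_direct_alloc_pages() and thus the new
	 * arch_dma_set_uncached() path.
	 */
	ring = dma_alloc_coherent(dev, SZ_4K, &dma_handle, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* ... program dma_handle into the device, use "ring" from the CPU ... */

	dma_free_coherent(dev, SZ_4K, ring, dma_handle);
	return 0;
}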
-- 
2.20.1