* handle "special" dma allocation in common code @ 2019-06-14 14:44 ` Christoph Hellwig 0 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: Vineet Gupta Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel Hi all, this series ensures that the common dma-direct code handles the somewhat special allocation types requested by the DMA_ATTR_NON_CONSISTENT and DMA_ATTR_NO_KERNEL_MAPPING flags directly. To do so it also removes three partial and thus broken implementations of DMA_ATTR_NON_CONSISTENT. Last but not least it switches arc to use the generic dma remapping code now that arc doesn't implement any special behavior. ^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH 1/7] arm-nommu: remove the partial DMA_ATTR_NON_CONSISTENT support 2019-06-14 14:44 ` Christoph Hellwig ` (2 preceding siblings ...) (?) @ 2019-06-14 14:44 ` Christoph Hellwig -1 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: Vineet Gupta Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel The arm-nommu DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but does not provide a cache_sync operation. This means any user of it will never be able to actually transfer cache ownership and thus cause coherency bugs. Signed-off-by: Christoph Hellwig <hch@lst.de> --- arch/arm/mm/dma-mapping-nommu.c | 24 +++--------------------- 1 file changed, 3 insertions(+), 21 deletions(-) diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c index f304b10e23a4..bc003df45546 100644 --- a/arch/arm/mm/dma-mapping-nommu.c +++ b/arch/arm/mm/dma-mapping-nommu.c @@ -39,18 +39,7 @@ static void *arm_nommu_dma_alloc(struct device *dev, size_t size, unsigned long attrs) { - void *ret; - - /* - * Try generic allocator first if we are advertised that - * consistency is not required. - */ - - if (attrs & DMA_ATTR_NON_CONSISTENT) - return dma_direct_alloc_pages(dev, size, dma_handle, gfp, - attrs); - - ret = dma_alloc_from_global_coherent(size, dma_handle); + void *ret = dma_alloc_from_global_coherent(size, dma_handle); /* * dma_alloc_from_global_coherent() may fail because: @@ -70,16 +59,9 @@ static void arm_nommu_dma_free(struct device *dev, size_t size, void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs) { - if (attrs & DMA_ATTR_NON_CONSISTENT) { - dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs); - } else { - int ret = dma_release_from_global_coherent(get_order(size), - cpu_addr); - - WARN_ON_ONCE(ret == 0); - } + int ret = dma_release_from_global_coherent(get_order(size), cpu_addr); - return; + WARN_ON_ONCE(ret == 0); } static int arm_nommu_dma_mmap(struct device *dev, struct vm_area_struct *vma, -- 2.20.1 ^ permalink raw reply related [flat|nested] 91+ messages in thread
* Re: [PATCH 1/7] arm-nommu: remove the partial DMA_ATTR_NON_CONSISTENT support 2019-06-14 14:44 ` Christoph Hellwig ` (2 preceding siblings ...) (?) @ 2019-06-24 14:23 ` Vladimir Murzin -1 siblings, 0 replies; 91+ messages in thread From: Vladimir Murzin @ 2019-06-24 14:23 UTC (permalink / raw) To: Christoph Hellwig, Vineet Gupta Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel On 6/14/19 3:44 PM, Christoph Hellwig wrote: > The arm-nommu DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but > does not provide a cache_sync operation. This means any user of it > will never be able to actually transfer cache ownership and thus cause > coherency bugs. By the way, Documentation/DMA-attributes.txt doesn't specify cache_sync() as a requirement for DMA_ATTR_NON_CONSISTENT; it only states that it is the responsibility of the driver to have all the correct and necessary sync points. > > Signed-off-by: Christoph Hellwig <hch@lst.de> > --- > arch/arm/mm/dma-mapping-nommu.c | 24 +++--------------------- > 1 file changed, 3 insertions(+), 21 deletions(-) > > diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c > index f304b10e23a4..bc003df45546 100644 > --- a/arch/arm/mm/dma-mapping-nommu.c > +++ b/arch/arm/mm/dma-mapping-nommu.c > @@ -39,18 +39,7 @@ static void *arm_nommu_dma_alloc(struct device *dev, size_t size, > unsigned long attrs) > > { > - void *ret; > - > - /* > - * Try generic allocator first if we are advertised that > - * consistency is not required. > - */ > - > - if (attrs & DMA_ATTR_NON_CONSISTENT) > - return dma_direct_alloc_pages(dev, size, dma_handle, gfp, > - attrs); > - > - ret = dma_alloc_from_global_coherent(size, dma_handle); > + void *ret = dma_alloc_from_global_coherent(size, dma_handle); > > /* > * dma_alloc_from_global_coherent() may fail because: > @@ -70,16 +59,9 @@ static void arm_nommu_dma_free(struct device *dev, size_t size, > void *cpu_addr, dma_addr_t dma_addr, > unsigned long attrs) > { > - if (attrs & DMA_ATTR_NON_CONSISTENT) { > - dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs); > - } else { > - int ret = dma_release_from_global_coherent(get_order(size), > - cpu_addr); > - > - WARN_ON_ONCE(ret == 0); > - } > + int ret = dma_release_from_global_coherent(get_order(size), cpu_addr); > > - return; > + WARN_ON_ONCE(ret == 0); > } > > static int arm_nommu_dma_mmap(struct device *dev, struct vm_area_struct *vma, > FWIW: Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com> Cheers Vladimir ^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH 1/7] arm-nommu: remove the partial DMA_ATTR_NON_CONSISTENT support 2019-06-24 14:23 ` Vladimir Murzin ` (2 preceding siblings ...) (?) @ 2019-06-25 6:13 ` Christoph Hellwig -1 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-25 6:13 UTC (permalink / raw) To: Vladimir Murzin Cc: Christoph Hellwig, Vineet Gupta, Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel On Mon, Jun 24, 2019 at 03:23:08PM +0100, Vladimir Murzin wrote: > On 6/14/19 3:44 PM, Christoph Hellwig wrote: > > The arm-nommu DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but > > does not provide a cache_sync operation. This means any user of it > > will never be able to actually transfer cache ownership and thus cause > > coherency bugs. > > By the way, Documentation/DMA-attributes.txt doesn't specify cache_sync() as > requirement for DMA_ATTR_NON_CONSISTENT it only states that it is responsibility > of the driver to have all the correct and necessary sync points. True. dma_cache_sync has always been a rather odd interface, as it doesn't specify in what direction we need to sync and doesn't participate in our ownership protocol. So my mid-term plan is to kill it off and replace it with the existing dma_sync_* helpers. This series is the first step towards that. ^ permalink raw reply [flat|nested] 91+ messages in thread
* [OpenRISC] [PATCH 1/7] arm-nommu: remove the partial DMA_ATTR_NON_CONSISTENT support @ 2019-06-25 6:13 ` Christoph Hellwig 0 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-25 6:13 UTC (permalink / raw) To: openrisc On Mon, Jun 24, 2019 at 03:23:08PM +0100, Vladimir Murzin wrote: > On 6/14/19 3:44 PM, Christoph Hellwig wrote: > > The arm-nommu DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but > > does not provide a cache_sync operation. This means any user of it > > will never be able to actually transfer cache ownership and thus cause > > coherency bugs. > > By the way, Documentation/DMA-attributes.txt doesn't specify cache_sync() as > requirement for DMA_ATTR_NON_CONSISTENT it only states that it is responsibility > of the driver to have all the correct and necessary sync points. True. dma_cache_sync has always been a rather odd interface, as it doesn't specify in what direction we need to sync and doesn't participate in our ownership protocol. So my mid-term plan is to kill it off and replace it with the existing dma_sync_* helpers. This series is the first step towards that. ^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support

From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw)
To: Vineet Gupta
Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel

The arc DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but
does not provide a cache_sync operation.  This means any user of it
will never be able to actually transfer cache ownership and thus cause
coherency bugs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/arc/mm/dma.c | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index 1525ac00fd02..9832928f896d 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -24,7 +24,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	struct page *page;
 	phys_addr_t paddr;
 	void *kvaddr;
-	bool need_coh = !(attrs & DMA_ATTR_NON_CONSISTENT);
 
 	/*
 	 * __GFP_HIGHMEM flag is cleared by upper layer functions
@@ -46,14 +45,10 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	 * A coherent buffer needs MMU mapping to enforce non-cachability.
 	 * kvaddr is kernel Virtual address (0x7000_0000 based).
 	 */
-	if (need_coh) {
-		kvaddr = ioremap_nocache(paddr, size);
-		if (kvaddr == NULL) {
-			__free_pages(page, order);
-			return NULL;
-		}
-	} else {
-		kvaddr = (void *)(u32)paddr;
+	kvaddr = ioremap_nocache(paddr, size);
+	if (kvaddr == NULL) {
+		__free_pages(page, order);
+		return NULL;
 	}
 
 	/*
@@ -66,9 +61,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	 * Currently flush_cache_vmap nukes the L1 cache completely which
 	 * will be optimized as a separate commit
 	 */
-	if (need_coh)
-		dma_cache_wback_inv(paddr, size);
-
+	dma_cache_wback_inv(paddr, size);
 	return kvaddr;
 }
 
@@ -78,9 +71,7 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 	phys_addr_t paddr = dma_handle;
 	struct page *page = virt_to_page(paddr);
 
-	if (!(attrs & DMA_ATTR_NON_CONSISTENT))
-		iounmap((void __force __iomem *)vaddr);
-
+	iounmap((void __force __iomem *)vaddr);
 	__free_pages(page, get_order(size));
 }
-- 
2.20.1
* [PATCH 3/7] openrisc: remove the partial DMA_ATTR_NON_CONSISTENT support

From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw)
To: Vineet Gupta
Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel

The openrisc DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but
does not provide a cache_sync operation.  This means any user of it
will never be able to actually transfer cache ownership and thus cause
coherency bugs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/openrisc/kernel/dma.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
index f79457cb3741..9f25fd0fbb5d 100644
--- a/arch/openrisc/kernel/dma.c
+++ b/arch/openrisc/kernel/dma.c
@@ -98,15 +98,13 @@ arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 
 	va = (unsigned long)page;
 
-	if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) {
-		/*
-		 * We need to iterate through the pages, clearing the dcache for
-		 * them and setting the cache-inhibit bit.
-		 */
-		if (walk_page_range(va, va + size, &walk)) {
-			free_pages_exact(page, size);
-			return NULL;
-		}
+	/*
+	 * We need to iterate through the pages, clearing the dcache for
+	 * them and setting the cache-inhibit bit.
+	 */
+	if (walk_page_range(va, va + size, &walk)) {
+		free_pages_exact(page, size);
+		return NULL;
 	}
 
 	return (void *)va;
@@ -122,10 +120,8 @@ arch_dma_free(struct device *dev, size_t size, void *vaddr,
 		.mm = &init_mm
 	};
 
-	if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) {
-		/* walk_page_range shouldn't be able to fail here */
-		WARN_ON(walk_page_range(va, va + size, &walk));
-	}
+	/* walk_page_range shouldn't be able to fail here */
+	WARN_ON(walk_page_range(va, va + size, &walk));
 
 	free_pages_exact(vaddr, size);
 }
-- 
2.20.1
* Re: [PATCH 3/7] openrisc: remove the partial DMA_ATTR_NON_CONSISTENT support

From: Stafford Horne @ 2019-06-16 9:17 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Vineet Gupta, Jonas Bonn, Stefan Kristiansson, Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel

On Fri, Jun 14, 2019 at 04:44:27PM +0200, Christoph Hellwig wrote:
> The openrisc DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but
> does not provide a cache_sync operation.  This means any user of it
> will never be able to actually transfer cache ownership and thus cause
> coherency bugs.

The below looks good.  I am always happy to see what looks like legacy
copy & paste cruft removed.

Acked-by: Stafford Horne <shorne@gmail.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  arch/openrisc/kernel/dma.c | 22 +++++++++-------------
>  1 file changed, 9 insertions(+), 13 deletions(-)
>
> diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
> index f79457cb3741..9f25fd0fbb5d 100644
> --- a/arch/openrisc/kernel/dma.c
> +++ b/arch/openrisc/kernel/dma.c
> @@ -98,15 +98,13 @@ arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
>
>  	va = (unsigned long)page;
>
> -	if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) {
> -		/*
> -		 * We need to iterate through the pages, clearing the dcache for
> -		 * them and setting the cache-inhibit bit.
> -		 */
> -		if (walk_page_range(va, va + size, &walk)) {
> -			free_pages_exact(page, size);
> -			return NULL;
> -		}
> +	/*
> +	 * We need to iterate through the pages, clearing the dcache for
> +	 * them and setting the cache-inhibit bit.
> +	 */
> +	if (walk_page_range(va, va + size, &walk)) {
> +		free_pages_exact(page, size);
> +		return NULL;
>  	}
>
>  	return (void *)va;
> @@ -122,10 +120,8 @@ arch_dma_free(struct device *dev, size_t size, void *vaddr,
>  		.mm = &init_mm
>  	};
>
> -	if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) {
> -		/* walk_page_range shouldn't be able to fail here */
> -		WARN_ON(walk_page_range(va, va + size, &walk));
> -	}
> +	/* walk_page_range shouldn't be able to fail here */
> +	WARN_ON(walk_page_range(va, va + size, &walk));
>
>  	free_pages_exact(vaddr, size);
> }
> --
> 2.20.1
>
* [OpenRISC] [PATCH 3/7] openrisc: remove the partial DMA_ATTR_NON_CONSISTENT support @ 2019-06-16 9:17 ` Stafford Horne 0 siblings, 0 replies; 91+ messages in thread From: Stafford Horne @ 2019-06-16 9:17 UTC (permalink / raw) To: openrisc On Fri, Jun 14, 2019 at 04:44:27PM +0200, Christoph Hellwig wrote: > The openrisc DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but > does not provide a cache_sync operation. This means any user of it > will never be able to actually transfer cache ownership and thus cause > coherency bugs. The below looks good. I am always happy to what looks like legacy copy & paste cruft. Acked-by: Stafford Horne <shorne@gmail.com> > Signed-off-by: Christoph Hellwig <hch@lst.de> > --- > arch/openrisc/kernel/dma.c | 22 +++++++++------------- > 1 file changed, 9 insertions(+), 13 deletions(-) > > diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c > index f79457cb3741..9f25fd0fbb5d 100644 > --- a/arch/openrisc/kernel/dma.c > +++ b/arch/openrisc/kernel/dma.c > @@ -98,15 +98,13 @@ arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, > > va = (unsigned long)page; > > - if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) { > - /* > - * We need to iterate through the pages, clearing the dcache for > - * them and setting the cache-inhibit bit. > - */ > - if (walk_page_range(va, va + size, &walk)) { > - free_pages_exact(page, size); > - return NULL; > - } > + /* > + * We need to iterate through the pages, clearing the dcache for > + * them and setting the cache-inhibit bit. 
> + */ > + if (walk_page_range(va, va + size, &walk)) { > + free_pages_exact(page, size); > + return NULL; > } > > return (void *)va; > @@ -122,10 +120,8 @@ arch_dma_free(struct device *dev, size_t size, void *vaddr, > .mm = &init_mm > }; > > - if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) { > - /* walk_page_range shouldn't be able to fail here */ > - WARN_ON(walk_page_range(va, va + size, &walk)); > - } > + /* walk_page_range shouldn't be able to fail here */ > + WARN_ON(walk_page_range(va, va + size, &walk)); > > free_pages_exact(vaddr, size); > } > -- > 2.20.1 > ^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH 3/7] openrisc: remove the partial DMA_ATTR_NON_CONSISTENT support @ 2019-06-16 9:17 ` Stafford Horne 0 siblings, 0 replies; 91+ messages in thread From: Stafford Horne @ 2019-06-16 9:17 UTC (permalink / raw) To: Christoph Hellwig Cc: Jonas Bonn, linux-xtensa, Vladimir Murzin, linux-parisc, Vineet Gupta, Helge Deller, linux-kernel, Stefan Kristiansson, iommu, openrisc, linux-snps-arc, linux-arm-kernel On Fri, Jun 14, 2019 at 04:44:27PM +0200, Christoph Hellwig wrote: > The openrisc DMA code supports DMA_ATTR_NON_CONSISTENT allocations, but > does not provide a cache_sync operation. This means any user of it > will never be able to actually transfer cache ownership and thus cause > coherency bugs. The below looks good. I am always happy to what looks like legacy copy & paste cruft. Acked-by: Stafford Horne <shorne@gmail.com> > Signed-off-by: Christoph Hellwig <hch@lst.de> > --- > arch/openrisc/kernel/dma.c | 22 +++++++++------------- > 1 file changed, 9 insertions(+), 13 deletions(-) > > diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c > index f79457cb3741..9f25fd0fbb5d 100644 > --- a/arch/openrisc/kernel/dma.c > +++ b/arch/openrisc/kernel/dma.c > @@ -98,15 +98,13 @@ arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, > > va = (unsigned long)page; > > - if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) { > - /* > - * We need to iterate through the pages, clearing the dcache for > - * them and setting the cache-inhibit bit. > - */ > - if (walk_page_range(va, va + size, &walk)) { > - free_pages_exact(page, size); > - return NULL; > - } > + /* > + * We need to iterate through the pages, clearing the dcache for > + * them and setting the cache-inhibit bit. 
> + */ > + if (walk_page_range(va, va + size, &walk)) { > + free_pages_exact(page, size); > + return NULL; > } > > return (void *)va; > @@ -122,10 +120,8 @@ arch_dma_free(struct device *dev, size_t size, void *vaddr, > .mm = &init_mm > }; > > - if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) { > - /* walk_page_range shouldn't be able to fail here */ > - WARN_ON(walk_page_range(va, va + size, &walk)); > - } > + /* walk_page_range shouldn't be able to fail here */ > + WARN_ON(walk_page_range(va, va + size, &walk)); > > free_pages_exact(vaddr, size); > } > -- > 2.20.1 > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH 4/7] dma-mapping: add a dma_alloc_need_uncached helper 2019-06-14 14:44 ` Christoph Hellwig ` (2 preceding siblings ...) (?) @ 2019-06-14 14:44 ` Christoph Hellwig -1 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: Vineet Gupta Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel Check if we need to allocate uncached memory for a device given the allocation flags. Switch over the uncached segment check to this helper to deal with architectures that do not support the dma_cache_sync operation and thus should not return cacheable memory for DMA_ATTR_NON_CONSISTENT allocations. Signed-off-by: Christoph Hellwig <hch@lst.de> --- include/linux/dma-noncoherent.h | 14 ++++++++++++++ kernel/dma/direct.c | 4 ++-- 2 files changed, 16 insertions(+), 2 deletions(-) diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h index 7e0126a04e02..732919ac5c11 100644 --- a/include/linux/dma-noncoherent.h +++ b/include/linux/dma-noncoherent.h @@ -20,6 +20,20 @@ static inline bool dev_is_dma_coherent(struct device *dev) } #endif /* CONFIG_ARCH_HAS_DMA_COHERENCE_H */ +/* + * Check if an allocation needs to be marked uncached to be coherent. 
+ */ +static inline bool dma_alloc_need_uncached(struct device *dev, + unsigned long attrs) +{ + if (dev_is_dma_coherent(dev)) + return false; + if (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) && + (attrs & DMA_ATTR_NON_CONSISTENT)) + return false; + return true; +} + void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs); void arch_dma_free(struct device *dev, size_t size, void *cpu_addr, diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index b67f0aa08aa3..c2893713bf80 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -160,7 +160,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size, memset(ret, 0, size); if (IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) && - !dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_NON_CONSISTENT)) { + dma_alloc_need_uncached(dev, attrs)) { arch_dma_prep_coherent(page, size); ret = uncached_kernel_address(ret); } @@ -182,7 +182,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr, set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order); if (IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) && - !dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_NON_CONSISTENT)) + dma_alloc_need_uncached(dev, attrs)) cpu_addr = cached_kernel_address(cpu_addr); __dma_direct_free_pages(dev, size, virt_to_page(cpu_addr)); } -- 2.20.1 ^ permalink raw reply related [flat|nested] 91+ messages in thread
* [PATCH 5/7] dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code 2019-06-14 14:44 ` Christoph Hellwig ` (4 preceding siblings ...) (?) @ 2019-06-14 14:44 ` Christoph Hellwig -1 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: Vineet Gupta Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel Only call into arch_dma_alloc if we require an uncached mapping, and remove the parisc code manually doing normal cached DMA_ATTR_NON_CONSISTENT allocations. Signed-off-by: Christoph Hellwig <hch@lst.de> --- arch/parisc/kernel/pci-dma.c | 48 ++++++++++-------------------------- kernel/dma/direct.c | 4 +-- 2 files changed, 15 insertions(+), 37 deletions(-) diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c index 239162355b58..ca35d9a76e50 100644 --- a/arch/parisc/kernel/pci-dma.c +++ b/arch/parisc/kernel/pci-dma.c @@ -394,17 +394,20 @@ pcxl_dma_init(void) __initcall(pcxl_dma_init); -static void *pcxl_dma_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) +void *arch_dma_alloc(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) { unsigned long vaddr; unsigned long paddr; int order; + if (boot_cpu_data.cpu_type != pcxl2 && boot_cpu_data.cpu_type != pcxl) + return NULL; + order = get_order(size); size = 1 << (order + PAGE_SHIFT); vaddr = pcxl_alloc_range(size); - paddr = __get_free_pages(flag | __GFP_ZERO, order); + paddr = __get_free_pages(gfp | __GFP_ZERO, order); flush_kernel_dcache_range(paddr, size); paddr = __pa(paddr); map_uncached_pages(vaddr, size, paddr); @@ -421,44 +424,19 @@ static void *pcxl_dma_alloc(struct device *dev, size_t size, return (void *)vaddr; } -static void *pcx_dma_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag, unsigned long 
attrs) -{ - void *addr; - - if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) - return NULL; - - addr = (void *)__get_free_pages(flag | __GFP_ZERO, get_order(size)); - if (addr) - *dma_handle = (dma_addr_t)virt_to_phys(addr); - - return addr; -} - -void *arch_dma_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) -{ - - if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) - return pcxl_dma_alloc(dev, size, dma_handle, gfp, attrs); - else - return pcx_dma_alloc(dev, size, dma_handle, gfp, attrs); -} - void arch_dma_free(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle, unsigned long attrs) { int order = get_order(size); - if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) { - size = 1 << (order + PAGE_SHIFT); - unmap_uncached_pages((unsigned long)vaddr, size); - pcxl_free_range((unsigned long)vaddr, size); + WARN_ON_ONCE(boot_cpu_data.cpu_type != pcxl2 && + boot_cpu_data.cpu_type != pcxl); - vaddr = __va(dma_handle); - } - free_pages((unsigned long)vaddr, get_order(size)); + size = 1 << (order + PAGE_SHIFT); + unmap_uncached_pages((unsigned long)vaddr, size); + pcxl_free_range((unsigned long)vaddr, size); + + free_pages((unsigned long)__va(dma_handle), order); } void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr, diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index c2893713bf80..fc354f4f490b 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -191,7 +191,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) { if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) && - !dev_is_dma_coherent(dev)) + dma_alloc_need_uncached(dev, attrs)) return arch_dma_alloc(dev, size, dma_handle, gfp, attrs); return dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs); } @@ -200,7 +200,7 @@ void dma_direct_free(struct device *dev, size_t size, void *cpu_addr, dma_addr_t dma_addr, unsigned long 
attrs) { if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) && - !dev_is_dma_coherent(dev)) + dma_alloc_need_uncached(dev, attrs)) arch_dma_free(dev, size, cpu_addr, dma_addr, attrs); else dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs); -- 2.20.1 ^ permalink raw reply related [flat|nested] 91+ messages in thread
* Re: [PATCH 5/7] dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code @ 2019-06-16 10:08 ` Hillf Danton 0 siblings, 0 replies; 91+ messages in thread From: Hillf Danton @ 2019-06-16 10:08 UTC (permalink / raw) To: Christoph Hellwig Cc: Jonas Bonn, linux-xtensa, Vladimir Murzin, linux-parisc, Vineet Gupta, Helge Deller, linux-kernel, Stefan Kristiansson, iommu, openrisc, Stafford Horne, linux-snps-arc, linux-arm-kernel Hello Christoph On Fri, 14 Jun 2019 16:44:29 +0200 Christoph Hellwig wrote: > Only call into arch_dma_alloc if we require an uncached mapping, > and remove the parisc code manually doing normal cached > DMA_ATTR_NON_CONSISTENT allocations. > > Signed-off-by: Christoph Hellwig <hch@lst.de> > --- > arch/parisc/kernel/pci-dma.c | 48 ++++++++++-------------------------- > kernel/dma/direct.c | 4 +-- > 2 files changed, 15 insertions(+), 37 deletions(-) > > diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c > index 239162355b58..ca35d9a76e50 100644 > --- a/arch/parisc/kernel/pci-dma.c > +++ b/arch/parisc/kernel/pci-dma.c > @@ -394,17 +394,20 @@ pcxl_dma_init(void) > > __initcall(pcxl_dma_init); > > -static void *pcxl_dma_alloc(struct device *dev, size_t size, > - dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) > +void *arch_dma_alloc(struct device *dev, size_t size, > + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) > { > unsigned long vaddr; > unsigned long paddr; > int order; > > + if (boot_cpu_data.cpu_type != pcxl2 && boot_cpu_data.cpu_type != pcxl) > + return NULL; > + Literally, any cpu (call it cpuW) other than pcxl2 and pcxl will no longer do dma alloc for any device with this patch applied. On the other hand, !dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_NON_CONSISTENT) will ask any cpu to do dma alloc, regardless of cpu type. This patch works imo only if cpuW deals solely with devices that are dma coherent, and I doubt that is true. 
-- Hillf > order = get_order(size); > size = 1 << (order + PAGE_SHIFT); > vaddr = pcxl_alloc_range(size); > - paddr = __get_free_pages(flag | __GFP_ZERO, order); > + paddr = __get_free_pages(gfp | __GFP_ZERO, order); > flush_kernel_dcache_range(paddr, size); > paddr = __pa(paddr); > map_uncached_pages(vaddr, size, paddr); > @@ -421,44 +424,19 @@ static void *pcxl_dma_alloc(struct device *dev, size_t size, > return (void *)vaddr; > } > > -static void *pcx_dma_alloc(struct device *dev, size_t size, > - dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) > -{ > - void *addr; > - > - if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) > - return NULL; > - > - addr = (void *)__get_free_pages(flag | __GFP_ZERO, get_order(size)); > - if (addr) > - *dma_handle = (dma_addr_t)virt_to_phys(addr); > - > - return addr; > -} > - > -void *arch_dma_alloc(struct device *dev, size_t size, > - dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) > -{ > - > - if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) > - return pcxl_dma_alloc(dev, size, dma_handle, gfp, attrs); > - else > - return pcx_dma_alloc(dev, size, dma_handle, gfp, attrs); > -} > - > void arch_dma_free(struct device *dev, size_t size, void *vaddr, > dma_addr_t dma_handle, unsigned long attrs) > { > int order = get_order(size); > > - if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) { > - size = 1 << (order + PAGE_SHIFT); > - unmap_uncached_pages((unsigned long)vaddr, size); > - pcxl_free_range((unsigned long)vaddr, size); > + WARN_ON_ONCE(boot_cpu_data.cpu_type != pcxl2 && > + boot_cpu_data.cpu_type != pcxl); > > - vaddr = __va(dma_handle); > - } > - free_pages((unsigned long)vaddr, get_order(size)); > + size = 1 << (order + PAGE_SHIFT); > + unmap_uncached_pages((unsigned long)vaddr, size); > + pcxl_free_range((unsigned long)vaddr, size); > + > + free_pages((unsigned long)__va(dma_handle), order); > } > > void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr, > 
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c > index c2893713bf80..fc354f4f490b 100644 > --- a/kernel/dma/direct.c > +++ b/kernel/dma/direct.c > @@ -191,7 +191,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, > dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) > { > if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) && > - !dev_is_dma_coherent(dev)) > + dma_alloc_need_uncached(dev, attrs)) > return arch_dma_alloc(dev, size, dma_handle, gfp, attrs); > return dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs); > } > @@ -200,7 +200,7 @@ void dma_direct_free(struct device *dev, size_t size, > void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs) > { > if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) && > - !dev_is_dma_coherent(dev)) > + dma_alloc_need_uncached(dev, attrs)) > arch_dma_free(dev, size, cpu_addr, dma_addr, attrs); > else > dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs); > -- > 2.20.1 > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 91+ messages in thread
attrs); -- 2.20.1 ^ permalink raw reply related [flat|nested] 91+ messages in thread
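The two hunks in kernel/dma/direct.c above change the dispatch condition from "device is not coherent" to "an uncached mapping is actually needed". The decision can be modeled as a pure function; the sketch below is illustrative userspace code, not kernel code, and dma_alloc_need_uncached() here is a simplified stand-in for the helper introduced earlier in this series (the attr bit value mirrors the kernel's definition).

```c
#include <assert.h>
#include <stdbool.h>

/* Same bit value as the kernel's DMA_ATTR_NON_CONSISTENT. */
#define DMA_ATTR_NON_CONSISTENT (1UL << 3)

enum alloc_path { ARCH_DMA_ALLOC, DMA_DIRECT_ALLOC_PAGES };

/*
 * Simplified model: an uncached mapping is only needed when the
 * device is not cache-coherent AND the caller did not ask for a
 * non-consistent (explicitly synced) allocation.
 */
static bool dma_alloc_need_uncached(bool dev_coherent, unsigned long attrs)
{
	return !dev_coherent && !(attrs & DMA_ATTR_NON_CONSISTENT);
}

/* Dispatch mirroring the patched dma_direct_alloc(). */
static enum alloc_path pick_path(bool arch_has_uncached_segment,
				 bool dev_coherent, unsigned long attrs)
{
	if (!arch_has_uncached_segment &&
	    dma_alloc_need_uncached(dev_coherent, attrs))
		return ARCH_DMA_ALLOC;	/* arch-specific uncached remap */
	return DMA_DIRECT_ALLOC_PAGES;	/* common cached allocation */
}
```

With this model, a DMA_ATTR_NON_CONSISTENT request on a non-coherent device now takes the common dma_direct_alloc_pages() path instead of calling into arch code, which is what lets the parisc pcx_dma_alloc() helper be deleted.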
* Re: [PATCH 5/7] dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code 2019-06-14 14:44 ` Christoph Hellwig ` (2 preceding siblings ...) (?) @ 2019-06-25 6:11 ` Christoph Hellwig -1 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-25 6:11 UTC (permalink / raw) To: Hillf Danton Cc: Christoph Hellwig, Vineet Gupta, Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel On Sun, Jun 16, 2019 at 06:08:40PM +0800, Hillf Danton wrote: > Literally, any cpu (call it cpuW) other than pcx12 and pcx1 will no longer do > dma alloc for any device with this patch applied. Yes. And that is not a change from the previous code, where only pcxl and pcxl2 could do coherent allocations. > On the other hand, > !dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_NON_CONSISTENT) will ask > any cpu to do dma alloc, regardless of pcx1. This patch works imo unless cpuW > plays games only with devices that are dma coherent. I doubt it is true. I can't parse these two sentences. But to explain the bits mentioned here: dev_is_dma_coherent() returns whether a device is coherently attached to the cpu. This will never be true for the parisc direct mapping. DMA_ATTR_NON_CONSISTENT asks for a non-coherent mapping that needs to be explicitly synced. This support is now in the dma-direct core code, and this is what the parisc-specific devices on non-pcxl systems use, as they do not support dma coherency at all. (the story slightly changes when using an iommu, but that is irrelevant here) ^ permalink raw reply [flat|nested] 91+ messages in thread
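The behavior discussed in the reply above — only the PCX-L/PCX-L2 machines get uncached remapped allocations, every other parisc cpu falls back to the common cached path — reduces to a small predicate. This is a hedged userspace sketch; the enum values are stand-ins for the kernel's parisc cpu_type enumeration, not the real definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the parisc cpu_type values referenced in the patch. */
enum parisc_cpu_type { pcx, pcxs, pcxt, pcxl, pcxl2, pcxu, pcxw };

/*
 * After the patch, arch_dma_alloc() only succeeds on pcxl/pcxl2, the
 * machines that can map pages uncached; every other cpu type returns
 * NULL and the common dma-direct code performs a plain cached
 * allocation (with explicit syncs, per DMA_ATTR_NON_CONSISTENT).
 */
static bool parisc_can_alloc_uncached(enum parisc_cpu_type t)
{
	return t == pcxl || t == pcxl2;
}
```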
* Re: [PATCH 5/7] dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code 2019-06-14 14:44 ` Christoph Hellwig ` (2 preceding siblings ...) (?) @ 2019-06-25 12:23 ` Helge Deller -1 siblings, 0 replies; 91+ messages in thread From: Helge Deller @ 2019-06-25 12:23 UTC (permalink / raw) To: Christoph Hellwig, Vineet Gupta Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc, linux-parisc, iommu, linux-kernel On 14.06.19 16:44, Christoph Hellwig wrote: > Only call into arch_dma_alloc if we require an uncached mapping, > and remove the parisc code manually doing normal cached > DMA_ATTR_NON_CONSISTENT allocations. > > Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Helge Deller <deller@gmx.de> # parisc Boot-tested 32-bit kernel on PCX-L and PCX-W2 machines (although the patches don't cleanly apply any longer against git head). Helge > --- > arch/parisc/kernel/pci-dma.c | 48 ++++++++++-------------------------- > kernel/dma/direct.c | 4 +-- > 2 files changed, 15 insertions(+), 37 deletions(-) > > diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c > index 239162355b58..ca35d9a76e50 100644 > --- a/arch/parisc/kernel/pci-dma.c > +++ b/arch/parisc/kernel/pci-dma.c > @@ -394,17 +394,20 @@ pcxl_dma_init(void) > > __initcall(pcxl_dma_init); > > -static void *pcxl_dma_alloc(struct device *dev, size_t size, > - dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) > +void *arch_dma_alloc(struct device *dev, size_t size, > + dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) > { > unsigned long vaddr; > unsigned long paddr; > int order; > > + if (boot_cpu_data.cpu_type != pcxl2 && boot_cpu_data.cpu_type != pcxl) > + return NULL; > + > order = get_order(size); > size = 1 << (order + PAGE_SHIFT); > vaddr = pcxl_alloc_range(size); > - paddr = __get_free_pages(flag | __GFP_ZERO, order); > + paddr = __get_free_pages(gfp | __GFP_ZERO, order); > flush_kernel_dcache_range(paddr, size); > 
paddr = __pa(paddr); > map_uncached_pages(vaddr, size, paddr); > @@ -421,44 +424,19 @@ static void *pcxl_dma_alloc(struct device *dev, size_t size, > return (void *)vaddr; > } > > -static void *pcx_dma_alloc(struct device *dev, size_t size, > - dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs) > -{ > - void *addr; > - > - if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) > - return NULL; > - > - addr = (void *)__get_free_pages(flag | __GFP_ZERO, get_order(size)); > - if (addr) > - *dma_handle = (dma_addr_t)virt_to_phys(addr); > - > - return addr; > -} > - > -void *arch_dma_alloc(struct device *dev, size_t size, > - dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs) > -{ > - > - if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) > - return pcxl_dma_alloc(dev, size, dma_handle, gfp, attrs); > - else > - return pcx_dma_alloc(dev, size, dma_handle, gfp, attrs); > -} > - > void arch_dma_free(struct device *dev, size_t size, void *vaddr, > dma_addr_t dma_handle, unsigned long attrs) > { > int order = get_order(size); > > - if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) { > - size = 1 << (order + PAGE_SHIFT); > - unmap_uncached_pages((unsigned long)vaddr, size); > - pcxl_free_range((unsigned long)vaddr, size); > + WARN_ON_ONCE(boot_cpu_data.cpu_type != pcxl2 && > + boot_cpu_data.cpu_type != pcxl); > > - vaddr = __va(dma_handle); > - } > - free_pages((unsigned long)vaddr, get_order(size)); > + size = 1 << (order + PAGE_SHIFT); > + unmap_uncached_pages((unsigned long)vaddr, size); > + pcxl_free_range((unsigned long)vaddr, size); > + > + free_pages((unsigned long)__va(dma_handle), order); > } > > void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr, > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c > index c2893713bf80..fc354f4f490b 100644 > --- a/kernel/dma/direct.c > +++ b/kernel/dma/direct.c > @@ -191,7 +191,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, > dma_addr_t *dma_handle, gfp_t 
gfp, unsigned long attrs) > { > if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) && > - !dev_is_dma_coherent(dev)) > + dma_alloc_need_uncached(dev, attrs)) > return arch_dma_alloc(dev, size, dma_handle, gfp, attrs); > return dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs); > } > @@ -200,7 +200,7 @@ void dma_direct_free(struct device *dev, size_t size, > void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs) > { > if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) && > - !dev_is_dma_coherent(dev)) > + dma_alloc_need_uncached(dev, attrs)) > arch_dma_free(dev, size, cpu_addr, dma_addr, attrs); > else > dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs); > ^ permalink raw reply [flat|nested] 91+ messages in thread
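Both arch_dma_alloc() and arch_dma_free() in the patch quoted above round the size up to a power-of-two number of pages via `order = get_order(size); size = 1 << (order + PAGE_SHIFT);`. A userspace model of that computation, assuming the common 4 KiB page size (PAGE_SHIFT = 12) and written as a plain loop rather than the kernel's fls()-based implementation:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Model of the kernel's get_order(): ceil(log2(pages)) for size > 0. */
static int get_order(size_t size)
{
	size_t pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
	int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

/* The rounded allocation size used by the parisc alloc/free paths. */
static size_t rounded_size(size_t size)
{
	return 1UL << (get_order(size) + PAGE_SHIFT);
}
```

Because free must undo exactly what alloc mapped, arch_dma_free() recomputes the same rounded size before calling unmap_uncached_pages() and pcxl_free_range().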
* [OpenRISC] [PATCH 5/7] dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code
@ 2019-06-25 12:23 ` Helge Deller
  0 siblings, 0 replies; 91+ messages in thread
From: Helge Deller @ 2019-06-25 12:23 UTC (permalink / raw)
  To: openrisc

On 14.06.19 16:44, Christoph Hellwig wrote:
> Only call into arch_dma_alloc if we require an uncached mapping,
> and remove the parisc code manually doing normal cached
> DMA_ATTR_NON_CONSISTENT allocations.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Helge Deller <deller@gmx.de> # parisc

Boot-tested 32-bit kernel on PCX-L and PCX-W2 machines (although
the patches don't cleanly apply any longer against git head).

Helge

> ---
>  arch/parisc/kernel/pci-dma.c | 48 ++++++++++--------------------------
>  kernel/dma/direct.c          |  4 +--
>  2 files changed, 15 insertions(+), 37 deletions(-)
>
> diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
> index 239162355b58..ca35d9a76e50 100644
> --- a/arch/parisc/kernel/pci-dma.c
> +++ b/arch/parisc/kernel/pci-dma.c
> @@ -394,17 +394,20 @@ pcxl_dma_init(void)
>
>  __initcall(pcxl_dma_init);
>
> -static void *pcxl_dma_alloc(struct device *dev, size_t size,
> -		dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs)
> +void *arch_dma_alloc(struct device *dev, size_t size,
> +		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
>  {
>  	unsigned long vaddr;
>  	unsigned long paddr;
>  	int order;
>
> +	if (boot_cpu_data.cpu_type != pcxl2 && boot_cpu_data.cpu_type != pcxl)
> +		return NULL;
> +
>  	order = get_order(size);
>  	size = 1 << (order + PAGE_SHIFT);
>  	vaddr = pcxl_alloc_range(size);
> -	paddr = __get_free_pages(flag | __GFP_ZERO, order);
> +	paddr = __get_free_pages(gfp | __GFP_ZERO, order);
>  	flush_kernel_dcache_range(paddr, size);
>  	paddr = __pa(paddr);
>  	map_uncached_pages(vaddr, size, paddr);
> @@ -421,44 +424,19 @@ static void *pcxl_dma_alloc(struct device *dev, size_t size,
>  	return (void *)vaddr;
>  }
>
> -static void *pcx_dma_alloc(struct device *dev, size_t size,
> -		dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs)
> -{
> -	void *addr;
> -
> -	if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0)
> -		return NULL;
> -
> -	addr = (void *)__get_free_pages(flag | __GFP_ZERO, get_order(size));
> -	if (addr)
> -		*dma_handle = (dma_addr_t)virt_to_phys(addr);
> -
> -	return addr;
> -}
> -
> -void *arch_dma_alloc(struct device *dev, size_t size,
> -		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
> -{
> -
> -	if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl)
> -		return pcxl_dma_alloc(dev, size, dma_handle, gfp, attrs);
> -	else
> -		return pcx_dma_alloc(dev, size, dma_handle, gfp, attrs);
> -}
> -
>  void arch_dma_free(struct device *dev, size_t size, void *vaddr,
>  		dma_addr_t dma_handle, unsigned long attrs)
>  {
>  	int order = get_order(size);
>
> -	if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) {
> -		size = 1 << (order + PAGE_SHIFT);
> -		unmap_uncached_pages((unsigned long)vaddr, size);
> -		pcxl_free_range((unsigned long)vaddr, size);
> +	WARN_ON_ONCE(boot_cpu_data.cpu_type != pcxl2 &&
> +		     boot_cpu_data.cpu_type != pcxl);
>
> -		vaddr = __va(dma_handle);
> -	}
> -	free_pages((unsigned long)vaddr, get_order(size));
> +	size = 1 << (order + PAGE_SHIFT);
> +	unmap_uncached_pages((unsigned long)vaddr, size);
> +	pcxl_free_range((unsigned long)vaddr, size);
> +
> +	free_pages((unsigned long)__va(dma_handle), order);
>  }
>
>  void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index c2893713bf80..fc354f4f490b 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -191,7 +191,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>  		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
>  {
>  	if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
> -	    !dev_is_dma_coherent(dev))
> +	    dma_alloc_need_uncached(dev, attrs))
>  		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
>  	return dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
>  }
> @@ -200,7 +200,7 @@ void dma_direct_free(struct device *dev, size_t size,
>  		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
>  {
>  	if (!IS_ENABLED(CONFIG_ARCH_HAS_UNCACHED_SEGMENT) &&
> -	    !dev_is_dma_coherent(dev))
> +	    dma_alloc_need_uncached(dev, attrs))
>  		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
>  	else
>  		dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs);

^ permalink raw reply	[flat|nested] 91+ messages in thread
* Re: [PATCH 5/7] dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code
  2019-06-25 12:23 ` Helge Deller
                     ` (2 preceding siblings ...)
  (?)
@ 2019-06-25 12:28 ` Christoph Hellwig
  -1 siblings, 0 replies; 91+ messages in thread
From: Christoph Hellwig @ 2019-06-25 12:28 UTC (permalink / raw)
  To: Helge Deller
  Cc: Christoph Hellwig, Vineet Gupta, Jonas Bonn, Stefan Kristiansson,
	Stafford Horne, Vladimir Murzin, linux-snps-arc, linux-arm-kernel,
	openrisc, linux-parisc, iommu, linux-kernel

On Tue, Jun 25, 2019 at 02:23:45PM +0200, Helge Deller wrote:
> On 14.06.19 16:44, Christoph Hellwig wrote:
> > Only call into arch_dma_alloc if we require an uncached mapping,
> > and remove the parisc code manually doing normal cached
> > DMA_ATTR_NON_CONSISTENT allocations.
> >
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
>
> Acked-by: Helge Deller <deller@gmx.de> # parisc
>
> Boot-tested 32-bit kernel on PCX-L and PCX-W2 machines (although
> the patches don't cleanly apply any longer against git head).

The series was against the dma-mapping tree, which might have diverged
a bit already.  Thanks for testing!

^ permalink raw reply	[flat|nested] 91+ messages in thread
* [PATCH 6/7] dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code
  2019-06-14 14:44 ` Christoph Hellwig
                     ` (2 preceding siblings ...)
  (?)
@ 2019-06-14 14:44 ` Christoph Hellwig
  -1 siblings, 0 replies; 91+ messages in thread
From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw)
  To: Vineet Gupta
  Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller,
	Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc,
	linux-parisc, linux-xtensa, iommu, linux-kernel

DMA_ATTR_NO_KERNEL_MAPPING is generally implemented by allocating
normal cacheable pages or CMA memory, and then returning the page
pointer as the opaque handle.  Lift that code from the xtensa and
generic dma remapping implementations into the generic dma-direct
code so that we don't even call arch_dma_alloc for these allocations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/xtensa/kernel/pci-dma.c    |  8 +-------
 include/linux/dma-noncoherent.h |  2 ++
 kernel/dma/direct.c             | 14 ++++++++++++++
 kernel/dma/remap.c              | 13 ++-----------
 4 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 9171bff76fc4..206771277dff 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -167,10 +167,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,

 	*handle = phys_to_dma(dev, page_to_phys(page));

-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
-		return page;
-	}
-
 #ifdef CONFIG_MMU
 	if (PageHighMem(page)) {
 		void *p;
@@ -196,9 +192,7 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	struct page *page;

-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
-		page = vaddr;
-	} else if (platform_vaddr_uncached(vaddr)) {
+	if (platform_vaddr_uncached(vaddr)) {
 		page = virt_to_page(platform_vaddr_to_cached(vaddr));
 	} else {
 #ifdef CONFIG_MMU
diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h
index 732919ac5c11..53ee36ecdf37 100644
--- a/include/linux/dma-noncoherent.h
+++ b/include/linux/dma-noncoherent.h
@@ -28,6 +28,8 @@ static inline bool dma_alloc_need_uncached(struct device *dev,
 {
 	if (dev_is_dma_coherent(dev))
 		return false;
+	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING)
+		return false;
 	if (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
 	    (attrs & DMA_ATTR_NON_CONSISTENT))
 		return false;
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index fc354f4f490b..b90e1aede743 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -138,6 +138,14 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	if (!page)
 		return NULL;

+	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
+		/* remove any dirty cache lines on the kernel alias */
+		if (!PageHighMem(page))
+			arch_dma_prep_coherent(page, size);
+		/* return the page pointer as the opaque cookie */
+		return page;
+	}
+
 	if (PageHighMem(page)) {
 		/*
 		 * Depending on the cma= arguments and per-arch setup
@@ -178,6 +186,12 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 {
 	unsigned int page_order = get_order(size);

+	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
+		/* cpu_addr is a struct page cookie, not a kernel address */
+		__dma_direct_free_pages(dev, size, cpu_addr);
+		return;
+	}
+
 	if (force_dma_unencrypted())
 		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);

diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index 0207e3764d52..a594aec07882 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -202,8 +202,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,

 	size = PAGE_ALIGN(size);

-	if (!gfpflags_allow_blocking(flags) &&
-	    !(attrs & DMA_ATTR_NO_KERNEL_MAPPING)) {
+	if (!gfpflags_allow_blocking(flags)) {
 		ret = dma_alloc_from_pool(size, &page, flags);
 		if (!ret)
 			return NULL;
@@ -217,11 +216,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	/* remove any dirty cache lines on the kernel alias */
 	arch_dma_prep_coherent(page, size);

-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
-		ret = page; /* opaque cookie */
-		goto done;
-	}
-
 	/* create a coherent mapping */
 	ret = dma_common_contiguous_remap(page, size, VM_USERMAP,
 			arch_dma_mmap_pgprot(dev, PAGE_KERNEL, attrs),
@@ -240,10 +234,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
-		/* vaddr is a struct page cookie, not a kernel address */
-		__dma_direct_free_pages(dev, size, vaddr);
-	} else if (!dma_free_from_pool(vaddr, PAGE_ALIGN(size))) {
+	if (!dma_free_from_pool(vaddr, PAGE_ALIGN(size))) {
 		phys_addr_t phys = dma_to_phys(dev, dma_handle);
 		struct page *page = pfn_to_page(__phys_to_pfn(phys));
--
2.20.1

^ permalink raw reply related	[flat|nested] 91+ messages in thread
* [OpenRISC] [PATCH 6/7] dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code @ 2019-06-14 14:44 ` Christoph Hellwig 0 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: openrisc DMA_ATTR_NO_KERNEL_MAPPING is generally implemented by allocating normal cacheable pages or CMA memory, and then returning the page pointer as the opaque handle. Lift that code from the xtensa and generic dma remapping implementations into the generic dma-direct code so that we don't even call arch_dma_alloc for these allocations. Signed-off-by: Christoph Hellwig <hch@lst.de> --- arch/xtensa/kernel/pci-dma.c | 8 +------- include/linux/dma-noncoherent.h | 2 ++ kernel/dma/direct.c | 14 ++++++++++++++ kernel/dma/remap.c | 13 ++----------- 4 files changed, 19 insertions(+), 18 deletions(-) diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c index 9171bff76fc4..206771277dff 100644 --- a/arch/xtensa/kernel/pci-dma.c +++ b/arch/xtensa/kernel/pci-dma.c @@ -167,10 +167,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, *handle = phys_to_dma(dev, page_to_phys(page)); - if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) { - return page; - } - #ifdef CONFIG_MMU if (PageHighMem(page)) { void *p; @@ -196,9 +192,7 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr, unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; struct page *page; - if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) { - page = vaddr; - } else if (platform_vaddr_uncached(vaddr)) { + if (platform_vaddr_uncached(vaddr)) { page = virt_to_page(platform_vaddr_to_cached(vaddr)); } else { #ifdef CONFIG_MMU diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h index 732919ac5c11..53ee36ecdf37 100644 --- a/include/linux/dma-noncoherent.h +++ b/include/linux/dma-noncoherent.h @@ -28,6 +28,8 @@ static inline bool dma_alloc_need_uncached(struct device *dev, { if (dev_is_dma_coherent(dev)) return false; + if 
(attrs & DMA_ATTR_NO_KERNEL_MAPPING) + return false; if (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) && (attrs & DMA_ATTR_NON_CONSISTENT)) return false; diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index fc354f4f490b..b90e1aede743 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -138,6 +138,14 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size, if (!page) return NULL; + if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) { + /* remove any dirty cache lines on the kernel alias */ + if (!PageHighMem(page)) + arch_dma_prep_coherent(page, size); + /* return the page pointer as the opaque cookie */ + return page; + } + if (PageHighMem(page)) { /* * Depending on the cma= arguments and per-arch setup @@ -178,6 +186,12 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr, { unsigned int page_order = get_order(size); + if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) { + /* cpu_addr is a struct page cookie, not a kernel address */ + __dma_direct_free_pages(dev, size, cpu_addr); + return; + } + if (force_dma_unencrypted()) set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order); diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c index 0207e3764d52..a594aec07882 100644 --- a/kernel/dma/remap.c +++ b/kernel/dma/remap.c @@ -202,8 +202,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, size = PAGE_ALIGN(size); - if (!gfpflags_allow_blocking(flags) && - !(attrs & DMA_ATTR_NO_KERNEL_MAPPING)) { + if (!gfpflags_allow_blocking(flags)) { ret = dma_alloc_from_pool(size, &page, flags); if (!ret) return NULL; @@ -217,11 +216,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, /* remove any dirty cache lines on the kernel alias */ arch_dma_prep_coherent(page, size); - if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) { - ret = page; /* opaque cookie */ - goto done; - } - /* create a coherent mapping */ ret = dma_common_contiguous_remap(page, size, VM_USERMAP, arch_dma_mmap_pgprot(dev, 
PAGE_KERNEL, attrs), @@ -240,10 +234,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, void arch_dma_free(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle, unsigned long attrs) { - if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) { - /* vaddr is a struct page cookie, not a kernel address */ - __dma_direct_free_pages(dev, size, vaddr); - } else if (!dma_free_from_pool(vaddr, PAGE_ALIGN(size))) { + if (!dma_free_from_pool(vaddr, PAGE_ALIGN(size))) { phys_addr_t phys = dma_to_phys(dev, dma_handle); struct page *page = pfn_to_page(__phys_to_pfn(phys)); -- 2.20.1 ^ permalink raw reply related [flat|nested] 91+ messages in thread
* [PATCH 6/7] dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code @ 2019-06-14 14:44 ` Christoph Hellwig 0 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: Vineet Gupta Cc: Jonas Bonn, linux-xtensa, Vladimir Murzin, linux-parisc, Helge Deller, linux-kernel, Stefan Kristiansson, iommu, openrisc, Stafford Horne, linux-snps-arc, linux-arm-kernel DMA_ATTR_NO_KERNEL_MAPPING is generally implemented by allocating normal cacheable pages or CMA memory, and then returning the page pointer as the opaque handle. Lift that code from the xtensa and generic dma remapping implementations into the generic dma-direct code so that we don't even call arch_dma_alloc for these allocations. Signed-off-by: Christoph Hellwig <hch@lst.de> --- arch/xtensa/kernel/pci-dma.c | 8 +------- include/linux/dma-noncoherent.h | 2 ++ kernel/dma/direct.c | 14 ++++++++++++++ kernel/dma/remap.c | 13 ++----------- 4 files changed, 19 insertions(+), 18 deletions(-) diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c index 9171bff76fc4..206771277dff 100644 --- a/arch/xtensa/kernel/pci-dma.c +++ b/arch/xtensa/kernel/pci-dma.c @@ -167,10 +167,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, *handle = phys_to_dma(dev, page_to_phys(page)); - if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) { - return page; - } - #ifdef CONFIG_MMU if (PageHighMem(page)) { void *p; @@ -196,9 +192,7 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr, unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; struct page *page; - if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) { - page = vaddr; - } else if (platform_vaddr_uncached(vaddr)) { + if (platform_vaddr_uncached(vaddr)) { page = virt_to_page(platform_vaddr_to_cached(vaddr)); } else { #ifdef CONFIG_MMU diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h index 732919ac5c11..53ee36ecdf37 100644 --- a/include/linux/dma-noncoherent.h 
+++ b/include/linux/dma-noncoherent.h
@@ -28,6 +28,8 @@ static inline bool dma_alloc_need_uncached(struct device *dev,
 {
 	if (dev_is_dma_coherent(dev))
 		return false;
+	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING)
+		return false;
 	if (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
 	    (attrs & DMA_ATTR_NON_CONSISTENT))
 		return false;
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index fc354f4f490b..b90e1aede743 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -138,6 +138,14 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	if (!page)
 		return NULL;

+	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
+		/* remove any dirty cache lines on the kernel alias */
+		if (!PageHighMem(page))
+			arch_dma_prep_coherent(page, size);
+		/* return the page pointer as the opaque cookie */
+		return page;
+	}
+
 	if (PageHighMem(page)) {
 		/*
 		 * Depending on the cma= arguments and per-arch setup
@@ -178,6 +186,12 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 {
 	unsigned int page_order = get_order(size);

+	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
+		/* cpu_addr is a struct page cookie, not a kernel address */
+		__dma_direct_free_pages(dev, size, cpu_addr);
+		return;
+	}
+
 	if (force_dma_unencrypted())
 		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index 0207e3764d52..a594aec07882 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -202,8 +202,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,

 	size = PAGE_ALIGN(size);

-	if (!gfpflags_allow_blocking(flags) &&
-	    !(attrs & DMA_ATTR_NO_KERNEL_MAPPING)) {
+	if (!gfpflags_allow_blocking(flags)) {
 		ret = dma_alloc_from_pool(size, &page, flags);
 		if (!ret)
 			return NULL;
@@ -217,11 +216,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	/* remove any dirty cache lines on the kernel alias */
 	arch_dma_prep_coherent(page, size);

-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
-		ret = page; /* opaque cookie */
-		goto done;
-	}
-
 	/* create a coherent mapping */
 	ret = dma_common_contiguous_remap(page, size, VM_USERMAP,
 			arch_dma_mmap_pgprot(dev, PAGE_KERNEL, attrs),
@@ -240,10 +234,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
-		/* vaddr is a struct page cookie, not a kernel address */
-		__dma_direct_free_pages(dev, size, vaddr);
-	} else if (!dma_free_from_pool(vaddr, PAGE_ALIGN(size))) {
+	if (!dma_free_from_pool(vaddr, PAGE_ALIGN(size))) {
 		phys_addr_t phys = dma_to_phys(dev, dma_handle);
 		struct page *page = pfn_to_page(__phys_to_pfn(phys));
--
2.20.1

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 91+ messages in thread
* Re: [PATCH 6/7] dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code
  2019-06-14 14:44 ` Christoph Hellwig
@ 2019-06-29 15:09 ` Guenter Roeck
  -1 siblings, 0 replies; 91+ messages in thread
From: Guenter Roeck @ 2019-06-29 15:09 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Vineet Gupta, Jonas Bonn, Stefan Kristiansson, Stafford Horne,
    Helge Deller, Vladimir Murzin, linux-snps-arc, linux-arm-kernel,
    openrisc, linux-parisc, linux-xtensa, iommu, linux-kernel

On Fri, Jun 14, 2019 at 04:44:30PM +0200, Christoph Hellwig wrote:
> DMA_ATTR_NO_KERNEL_MAPPING is generally implemented by allocating
> normal cacheable pages or CMA memory, and then returning the page
> pointer as the opaque handle.  Lift that code from the xtensa and
> generic dma remapping implementations into the generic dma-direct
> code so that we don't even call arch_dma_alloc for these allocations.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

This patch results in build failures for mips:nlm_xlp_defconfig and
mips:cavium_octeon_defconfig.

kernel/dma/direct.c:144: undefined reference to `arch_dma_prep_coherent'

Reverting the patch fixes the problem.

Guenter

^ permalink raw reply	[flat|nested] 91+ messages in thread
* [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations
  2019-06-14 14:44 ` Christoph Hellwig
@ 2019-06-14 14:44 ` Christoph Hellwig
  -1 siblings, 0 replies; 91+ messages in thread
From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw)
To: Vineet Gupta
Cc: Jonas Bonn, Stefan Kristiansson, Stafford Horne, Helge Deller,
    Vladimir Murzin, linux-snps-arc, linux-arm-kernel, openrisc,
    linux-parisc, linux-xtensa, iommu, linux-kernel

Replace the code that sets up uncached PTEs with the generic vmap based
remapping code.  It also provides an atomic pool for allocations from
non-blocking context, which was not properly supported by the existing
arc code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/arc/Kconfig  |  2 ++
 arch/arc/mm/dma.c | 62 ++++++++---------------------------------------
 2 files changed, 12 insertions(+), 52 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 23e063df5d2c..cdad7d30ff1d 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -10,6 +10,7 @@ config ARC
 	def_bool y
 	select ARC_TIMERS
 	select ARCH_HAS_DMA_COHERENT_TO_PFN
+	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
@@ -19,6 +20,7 @@ config ARC
 	select BUILDTIME_EXTABLE_SORT
 	select CLONE_BACKWARDS
 	select COMMON_CLK
+	select DMA_DIRECT_REMAP
 	select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC)
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_FIND_FIRST_BIT
diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index 9832928f896d..0fa850709fac 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -11,46 +11,15 @@
 #include <asm/cacheflush.h>

 /*
- * ARCH specific callbacks for generic noncoherent DMA ops (dma/noncoherent.c)
+ * ARCH specific callbacks for generic noncoherent DMA ops
  *  - hardware IOC not available (or "dma-coherent" not set for device in DT)
  *  - But still handle both coherent and non-coherent requests from caller
  *
  * For DMA coherent hardware (IOC) generic code suffices
  */
-void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
-		gfp_t gfp, unsigned long attrs)
-{
-	unsigned long order = get_order(size);
-	struct page *page;
-	phys_addr_t paddr;
-	void *kvaddr;
-
-	/*
-	 * __GFP_HIGHMEM flag is cleared by upper layer functions
-	 * (in include/linux/dma-mapping.h) so we should never get a
-	 * __GFP_HIGHMEM here.
-	 */
-	BUG_ON(gfp & __GFP_HIGHMEM);
-
-	page = alloc_pages(gfp | __GFP_ZERO, order);
-	if (!page)
-		return NULL;
-
-	/* This is linear addr (0x8000_0000 based) */
-	paddr = page_to_phys(page);
-
-	*dma_handle = paddr;
-
-	/*
-	 * A coherent buffer needs MMU mapping to enforce non-cachability.
-	 * kvaddr is kernel Virtual address (0x7000_0000 based).
-	 */
-	kvaddr = ioremap_nocache(paddr, size);
-	if (kvaddr == NULL) {
-		__free_pages(page, order);
-		return NULL;
-	}
+void arch_dma_prep_coherent(struct page *page, size_t size)
+{
 	/*
 	 * Evict any existing L1 and/or L2 lines for the backing page
 	 * in case it was used earlier as a normal "cached" page.
@@ -61,24 +30,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	 * Currently flush_cache_vmap nukes the L1 cache completely which
 	 * will be optimized as a separate commit
 	 */
-	dma_cache_wback_inv(paddr, size);
-	return kvaddr;
-}
-
-void arch_dma_free(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_handle, unsigned long attrs)
-{
-	phys_addr_t paddr = dma_handle;
-	struct page *page = virt_to_page(paddr);
-
-	iounmap((void __force __iomem *)vaddr);
-	__free_pages(page, get_order(size));
-}
-
-long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
-		dma_addr_t dma_addr)
-{
-	return __phys_to_pfn(dma_addr);
+	dma_cache_wback_inv(page_to_phys(page), size);
 }

 /*
@@ -155,3 +107,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 	dev_info(dev, "use %sncoherent DMA ops\n",
 		 dev->dma_coherent ? "" : "non");
 }
+
+static int __init atomic_pool_init(void)
+{
+	return dma_atomic_pool_init(GFP_KERNEL, pgprot_noncached(PAGE_KERNEL));
+}
+postcore_initcall(atomic_pool_init);
--
2.20.1

^ permalink raw reply related	[flat|nested] 91+ messages in thread
* [OpenRISC] [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations @ 2019-06-14 14:44 ` Christoph Hellwig 0 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: openrisc Replace the code that sets up uncached PTEs with the generic vmap based remapping code. It also provides an atomic pool for allocations from non-blocking context, which we not properly supported by the existing arc code. Signed-off-by: Christoph Hellwig <hch@lst.de> --- arch/arc/Kconfig | 2 ++ arch/arc/mm/dma.c | 62 ++++++++--------------------------------------- 2 files changed, 12 insertions(+), 52 deletions(-) diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index 23e063df5d2c..cdad7d30ff1d 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -10,6 +10,7 @@ config ARC def_bool y select ARC_TIMERS select ARCH_HAS_DMA_COHERENT_TO_PFN + select ARCH_HAS_DMA_PREP_COHERENT select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_SETUP_DMA_OPS select ARCH_HAS_SYNC_DMA_FOR_CPU @@ -19,6 +20,7 @@ config ARC select BUILDTIME_EXTABLE_SORT select CLONE_BACKWARDS select COMMON_CLK + select DMA_DIRECT_REMAP select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC) select GENERIC_CLOCKEVENTS select GENERIC_FIND_FIRST_BIT diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c index 9832928f896d..0fa850709fac 100644 --- a/arch/arc/mm/dma.c +++ b/arch/arc/mm/dma.c @@ -11,46 +11,15 @@ #include <asm/cacheflush.h> /* - * ARCH specific callbacks for generic noncoherent DMA ops (dma/noncoherent.c) + * ARCH specific callbacks for generic noncoherent DMA ops * - hardware IOC not available (or "dma-coherent" not set for device in DT) * - But still handle both coherent and non-coherent requests from caller * * For DMA coherent hardware (IOC) generic code suffices */ -void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, - gfp_t gfp, unsigned long attrs) -{ - unsigned long order = get_order(size); - struct page 
*page; - phys_addr_t paddr; - void *kvaddr; - - /* - * __GFP_HIGHMEM flag is cleared by upper layer functions - * (in include/linux/dma-mapping.h) so we should never get a - * __GFP_HIGHMEM here. - */ - BUG_ON(gfp & __GFP_HIGHMEM); - - page = alloc_pages(gfp | __GFP_ZERO, order); - if (!page) - return NULL; - - /* This is linear addr (0x8000_0000 based) */ - paddr = page_to_phys(page); - - *dma_handle = paddr; - - /* - * A coherent buffer needs MMU mapping to enforce non-cachability. - * kvaddr is kernel Virtual address (0x7000_0000 based). - */ - kvaddr = ioremap_nocache(paddr, size); - if (kvaddr == NULL) { - __free_pages(page, order); - return NULL; - } +void arch_dma_prep_coherent(struct page *page, size_t size) +{ /* * Evict any existing L1 and/or L2 lines for the backing page * in case it was used earlier as a normal "cached" page. @@ -61,24 +30,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, * Currently flush_cache_vmap nukes the L1 cache completely which * will be optimized as a separate commit */ - dma_cache_wback_inv(paddr, size); - return kvaddr; -} - -void arch_dma_free(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) -{ - phys_addr_t paddr = dma_handle; - struct page *page = virt_to_page(paddr); - - iounmap((void __force __iomem *)vaddr); - __free_pages(page, get_order(size)); -} - -long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr, - dma_addr_t dma_addr) -{ - return __phys_to_pfn(dma_addr); + dma_cache_wback_inv(page_to_phys(page), size); } /* @@ -155,3 +107,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, dev_info(dev, "use %sncoherent DMA ops\n", dev->dma_coherent ? "" : "non"); } + +static int __init atomic_pool_init(void) +{ + return dma_atomic_pool_init(GFP_KERNEL, pgprot_noncached(PAGE_KERNEL)); +} +postcore_initcall(atomic_pool_init); -- 2.20.1 ^ permalink raw reply related [flat|nested] 91+ messages in thread
* [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations @ 2019-06-14 14:44 ` Christoph Hellwig 0 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: Vineet Gupta Cc: Jonas Bonn, linux-xtensa, Vladimir Murzin, linux-parisc, Helge Deller, linux-kernel, Stefan Kristiansson, iommu, openrisc, Stafford Horne, linux-snps-arc, linux-arm-kernel Replace the code that sets up uncached PTEs with the generic vmap based remapping code. It also provides an atomic pool for allocations from non-blocking context, which we not properly supported by the existing arc code. Signed-off-by: Christoph Hellwig <hch@lst.de> --- arch/arc/Kconfig | 2 ++ arch/arc/mm/dma.c | 62 ++++++++--------------------------------------- 2 files changed, 12 insertions(+), 52 deletions(-) diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index 23e063df5d2c..cdad7d30ff1d 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -10,6 +10,7 @@ config ARC def_bool y select ARC_TIMERS select ARCH_HAS_DMA_COHERENT_TO_PFN + select ARCH_HAS_DMA_PREP_COHERENT select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_SETUP_DMA_OPS select ARCH_HAS_SYNC_DMA_FOR_CPU @@ -19,6 +20,7 @@ config ARC select BUILDTIME_EXTABLE_SORT select CLONE_BACKWARDS select COMMON_CLK + select DMA_DIRECT_REMAP select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC) select GENERIC_CLOCKEVENTS select GENERIC_FIND_FIRST_BIT diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c index 9832928f896d..0fa850709fac 100644 --- a/arch/arc/mm/dma.c +++ b/arch/arc/mm/dma.c @@ -11,46 +11,15 @@ #include <asm/cacheflush.h> /* - * ARCH specific callbacks for generic noncoherent DMA ops (dma/noncoherent.c) + * ARCH specific callbacks for generic noncoherent DMA ops * - hardware IOC not available (or "dma-coherent" not set for device in DT) * - But still handle both coherent and non-coherent requests from caller * * For DMA coherent hardware (IOC) generic code suffices */ 
-void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, - gfp_t gfp, unsigned long attrs) -{ - unsigned long order = get_order(size); - struct page *page; - phys_addr_t paddr; - void *kvaddr; - - /* - * __GFP_HIGHMEM flag is cleared by upper layer functions - * (in include/linux/dma-mapping.h) so we should never get a - * __GFP_HIGHMEM here. - */ - BUG_ON(gfp & __GFP_HIGHMEM); - - page = alloc_pages(gfp | __GFP_ZERO, order); - if (!page) - return NULL; - - /* This is linear addr (0x8000_0000 based) */ - paddr = page_to_phys(page); - - *dma_handle = paddr; - - /* - * A coherent buffer needs MMU mapping to enforce non-cachability. - * kvaddr is kernel Virtual address (0x7000_0000 based). - */ - kvaddr = ioremap_nocache(paddr, size); - if (kvaddr == NULL) { - __free_pages(page, order); - return NULL; - } +void arch_dma_prep_coherent(struct page *page, size_t size) +{ /* * Evict any existing L1 and/or L2 lines for the backing page * in case it was used earlier as a normal "cached" page. @@ -61,24 +30,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, * Currently flush_cache_vmap nukes the L1 cache completely which * will be optimized as a separate commit */ - dma_cache_wback_inv(paddr, size); - return kvaddr; -} - -void arch_dma_free(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) -{ - phys_addr_t paddr = dma_handle; - struct page *page = virt_to_page(paddr); - - iounmap((void __force __iomem *)vaddr); - __free_pages(page, get_order(size)); -} - -long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr, - dma_addr_t dma_addr) -{ - return __phys_to_pfn(dma_addr); + dma_cache_wback_inv(page_to_phys(page), size); } /* @@ -155,3 +107,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, dev_info(dev, "use %sncoherent DMA ops\n", dev->dma_coherent ? 
"" : "non"); } + +static int __init atomic_pool_init(void) +{ + return dma_atomic_pool_init(GFP_KERNEL, pgprot_noncached(PAGE_KERNEL)); +} +postcore_initcall(atomic_pool_init); -- 2.20.1 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply related [flat|nested] 91+ messages in thread
* [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations @ 2019-06-14 14:44 ` Christoph Hellwig 0 siblings, 0 replies; 91+ messages in thread From: Christoph Hellwig @ 2019-06-14 14:44 UTC (permalink / raw) To: Vineet Gupta Cc: Jonas Bonn, linux-xtensa, Vladimir Murzin, linux-parisc, Helge Deller, linux-kernel, Stefan Kristiansson, iommu, openrisc, Stafford Horne, linux-snps-arc, linux-arm-kernel Replace the code that sets up uncached PTEs with the generic vmap based remapping code. It also provides an atomic pool for allocations from non-blocking context, which we not properly supported by the existing arc code. Signed-off-by: Christoph Hellwig <hch@lst.de> --- arch/arc/Kconfig | 2 ++ arch/arc/mm/dma.c | 62 ++++++++--------------------------------------- 2 files changed, 12 insertions(+), 52 deletions(-) diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index 23e063df5d2c..cdad7d30ff1d 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -10,6 +10,7 @@ config ARC def_bool y select ARC_TIMERS select ARCH_HAS_DMA_COHERENT_TO_PFN + select ARCH_HAS_DMA_PREP_COHERENT select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_SETUP_DMA_OPS select ARCH_HAS_SYNC_DMA_FOR_CPU @@ -19,6 +20,7 @@ config ARC select BUILDTIME_EXTABLE_SORT select CLONE_BACKWARDS select COMMON_CLK + select DMA_DIRECT_REMAP select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC) select GENERIC_CLOCKEVENTS select GENERIC_FIND_FIRST_BIT diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c index 9832928f896d..0fa850709fac 100644 --- a/arch/arc/mm/dma.c +++ b/arch/arc/mm/dma.c @@ -11,46 +11,15 @@ #include <asm/cacheflush.h> /* - * ARCH specific callbacks for generic noncoherent DMA ops (dma/noncoherent.c) + * ARCH specific callbacks for generic noncoherent DMA ops * - hardware IOC not available (or "dma-coherent" not set for device in DT) * - But still handle both coherent and non-coherent requests from caller * * For DMA coherent hardware (IOC) generic code suffices */ 
-void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, - gfp_t gfp, unsigned long attrs) -{ - unsigned long order = get_order(size); - struct page *page; - phys_addr_t paddr; - void *kvaddr; - - /* - * __GFP_HIGHMEM flag is cleared by upper layer functions - * (in include/linux/dma-mapping.h) so we should never get a - * __GFP_HIGHMEM here. - */ - BUG_ON(gfp & __GFP_HIGHMEM); - - page = alloc_pages(gfp | __GFP_ZERO, order); - if (!page) - return NULL; - - /* This is linear addr (0x8000_0000 based) */ - paddr = page_to_phys(page); - - *dma_handle = paddr; - - /* - * A coherent buffer needs MMU mapping to enforce non-cachability. - * kvaddr is kernel Virtual address (0x7000_0000 based). - */ - kvaddr = ioremap_nocache(paddr, size); - if (kvaddr == NULL) { - __free_pages(page, order); - return NULL; - } +void arch_dma_prep_coherent(struct page *page, size_t size) +{ /* * Evict any existing L1 and/or L2 lines for the backing page * in case it was used earlier as a normal "cached" page. @@ -61,24 +30,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, * Currently flush_cache_vmap nukes the L1 cache completely which * will be optimized as a separate commit */ - dma_cache_wback_inv(paddr, size); - return kvaddr; -} - -void arch_dma_free(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) -{ - phys_addr_t paddr = dma_handle; - struct page *page = virt_to_page(paddr); - - iounmap((void __force __iomem *)vaddr); - __free_pages(page, get_order(size)); -} - -long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr, - dma_addr_t dma_addr) -{ - return __phys_to_pfn(dma_addr); + dma_cache_wback_inv(page_to_phys(page), size); } /* @@ -155,3 +107,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, dev_info(dev, "use %sncoherent DMA ops\n", dev->dma_coherent ? 
"" : "non"); } + +static int __init atomic_pool_init(void) +{ + return dma_atomic_pool_init(GFP_KERNEL, pgprot_noncached(PAGE_KERNEL)); +} +postcore_initcall(atomic_pool_init); -- 2.20.1 _______________________________________________ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu ^ permalink raw reply related [flat|nested] 91+ messages in thread
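[Editor's note: for readers unfamiliar with the allocator this patch removes, the old arch_dma_alloc rounded every request up to a power-of-two number of pages via get_order() before calling alloc_pages(). The sketch below is a userspace re-implementation of that rounding for illustration only; a PAGE_SHIFT of 12 (4 KiB pages) is an assumption here, and the real kernel helper lives in include/asm-generic/getorder.h.]

```c
/* Userspace sketch (NOT kernel code): how a buddy-allocator-backed DMA
 * allocator rounds a requested size up to a power-of-two page count,
 * as the removed arch_dma_alloc did via get_order().  Assumes size > 0. */
#include <stddef.h>

#define PAGE_SHIFT 12                    /* assumption: 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Minimal re-implementation of the kernel's get_order() for illustration. */
static unsigned int get_order(size_t size)
{
    unsigned int order = 0;

    size = (size - 1) >> PAGE_SHIFT;     /* whole pages needed, minus one */
    while (size) {                       /* round page count up to 2^order */
        order++;
        size >>= 1;
    }
    return order;
}
```

So a 3-page request is served from an order-2 (4-page) block, one source of the slack the generic remapping code also inherits.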
* Re: [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations 2019-06-14 14:44 ` Christoph Hellwig ` (2 preceding siblings ...) (?) @ 2019-06-14 18:05 ` Eugeniy Paltsev -1 siblings, 0 replies; 91+ messages in thread From: Eugeniy Paltsev @ 2019-06-14 18:05 UTC (permalink / raw) To: hch, Vineet Gupta Cc: shorne, linux-snps-arc, iommu, vladimir.murzin, linux-parisc, stefan.kristiansson, jonas, linux-xtensa, deller, linux-arm-kernel, linux-kernel, openrisc Hi Christoph, Regular question - do you have any public git repository with all this dma changes? I want to test it for ARC. Pretty sure the [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support is fine. Not so sure about [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations :) On Fri, 2019-06-14 at 16:44 +0200, Christoph Hellwig wrote: > Replace the code that sets up uncached PTEs with the generic vmap based > remapping code. It also provides an atomic pool for allocations from > non-blocking context, which we not properly supported by the existing > arc code. 
> > Signed-off-by: Christoph Hellwig <hch@lst.de> > --- > arch/arc/Kconfig | 2 ++ > arch/arc/mm/dma.c | 62 ++++++++--------------------------------------- > 2 files changed, 12 insertions(+), 52 deletions(-) > > diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig > index 23e063df5d2c..cdad7d30ff1d 100644 > --- a/arch/arc/Kconfig > +++ b/arch/arc/Kconfig > @@ -10,6 +10,7 @@ config ARC > def_bool y > select ARC_TIMERS > select ARCH_HAS_DMA_COHERENT_TO_PFN > + select ARCH_HAS_DMA_PREP_COHERENT > select ARCH_HAS_PTE_SPECIAL > select ARCH_HAS_SETUP_DMA_OPS > select ARCH_HAS_SYNC_DMA_FOR_CPU > @@ -19,6 +20,7 @@ config ARC > select BUILDTIME_EXTABLE_SORT > select CLONE_BACKWARDS > select COMMON_CLK > + select DMA_DIRECT_REMAP > select GENERIC_ATOMIC64 if !ISA_ARCV2 || !(ARC_HAS_LL64 && ARC_HAS_LLSC) > select GENERIC_CLOCKEVENTS > select GENERIC_FIND_FIRST_BIT > diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c > index 9832928f896d..0fa850709fac 100644 > --- a/arch/arc/mm/dma.c > +++ b/arch/arc/mm/dma.c > @@ -11,46 +11,15 @@ > #include <asm/cacheflush.h> > > /* > - * ARCH specific callbacks for generic noncoherent DMA ops (dma/noncoherent.c) > + * ARCH specific callbacks for generic noncoherent DMA ops > * - hardware IOC not available (or "dma-coherent" not set for device in DT) > * - But still handle both coherent and non-coherent requests from caller > * > * For DMA coherent hardware (IOC) generic code suffices > */ > -void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, > - gfp_t gfp, unsigned long attrs) > -{ > - unsigned long order = get_order(size); > - struct page *page; > - phys_addr_t paddr; > - void *kvaddr; > - > - /* > - * __GFP_HIGHMEM flag is cleared by upper layer functions > - * (in include/linux/dma-mapping.h) so we should never get a > - * __GFP_HIGHMEM here. 
> - */ > - BUG_ON(gfp & __GFP_HIGHMEM); > - > - page = alloc_pages(gfp | __GFP_ZERO, order); > - if (!page) > - return NULL; > - > - /* This is linear addr (0x8000_0000 based) */ > - paddr = page_to_phys(page); > - > - *dma_handle = paddr; > - > - /* > - * A coherent buffer needs MMU mapping to enforce non-cachability. > - * kvaddr is kernel Virtual address (0x7000_0000 based). > - */ > - kvaddr = ioremap_nocache(paddr, size); > - if (kvaddr == NULL) { > - __free_pages(page, order); > - return NULL; > - } > > +void arch_dma_prep_coherent(struct page *page, size_t size) > +{ > /* > * Evict any existing L1 and/or L2 lines for the backing page > * in case it was used earlier as a normal "cached" page. > @@ -61,24 +30,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, > * Currently flush_cache_vmap nukes the L1 cache completely which > * will be optimized as a separate commit > */ > - dma_cache_wback_inv(paddr, size); > - return kvaddr; > -} > - > -void arch_dma_free(struct device *dev, size_t size, void *vaddr, > - dma_addr_t dma_handle, unsigned long attrs) > -{ > - phys_addr_t paddr = dma_handle; > - struct page *page = virt_to_page(paddr); > - > - iounmap((void __force __iomem *)vaddr); > - __free_pages(page, get_order(size)); > -} > - > -long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr, > - dma_addr_t dma_addr) > -{ > - return __phys_to_pfn(dma_addr); > + dma_cache_wback_inv(page_to_phys(page), size); > } > > /* > @@ -155,3 +107,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, > dev_info(dev, "use %sncoherent DMA ops\n", > dev->dma_coherent ? "" : "non"); > } > + > +static int __init atomic_pool_init(void) > +{ > + return dma_atomic_pool_init(GFP_KERNEL, pgprot_noncached(PAGE_KERNEL)); > +} > +postcore_initcall(atomic_pool_init); -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations 2019-06-14 18:05 ` Eugeniy Paltsev ` (2 preceding siblings ...) (?) @ 2019-06-15 8:35 ` hch -1 siblings, 0 replies; 91+ messages in thread From: hch @ 2019-06-15 8:35 UTC (permalink / raw) To: Eugeniy Paltsev Cc: hch, Vineet Gupta, shorne, linux-snps-arc, iommu, vladimir.murzin, linux-parisc, stefan.kristiansson, jonas, linux-xtensa, deller, linux-arm-kernel, linux-kernel, openrisc On Fri, Jun 14, 2019 at 06:05:01PM +0000, Eugeniy Paltsev wrote: > Hi Christoph, > > Regular question - do you have any public git repository with all this dma changes? > I want to test it for ARC. > > Pretty sure the > [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support > is fine. > > Not so sure about > [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations > :) git://git.infradead.org/users/hch/misc.git dma-not-consistent-cleanup Gitweb: http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-not-consistent-cleanup ^ permalink raw reply [flat|nested] 91+ messages in thread
* [OpenRISC] [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations @ 2019-06-15 8:35 ` hch 0 siblings, 0 replies; 91+ messages in thread From: hch @ 2019-06-15 8:35 UTC (permalink / raw) To: openrisc On Fri, Jun 14, 2019 at 06:05:01PM +0000, Eugeniy Paltsev wrote: > Hi Christoph, > > Regular question - do you have any public git repository with all this dma changes? > I want to test it for ARC. > > Pretty sure the > [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support > is fine. > > Not so sure about > [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations > :) git://git.infradead.org/users/hch/misc.git dma-not-consistent-cleanup Gitweb: http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-not-consistent-cleanup ^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations @ 2019-06-15 8:35 ` hch 0 siblings, 0 replies; 91+ messages in thread From: hch @ 2019-06-15 8:35 UTC (permalink / raw) To: Eugeniy Paltsev Cc: jonas, linux-xtensa, vladimir.murzin, linux-parisc, Vineet Gupta, linux-kernel, stefan.kristiansson, iommu, openrisc, deller, shorne, linux-snps-arc, hch, linux-arm-kernel On Fri, Jun 14, 2019 at 06:05:01PM +0000, Eugeniy Paltsev wrote: > Hi Christoph, > > Regular question - do you have any public git repository with all this dma changes? > I want to test it for ARC. > > Pretty sure the > [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support > is fine. > > Not so sure about > [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations > :) git://git.infradead.org/users/hch/misc.git dma-not-consistent-cleanup Gitweb: http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-not-consistent-cleanup _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations @ 2019-06-15 8:35 ` hch 0 siblings, 0 replies; 91+ messages in thread From: hch @ 2019-06-15 8:35 UTC (permalink / raw) To: Eugeniy Paltsev Cc: jonas, linux-xtensa, vladimir.murzin, linux-parisc, Vineet Gupta, linux-kernel, stefan.kristiansson, iommu, openrisc, deller, shorne, linux-snps-arc, hch, linux-arm-kernel On Fri, Jun 14, 2019 at 06:05:01PM +0000, Eugeniy Paltsev wrote: > Hi Christoph, > > Regular question - do you have any public git repository with all this dma changes? > I want to test it for ARC. > > Pretty sure the > [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support > is fine. > > Not so sure about > [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations > :) git://git.infradead.org/users/hch/misc.git dma-not-consistent-cleanup Gitweb: http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-not-consistent-cleanup _______________________________________________ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu ^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations [not found] ` <20190624131417.GA10593@lst.de> @ 2019-06-24 19:13 ` Eugeniy Paltsev 0 siblings, 0 replies; 91+ messages in thread From: Eugeniy Paltsev @ 2019-06-24 19:13 UTC (permalink / raw) To: hch, linux-snps-arc, linux-kernel; +Cc: Alexey Brodkin, Vineet.Gupta1 Hi Christoph, Yep I've reviewed and tested it for both cases: - coherent/noncoherent dma - allocation from atomic_pool/regular allocation everything works fine for ARC. So, Reviewed-by: Evgeniy Paltsev <paltsev@synopsys.com> Tested-by: Evgeniy Paltsev <paltsev@synopsys.com> for both [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations On Mon, 2019-06-24 at 15:14 +0200, hch@lst.de wrote: > On Sat, Jun 15, 2019 at 10:35:54AM +0200, hch@lst.de wrote: > > On Fri, Jun 14, 2019 at 06:05:01PM +0000, Eugeniy Paltsev wrote: > > > Hi Christoph, > > > > > > Regular question - do you have any public git repository with all this dma changes? > > > I want to test it for ARC. > > > > > > Pretty sure the > > > [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support > > > is fine. > > > > > > Not so sure about > > > [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations > > > :) > > > > git://git.infradead.org/users/hch/misc.git dma-not-consistent-cleanup > > > > Gitweb: > > > > http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-not-consistent-cleanup > > > > Did you get a chance to look into these patches? -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 91+ messages in thread
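The two test axes Eugeniy reports above — coherent vs. noncoherent device, and atomic-pool vs. regular allocation — correspond to the decision the generic remapping allocator makes at allocation time, using the dma_alloc_need_uncached helper added earlier in this series. A minimal userspace sketch of the shape of that decision follows; the flag values, the simplified function signatures, and the pick_alloc_path helper are all illustrative stand-ins, not the kernel's actual code:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the kernel's DMA attribute flags
 * (values are illustrative, not the real <linux/dma-mapping.h> ones). */
#define DMA_ATTR_NON_CONSISTENT    (1UL << 3)
#define DMA_ATTR_NO_KERNEL_MAPPING (1UL << 4)

enum alloc_path {
    PATH_DIRECT,      /* cache-coherent memory, no remapping needed */
    PATH_ATOMIC_POOL, /* non-blocking context: carve from pre-remapped pool */
    PATH_REMAP,       /* blocking context: allocate pages, remap uncached */
};

/* Sketch of the check: DMA_ATTR_NON_CONSISTENT means the caller maintains
 * cache coherence itself, so no uncached remapping is required even on a
 * noncoherent device. */
static bool dma_alloc_need_uncached(bool dev_coherent, unsigned long attrs)
{
    if (dev_coherent)
        return false;
    if (attrs & DMA_ATTR_NON_CONSISTENT)
        return false;
    return true;
}

/* Choose between the direct, atomic-pool, and remap paths. */
static enum alloc_path pick_alloc_path(bool dev_coherent, unsigned long attrs,
                                       bool blockable)
{
    if (!dma_alloc_need_uncached(dev_coherent, attrs))
        return PATH_DIRECT;
    return blockable ? PATH_REMAP : PATH_ATOMIC_POOL;
}
```

In the real kernel the coherence test is per-device and the non-blocking case is detected from the gfp flags; this sketch only mirrors the structure of the choice that both of Eugeniy's test cases exercise.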
* Re: [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations 2019-06-24 19:13 ` Eugeniy Paltsev @ 2019-06-25 6:34 ` hch 0 siblings, 0 replies; 91+ messages in thread From: hch @ 2019-06-25 6:34 UTC (permalink / raw) To: Eugeniy Paltsev Cc: hch, linux-snps-arc, linux-kernel, Alexey Brodkin, Vineet.Gupta1 On Mon, Jun 24, 2019 at 07:13:17PM +0000, Eugeniy Paltsev wrote: > Hi Christoph, > > Yep I've reviewed and tested it for both cases: > - coherent/noncoherent dma > - allocation from atomic_pool/regular allocation > > everything works fine for ARC. Thanks. I've applied the whole series to the dma-mapping for-next branch. ^ permalink raw reply [flat|nested] 91+ messages in thread
end of thread, other threads:[~2019-06-29 15:09 UTC | newest]

Thread overview: 91+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-14 14:44 handle "special" dma allocation in common code Christoph Hellwig
2019-06-14 14:44 ` [PATCH 1/7] arm-nommu: remove the partial DMA_ATTR_NON_CONSISTENT support Christoph Hellwig
2019-06-24 14:23   ` Vladimir Murzin
2019-06-25  6:13     ` Christoph Hellwig
2019-06-14 14:44 ` [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support Christoph Hellwig
2019-06-14 14:44 ` [PATCH 3/7] openrisc: remove the partial DMA_ATTR_NON_CONSISTENT support Christoph Hellwig
2019-06-16  9:17   ` Stafford Horne
2019-06-14 14:44 ` [PATCH 4/7] dma-mapping: add a dma_alloc_need_uncached helper Christoph Hellwig
2019-06-14 14:44 ` [PATCH 5/7] dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code Christoph Hellwig
2019-06-16 10:08   ` Hillf Danton
2019-06-25  6:11   ` Christoph Hellwig
2019-06-25 12:23     ` Helge Deller
2019-06-25 12:28       ` Christoph Hellwig
2019-06-14 14:44 ` [PATCH 6/7] dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code Christoph Hellwig
2019-06-29 15:09   ` Guenter Roeck
2019-06-14 14:44 ` [PATCH 7/7] arc: use the generic remapping allocator for coherent DMA allocations Christoph Hellwig
2019-06-14 18:05   ` Eugeniy Paltsev
2019-06-15  8:35     ` hch
     [not found]     ` <20190624131417.GA10593@lst.de>
2019-06-24 19:13       ` Eugeniy Paltsev
2019-06-25  6:34         ` hch