* [PATCH v2 1/1] s390/dma: provide proper ARCH_ZONE_DMA_BITS value
@ 2019-07-23 22:51 Halil Pasic
  2019-07-24  5:44 ` Heiko Carstens
  0 siblings, 1 reply; 2+ messages in thread
From: Halil Pasic @ 2019-07-23 22:51 UTC (permalink / raw)
  To: kvm, linux-s390, Christoph Hellwig, Heiko Carstens, Vasily Gorbik
  Cc: Halil Pasic, Petr Tesarik, Christian Borntraeger, Janosch Frank

On s390, ZONE_DMA extends up to 2 GB, i.e. ARCH_ZONE_DMA_BITS should be 31.
The current value is 24, which makes __dma_direct_alloc_pages() take a
wrong turn first (although __dma_direct_alloc_pages() recovers afterwards).

Let's correct the ARCH_ZONE_DMA_BITS value and avoid the wrong turn.
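
For reference, here is a simplified sketch of the zone selection that
takes the wrong turn, abridged from kernel/dma/direct.c of this era
(the helper name and details are reproduced from memory and may not
match upstream exactly):

static gfp_t __dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
		u64 *phys_mask)
{
	*phys_mask = dma_to_phys(dev, dma_mask);

	/*
	 * Optimistically pick the zone the physical address mask falls
	 * into; if the page we get is not actually addressable,
	 * __dma_direct_alloc_pages() frees it and retries with the
	 * next lower zone.
	 */
	if (*phys_mask <= DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
		return GFP_DMA;
	if (*phys_mask <= DMA_BIT_MASK(32))
		return GFP_DMA32;
	return 0;
}

With ARCH_ZONE_DMA_BITS == 24, a device with a 31-bit mask falls through
to GFP_DMA32, which has no backing zone on s390, so the first allocation
can return memory above 2 GB and only the retry lands in ZONE_DMA.  With
31, the first attempt already uses GFP_DMA.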

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reported-by: Petr Tesarik <ptesarik@suse.cz>
Fixes: c61e9637340e ("dma-direct: add support for allocation from ZONE_DMA and ZONE_DMA32")
---
 arch/s390/include/asm/page.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index a4d38092530a..27470dae31d2 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -177,6 +177,8 @@ static inline int devmem_is_allowed(unsigned long pfn)
 #define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
 				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
 
+#define ARCH_ZONE_DMA_BITS      31
+
 #include <asm-generic/memory_model.h>
 #include <asm-generic/getorder.h>
 
-- 
2.17.1


* Re: [PATCH v2 1/1] s390/dma: provide proper ARCH_ZONE_DMA_BITS value
  2019-07-23 22:51 [PATCH v2 1/1] s390/dma: provide proper ARCH_ZONE_DMA_BITS value Halil Pasic
@ 2019-07-24  5:44 ` Heiko Carstens
  0 siblings, 0 replies; 2+ messages in thread
From: Heiko Carstens @ 2019-07-24  5:44 UTC (permalink / raw)
  To: Halil Pasic
  Cc: kvm, linux-s390, Christoph Hellwig, Vasily Gorbik, Petr Tesarik,
	Christian Borntraeger, Janosch Frank

On Wed, Jul 24, 2019 at 12:51:55AM +0200, Halil Pasic wrote:
> On s390, ZONE_DMA extends up to 2 GB, i.e. ARCH_ZONE_DMA_BITS should be 31.
> The current value is 24, which makes __dma_direct_alloc_pages() take a
> wrong turn first (although __dma_direct_alloc_pages() recovers afterwards).
> 
> Let's correct the ARCH_ZONE_DMA_BITS value and avoid the wrong turn.
> 
> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> Reported-by: Petr Tesarik <ptesarik@suse.cz>
> Fixes: c61e9637340e ("dma-direct: add support for allocation from ZONE_DMA and ZONE_DMA32")
> ---
>  arch/s390/include/asm/page.h | 2 ++
>  1 file changed, 2 insertions(+)

Applied, thanks!

