linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
@ 2020-06-08 13:22 Geert Uytterhoeven
  2020-06-08 21:04 ` David Rientjes
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Geert Uytterhoeven @ 2020-06-08 13:22 UTC (permalink / raw)
  To: Christoph Hellwig, Marek Szyprowski, Robin Murphy, David Rientjes
  Cc: iommu, linux-kernel, Geert Uytterhoeven

On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
memory pools are much larger than intended (e.g. 2 MiB instead of 128
KiB on a 256 MiB system).

Fix this by correcting the calculation of the number of GiBs of RAM in
the system.  Invert the order of the min/max operations, to keep on
calculating in pages until the last step, which aids readability.

Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
v2:
  - Improve readability:
      - Divide by (SZ_1G / SZ_128K) instead of using a shift,
      - Invert min/max order to keep calculating in pages until the last
	step.
---
 kernel/dma/pool.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 35bb51c31fff370f..8cfa01243ed27b6f 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -175,10 +175,9 @@ static int __init dma_atomic_pool_init(void)
 	 * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
 	 */
 	if (!atomic_pool_size) {
-		atomic_pool_size = max(totalram_pages() >> PAGE_SHIFT, 1UL) *
-					SZ_128K;
-		atomic_pool_size = min_t(size_t, atomic_pool_size,
-					 1 << (PAGE_SHIFT + MAX_ORDER-1));
+		unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
+		pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
+		atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K);
 	}
 	INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
 
-- 
2.17.1
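To see why the pre-patch formula overshoots, the two calculations can be compared in a small userspace sketch. This assumes a 4 KiB page size and MAX_ORDER = 11; the macros mirror the kernel's names but are redefined here purely for illustration.

```c
#include <assert.h>

/*
 * Userspace sketch of the old and new atomic-pool sizing, assuming
 * PAGE_SHIFT = 12 (4 KiB pages) and MAX_ORDER = 11.  These macros
 * shadow the kernel ones for illustration only.
 */
#define PAGE_SHIFT		12
#define MAX_ORDER		11
#define MAX_ORDER_NR_PAGES	(1UL << (MAX_ORDER - 1))
#define SZ_128K			(128UL << 10)
#define SZ_1G			(1UL << 30)

/*
 * Pre-patch: shifts the page count right by PAGE_SHIFT, i.e. divides
 * by 4096 instead of by the 8192 pages that make up one GiB, so the
 * result is twice as large as intended.
 */
static unsigned long pool_size_old(unsigned long totalram_pages)
{
	unsigned long gigs = totalram_pages >> PAGE_SHIFT;
	unsigned long size = (gigs ? gigs : 1UL) * SZ_128K;
	unsigned long cap = 1UL << (PAGE_SHIFT + MAX_ORDER - 1);

	return size < cap ? size : cap;
}

/*
 * Post-patch: scale in pages (128 KiB per GiB of RAM), clamp to
 * MAX_ORDER_NR_PAGES, and convert to bytes only in the last step.
 */
static unsigned long pool_size_new(unsigned long totalram_pages)
{
	unsigned long pages = totalram_pages / (SZ_1G / SZ_128K);
	unsigned long bytes;

	if (pages > MAX_ORDER_NR_PAGES)
		pages = MAX_ORDER_NR_PAGES;
	bytes = pages << PAGE_SHIFT;
	return bytes > SZ_128K ? bytes : SZ_128K;
}
```

On a 256 MiB system (65536 pages), the old formula yields 2 MiB while the new one yields the intended 128 KiB minimum, matching the commit message.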


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-08 13:22 [PATCH v2] dma-pool: Fix too large DMA pools on medium systems Geert Uytterhoeven
@ 2020-06-08 21:04 ` David Rientjes
  2020-06-09 13:26 ` Christoph Hellwig
  2020-06-20 20:09 ` Guenter Roeck
  2 siblings, 0 replies; 13+ messages in thread
From: David Rientjes @ 2020-06-08 21:04 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Christoph Hellwig, Marek Szyprowski, Robin Murphy, iommu, linux-kernel

On Mon, 8 Jun 2020, Geert Uytterhoeven wrote:

> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> memory pools are much larger than intended (e.g. 2 MiB instead of 128
> KiB on a 256 MiB system).
> 
> Fix this by correcting the calculation of the number of GiBs of RAM in
> the system.  Invert the order of the min/max operations, to keep on
> calculating in pages until the last step, which aids readability.
> 
> Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
> Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>

This works as well and is much more readable.  Thanks Geert!

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-08 13:22 [PATCH v2] dma-pool: Fix too large DMA pools on medium systems Geert Uytterhoeven
  2020-06-08 21:04 ` David Rientjes
@ 2020-06-09 13:26 ` Christoph Hellwig
  2020-06-20 20:09 ` Guenter Roeck
  2 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2020-06-09 13:26 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Christoph Hellwig, Marek Szyprowski, Robin Murphy,
	David Rientjes, iommu, linux-kernel

Thanks,

applied to the dma-mapping tree for 5.8.


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-08 13:22 [PATCH v2] dma-pool: Fix too large DMA pools on medium systems Geert Uytterhoeven
  2020-06-08 21:04 ` David Rientjes
  2020-06-09 13:26 ` Christoph Hellwig
@ 2020-06-20 20:09 ` Guenter Roeck
  2020-06-21  8:35   ` Geert Uytterhoeven
  2020-06-24  7:38   ` Christoph Hellwig
  2 siblings, 2 replies; 13+ messages in thread
From: Guenter Roeck @ 2020-06-20 20:09 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Christoph Hellwig, Marek Szyprowski, Robin Murphy,
	David Rientjes, iommu, linux-kernel

On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> memory pools are much larger than intended (e.g. 2 MiB instead of 128
> KiB on a 256 MiB system).
> 
> Fix this by correcting the calculation of the number of GiBs of RAM in
> the system.  Invert the order of the min/max operations, to keep on
> calculating in pages until the last step, which aids readability.
> 
> Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
> Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Acked-by: David Rientjes <rientjes@google.com>

This patch results in a boot failure in some of my powerpc boot tests,
specifically those testing boots from mptsas1068 devices. Error message:

mptsas 0000:00:02.0: enabling device (0000 -> 0002)
mptbase: ioc0: Initiating bringup
ioc0: LSISAS1068 A0: Capabilities={Initiator}
mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
mptbase: ioc0: ERROR - didn't initialize properly! (-3)
mptsas: probe of 0000:00:02.0 failed with error -3

Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
Qemu command line is

qemu-system-ppc -kernel vmlinux -M bamboo \
     -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
     -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
     file=rootfs.ext2,format=raw,if=none,id=d0 \
     --append "panic=-1 slub_debug=FZPUA root=/dev/sda  mem=256M console=ttyS0" \
     -monitor none -nographic

canyonlands_defconfig with sam460ex machine and otherwise similar command line
fails as well.

Reverting this patch fixes the problem.

Guenter


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-20 20:09 ` Guenter Roeck
@ 2020-06-21  8:35   ` Geert Uytterhoeven
  2020-06-21 13:11     ` Guenter Roeck
  2020-06-24  7:38   ` Christoph Hellwig
  1 sibling, 1 reply; 13+ messages in thread
From: Geert Uytterhoeven @ 2020-06-21  8:35 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Christoph Hellwig, Marek Szyprowski, Robin Murphy,
	David Rientjes, Linux IOMMU, Linux Kernel Mailing List

Hi Günter,

On Sat, Jun 20, 2020 at 10:09 PM Guenter Roeck <linux@roeck-us.net> wrote:
> On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
> > On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> > memory pools are much larger than intended (e.g. 2 MiB instead of 128
> > KiB on a 256 MiB system).
> >
> > Fix this by correcting the calculation of the number of GiBs of RAM in
> > the system.  Invert the order of the min/max operations, to keep on
> > calculating in pages until the last step, which aids readability.
> >
> > Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
> > Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > Acked-by: David Rientjes <rientjes@google.com>
>
> This patch results in a boot failure in some of my powerpc boot tests,
> specifically those testing boots from mptsas1068 devices. Error message:
>
> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
> mptbase: ioc0: Initiating bringup
> ioc0: LSISAS1068 A0: Capabilities={Initiator}
> mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
> mptbase: ioc0: ERROR - didn't initialize properly! (-3)
> mptsas: probe of 0000:00:02.0 failed with error -3
>
> Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
> Qemu command line is
>
> qemu-system-ppc -kernel vmlinux -M bamboo \
>      -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
>      -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
>      file=rootfs.ext2,format=raw,if=none,id=d0 \
>      --append "panic=-1 slub_debug=FZPUA root=/dev/sda  mem=256M console=ttyS0" \
>      -monitor none -nographic
>
> canyonlands_defconfig with sam460ex machine and otherwise similar command line
> fails as well.
>
> Reverting this patch fixes the problem.

This looks like the minimum value of 128 KiB is not sufficient, and the
bug is in the intention of 1d659236fb43c4d2 ("dma-pool: scale the
default DMA coherent pool size with memory capacity")?
Before, there was a single pool of (fixed) 256 KiB size, now there are
up to three coherent pools (DMA, DMA32, and kernel), albeit of smaller
size (128 KiB each).

Can you print the requested size in drivers/message/fusion/mptbase.c:
PrimeIocFifos()?
Does replacing all SZ_128K by SZ_256K in my patch help?
That would waste^H^H^H^H^Hallocate 256 KiB or 512 KiB more, though.

Thanks!

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-21  8:35   ` Geert Uytterhoeven
@ 2020-06-21 13:11     ` Guenter Roeck
  2020-06-21 20:20       ` David Rientjes
  0 siblings, 1 reply; 13+ messages in thread
From: Guenter Roeck @ 2020-06-21 13:11 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Christoph Hellwig, Marek Szyprowski, Robin Murphy,
	David Rientjes, Linux IOMMU, Linux Kernel Mailing List

On 6/21/20 1:35 AM, Geert Uytterhoeven wrote:
> Hi Günter,
> 
> On Sat, Jun 20, 2020 at 10:09 PM Guenter Roeck <linux@roeck-us.net> wrote:
>> On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
>>> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
>>> memory pools are much larger than intended (e.g. 2 MiB instead of 128
>>> KiB on a 256 MiB system).
>>>
>>> Fix this by correcting the calculation of the number of GiBs of RAM in
>>> the system.  Invert the order of the min/max operations, to keep on
>>> calculating in pages until the last step, which aids readability.
>>>
>>> Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
>>> Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
>>> Acked-by: David Rientjes <rientjes@google.com>
>>
>> This patch results in a boot failure in some of my powerpc boot tests,
>> specifically those testing boots from mptsas1068 devices. Error message:
>>
>> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
>> mptbase: ioc0: Initiating bringup
>> ioc0: LSISAS1068 A0: Capabilities={Initiator}
>> mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
>> mptbase: ioc0: ERROR - didn't initialize properly! (-3)
>> mptsas: probe of 0000:00:02.0 failed with error -3
>>
>> Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
>> Qemu command line is
>>
>> qemu-system-ppc -kernel vmlinux -M bamboo \
>>      -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
>>      -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
>>      file=rootfs.ext2,format=raw,if=none,id=d0 \
>>      --append "panic=-1 slub_debug=FZPUA root=/dev/sda  mem=256M console=ttyS0" \
>>      -monitor none -nographic
>>
>> canyonlands_defconfig with sam460ex machine and otherwise similar command line
>> fails as well.
>>
>> Reverting this patch fixes the problem.
> 
> This looks like the minimum value of 128 KiB is not sufficient, and the
> bug is in the intention of 1d659236fb43c4d2 ("dma-pool: scale the
> default DMA coherent pool size with memory capacity")?
> Before, there was a single pool of (fixed) 256 KiB size, now there are
> up to three coherent pools (DMA, DMA32, and kernel), albeit of smaller
> size (128 KiB each).
> 
> Can you print the requested size in drivers/message/fusion/mptbase.c:
> PrimeIocFifos()?

172928 bytes

> Does replacing all SZ_128K by SZ_256K in my patch help?

Yes, it does.

Guenter
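The 172928-byte figure explains the failure directly: a single coherent allocation can never be larger than the pool it comes from, so a 128 KiB (131072-byte) pool can never serve it even when empty, while a 256 KiB (262144-byte) pool can. A trivial check, with the pool sizes written out as assumptions rather than taken from kernel code:

```c
/*
 * A single atomic allocation must fit inside one pool; having three
 * separate 128 KiB pools (DMA, DMA32, kernel) does not help, because
 * the request is served from exactly one of them.
 */
static int single_alloc_can_fit(unsigned long req, unsigned long pool_size)
{
	return req <= pool_size;
}
```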


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-21 13:11     ` Guenter Roeck
@ 2020-06-21 20:20       ` David Rientjes
  2020-06-22 16:07         ` Robin Murphy
  0 siblings, 1 reply; 13+ messages in thread
From: David Rientjes @ 2020-06-21 20:20 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Geert Uytterhoeven, Christoph Hellwig, Marek Szyprowski,
	Robin Murphy, Linux IOMMU, Linux Kernel Mailing List

On Sun, 21 Jun 2020, Guenter Roeck wrote:

> >> This patch results in a boot failure in some of my powerpc boot tests,
> >> specifically those testing boots from mptsas1068 devices. Error message:
> >>
> >> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
> >> mptbase: ioc0: Initiating bringup
> >> ioc0: LSISAS1068 A0: Capabilities={Initiator}
> >> mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
> >> mptbase: ioc0: ERROR - didn't initialize properly! (-3)
> >> mptsas: probe of 0000:00:02.0 failed with error -3
> >>
> >> Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
> >> Qemu command line is
> >>
> >> qemu-system-ppc -kernel vmlinux -M bamboo \
> >>      -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
> >>      -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
> >>      file=rootfs.ext2,format=raw,if=none,id=d0 \
> >>      --append "panic=-1 slub_debug=FZPUA root=/dev/sda  mem=256M console=ttyS0" \
> >>      -monitor none -nographic
> >>
> >> canyonlands_defconfig with sam460ex machine and otherwise similar command line
> >> fails as well.
> >>
> >> Reverting this patch fixes the problem.
> > 
> > This looks like the minimum value of 128 KiB is not sufficient, and the
> > bug is in the intention of 1d659236fb43c4d2 ("dma-pool: scale the
> > default DMA coherent pool size with memory capacity")?
> > Before, there was a single pool of (fixed) 256 KiB size, now there are
> > up to three coherent pools (DMA, DMA32, and kernel), albeit of smaller
> > size (128 KiB each).
> > 
> > Can you print the requested size in drivers/message/fusion/mptbase.c:
> > PrimeIocFifos()?
> 
> 172928 bytes
> 
> > Does replacing all SZ_128K by SZ_256K in my patch help?
> 
> Yes, it does.
> 

The new coherent pools should auto expand when they are close to being 
depleted but there's no guarantee that it can be done fast enough.  
Switching the min size to be the previous min size (256KB) seems like the 
best option and it matches what 
Documentation/admin-guide/kernel-parameters.txt still says.

I'll also send a patch to point in the right direction when this happens.


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-21 20:20       ` David Rientjes
@ 2020-06-22 16:07         ` Robin Murphy
  2020-06-22 17:31           ` Christoph Hellwig
  0 siblings, 1 reply; 13+ messages in thread
From: Robin Murphy @ 2020-06-22 16:07 UTC (permalink / raw)
  To: David Rientjes, Guenter Roeck
  Cc: Geert Uytterhoeven, Christoph Hellwig, Marek Szyprowski,
	Linux IOMMU, Linux Kernel Mailing List

On 2020-06-21 21:20, David Rientjes wrote:
> On Sun, 21 Jun 2020, Guenter Roeck wrote:
> 
>>>> This patch results in a boot failure in some of my powerpc boot tests,
>>>> specifically those testing boots from mptsas1068 devices. Error message:
>>>>
>>>> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
>>>> mptbase: ioc0: Initiating bringup
>>>> ioc0: LSISAS1068 A0: Capabilities={Initiator}
>>>> mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
>>>> mptbase: ioc0: ERROR - didn't initialize properly! (-3)
>>>> mptsas: probe of 0000:00:02.0 failed with error -3
>>>>
>>>> Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
>>>> Qemu command line is
>>>>
>>>> qemu-system-ppc -kernel vmlinux -M bamboo \
>>>>       -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
>>>>       -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
>>>>       file=rootfs.ext2,format=raw,if=none,id=d0 \
>>>>       --append "panic=-1 slub_debug=FZPUA root=/dev/sda  mem=256M console=ttyS0" \
>>>>       -monitor none -nographic
>>>>
>>>> canyonlands_defconfig with sam460ex machine and otherwise similar command line
>>>> fails as well.
>>>>
>>>> Reverting this patch fixes the problem.
>>>
>>> This looks like the minimum value of 128 KiB is not sufficient, and the
>>> bug is in the intention of 1d659236fb43c4d2 ("dma-pool: scale the
>>> default DMA coherent pool size with memory capacity")?
>>> Before, there was a single pool of (fixed) 256 KiB size, now there are
>>> up to three coherent pools (DMA, DMA32, and kernel), albeit of smaller
>>> size (128 KiB each).
>>>
>>> Can you print the requested size in drivers/message/fusion/mptbase.c:
>>> PrimeIocFifos()?
>>
>> 172928 bytes
>>
>>> Does replacing all SZ_128K by SZ_256K in my patch help?
>>
>> Yes, it does.
>>
> 
> The new coherent pools should auto expand when they are close to being
> depleted but there's no guarantee that it can be done fast enough.

More to the point, it's never going to help if the pool is empty and one 
allocation is simply larger than the entire thing ;)

Another angle, though, is to question why this driver is making such a 
large allocation with GFP_ATOMIC in the first place. At a glance it 
looks like there's no reason at all other than that it's still using the 
legacy pci_alloc_consistent() API, since every path to that appears to 
have CAN_SLEEP passed as its flag - modernising that would arguably be 
an even better long-term win.

Robin.

> Switching the min size to be the previous min size (256KB) seems like the
> best option and it matches what
> Documentation/admin-guide/kernel-parameters.txt still says.
> 
> I'll also send a patch to point in the right direction when this happens.
> 


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-22 16:07         ` Robin Murphy
@ 2020-06-22 17:31           ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2020-06-22 17:31 UTC (permalink / raw)
  To: Robin Murphy
  Cc: David Rientjes, Guenter Roeck, Geert Uytterhoeven,
	Christoph Hellwig, Marek Szyprowski, Linux IOMMU,
	Linux Kernel Mailing List

On Mon, Jun 22, 2020 at 05:07:55PM +0100, Robin Murphy wrote:
> Another angle, though, is to question why this driver is making such a 
> large allocation with GFP_ATOMIC in the first place. At a glance it looks 
> like there's no reason at all other than that it's still using the legacy 
> pci_alloc_consistent() API, since every path to that appears to have 
> CAN_SLEEP passed as its flag - modernising that would arguably be an even 
> better long-term win.

Maybe we can just try that for now?  If other problems show up we
can still increase the initial pool size later in this cycle.

I'll try to cook up a patch.


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-20 20:09 ` Guenter Roeck
  2020-06-21  8:35   ` Geert Uytterhoeven
@ 2020-06-24  7:38   ` Christoph Hellwig
  2020-06-24 16:20     ` Guenter Roeck
  2020-06-27 16:13     ` Marion & Christophe JAILLET
  1 sibling, 2 replies; 13+ messages in thread
From: Christoph Hellwig @ 2020-06-24  7:38 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Geert Uytterhoeven, Christoph Hellwig, Marek Szyprowski,
	Robin Murphy, David Rientjes, iommu, linux-kernel,
	Christophe JAILLET

Hi Guenter,

can you try the patch below?  This just converts the huge allocations
in mptbase to use GFP_KERNEL.  Christophe (added to Cc) actually has
a scripted conversion for the rest that he hasn't posted yet, so I'll
aim for the minimal version here.


diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c
index 68aea22f2b8978..5216487db4fbea 100644
--- a/drivers/message/fusion/mptbase.c
+++ b/drivers/message/fusion/mptbase.c
@@ -1324,13 +1324,13 @@ mpt_host_page_alloc(MPT_ADAPTER *ioc, pIOCInit_t ioc_init)
 			return 0; /* fw doesn't need any host buffers */
 
 		/* spin till we get enough memory */
-		while(host_page_buffer_sz > 0) {
-
-			if((ioc->HostPageBuffer = pci_alloc_consistent(
-			    ioc->pcidev,
-			    host_page_buffer_sz,
-			    &ioc->HostPageBuffer_dma)) != NULL) {
-
+		while (host_page_buffer_sz > 0) {
+			ioc->HostPageBuffer =
+				dma_alloc_coherent(&ioc->pcidev->dev,
+						host_page_buffer_sz,
+						&ioc->HostPageBuffer_dma,
+						GFP_KERNEL);
+			if (ioc->HostPageBuffer) {
 				dinitprintk(ioc, printk(MYIOC_s_DEBUG_FMT
 				    "host_page_buffer @ %p, dma @ %x, sz=%d bytes\n",
 				    ioc->name, ioc->HostPageBuffer,
@@ -2741,8 +2741,8 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
 		sz = ioc->alloc_sz;
 		dexitprintk(ioc, printk(MYIOC_s_INFO_FMT "free  @ %p, sz=%d bytes\n",
 		    ioc->name, ioc->alloc, ioc->alloc_sz));
-		pci_free_consistent(ioc->pcidev, sz,
-				ioc->alloc, ioc->alloc_dma);
+		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->alloc,
+				ioc->alloc_dma);
 		ioc->reply_frames = NULL;
 		ioc->req_frames = NULL;
 		ioc->alloc = NULL;
@@ -2751,8 +2751,8 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
 
 	if (ioc->sense_buf_pool != NULL) {
 		sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
-		pci_free_consistent(ioc->pcidev, sz,
-				ioc->sense_buf_pool, ioc->sense_buf_pool_dma);
+		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->sense_buf_pool,
+				ioc->sense_buf_pool_dma);
 		ioc->sense_buf_pool = NULL;
 		ioc->alloc_total -= sz;
 	}
@@ -2802,7 +2802,7 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
 			"HostPageBuffer free  @ %p, sz=%d bytes\n",
 			ioc->name, ioc->HostPageBuffer,
 			ioc->HostPageBuffer_sz));
-		pci_free_consistent(ioc->pcidev, ioc->HostPageBuffer_sz,
+		dma_free_coherent(&ioc->pcidev->dev, ioc->HostPageBuffer_sz,
 		    ioc->HostPageBuffer, ioc->HostPageBuffer_dma);
 		ioc->HostPageBuffer = NULL;
 		ioc->HostPageBuffer_sz = 0;
@@ -4497,7 +4497,8 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
 			 	ioc->name, sz, sz, num_chain));
 
 		total_size += sz;
-		mem = pci_alloc_consistent(ioc->pcidev, total_size, &alloc_dma);
+		mem = dma_alloc_coherent(&ioc->pcidev->dev, total_size,
+				&alloc_dma, GFP_KERNEL);
 		if (mem == NULL) {
 			printk(MYIOC_s_ERR_FMT "Unable to allocate Reply, Request, Chain Buffers!\n",
 				ioc->name);
@@ -4574,8 +4575,8 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
 		spin_unlock_irqrestore(&ioc->FreeQlock, flags);
 
 		sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
-		ioc->sense_buf_pool =
-			pci_alloc_consistent(ioc->pcidev, sz, &ioc->sense_buf_pool_dma);
+		ioc->sense_buf_pool = dma_alloc_coherent(&ioc->pcidev->dev, sz,
+				&ioc->sense_buf_pool_dma, GFP_KERNEL);
 		if (ioc->sense_buf_pool == NULL) {
 			printk(MYIOC_s_ERR_FMT "Unable to allocate Sense Buffers!\n",
 				ioc->name);
@@ -4613,18 +4614,16 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
 
 	if (ioc->alloc != NULL) {
 		sz = ioc->alloc_sz;
-		pci_free_consistent(ioc->pcidev,
-				sz,
-				ioc->alloc, ioc->alloc_dma);
+		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->alloc,
+				ioc->alloc_dma);
 		ioc->reply_frames = NULL;
 		ioc->req_frames = NULL;
 		ioc->alloc_total -= sz;
 	}
 	if (ioc->sense_buf_pool != NULL) {
 		sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
-		pci_free_consistent(ioc->pcidev,
-				sz,
-				ioc->sense_buf_pool, ioc->sense_buf_pool_dma);
+		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->sense_buf_pool,
+				ioc->sense_buf_pool_dma);
 		ioc->sense_buf_pool = NULL;
 	}
 


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-24  7:38   ` Christoph Hellwig
@ 2020-06-24 16:20     ` Guenter Roeck
  2020-06-27 16:13     ` Marion & Christophe JAILLET
  1 sibling, 0 replies; 13+ messages in thread
From: Guenter Roeck @ 2020-06-24 16:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Geert Uytterhoeven, Marek Szyprowski, Robin Murphy,
	David Rientjes, iommu, linux-kernel, Christophe JAILLET

On Wed, Jun 24, 2020 at 09:38:15AM +0200, Christoph Hellwig wrote:
> Hi Guenter,
> 
> can you try the patch below?  This just converts the huge allocations
> in mptbase to use GFP_KERNEL.  Christophe (added to Cc) actually has
> a scripted conversion for the rest that he hasn't posted yet, so I'll
> aim for the minimal version here.
> 

The previously failing test passes with this patch applied on top of the
mainline kernel.

Guenter

> 
> diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c
> index 68aea22f2b8978..5216487db4fbea 100644
> --- a/drivers/message/fusion/mptbase.c
> +++ b/drivers/message/fusion/mptbase.c
> @@ -1324,13 +1324,13 @@ mpt_host_page_alloc(MPT_ADAPTER *ioc, pIOCInit_t ioc_init)
>  			return 0; /* fw doesn't need any host buffers */
>  
>  		/* spin till we get enough memory */
> -		while(host_page_buffer_sz > 0) {
> -
> -			if((ioc->HostPageBuffer = pci_alloc_consistent(
> -			    ioc->pcidev,
> -			    host_page_buffer_sz,
> -			    &ioc->HostPageBuffer_dma)) != NULL) {
> -
> +		while (host_page_buffer_sz > 0) {
> +			ioc->HostPageBuffer =
> +				dma_alloc_coherent(&ioc->pcidev->dev,
> +						host_page_buffer_sz,
> +						&ioc->HostPageBuffer_dma,
> +						GFP_KERNEL);
> +			if (ioc->HostPageBuffer) {
>  				dinitprintk(ioc, printk(MYIOC_s_DEBUG_FMT
>  				    "host_page_buffer @ %p, dma @ %x, sz=%d bytes\n",
>  				    ioc->name, ioc->HostPageBuffer,
> @@ -2741,8 +2741,8 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
>  		sz = ioc->alloc_sz;
>  		dexitprintk(ioc, printk(MYIOC_s_INFO_FMT "free  @ %p, sz=%d bytes\n",
>  		    ioc->name, ioc->alloc, ioc->alloc_sz));
> -		pci_free_consistent(ioc->pcidev, sz,
> -				ioc->alloc, ioc->alloc_dma);
> +		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->alloc,
> +				ioc->alloc_dma);
>  		ioc->reply_frames = NULL;
>  		ioc->req_frames = NULL;
>  		ioc->alloc = NULL;
> @@ -2751,8 +2751,8 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
>  
>  	if (ioc->sense_buf_pool != NULL) {
>  		sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
> -		pci_free_consistent(ioc->pcidev, sz,
> -				ioc->sense_buf_pool, ioc->sense_buf_pool_dma);
> +		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->sense_buf_pool,
> +				ioc->sense_buf_pool_dma);
>  		ioc->sense_buf_pool = NULL;
>  		ioc->alloc_total -= sz;
>  	}
> @@ -2802,7 +2802,7 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
>  			"HostPageBuffer free  @ %p, sz=%d bytes\n",
>  			ioc->name, ioc->HostPageBuffer,
>  			ioc->HostPageBuffer_sz));
> -		pci_free_consistent(ioc->pcidev, ioc->HostPageBuffer_sz,
> +		dma_free_coherent(&ioc->pcidev->dev, ioc->HostPageBuffer_sz,
>  		    ioc->HostPageBuffer, ioc->HostPageBuffer_dma);
>  		ioc->HostPageBuffer = NULL;
>  		ioc->HostPageBuffer_sz = 0;
> @@ -4497,7 +4497,8 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
>  			 	ioc->name, sz, sz, num_chain));
>  
>  		total_size += sz;
> -		mem = pci_alloc_consistent(ioc->pcidev, total_size, &alloc_dma);
> +		mem = dma_alloc_coherent(&ioc->pcidev->dev, total_size,
> +				&alloc_dma, GFP_KERNEL);
>  		if (mem == NULL) {
>  			printk(MYIOC_s_ERR_FMT "Unable to allocate Reply, Request, Chain Buffers!\n",
>  				ioc->name);
> @@ -4574,8 +4575,8 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
>  		spin_unlock_irqrestore(&ioc->FreeQlock, flags);
>  
>  		sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
> -		ioc->sense_buf_pool =
> -			pci_alloc_consistent(ioc->pcidev, sz, &ioc->sense_buf_pool_dma);
> +		ioc->sense_buf_pool = dma_alloc_coherent(&ioc->pcidev->dev, sz,
> +				&ioc->sense_buf_pool_dma, GFP_KERNEL);
>  		if (ioc->sense_buf_pool == NULL) {
>  			printk(MYIOC_s_ERR_FMT "Unable to allocate Sense Buffers!\n",
>  				ioc->name);
> @@ -4613,18 +4614,16 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
>  
>  	if (ioc->alloc != NULL) {
>  		sz = ioc->alloc_sz;
> -		pci_free_consistent(ioc->pcidev,
> -				sz,
> -				ioc->alloc, ioc->alloc_dma);
> +		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->alloc,
> +				ioc->alloc_dma);
>  		ioc->reply_frames = NULL;
>  		ioc->req_frames = NULL;
>  		ioc->alloc_total -= sz;
>  	}
>  	if (ioc->sense_buf_pool != NULL) {
>  		sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
> -		pci_free_consistent(ioc->pcidev,
> -				sz,
> -				ioc->sense_buf_pool, ioc->sense_buf_pool_dma);
> +		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->sense_buf_pool,
> +				ioc->sense_buf_pool_dma);
>  		ioc->sense_buf_pool = NULL;
>  	}
>  


* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-24  7:38   ` Christoph Hellwig
  2020-06-24 16:20     ` Guenter Roeck
@ 2020-06-27 16:13     ` Marion & Christophe JAILLET
  2020-06-29  8:12       ` Christoph Hellwig
  1 sibling, 1 reply; 13+ messages in thread
From: Marion & Christophe JAILLET @ 2020-06-27 16:13 UTC (permalink / raw)
  To: Christoph Hellwig, Guenter Roeck
  Cc: Geert Uytterhoeven, Marek Szyprowski, Robin Murphy,
	David Rientjes, iommu, linux-kernel


Le 24/06/2020 à 09:38, Christoph Hellwig a écrit :
> Hi Guenter,
>
> can you try the patch below?  This just converts the huge allocations
> in mptbase to use GFP_KERNEL.  Christophe (added to Cc) actually has
> a scripted conversion for the rest that he hasn't posted yet, so I'll
> aim for the minimal version here.

Hi,

I'm sorry, but I will not send pci_ --> dma_ conversion for this driver.
I'm a bit puzzled by some choices of GFP_KERNEL and GFP_ATOMIC that are
not all that obvious to me.

I'll try to send some patches for other easier drivers in the coming weeks.

Best regards,

CJ



* Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems
  2020-06-27 16:13     ` Marion & Christophe JAILLET
@ 2020-06-29  8:12       ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2020-06-29  8:12 UTC (permalink / raw)
  To: Marion & Christophe JAILLET
  Cc: Christoph Hellwig, Guenter Roeck, Geert Uytterhoeven,
	Marek Szyprowski, Robin Murphy, David Rientjes, iommu,
	linux-kernel

On Sat, Jun 27, 2020 at 06:13:43PM +0200, Marion & Christophe JAILLET wrote:
> I'm sorry, but I will not send pci_ --> dma_ conversion for this driver.
> I'm a bit puzzled by some choices of GFP_KERNEL and GFP_ATOMIC that are not
> all that obvious to me.
>
> I'll try to send some patches for other easier drivers in the coming weeks.

No problem, I sent a patch for the conversion of the allocations that
caused problems.


