From: Jeremy Linton <jeremy.linton@arm.com>
To: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>,
Christoph Hellwig <hch@lst.de>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Robin Murphy <robin.murphy@arm.com>,
David Rientjes <rientjes@google.com>
Cc: linux-rpi-kernel@lists.infradead.org,
iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] dma-pool: use single atomic pool for both DMA zones
Date: Wed, 8 Jul 2020 10:11:30 -0500 [thread overview]
Message-ID: <6b75da91-c24d-4d54-e6ac-ff580141fda9@arm.com> (raw)
In-Reply-To: <21a7276e98ae245404d82537ac1ee597a92f9150.camel@suse.de>
Hi,
On 7/8/20 5:35 AM, Nicolas Saenz Julienne wrote:
> Hi Jim,
>
> On Tue, 2020-07-07 at 17:08 -0500, Jeremy Linton wrote:
>> Hi,
>>
>> I spun this up on my 8G model using the PFTF firmware from:
>>
>> https://github.com/pftf/RPi4/releases
>>
>> Which allows me to switch between ACPI/DT on the machine. In DT mode it
>> works fine now,
>
> Nice, would that count as a Tested-by from you?
If it worked... :)
>
>> but with ACPI I continue to have failures unless I
>> disable CMA via cma=0 on the kernel command line.
>
> Yes, I see why, in atomic_pool_expand() memory is allocated from CMA without
> checking its correctness. That calls for a separate fix. I'll try to think of
> something.
>
>> It think that is because
>>
>> using DT:
>>
>> [ 0.000000] Reserved memory: created CMA memory pool at
>> 0x0000000037400000, size 64 MiB
>>
>>
>> using ACPI:
>> [ 0.000000] cma: Reserved 64 MiB at 0x00000000f8000000
>>
>> Which is AFAIK because the default arm64 CMA allocation is just below
>> the arm64_dma32_phys_limit.
>
> As I'm sure you know, we fix the CMA address trough DT, isn't that possible
> trough ACPI?
Well there isn't a linux specific cma location property in ACPI. There
are various ways to infer the information, like looking for the lowest
_DMA() range and using that to lower the arm64_dma32_phys_limit. OTOH,
as it stands I don't think that information is available early enough to
setup the cma pool.
But as you mention the atomic pool code is allocating from CMA under the
assumption that its going to be below the GFP_DMA range, which might not
be generally true (due to lack of DT cma properties too?).
next prev parent reply other threads:[~2020-07-08 15:11 UTC|newest]
Thread overview: 12+ messages / expand[flat|nested] mbox.gz Atom feed top
2020-07-07 12:28 [PATCH] dma-pool: use single atomic pool for both DMA zones Nicolas Saenz Julienne
2020-07-07 22:08 ` Jeremy Linton
2020-07-08 10:35 ` Nicolas Saenz Julienne
2020-07-08 15:11 ` Jeremy Linton [this message]
2020-07-08 15:36 ` Christoph Hellwig
2020-07-08 16:20 ` Robin Murphy
2020-07-08 15:35 ` Christoph Hellwig
2020-07-08 16:00 ` Nicolas Saenz Julienne
2020-07-08 16:10 ` Christoph Hellwig
2020-07-09 21:49 ` David Rientjes
2020-07-10 8:19 ` Nicolas Saenz Julienne
2020-07-08 23:16 ` Jeremy Linton
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=6b75da91-c24d-4d54-e6ac-ff580141fda9@arm.com \
--to=jeremy.linton@arm.com \
--cc=hch@lst.de \
--cc=iommu@lists.linux-foundation.org \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-rpi-kernel@lists.infradead.org \
--cc=m.szyprowski@samsung.com \
--cc=nsaenzjulienne@suse.de \
--cc=rientjes@google.com \
--cc=robin.murphy@arm.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).