From: Robin Murphy <robin.murphy@arm.com>
To: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
Srinath Mannam <srinath.mannam@broadcom.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>,
Eric Auger <eric.auger@redhat.com>,
Joerg Roedel <joro@8bytes.org>,
poza@codeaurora.org, Ray Jui <rjui@broadcom.com>,
bcm-kernel-feedback-list@broadcom.com, linux-pci@vger.kernel.org,
iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address
Date: Thu, 2 May 2019 12:27:02 +0100 [thread overview]
Message-ID: <2f4b9492-0caf-d6e3-e727-e3c869eefb58@arm.com> (raw)
In-Reply-To: <20190502110152.GA7313@e121166-lin.cambridge.arm.com>
Hi Lorenzo,
On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
>> The dma_ranges field of the PCI host bridge structure holds resource
>> entries, sorted by address, for the ranges given through the dma-ranges
>> DT property. This list describes the DMA-accessible address ranges, so
>> it is processed here to reserve IOVA for the inaccessible address holes
>> between entries.
>>
>> This is similar to how PCI I/O resource address ranges are reserved in
>> the IOMMU for each EP connected to the host bridge.
>>
>> Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
>> Based-on-patch-by: Oza Pawandeep <oza.oza@broadcom.com>
>> Reviewed-by: Oza Pawandeep <poza@codeaurora.org>
>> Acked-by: Robin Murphy <robin.murphy@arm.com>
>> ---
>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
>> 1 file changed, 19 insertions(+)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 77aabe6..da94844 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
>> struct resource_entry *window;
>> unsigned long lo, hi;
>> + phys_addr_t start = 0, end;
>>
>> resource_list_for_each_entry(window, &bridge->windows) {
>> if (resource_type(window->res) != IORESOURCE_MEM)
>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>> hi = iova_pfn(iovad, window->res->end - window->offset);
>> reserve_iova(iovad, lo, hi);
>> }
>> +
>> + /* Get reserved DMA windows from host bridge */
>> + resource_list_for_each_entry(window, &bridge->dma_ranges) {
>
> If this list is not sorted, it seems to me the logic in this loop is
> broken, and you can't rely on callers to sort it because it is not a
> written requirement and is not enforced (you know because you wrote
> the code, but any other developer is not supposed to guess it).
>
> Can't we rewrite this loop so that it does not rely on the order of
> the list entries?
The original idea was that callers should be required to provide a
sorted list, since it keeps things nice and simple...
> I won't merge this series unless you sort it, no pun intended.
>
> Lorenzo
>
>> + end = window->res->start - window->offset;
...so would you consider it sufficient to add

	if (end < start)
		dev_err(...);

here, plus a comment on the definition of pci_host_bridge::dma_ranges
saying that it must be sorted in ascending order?
[ I guess it might even make sense to factor out the parsing and list
construction from patch #3 into an of_pci core helper from the
beginning, so that there's even less chance of another driver
reimplementing it incorrectly in future. ]
Failing that, although I do prefer the "simple by construction"
approach, I'd have no objection to just sticking a list_sort() call in
here instead, if you'd rather it be entirely bulletproof.
Robin.
>> +resv_iova:
>> + if (end - start) {
>> + lo = iova_pfn(iovad, start);
>> + hi = iova_pfn(iovad, end);
>> + reserve_iova(iovad, lo, hi);
>> + }
>> + start = window->res->end - window->offset + 1;
>> + /* If window is last entry */
>> + if (window->node.next == &bridge->dma_ranges &&
>> + end != ~(dma_addr_t)0) {
>> + end = ~(dma_addr_t)0;
>> + goto resv_iova;
>> + }
>> + }
>> }
>>
>> static int iova_reserve_iommu_regions(struct device *dev,
>> --
>> 2.7.4
>>
Thread overview: 13+ messages
2019-05-01 17:36 [PATCH v5 0/3] PCIe Host request to reserve IOVA Srinath Mannam
2019-05-01 17:36 ` [PATCH v5 1/3] PCI: Add dma_ranges window list Srinath Mannam
2019-05-01 17:36 ` [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address Srinath Mannam
2019-05-02 11:01 ` Lorenzo Pieralisi
2019-05-02 11:27 ` Robin Murphy [this message]
2019-05-02 13:06 ` Lorenzo Pieralisi
2019-05-02 14:15 ` Robin Murphy
2019-05-03 5:23 ` Srinath Mannam
2019-05-03 9:53 ` Lorenzo Pieralisi
2019-05-03 10:05 ` Srinath Mannam
2019-05-03 10:27 ` Robin Murphy
2019-05-03 10:30 ` Srinath Mannam
2019-05-01 17:36 ` [PATCH v5 3/3] PCI: iproc: Add sorted dma ranges resource entries to host bridge Srinath Mannam