From: Robin Murphy <robin.murphy@arm.com> To: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Srinath Mannam <srinath.mannam@broadcom.com>, Bjorn Helgaas <bhelgaas@google.com>, Eric Auger <eric.auger@redhat.com>, Joerg Roedel <joro@8bytes.org>, poza@codeaurora.org, Ray Jui <rjui@broadcom.com>, bcm-kernel-feedback-list@broadcom.com, linux-pci@vger.kernel.org, iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address Date: Thu, 2 May 2019 15:15:17 +0100 [thread overview] Message-ID: <b4420901-60d4-69ab-6ed0-5d2fa9449595@arm.com> (raw) In-Reply-To: <20190502130624.GA10470@e121166-lin.cambridge.arm.com> On 02/05/2019 14:06, Lorenzo Pieralisi wrote: > On Thu, May 02, 2019 at 12:27:02PM +0100, Robin Murphy wrote: >> Hi Lorenzo, >> >> On 02/05/2019 12:01, Lorenzo Pieralisi wrote: >>> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote: >>>> dma_ranges field of PCI host bridge structure has resource entries in >>>> sorted order of address range given through dma-ranges DT property. This >>>> list is the accessible DMA address range. So that this resource list will >>>> be processed and reserve IOVA address to the inaccessible address holes in >>>> the list. >>>> >>>> This method is similar to PCI IO resources address ranges reserving in >>>> IOMMU for each EP connected to host bridge. 
>>>> >>>> Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com> >>>> Based-on-patch-by: Oza Pawandeep <oza.oza@broadcom.com> >>>> Reviewed-by: Oza Pawandeep <poza@codeaurora.org> >>>> Acked-by: Robin Murphy <robin.murphy@arm.com> >>>> --- >>>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++ >>>> 1 file changed, 19 insertions(+) >>>> >>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c >>>> index 77aabe6..da94844 100644 >>>> --- a/drivers/iommu/dma-iommu.c >>>> +++ b/drivers/iommu/dma-iommu.c >>>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev, >>>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus); >>>> struct resource_entry *window; >>>> unsigned long lo, hi; >>>> + phys_addr_t start = 0, end; >>>> resource_list_for_each_entry(window, &bridge->windows) { >>>> if (resource_type(window->res) != IORESOURCE_MEM) >>>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev, >>>> hi = iova_pfn(iovad, window->res->end - window->offset); >>>> reserve_iova(iovad, lo, hi); >>>> } >>>> + >>>> + /* Get reserved DMA windows from host bridge */ >>>> + resource_list_for_each_entry(window, &bridge->dma_ranges) { >>> >>> If this list is not sorted it seems to me the logic in this loop is >>> broken and you can't rely on callers to sort it because it is not a >>> written requirement and it is not enforced (you know because you >>> wrote the code but any other developer is not supposed to guess >>> it). >>> >>> Can't we rewrite this loop so that it does not rely on list >>> entries order ? >> >> The original idea was that callers should be required to provide a sorted >> list, since it keeps things nice and simple... > > I understand, if it was self-contained in driver code that would be fine > but in core code with possible multiple consumers this must be > documented/enforced, somehow. > >>> I won't merge this series unless you sort it, no pun intended. 
>>> >>> Lorenzo >>> >>>> + end = window->res->start - window->offset; >> >> ...so would you consider it sufficient to add >> >> if (end < start) >> dev_err(...); > > We should also revert any IOVA reservation we did prior to this > error, right ? I think it would be enough to propagate an error code back out through iommu_dma_init_domain(), which should then end up aborting the whole IOMMU setup - reserve_iova() isn't really designed to be undoable, but since this is the kind of error that should only ever be hit during driver or DT development, as long as we continue booting such that the developer can clearly see what's gone wrong, I don't think we need bother spending too much effort tidying up inside the unused domain. > Anyway, I think it is best to ensure it *is* sorted. > >> here, plus commenting the definition of pci_host_bridge::dma_ranges >> that it must be sorted in ascending order? > > I don't think that commenting dma_ranges would help much, I am more > keen on making it work by construction. > >> [ I guess it might even make sense to factor out the parsing and list >> construction from patch #3 into an of_pci core helper from the beginning, so >> that there's even less chance of another driver reimplementing it >> incorrectly in future. ] > > This makes sense IMO and I would like to take this approach if you > don't mind. Sure - at some point it would be nice to wire this up to pci-host-generic for Juno as well (with a parallel version for ACPI _DMA), so from that viewpoint, the more groundwork in place the better :) Thanks, Robin. > > Either this or we move the whole IOVA reservation and dma-ranges > parsing into PCI IProc. > >> Failing that, although I do prefer the "simple by construction" >> approach, I'd have no objection to just sticking a list_sort() call in >> here instead, if you'd rather it be entirely bulletproof. > > I think what you outline above is a sensible way forward - if we > miss the merge window so be it. 
> > Thanks, > Lorenzo > >> Robin. >> >>>> +resv_iova: >>>> + if (end - start) { >>>> + lo = iova_pfn(iovad, start); >>>> + hi = iova_pfn(iovad, end); >>>> + reserve_iova(iovad, lo, hi); >>>> + } >>>> + start = window->res->end - window->offset + 1; >>>> + /* If window is last entry */ >>>> + if (window->node.next == &bridge->dma_ranges && >>>> + end != ~(dma_addr_t)0) { >>>> + end = ~(dma_addr_t)0; >>>> + goto resv_iova; >>>> + } >>>> + } >>>> } >>>> static int iova_reserve_iommu_regions(struct device *dev, >>>> -- >>>> 2.7.4 >>>>
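[Editorial note: the gap-walking logic being reviewed above can be modelled outside the kernel as follows. This is an illustrative userspace sketch, not the kernel code: `struct range` and `find_holes()` are hypothetical names, and it assumes — as the discussion requires — that the accessible windows arrive sorted in ascending order and non-overlapping. It computes the complement of the windows over the full 64-bit space, i.e. the holes that the patch hands to reserve_iova().]

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Inclusive address range, sorted ascending and non-overlapping. */
struct range {
	uint64_t start, end;
};

/*
 * Walk the sorted accessible windows and emit the inaccessible holes:
 * everything below the first window, the gaps between windows, and
 * everything above the last window up to the top of the address space.
 * Returns the number of holes written to 'out' (at most n + 1).
 */
static size_t find_holes(const struct range *win, size_t n,
			 struct range *out)
{
	size_t count = 0;
	uint64_t start = 0;	/* next address not yet covered */

	for (size_t i = 0; i < n; i++) {
		/* Hole between the previous window and this one? */
		if (win[i].start > start) {
			out[count].start = start;
			out[count].end = win[i].start - 1;
			count++;
		}
		/* A window ending at the top leaves no tail hole. */
		if (win[i].end == UINT64_MAX)
			return count;
		start = win[i].end + 1;
	}
	/* Tail hole from the last window to the top of the space. */
	out[count].start = start;
	out[count].end = UINT64_MAX;
	count++;
	return count;
}
```

Feeding in two windows, say [0x1000, 0x1fff] and [0x4000, 0x7fff], yields three holes: [0, 0xfff], [0x2000, 0x3fff], and [0x8000, UINT64_MAX] — which also shows why an unsorted list breaks the single forward pass: a window whose start precedes the running `start` would produce a negative-width "hole", the `end < start` condition the review proposes to detect.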