From: Thierry Reding <thierry.reding@gmail.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>, Rob Herring <robh+dt@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Will Deacon <will@kernel.org>,
	iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] iommu: Implement of_iommu_get_resv_regions()
Date: Mon, 2 Sep 2019 17:00:56 +0200
Message-ID: <20190902150056.GD1445@ulmo>
In-Reply-To: <0b7e050a-cec6-6ce7-9ed6-2146eabb2fe8@arm.com>

On Mon, Sep 02, 2019 at 02:54:23PM +0100, Robin Murphy wrote:
> On 29/08/2019 12:14, Thierry Reding wrote:
> > From: Thierry Reding <treding@nvidia.com>
> > 
> > This is an implementation that IOMMU drivers can use to obtain reserved
> > memory regions from a device tree node. It uses the reserved-memory DT
> > bindings to find the regions associated with a given device. These
> > regions will be used to create 1:1 mappings in the IOMMU domain that
> > the devices will be attached to.
> > 
> > Cc: Rob Herring <robh+dt@kernel.org>
> > Cc: Frank Rowand <frowand.list@gmail.com>
> > Cc: devicetree@vger.kernel.org
> > Signed-off-by: Thierry Reding <treding@nvidia.com>
> > ---
> >   drivers/iommu/of_iommu.c | 39 +++++++++++++++++++++++++++++++++++++++
> >   include/linux/of_iommu.h |  8 ++++++++
> >   2 files changed, 47 insertions(+)
> > 
> > diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c
> > index 614a93aa5305..0d47f626b854 100644
> > --- a/drivers/iommu/of_iommu.c
> > +++ b/drivers/iommu/of_iommu.c
> > @@ -9,6 +9,7 @@
> >   #include <linux/iommu.h>
> >   #include <linux/limits.h>
> >   #include <linux/of.h>
> > +#include <linux/of_address.h>
> >   #include <linux/of_iommu.h>
> >   #include <linux/of_pci.h>
> >   #include <linux/slab.h>
> > @@ -225,3 +226,41 @@ const struct iommu_ops *of_iommu_configure(struct device *dev,
> >   	return ops;
> >   }
> > +
> > +/**
> > + * of_iommu_get_resv_regions - reserved region driver helper for device tree
> > + * @dev: device for which to get reserved regions
> > + * @list: reserved region list
> > + *
> > + * IOMMU drivers can use this to implement their .get_resv_regions() callback
> > + * for memory regions attached to a device tree node. See the reserved-memory
> > + * device tree bindings on how to use these:
> > + *
> > + *   Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> > + */
> > +void of_iommu_get_resv_regions(struct device *dev, struct list_head *list)
> > +{
> > +	struct of_phandle_iterator it;
> > +	int err;
> > +
> > +	of_for_each_phandle(&it, err, dev->of_node, "memory-region", NULL, 0) {
> > +		struct iommu_resv_region *region;
> > +		struct resource res;
> > +
> > +		err = of_address_to_resource(it.node, 0, &res);
> > +		if (err < 0) {
> > +			dev_err(dev, "failed to parse memory region %pOF: %d\n",
> > +				it.node, err);
> > +			continue;
> > +		}
> 
> What if the device node has memory regions for other purposes, like private
> CMA carveouts? We wouldn't want to force mappings of those (and in the very
> worst case doing so could even render them unusable).

I suppose we could come up with additional properties to mark such
memory regions and skip them here.
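
Something like this, for illustration (note that "no-direct-map" is a
made-up property name here, not an existing binding):

	struct of_phandle_iterator it;
	int err;

	of_for_each_phandle(&it, err, dev->of_node, "memory-region",
			    NULL, 0) {
		/*
		 * Hypothetical opt-out: let a reserved-memory node
		 * declare that it must not be identity-mapped through
		 * the IOMMU.
		 */
		if (of_property_read_bool(it.node, "no-direct-map"))
			continue;

		/* ... parse the region and add it to the list ... */
	}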

Another alternative might be to make sure that the driver claims the
memory regions that have been mapped and then tells the IOMMU to undo
the mappings for them. That way the driver could set up the new
mappings, reprogram the hardware and then have the old mappings torn
down. I'm not sure that could always be done in a race-free way. For
example, what if the new mappings need to be in a region, such as a
private CMA carveout, that's already mapped? Can we temporarily map
one physical address to two DMA addresses?
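
Roughly, I'd imagine a sequence along these lines (purely a sketch:
hw_switch_scanout() is a made-up driver hook, and error handling as
well as synchronization against in-flight DMA are omitted):

	/* map the new buffer at a driver-chosen IOVA */
	err = iommu_map(domain, new_iova, new_phys, size,
			IOMMU_READ | IOMMU_WRITE);
	if (err < 0)
		return err;

	/* point the hardware at the new mapping */
	hw_switch_scanout(dev, new_iova);

	/*
	 * Tear down the inherited 1:1 mapping; for an identity map
	 * the IOVA equals the physical address of the old region.
	 */
	iommu_unmap(domain, old_phys, old_size);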

The details here probably depend on the IOMMU hardware.

Thierry
