From: Claire Chang <tientzu@chromium.org>
To: robh+dt@kernel.org, mpe@ellerman.id.au, benh@kernel.crashing.org,
paulus@samba.org, joro@8bytes.org, will@kernel.org,
frowand.list@gmail.com, konrad.wilk@oracle.com,
boris.ostrovsky@oracle.com, jgross@suse.com,
sstabellini@kernel.org, hch@lst.de, m.szyprowski@samsung.com,
robin.murphy@arm.com
Cc: heikki.krogerus@linux.intel.com, peterz@infradead.org,
grant.likely@arm.com, mingo@kernel.org, drinkcat@chromium.org,
saravanak@google.com, xypron.glpk@gmx.de,
rafael.j.wysocki@intel.com, bgolaszewski@baylibre.com,
xen-devel@lists.xenproject.org, treding@nvidia.com,
devicetree@vger.kernel.org, Claire Chang <tientzu@chromium.org>,
dan.j.williams@intel.com, andriy.shevchenko@linux.intel.com,
gregkh@linuxfoundation.org, rdunlap@infradead.org,
linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH v3 6/6] of: Add plumbing for restricted DMA pool
Date: Wed, 6 Jan 2021 11:41:24 +0800 [thread overview]
Message-ID: <20210106034124.30560-7-tientzu@chromium.org> (raw)
In-Reply-To: <20210106034124.30560-1-tientzu@chromium.org>
If a device is not behind an IOMMU, look up its device node and set up
the restricted DMA pool when a "restricted-dma-pool" reserved-memory
region is present.
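For illustration, a consumer could reference such a pool as sketched below. All node names, addresses and sizes here are placeholders, not part of this patch; the binding itself is defined in the previous patch of this series:

```dts
/* Hypothetical example; names, addresses and sizes are placeholders. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	restricted_dma_mem: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;
	};
};

dev@10000000 {
	/* Device not behind an IOMMU: its streaming DMA is bounced
	 * through the restricted pool referenced here.
	 */
	memory-region = <&restricted_dma_mem>;
};
```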
Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 24 ++++++++++++++++++++++++
 drivers/of/device.c     |  4 ++++
 drivers/of/of_private.h |  5 +++++
 3 files changed, 33 insertions(+)
diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..94eca8249854 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1094,3 +1095,26 @@ bool of_dma_is_coherent(struct device_node *np)
 	return false;
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
+
+int of_dma_set_restricted_buffer(struct device *dev)
+{
+	struct device_node *node;
+	int count, i;
+
+	if (!dev->of_node)
+		return 0;
+
+	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+						sizeof(phandle));
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		if (of_device_is_compatible(node, "restricted-dma-pool")) {
+			of_node_put(node);
+			return of_reserved_mem_device_init_by_idx(
+				dev, dev->of_node, i);
+		}
+		of_node_put(node);
+	}
+
+	return 0;
+}
diff --git a/drivers/of/device.c b/drivers/of/device.c
index aedfaaafd3e7..e2c7409956ab 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -182,6 +182,10 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 	dev->dma_range_map = map;
+
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..28a2dfa197ba 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,17 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev)
+{
+	return -ENODEV;
+}
 #endif
 #endif /* _LINUX_OF_PRIVATE_H */
--
2.29.2.729.g45daf8777d-goog
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu