* [PATCH v2 0/5] PLX Switch DMA Engine Driver
@ 2019-12-10  0:24 Logan Gunthorpe
  2019-12-10  0:24 ` [PATCH v2 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Logan Gunthorpe @ 2019-12-10  0:24 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Kit Chow, Logan Gunthorpe

This is v2 of the patchset. The discussion for v1 mostly centered on
resolving confusion about how this new driver fixes the problems with
unbinding the device while it is in use, and why the existing module
reference alone does not solve them.
The first two patches fix bugs in the core code to support this.

This patchset is based off of v5.5-rc1 and a git branch is available
here:

https://github.com/sbates130272/linux-p2pmem/ plx-dma-v2

Changes for v2:

* Rebased onto v5.5-rc1 (no changes)
* Pushed the plx_dma_isr() routine into the patch that uses it (per
  Vinod's feedback)

--

The following patches introduce a driver to use the DMA hardware inside
PLX ExpressLane PEX Switches. The DMA devices appear as one or more
PCI virtual functions on the switch's upstream port, one channel per
function, and can serve use cases similar to those of Intel's IOAT driver.

The first two patches in this series fix bugs in the dmaengine core that
are needed to support unbinding the driver while the DMA engine is in
use. Without these patches, the kernel will panic when that happens.

The final three patches implement the driver itself.
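
For context, a rough sketch of how a kernel client would typically drive
the memcpy channel this driver registers, using the generic dmaengine
API. This is illustrative only and not part of the series; error
handling is trimmed and the source/destination buffers are assumed to
be DMA-mapped already.

#include <linux/dmaengine.h>

static int example_memcpy(dma_addr_t dst, dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cap_mask_t mask;
	dma_cookie_t cookie;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	/* Any channel advertising DMA_MEMCPY will do, e.g. a PLX one */
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT);
	if (!tx) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);

	/* Poll for completion; a real client would normally use a callback */
	while (dma_async_is_tx_complete(chan, cookie, NULL, NULL) ==
	       DMA_IN_PROGRESS)
		cpu_relax();

	dma_release_channel(chan);
	return 0;
}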

--

Logan Gunthorpe (5):
  dmaengine: Store module owner in dma_device struct
  dmaengine: Call module_put() after device_free_chan_resources()
  dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  dmaengine: plx-dma: Implement hardware initialization and cleanup
  dmaengine: plx-dma: Implement descriptor submission

 MAINTAINERS               |   5 +
 drivers/dma/Kconfig       |   9 +
 drivers/dma/Makefile      |   1 +
 drivers/dma/dmaengine.c   |   7 +-
 drivers/dma/plx_dma.c     | 658 ++++++++++++++++++++++++++++++++++++++
 include/linux/dmaengine.h |   2 +
 6 files changed, 680 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/plx_dma.c

--
2.20.1


* [PATCH v2 1/5] dmaengine: Store module owner in dma_device struct
  2019-12-10  0:24 [PATCH v2 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
@ 2019-12-10  0:24 ` Logan Gunthorpe
  2019-12-10  0:24 ` [PATCH v2 2/5] dmaengine: Call module_put() after device_free_chan_resources() Logan Gunthorpe
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Logan Gunthorpe @ 2019-12-10  0:24 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Kit Chow, Logan Gunthorpe

dma_chan_to_owner() dereferences the driver from the struct device to
obtain the owner and call module_[get|put](). However, if the backing
device is unbound before the dma_device is unregistered, the driver
will be cleared and this will cause a NULL pointer dereference.

Instead, store a pointer to the owner module in the dma_device struct
so the module reference can be properly put when the channel is put, even
if the backing device was destroyed first.

This change helps to support a safer unbind of DMA engines.
If the dma_device is unregistered in the driver's remove function,
there's no guarantee that there are no existing clients, and a user's
action may trigger the WARN_ONCE in dma_async_device_unregister(),
which is unlikely to leave the system in a consistent state.
Instead, a better approach is to allow the backing driver to go away
and fail any subsequent requests to it.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/dmaengine.c   | 4 +++-
 include/linux/dmaengine.h | 2 ++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 03ac4b96117c..4b604086b1b3 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -179,7 +179,7 @@ __dma_device_satisfies_mask(struct dma_device *device,
 
 static struct module *dma_chan_to_owner(struct dma_chan *chan)
 {
-	return chan->device->dev->driver->owner;
+	return chan->device->owner;
 }
 
 /**
@@ -919,6 +919,8 @@ int dma_async_device_register(struct dma_device *device)
 		return -EIO;
 	}
 
+	device->owner = device->dev->driver->owner;
+
 	if (dma_has_cap(DMA_MEMCPY, device->cap_mask) && !device->device_prep_dma_memcpy) {
 		dev_err(device->dev,
 			"Device claims capability %s, but op is not defined\n",
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 8fcdee1c0cf9..13aa0abb71de 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -674,6 +674,7 @@ struct dma_filter {
  * @fill_align: alignment shift for memset operations
  * @dev_id: unique device ID
  * @dev: struct device reference for dma mapping api
+ * @owner: owner module (automatically set based on the provided dev)
  * @src_addr_widths: bit mask of src addr widths the device supports
  *	Width is specified in bytes, e.g. for a device supporting
  *	a width of 4 the mask should have BIT(4) set.
@@ -737,6 +738,7 @@ struct dma_device {
 
 	int dev_id;
 	struct device *dev;
+	struct module *owner;
 
 	u32 src_addr_widths;
 	u32 dst_addr_widths;
-- 
2.20.1



* [PATCH v2 2/5] dmaengine: Call module_put() after device_free_chan_resources()
  2019-12-10  0:24 [PATCH v2 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
  2019-12-10  0:24 ` [PATCH v2 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
@ 2019-12-10  0:24 ` Logan Gunthorpe
  2019-12-10  0:24 ` [PATCH v2 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton Logan Gunthorpe
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Logan Gunthorpe @ 2019-12-10  0:24 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Kit Chow, Logan Gunthorpe

The module reference is taken to ensure the callbacks still exist
when they are called. If the channel holds the last reference to the
module, the module can disappear before device_free_chan_resources() is
called, which would result in a call into freed memory.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/dmaengine.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 4b604086b1b3..776fdf535a3a 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -250,7 +250,6 @@ static void dma_chan_put(struct dma_chan *chan)
 		return;
 
 	chan->client_count--;
-	module_put(dma_chan_to_owner(chan));
 
 	/* This channel is not in use anymore, free it */
 	if (!chan->client_count && chan->device->device_free_chan_resources) {
@@ -259,6 +258,8 @@ static void dma_chan_put(struct dma_chan *chan)
 		chan->device->device_free_chan_resources(chan);
 	}
 
+	module_put(dma_chan_to_owner(chan));
+
 	/* If the channel is used via a DMA request router, free the mapping */
 	if (chan->router && chan->router->route_free) {
 		chan->router->route_free(chan->router->dev, chan->route_data);
-- 
2.20.1



* [PATCH v2 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  2019-12-10  0:24 [PATCH v2 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
  2019-12-10  0:24 ` [PATCH v2 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
  2019-12-10  0:24 ` [PATCH v2 2/5] dmaengine: Call module_put() after device_free_chan_resources() Logan Gunthorpe
@ 2019-12-10  0:24 ` Logan Gunthorpe
  2019-12-10  2:33   ` Jiasen Lin
  2019-12-10  0:24 ` [PATCH v2 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup Logan Gunthorpe
  2019-12-10  0:24 ` [PATCH v2 5/5] dmaengine: plx-dma: Implement descriptor submission Logan Gunthorpe
  4 siblings, 1 reply; 10+ messages in thread
From: Logan Gunthorpe @ 2019-12-10  0:24 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Kit Chow, Logan Gunthorpe

Some PLX Switches can expose DMA engines via extra PCI functions
on the upstream port. Each function will have one DMA channel.

This patch adds just the core PCI driver skeleton and DMA engine
registration.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 MAINTAINERS           |   5 ++
 drivers/dma/Kconfig   |   9 ++
 drivers/dma/Makefile  |   1 +
 drivers/dma/plx_dma.c | 197 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 212 insertions(+)
 create mode 100644 drivers/dma/plx_dma.c

diff --git a/MAINTAINERS b/MAINTAINERS
index bd5847e802de..76713226f256 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13139,6 +13139,11 @@ S:	Maintained
 F:	drivers/iio/chemical/pms7003.c
 F:	Documentation/devicetree/bindings/iio/chemical/plantower,pms7003.yaml
 
+PLX DMA DRIVER
+M:	Logan Gunthorpe <logang@deltatee.com>
+S:	Maintained
+F:	drivers/dma/plx_dma.c
+
 PMBUS HARDWARE MONITORING DRIVERS
 M:	Guenter Roeck <linux@roeck-us.net>
 L:	linux-hwmon@vger.kernel.org
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 6fa1eba9d477..312a6cc36c78 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -497,6 +497,15 @@ config PXA_DMA
 	  16 to 32 channels for peripheral to memory or memory to memory
 	  transfers.
 
+config PLX_DMA
+	tristate "PLX ExpressLane PEX Switch DMA Engine Support"
+	depends on PCI
+	select DMA_ENGINE
+	help
+	  Some PLX ExpressLane PCI Switches support additional DMA engines.
+	  These are exposed via extra functions on the switch's
+	  upstream port. Each function exposes one DMA channel.
+
 config SIRF_DMA
 	tristate "CSR SiRFprimaII/SiRFmarco DMA support"
 	depends on ARCH_SIRF
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 42d7e2fc64fa..a150d1d792fd 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -59,6 +59,7 @@ obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
 obj-$(CONFIG_OWL_DMA) += owl-dma.o
 obj-$(CONFIG_PCH_DMA) += pch_dma.o
 obj-$(CONFIG_PL330_DMA) += pl330.o
+obj-$(CONFIG_PLX_DMA) += plx_dma.o
 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
 obj-$(CONFIG_PXA_DMA) += pxa_dma.o
 obj-$(CONFIG_RENESAS_DMA) += sh/
diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
new file mode 100644
index 000000000000..54e13cb92d51
--- /dev/null
+++ b/drivers/dma/plx_dma.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Microsemi Switchtec(tm) PCIe Management Driver
+ * Copyright (c) 2019, Logan Gunthorpe <logang@deltatee.com>
+ * Copyright (c) 2019, GigaIO Networks, Inc
+ */
+
+#include "dmaengine.h"
+
+#include <linux/dmaengine.h>
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+MODULE_DESCRIPTION("PLX ExpressLane PEX PCI Switch DMA Engine");
+MODULE_VERSION("0.1");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Logan Gunthorpe");
+
+struct plx_dma_dev {
+	struct dma_device dma_dev;
+	struct dma_chan dma_chan;
+	void __iomem *bar;
+
+	struct kref ref;
+	struct work_struct release_work;
+};
+
+static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
+{
+	return container_of(c, struct plx_dma_dev, dma_chan);
+}
+
+static void plx_dma_release_work(struct work_struct *work)
+{
+	struct plx_dma_dev *plxdev = container_of(work, struct plx_dma_dev,
+						  release_work);
+
+	dma_async_device_unregister(&plxdev->dma_dev);
+	put_device(plxdev->dma_dev.dev);
+	kfree(plxdev);
+}
+
+static void plx_dma_release(struct kref *ref)
+{
+	struct plx_dma_dev *plxdev = container_of(ref, struct plx_dma_dev, ref);
+
+	/*
+	 * The dmaengine reference counting and locking is a bit of a
+	 * mess so we have to work around it a bit here. We might put
+	 * the reference while the dmaengine holds the dma_list_mutex
+	 * which means we can't call dma_async_device_unregister() directly
+	 * here and it must be delayed.
+	 */
+	schedule_work(&plxdev->release_work);
+}
+
+static void plx_dma_put(struct plx_dma_dev *plxdev)
+{
+	kref_put(&plxdev->ref, plx_dma_release);
+}
+
+static int plx_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+
+	kref_get(&plxdev->ref);
+
+	return 0;
+}
+
+static void plx_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+
+	plx_dma_put(plxdev);
+}
+
+static int plx_dma_create(struct pci_dev *pdev)
+{
+	struct plx_dma_dev *plxdev;
+	struct dma_device *dma;
+	struct dma_chan *chan;
+	int rc;
+
+	plxdev = kzalloc(sizeof(*plxdev), GFP_KERNEL);
+	if (!plxdev)
+		return -ENOMEM;
+
+	kref_init(&plxdev->ref);
+	INIT_WORK(&plxdev->release_work, plx_dma_release_work);
+
+	plxdev->bar = pcim_iomap_table(pdev)[0];
+
+	dma = &plxdev->dma_dev;
+	dma->chancnt = 1;
+	INIT_LIST_HEAD(&dma->channels);
+	dma->copy_align = DMAENGINE_ALIGN_1_BYTE;
+	dma->dev = get_device(&pdev->dev);
+
+	dma->device_alloc_chan_resources = plx_dma_alloc_chan_resources;
+	dma->device_free_chan_resources = plx_dma_free_chan_resources;
+
+	chan = &plxdev->dma_chan;
+	chan->device = dma;
+	dma_cookie_init(chan);
+	list_add_tail(&chan->device_node, &dma->channels);
+
+	rc = dma_async_device_register(dma);
+	if (rc) {
+		pci_err(pdev, "Failed to register dma device: %d\n", rc);
+		free_irq(pci_irq_vector(pdev, 0),  plxdev);
+		return rc;
+	}
+
+	pci_set_drvdata(pdev, plxdev);
+
+	return 0;
+}
+
+static int plx_dma_probe(struct pci_dev *pdev,
+			 const struct pci_device_id *id)
+{
+	int rc;
+
+	rc = pcim_enable_device(pdev);
+	if (rc)
+		return rc;
+
+	rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(48));
+	if (rc)
+		rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+	if (rc)
+		return rc;
+
+	rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(48));
+	if (rc)
+		rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+	if (rc)
+		return rc;
+
+	rc = pcim_iomap_regions(pdev, 1, KBUILD_MODNAME);
+	if (rc)
+		return rc;
+
+	rc = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
+	if (rc <= 0)
+		return rc;
+
+	pci_set_master(pdev);
+
+	rc = plx_dma_create(pdev);
+	if (rc)
+		goto err_free_irq_vectors;
+
+	pci_info(pdev, "PLX DMA Channel Registered\n");
+
+	return 0;
+
+err_free_irq_vectors:
+	pci_free_irq_vectors(pdev);
+	return rc;
+}
+
+static void plx_dma_remove(struct pci_dev *pdev)
+{
+	struct plx_dma_dev *plxdev = pci_get_drvdata(pdev);
+
+	free_irq(pci_irq_vector(pdev, 0),  plxdev);
+
+	plxdev->bar = NULL;
+	plx_dma_put(plxdev);
+
+	pci_free_irq_vectors(pdev);
+}
+
+static const struct pci_device_id plx_dma_pci_tbl[] = {
+	{
+		.vendor		= PCI_VENDOR_ID_PLX,
+		.device		= 0x87D0,
+		.subvendor	= PCI_ANY_ID,
+		.subdevice	= PCI_ANY_ID,
+		.class		= PCI_CLASS_SYSTEM_OTHER << 8,
+		.class_mask	= 0xFFFFFFFF,
+	},
+	{0}
+};
+MODULE_DEVICE_TABLE(pci, plx_dma_pci_tbl);
+
+static struct pci_driver plx_dma_pci_driver = {
+	.name           = KBUILD_MODNAME,
+	.id_table       = plx_dma_pci_tbl,
+	.probe          = plx_dma_probe,
+	.remove		= plx_dma_remove,
+};
+module_pci_driver(plx_dma_pci_driver);
-- 
2.20.1



* [PATCH v2 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup
  2019-12-10  0:24 [PATCH v2 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
                   ` (2 preceding siblings ...)
  2019-12-10  0:24 ` [PATCH v2 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton Logan Gunthorpe
@ 2019-12-10  0:24 ` Logan Gunthorpe
  2019-12-10  6:49   ` Jiasen Lin
  2019-12-10  0:24 ` [PATCH v2 5/5] dmaengine: plx-dma: Implement descriptor submission Logan Gunthorpe
  4 siblings, 1 reply; 10+ messages in thread
From: Logan Gunthorpe @ 2019-12-10  0:24 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Kit Chow, Logan Gunthorpe

Allocate DMA coherent memory for the ring of DMA descriptors and
program the appropriate hardware registers.

A tasklet is created which is triggered on an interrupt to process
all the finished requests. Additionally, any remaining descriptors
are aborted when the hardware is removed or the resources freed.

Use an RCU pointer to synchronize PCI device unbind.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/plx_dma.c | 344 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 343 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
index 54e13cb92d51..d3c2319e2fad 100644
--- a/drivers/dma/plx_dma.c
+++ b/drivers/dma/plx_dma.c
@@ -18,13 +18,103 @@ MODULE_VERSION("0.1");
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Logan Gunthorpe");
 
+#define PLX_REG_DESC_RING_ADDR			0x214
+#define PLX_REG_DESC_RING_ADDR_HI		0x218
+#define PLX_REG_DESC_RING_NEXT_ADDR		0x21C
+#define PLX_REG_DESC_RING_COUNT			0x220
+#define PLX_REG_DESC_RING_LAST_ADDR		0x224
+#define PLX_REG_DESC_RING_LAST_SIZE		0x228
+#define PLX_REG_PREF_LIMIT			0x234
+#define PLX_REG_CTRL				0x238
+#define PLX_REG_CTRL2				0x23A
+#define PLX_REG_INTR_CTRL			0x23C
+#define PLX_REG_INTR_STATUS			0x23E
+
+#define PLX_REG_PREF_LIMIT_PREF_FOUR		8
+
+#define PLX_REG_CTRL_GRACEFUL_PAUSE		BIT(0)
+#define PLX_REG_CTRL_ABORT			BIT(1)
+#define PLX_REG_CTRL_WRITE_BACK_EN		BIT(2)
+#define PLX_REG_CTRL_START			BIT(3)
+#define PLX_REG_CTRL_RING_STOP_MODE		BIT(4)
+#define PLX_REG_CTRL_DESC_MODE_BLOCK		(0 << 5)
+#define PLX_REG_CTRL_DESC_MODE_ON_CHIP		(1 << 5)
+#define PLX_REG_CTRL_DESC_MODE_OFF_CHIP		(2 << 5)
+#define PLX_REG_CTRL_DESC_INVALID		BIT(8)
+#define PLX_REG_CTRL_GRACEFUL_PAUSE_DONE	BIT(9)
+#define PLX_REG_CTRL_ABORT_DONE			BIT(10)
+#define PLX_REG_CTRL_IMM_PAUSE_DONE		BIT(12)
+#define PLX_REG_CTRL_IN_PROGRESS		BIT(30)
+
+#define PLX_REG_CTRL_RESET_VAL	(PLX_REG_CTRL_DESC_INVALID | \
+				 PLX_REG_CTRL_GRACEFUL_PAUSE_DONE | \
+				 PLX_REG_CTRL_ABORT_DONE | \
+				 PLX_REG_CTRL_IMM_PAUSE_DONE)
+
+#define PLX_REG_CTRL_START_VAL	(PLX_REG_CTRL_WRITE_BACK_EN | \
+				 PLX_REG_CTRL_DESC_MODE_OFF_CHIP | \
+				 PLX_REG_CTRL_START | \
+				 PLX_REG_CTRL_RESET_VAL)
+
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_64B		0
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_128B	1
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_256B	2
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_512B	3
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_1KB		4
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_2KB		5
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_4B		7
+
+#define PLX_REG_INTR_CRTL_ERROR_EN		BIT(0)
+#define PLX_REG_INTR_CRTL_INV_DESC_EN		BIT(1)
+#define PLX_REG_INTR_CRTL_ABORT_DONE_EN		BIT(3)
+#define PLX_REG_INTR_CRTL_PAUSE_DONE_EN		BIT(4)
+#define PLX_REG_INTR_CRTL_IMM_PAUSE_DONE_EN	BIT(5)
+
+#define PLX_REG_INTR_STATUS_ERROR		BIT(0)
+#define PLX_REG_INTR_STATUS_INV_DESC		BIT(1)
+#define PLX_REG_INTR_STATUS_DESC_DONE		BIT(2)
+#define PLX_REG_INTR_CRTL_ABORT_DONE		BIT(3)
+
+struct plx_dma_hw_std_desc {
+	__le32 flags_and_size;
+	__le16 dst_addr_hi;
+	__le16 src_addr_hi;
+	__le32 dst_addr_lo;
+	__le32 src_addr_lo;
+};
+
+#define PLX_DESC_SIZE_MASK		0x7ffffff
+#define PLX_DESC_FLAG_VALID		BIT(31)
+#define PLX_DESC_FLAG_INT_WHEN_DONE	BIT(30)
+
+#define PLX_DESC_WB_SUCCESS		BIT(30)
+#define PLX_DESC_WB_RD_FAIL		BIT(29)
+#define PLX_DESC_WB_WR_FAIL		BIT(28)
+
+#define PLX_DMA_RING_COUNT		2048
+
+struct plx_dma_desc {
+	struct dma_async_tx_descriptor txd;
+	struct plx_dma_hw_std_desc *hw;
+	u32 orig_size;
+};
+
 struct plx_dma_dev {
 	struct dma_device dma_dev;
 	struct dma_chan dma_chan;
+	struct pci_dev __rcu *pdev;
 	void __iomem *bar;
 
 	struct kref ref;
 	struct work_struct release_work;
+	struct tasklet_struct desc_task;
+
+	spinlock_t ring_lock;
+	bool ring_active;
+	int head;
+	int tail;
+	struct plx_dma_hw_std_desc *hw_ring;
+	struct plx_dma_desc **desc_ring;
 };
 
 static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
@@ -32,6 +122,146 @@ static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
 	return container_of(c, struct plx_dma_dev, dma_chan);
 }
 
+static struct plx_dma_desc *plx_dma_get_desc(struct plx_dma_dev *plxdev, int i)
+{
+	return plxdev->desc_ring[i & (PLX_DMA_RING_COUNT - 1)];
+}
+
+static void plx_dma_process_desc(struct plx_dma_dev *plxdev)
+{
+	struct dmaengine_result res;
+	struct plx_dma_desc *desc;
+	u32 flags;
+
+	spin_lock_bh(&plxdev->ring_lock);
+
+	while (plxdev->tail != plxdev->head) {
+		desc = plx_dma_get_desc(plxdev, plxdev->tail);
+
+		flags = le32_to_cpu(READ_ONCE(desc->hw->flags_and_size));
+
+		if (flags & PLX_DESC_FLAG_VALID)
+			break;
+
+		res.residue = desc->orig_size - (flags & PLX_DESC_SIZE_MASK);
+
+		if (flags & PLX_DESC_WB_SUCCESS)
+			res.result = DMA_TRANS_NOERROR;
+		else if (flags & PLX_DESC_WB_WR_FAIL)
+			res.result = DMA_TRANS_WRITE_FAILED;
+		else
+			res.result = DMA_TRANS_READ_FAILED;
+
+		dma_cookie_complete(&desc->txd);
+		dma_descriptor_unmap(&desc->txd);
+		dmaengine_desc_get_callback_invoke(&desc->txd, &res);
+		desc->txd.callback = NULL;
+		desc->txd.callback_result = NULL;
+
+		plxdev->tail++;
+	}
+
+	spin_unlock_bh(&plxdev->ring_lock);
+}
+
+static void plx_dma_abort_desc(struct plx_dma_dev *plxdev)
+{
+	struct dmaengine_result res;
+	struct plx_dma_desc *desc;
+
+	plx_dma_process_desc(plxdev);
+
+	spin_lock_bh(&plxdev->ring_lock);
+
+	while (plxdev->tail != plxdev->head) {
+		desc = plx_dma_get_desc(plxdev, plxdev->tail);
+
+		res.residue = desc->orig_size;
+		res.result = DMA_TRANS_ABORTED;
+
+		dma_cookie_complete(&desc->txd);
+		dma_descriptor_unmap(&desc->txd);
+		dmaengine_desc_get_callback_invoke(&desc->txd, &res);
+		desc->txd.callback = NULL;
+		desc->txd.callback_result = NULL;
+
+		plxdev->tail++;
+	}
+
+	spin_unlock_bh(&plxdev->ring_lock);
+}
+
+static void __plx_dma_stop(struct plx_dma_dev *plxdev)
+{
+	unsigned long timeout = jiffies + msecs_to_jiffies(1000);
+	u32 val;
+
+	val = readl(plxdev->bar + PLX_REG_CTRL);
+	if (!(val & ~PLX_REG_CTRL_GRACEFUL_PAUSE))
+		return;
+
+	writel(PLX_REG_CTRL_RESET_VAL | PLX_REG_CTRL_GRACEFUL_PAUSE,
+	       plxdev->bar + PLX_REG_CTRL);
+
+	while (!time_after(jiffies, timeout)) {
+		val = readl(plxdev->bar + PLX_REG_CTRL);
+		if (val & PLX_REG_CTRL_GRACEFUL_PAUSE_DONE)
+			break;
+
+		cpu_relax();
+	}
+
+	if (!(val & PLX_REG_CTRL_GRACEFUL_PAUSE_DONE))
+		dev_err(plxdev->dma_dev.dev,
+			"Timeout waiting for graceful pause!\n");
+
+	writel(PLX_REG_CTRL_RESET_VAL | PLX_REG_CTRL_GRACEFUL_PAUSE,
+	       plxdev->bar + PLX_REG_CTRL);
+
+	writel(0, plxdev->bar + PLX_REG_DESC_RING_COUNT);
+	writel(0, plxdev->bar + PLX_REG_DESC_RING_ADDR);
+	writel(0, plxdev->bar + PLX_REG_DESC_RING_ADDR_HI);
+	writel(0, plxdev->bar + PLX_REG_DESC_RING_NEXT_ADDR);
+}
+
+static void plx_dma_stop(struct plx_dma_dev *plxdev)
+{
+	rcu_read_lock();
+	if (!rcu_dereference(plxdev->pdev)) {
+		rcu_read_unlock();
+		return;
+	}
+
+	__plx_dma_stop(plxdev);
+
+	rcu_read_unlock();
+}
+
+static void plx_dma_desc_task(unsigned long data)
+{
+	struct plx_dma_dev *plxdev = (void *)data;
+
+	plx_dma_process_desc(plxdev);
+}
+
+static irqreturn_t plx_dma_isr(int irq, void *devid)
+{
+	struct plx_dma_dev *plxdev = devid;
+	u32 status;
+
+	status = readw(plxdev->bar + PLX_REG_INTR_STATUS);
+
+	if (!status)
+		return IRQ_NONE;
+
+	if (status & PLX_REG_INTR_STATUS_DESC_DONE && plxdev->ring_active)
+		tasklet_schedule(&plxdev->desc_task);
+
+	writew(status, plxdev->bar + PLX_REG_INTR_STATUS);
+
+	return IRQ_HANDLED;
+}
+
 static void plx_dma_release_work(struct work_struct *work)
 {
 	struct plx_dma_dev *plxdev = container_of(work, struct plx_dma_dev,
@@ -61,18 +291,109 @@ static void plx_dma_put(struct plx_dma_dev *plxdev)
 	kref_put(&plxdev->ref, plx_dma_release);
 }
 
+static int plx_dma_alloc_desc(struct plx_dma_dev *plxdev)
+{
+	struct plx_dma_desc *desc;
+	int i;
+
+	plxdev->desc_ring = kcalloc(PLX_DMA_RING_COUNT,
+				    sizeof(*plxdev->desc_ring), GFP_KERNEL);
+	if (!plxdev->desc_ring)
+		return -ENOMEM;
+
+	for (i = 0; i < PLX_DMA_RING_COUNT; i++) {
+		desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+		if (!desc)
+			goto free_and_exit;
+
+		dma_async_tx_descriptor_init(&desc->txd, &plxdev->dma_chan);
+		desc->hw = &plxdev->hw_ring[i];
+		plxdev->desc_ring[i] = desc;
+	}
+
+	return 0;
+
+free_and_exit:
+	for (i = 0; i < PLX_DMA_RING_COUNT; i++)
+		kfree(plxdev->desc_ring[i]);
+	kfree(plxdev->desc_ring);
+	return -ENOMEM;
+}
+
 static int plx_dma_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+	size_t ring_sz = PLX_DMA_RING_COUNT * sizeof(*plxdev->hw_ring);
+	dma_addr_t dma_addr;
+	int rc;
+
+	rcu_read_lock();
+	if (!rcu_dereference(plxdev->pdev)) {
+		rcu_read_unlock();
+		return -ENODEV;
+	}
 
 	kref_get(&plxdev->ref);
 
-	return 0;
+	writel(PLX_REG_CTRL_RESET_VAL, plxdev->bar + PLX_REG_CTRL);
+
+	plxdev->hw_ring = dmam_alloc_coherent(plxdev->dma_dev.dev, ring_sz,
+					      &dma_addr, GFP_KERNEL);
+	if (!plxdev->hw_ring) {
+		rcu_read_unlock();
+		return -ENOMEM;
+	}
+
+	plxdev->head = plxdev->tail = 0;
+
+	rc = plx_dma_alloc_desc(plxdev);
+	if (rc) {
+		plx_dma_put(plxdev);
+		rcu_read_unlock();
+		return rc;
+	}
+
+	writel(lower_32_bits(dma_addr), plxdev->bar + PLX_REG_DESC_RING_ADDR);
+	writel(upper_32_bits(dma_addr),
+	       plxdev->bar + PLX_REG_DESC_RING_ADDR_HI);
+	writel(lower_32_bits(dma_addr),
+	       plxdev->bar + PLX_REG_DESC_RING_NEXT_ADDR);
+	writel(PLX_DMA_RING_COUNT, plxdev->bar + PLX_REG_DESC_RING_COUNT);
+	writel(PLX_REG_PREF_LIMIT_PREF_FOUR, plxdev->bar + PLX_REG_PREF_LIMIT);
+
+	plxdev->ring_active = true;
+
+	rcu_read_unlock();
+
+	return PLX_DMA_RING_COUNT;
 }
 
 static void plx_dma_free_chan_resources(struct dma_chan *chan)
 {
 	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+	struct pci_dev *pdev;
+	int i;
+
+	spin_lock_bh(&plxdev->ring_lock);
+	plxdev->ring_active = false;
+	spin_unlock_bh(&plxdev->ring_lock);
+
+	plx_dma_stop(plxdev);
+
+	rcu_read_lock();
+	pdev = rcu_dereference(plxdev->pdev);
+	if (pdev)
+		synchronize_irq(pci_irq_vector(pdev, 0));
+	rcu_read_unlock();
+
+	tasklet_kill(&plxdev->desc_task);
+
+	plx_dma_abort_desc(plxdev);
+
+	for (i = 0; i < PLX_DMA_RING_COUNT; i++)
+		kfree(plxdev->desc_ring[i]);
+
+	kfree(plxdev->desc_ring);
 
 	plx_dma_put(plxdev);
 }
@@ -88,9 +409,20 @@ static int plx_dma_create(struct pci_dev *pdev)
 	if (!plxdev)
 		return -ENOMEM;
 
+	rc = request_irq(pci_irq_vector(pdev, 0), plx_dma_isr, 0,
+			 KBUILD_MODNAME, plxdev);
+	if (rc) {
+		kfree(plxdev);
+		return rc;
+	}
+
 	kref_init(&plxdev->ref);
 	INIT_WORK(&plxdev->release_work, plx_dma_release_work);
+	spin_lock_init(&plxdev->ring_lock);
+	tasklet_init(&plxdev->desc_task, plx_dma_desc_task,
+		     (unsigned long)plxdev);
 
+	RCU_INIT_POINTER(plxdev->pdev, pdev);
 	plxdev->bar = pcim_iomap_table(pdev)[0];
 
 	dma = &plxdev->dma_dev;
@@ -169,6 +501,16 @@ static void plx_dma_remove(struct pci_dev *pdev)
 
 	free_irq(pci_irq_vector(pdev, 0),  plxdev);
 
+	rcu_assign_pointer(plxdev->pdev, NULL);
+	synchronize_rcu();
+
+	spin_lock_bh(&plxdev->ring_lock);
+	plxdev->ring_active = false;
+	spin_unlock_bh(&plxdev->ring_lock);
+
+	__plx_dma_stop(plxdev);
+	plx_dma_abort_desc(plxdev);
+
 	plxdev->bar = NULL;
 	plx_dma_put(plxdev);
 
-- 
2.20.1



* [PATCH v2 5/5] dmaengine: plx-dma: Implement descriptor submission
  2019-12-10  0:24 [PATCH v2 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
                   ` (3 preceding siblings ...)
  2019-12-10  0:24 ` [PATCH v2 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup Logan Gunthorpe
@ 2019-12-10  0:24 ` Logan Gunthorpe
  4 siblings, 0 replies; 10+ messages in thread
From: Logan Gunthorpe @ 2019-12-10  0:24 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Kit Chow, Logan Gunthorpe

On prep, a spin lock is taken and the next entry in the circular buffer
is filled. On submit, the valid bit is set in the hardware descriptor
and the lock is released.

The DMA engine is started (if it's not already running) when the client
calls dma_async_issue_pending().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/plx_dma.c | 119 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 119 insertions(+)

diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
index d3c2319e2fad..21e4d7634eeb 100644
--- a/drivers/dma/plx_dma.c
+++ b/drivers/dma/plx_dma.c
@@ -7,6 +7,7 @@
 
 #include "dmaengine.h"
 
+#include <linux/circ_buf.h>
 #include <linux/dmaengine.h>
 #include <linux/kref.h>
 #include <linux/list.h>
@@ -122,6 +123,11 @@ static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
 	return container_of(c, struct plx_dma_dev, dma_chan);
 }
 
+static struct plx_dma_desc *to_plx_desc(struct dma_async_tx_descriptor *txd)
+{
+	return container_of(txd, struct plx_dma_desc, txd);
+}
+
 static struct plx_dma_desc *plx_dma_get_desc(struct plx_dma_dev *plxdev, int i)
 {
 	return plxdev->desc_ring[i & (PLX_DMA_RING_COUNT - 1)];
@@ -244,6 +250,113 @@ static void plx_dma_desc_task(unsigned long data)
 	plx_dma_process_desc(plxdev);
 }
 
+static struct dma_async_tx_descriptor *plx_dma_prep_memcpy(struct dma_chan *c,
+		dma_addr_t dma_dst, dma_addr_t dma_src, size_t len,
+		unsigned long flags)
+	__acquires(plxdev->ring_lock)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(c);
+	struct plx_dma_desc *plxdesc;
+
+	spin_lock_bh(&plxdev->ring_lock);
+	if (!plxdev->ring_active)
+		goto err_unlock;
+
+	if (!CIRC_SPACE(plxdev->head, plxdev->tail, PLX_DMA_RING_COUNT))
+		goto err_unlock;
+
+	if (len > PLX_DESC_SIZE_MASK)
+		goto err_unlock;
+
+	plxdesc = plx_dma_get_desc(plxdev, plxdev->head);
+	plxdev->head++;
+
+	plxdesc->hw->dst_addr_lo = cpu_to_le32(lower_32_bits(dma_dst));
+	plxdesc->hw->dst_addr_hi = cpu_to_le16(upper_32_bits(dma_dst));
+	plxdesc->hw->src_addr_lo = cpu_to_le32(lower_32_bits(dma_src));
+	plxdesc->hw->src_addr_hi = cpu_to_le16(upper_32_bits(dma_src));
+
+	plxdesc->orig_size = len;
+
+	if (flags & DMA_PREP_INTERRUPT)
+		len |= PLX_DESC_FLAG_INT_WHEN_DONE;
+
+	plxdesc->hw->flags_and_size = cpu_to_le32(len);
+	plxdesc->txd.flags = flags;
+
+	/* return with the lock held, it will be released in tx_submit */
+
+	return &plxdesc->txd;
+
+err_unlock:
+	/*
+	 * Keep sparse happy by restoring an even lock count on
+	 * this lock.
+	 */
+	__acquire(plxdev->ring_lock);
+
+	spin_unlock_bh(&plxdev->ring_lock);
+	return NULL;
+}
+
+static dma_cookie_t plx_dma_tx_submit(struct dma_async_tx_descriptor *desc)
+	__releases(plxdev->ring_lock)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(desc->chan);
+	struct plx_dma_desc *plxdesc = to_plx_desc(desc);
+	dma_cookie_t cookie;
+
+	cookie = dma_cookie_assign(desc);
+
+	/*
+	 * Ensure the descriptor updates are visible to the dma device
+	 * before setting the valid bit.
+	 */
+	wmb();
+
+	plxdesc->hw->flags_and_size |= cpu_to_le32(PLX_DESC_FLAG_VALID);
+
+	spin_unlock_bh(&plxdev->ring_lock);
+
+	return cookie;
+}
+
+static enum dma_status plx_dma_tx_status(struct dma_chan *chan,
+		dma_cookie_t cookie, struct dma_tx_state *txstate)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+	enum dma_status ret;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	plx_dma_process_desc(plxdev);
+
+	return dma_cookie_status(chan, cookie, txstate);
+}
+
+static void plx_dma_issue_pending(struct dma_chan *chan)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+
+	rcu_read_lock();
+	if (!rcu_dereference(plxdev->pdev)) {
+		rcu_read_unlock();
+		return;
+	}
+
+	/*
+	 * Ensure the valid bits are visible before starting the
+	 * DMA engine.
+	 */
+	wmb();
+
+	writew(PLX_REG_CTRL_START_VAL, plxdev->bar + PLX_REG_CTRL);
+
+	rcu_read_unlock();
+}
+
 static irqreturn_t plx_dma_isr(int irq, void *devid)
 {
 	struct plx_dma_dev *plxdev = devid;
@@ -307,7 +420,9 @@ static int plx_dma_alloc_desc(struct plx_dma_dev *plxdev)
 			goto free_and_exit;
 
 		dma_async_tx_descriptor_init(&desc->txd, &plxdev->dma_chan);
+		desc->txd.tx_submit = plx_dma_tx_submit;
 		desc->hw = &plxdev->hw_ring[i];
+
 		plxdev->desc_ring[i] = desc;
 	}
 
@@ -428,11 +543,15 @@ static int plx_dma_create(struct pci_dev *pdev)
 	dma = &plxdev->dma_dev;
 	dma->chancnt = 1;
 	INIT_LIST_HEAD(&dma->channels);
+	dma_cap_set(DMA_MEMCPY, dma->cap_mask);
 	dma->copy_align = DMAENGINE_ALIGN_1_BYTE;
 	dma->dev = get_device(&pdev->dev);
 
 	dma->device_alloc_chan_resources = plx_dma_alloc_chan_resources;
 	dma->device_free_chan_resources = plx_dma_free_chan_resources;
+	dma->device_prep_dma_memcpy = plx_dma_prep_memcpy;
+	dma->device_issue_pending = plx_dma_issue_pending;
+	dma->device_tx_status = plx_dma_tx_status;
 
 	chan = &plxdev->dma_chan;
 	chan->device = dma;
-- 
2.20.1



* Re: [PATCH v2 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  2019-12-10  0:24 ` [PATCH v2 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton Logan Gunthorpe
@ 2019-12-10  2:33   ` Jiasen Lin
  2019-12-10 17:44     ` Logan Gunthorpe
  0 siblings, 1 reply; 10+ messages in thread
From: Jiasen Lin @ 2019-12-10  2:33 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Kit Chow



On 2019/12/10 8:24, Logan Gunthorpe wrote:
> Some PLX Switches can expose DMA engines via extra PCI functions
> on the upstream port. Each function will have one DMA channel.
> 
> This patch is just the core PCI driver skeleton and dma
> engine registration.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>

[snip]

> +	rc = dma_async_device_register(dma);
> +	if (rc) {
> +		pci_err(pdev, "Failed to register dma device: %d\n", rc);
> +		free_irq(pci_irq_vector(pdev, 0),  plxdev);

Hi Logan
If dma_async_device_register() fails, we need to call plx_dma_put(plxdev)
or kfree(plxdev); otherwise it results in a memory leak.

Thanks
Jiasen Lin



* Re: [PATCH v2 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup
  2019-12-10  0:24 ` [PATCH v2 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup Logan Gunthorpe
@ 2019-12-10  6:49   ` Jiasen Lin
  2019-12-10 17:55     ` Logan Gunthorpe
  0 siblings, 1 reply; 10+ messages in thread
From: Jiasen Lin @ 2019-12-10  6:49 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Kit Chow



On 2019/12/10 8:24, Logan Gunthorpe wrote:
> Allocate DMA coherent memory for the ring of DMA descriptors and
> program the appropriate hardware registers.
> 
> A tasklet is created which is triggered on an interrupt to process
> all the finished requests. Additionally, any remaining descriptors
> are aborted when the hardware is removed or the resources freed.
> 
> Use an RCU pointer to synchronize PCI device unbind.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>

[snip]

> @@ -88,9 +409,20 @@ static int plx_dma_create(struct pci_dev *pdev)
>   	if (!plxdev)
>   		return -ENOMEM;
>   
> +	rc = request_irq(pci_irq_vector(pdev, 0), plx_dma_isr, 0,
> +			 KBUILD_MODNAME, plxdev);
> +	if (rc) {
> +		kfree(plxdev);
> +		return rc;
> +	}
> +
Hi Logan

The integrated DMA engine of the PEX87xx series switches supports
various interrupts. Based on my experience, I suggest also enabling the
error interrupt, invalid descriptor interrupt, abort done interrupt,
graceful pause done interrupt and immediate pause done interrupt by
writing the DMA Channel x Interrupt Control/Status register.
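
For illustration, a rough sketch of what enabling those sources could
look like with the PLX_REG_INTR_CRTL_* bits this patch already defines.
This is only a sketch; that a 16-bit writew() to PLX_REG_INTR_CTRL is
the right access for this hardware is an assumption on my part.

static void plx_dma_enable_interrupts(struct plx_dma_dev *plxdev)
{
	/* Sketch only: turn on the extra interrupt sources mentioned above */
	u16 val = PLX_REG_INTR_CRTL_ERROR_EN |
		  PLX_REG_INTR_CRTL_INV_DESC_EN |
		  PLX_REG_INTR_CRTL_ABORT_DONE_EN |
		  PLX_REG_INTR_CRTL_PAUSE_DONE_EN |
		  PLX_REG_INTR_CRTL_IMM_PAUSE_DONE_EN;

	writew(val, plxdev->bar + PLX_REG_INTR_CTRL);
}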

Thanks,
Jiasen Lin



* Re: [PATCH v2 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  2019-12-10  2:33   ` Jiasen Lin
@ 2019-12-10 17:44     ` Logan Gunthorpe
  0 siblings, 0 replies; 10+ messages in thread
From: Logan Gunthorpe @ 2019-12-10 17:44 UTC (permalink / raw)
  To: Jiasen Lin, linux-kernel, dmaengine, Vinod Koul; +Cc: Dan Williams, Kit Chow



On 2019-12-09 7:33 p.m., Jiasen Lin wrote:
> On 2019/12/10 8:24, Logan Gunthorpe wrote:
>> Some PLX Switches can expose DMA engines via extra PCI functions
>> on the upstream port. Each function will have one DMA channel.

[snip]

>> +	rc = dma_async_device_register(dma);
>> +	if (rc) {
>> +		pci_err(pdev, "Failed to register dma device: %d\n", rc);
>> +		free_irq(pci_irq_vector(pdev, 0),  plxdev);
> 
> Hi Logan
> If registering the dma device fails, we need to call plx_dma_put(plxdev)
> or kfree(plxdev); otherwise it results in a memory leak.

Nice catch! Thanks.

Will fix for v3.
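
One way the error path could look (just a sketch; since registration
failed, no client can have taken a channel reference yet, so we can drop
the device reference taken by get_device() and free the structure
directly rather than going through plx_dma_put() and the release work):

	rc = dma_async_device_register(dma);
	if (rc) {
		pci_err(pdev, "Failed to register dma device: %d\n", rc);
		free_irq(pci_irq_vector(pdev, 0), plxdev);
		put_device(dma->dev);
		kfree(plxdev);
		return rc;
	}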

Logan

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup
  2019-12-10  6:49   ` Jiasen Lin
@ 2019-12-10 17:55     ` Logan Gunthorpe
  0 siblings, 0 replies; 10+ messages in thread
From: Logan Gunthorpe @ 2019-12-10 17:55 UTC (permalink / raw)
  To: Jiasen Lin, linux-kernel, dmaengine, Vinod Koul; +Cc: Dan Williams, Kit Chow



On 2019-12-09 11:49 p.m., Jiasen Lin wrote:
> The integrated DMA engine of the PEX87xx series switches supports
> various interrupts. Based on my personal experience, I suggest
> enabling the error interrupt, invalid descriptor interrupt, abort done
> interrupt, graceful pause done interrupt, and immediate pause done
> interrupt by writing the DMA Channel x Interrupt Control/Status
> register.

Well, that depends on what we want to do with these interrupts:

1) We shouldn't need to handle the error/invalid descriptor interrupt.
We instead just see that a specific descriptor failed (in the usual way)
and handle it accordingly; this isn't really documented well, so there is
a rough sketch of that reporting path below. An invalid descriptor should
really never happen unless we have a driver bug. I suppose I could print
an error message if either occurs.

2) We never send an abort or immediate pause to the device, so neither
interrupt can ever fire. So there's nothing to do if they do fire and
thus no sense enabling them.

3) We do send a graceful pause to the device on teardown but prefer to
poll for the end of the pause instead of adding the extra complexity of
waiting for an interrupt. So no need for that interrupt either; the
second sketch below shows the polling approach.
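
To make 1) and 3) a bit more concrete, here are two rough sketches. They
are illustrative only: the PLX_* names are placeholders, not the actual
PEX87xx descriptor flags or register bits, and the driver structures
(a struct plx_dma_desc with an embedded dma_async_tx_descriptor txd) are
assumed to look like they do in the later patches.

For 1), the completion path can report a failed descriptor through the
normal dmaengine result mechanism, so no dedicated error interrupt is
needed:

	struct dmaengine_result res;

	/* decode the hardware status flags of the completed descriptor */
	if (flags & PLX_DESC_FLAG_SUCCESS)
		res.result = DMA_TRANS_NOERROR;
	else if (flags & PLX_DESC_FLAG_WR_FAIL)
		res.result = DMA_TRANS_WRITE_FAILED;
	else
		res.result = DMA_TRANS_READ_FAILED;
	res.residue = 0;

	dma_cookie_complete(&desc->txd);
	dmaengine_desc_get_callback_invoke(&desc->txd, &res);

For 3), __plx_dma_stop() can request a graceful pause and then poll for
the pause-done bit with a timeout, roughly like this:

	static void __plx_dma_stop(struct plx_dma_dev *plxdev)
	{
		unsigned long timeout = jiffies + msecs_to_jiffies(1000);
		u32 val;

		/* ask the channel to pause gracefully */
		writel(PLX_REG_CTRL_GRACEFUL_PAUSE,
		       plxdev->bar + PLX_REG_CTRL);

		/* poll for completion instead of enabling the interrupt */
		while (!time_after(jiffies, timeout)) {
			val = readl(plxdev->bar + PLX_REG_CTRL);
			if (val & PLX_REG_CTRL_GRACEFUL_PAUSE_DONE)
				return;

			usleep_range(1000, 2000);
		}

		dev_err(plxdev->dma_dev.dev,
			"Timeout waiting for graceful pause!\n");
	}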

Logan



> 
> Thanks,
> Jiasen Lin
> 
>>   	kref_init(&plxdev->ref);
>>   	INIT_WORK(&plxdev->release_work, plx_dma_release_work);
>> +	spin_lock_init(&plxdev->ring_lock);
>> +	tasklet_init(&plxdev->desc_task, plx_dma_desc_task,
>> +		     (unsigned long)plxdev);
>>   
>> +	RCU_INIT_POINTER(plxdev->pdev, pdev);
>>   	plxdev->bar = pcim_iomap_table(pdev)[0];
>>   
>>   	dma = &plxdev->dma_dev;
>> @@ -169,6 +501,16 @@ static void plx_dma_remove(struct pci_dev *pdev)
>>   
>>   	free_irq(pci_irq_vector(pdev, 0),  plxdev);
>>   
>> +	rcu_assign_pointer(plxdev->pdev, NULL);
>> +	synchronize_rcu();
>> +
>> +	spin_lock_bh(&plxdev->ring_lock);
>> +	plxdev->ring_active = false;
>> +	spin_unlock_bh(&plxdev->ring_lock);
>> +
>> +	__plx_dma_stop(plxdev);
>> +	plx_dma_abort_desc(plxdev);
>> +
>>   	plxdev->bar = NULL;
>>   	plx_dma_put(plxdev);
>>   
>>

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2019-12-10 17:55 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-10  0:24 [PATCH v2 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
2019-12-10  0:24 ` [PATCH v2 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
2019-12-10  0:24 ` [PATCH v2 2/5] dmaengine: Call module_put() after device_free_chan_resources() Logan Gunthorpe
2019-12-10  0:24 ` [PATCH v2 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton Logan Gunthorpe
2019-12-10  2:33   ` Jiasen Lin
2019-12-10 17:44     ` Logan Gunthorpe
2019-12-10  0:24 ` [PATCH v2 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup Logan Gunthorpe
2019-12-10  6:49   ` Jiasen Lin
2019-12-10 17:55     ` Logan Gunthorpe
2019-12-10  0:24 ` [PATCH v2 5/5] dmaengine: plx-dma: Implement descriptor submission Logan Gunthorpe
