* [PATCH 0/5] PLX Switch DMA Engine Driver
@ 2019-10-22 21:46 Logan Gunthorpe
  2019-10-22 21:46 ` [PATCH 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
                   ` (4 more replies)
  0 siblings, 5 replies; 27+ messages in thread
From: Logan Gunthorpe @ 2019-10-22 21:46 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul; +Cc: Dan Williams, Logan Gunthorpe

Hi,

The following patches introduce a driver to use the DMA hardware inside
PLX ExpressLane PEX Switches. The DMA engines appear as one or more
PCI virtual functions, each exposing a single channel, and can serve
similar use cases as Intel's IOAT driver.

The first two patches in this series fix bugs in the dmaengine core that
must be addressed before a driver can safely be unbound while its DMA
engine is in use. Without these patches, the kernel will panic when that
happens.

The final three patches implement the driver itself.

This patchset is based on v5.4-rc4 and a git branch is available
here:

https://github.com/sbates130272/linux-p2pmem/ plx-dma

Thanks,

Logan

--

Logan Gunthorpe (5):
  dmaengine: Store module owner in dma_device struct
  dmaengine: Call module_put() after device_free_chan_resources()
  dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  dmaengine: plx-dma: Implement hardware initialization and cleanup
  dmaengine: plx-dma: Implement descriptor submission

 MAINTAINERS               |   5 +
 drivers/dma/Kconfig       |   9 +
 drivers/dma/Makefile      |   1 +
 drivers/dma/dmaengine.c   |   7 +-
 drivers/dma/plx_dma.c     | 658 ++++++++++++++++++++++++++++++++++++++
 include/linux/dmaengine.h |   2 +
 6 files changed, 680 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/plx_dma.c

--
2.20.1

* [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-10-22 21:46 [PATCH 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
@ 2019-10-22 21:46 ` Logan Gunthorpe
  2019-11-09 17:18   ` Vinod Koul
  2019-10-22 21:46 ` [PATCH 2/5] dmaengine: Call module_put() after device_free_chan_resources() Logan Gunthorpe
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 27+ messages in thread
From: Logan Gunthorpe @ 2019-10-22 21:46 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul; +Cc: Dan Williams, Logan Gunthorpe

dma_chan_to_owner() dereferences the driver from the struct device to
obtain the owner and call module_[get|put](). However, if the backing
device is unbound before the dma_device is unregistered, the driver
will be cleared and this will cause a NULL pointer dereference.

Instead, store a pointer to the owner module in the dma_device struct
so the module reference can be properly put when the channel is put, even
if the backing device was destroyed first.

This change helps to support a safer unbind of DMA engines.
If the dma_device is unregistered in the driver's remove function,
there's no guarantee that there are no existing clients, and a user's
action may trigger the WARN_ONCE in dma_async_device_unregister(),
which is unlikely to leave the system in a consistent state.
Instead, a better approach is to allow the backing driver to go away
and fail any subsequent requests to it.
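
As an illustration of the lifetime this protects, a minimal client
sequence might look like the following (hypothetical client code, not
part of this patch); the module reference is now held via device->owner
for the whole window, even if the backing device is unbound in the
middle:

	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	chan = dma_request_channel(mask, NULL, NULL); /* takes module ref */
	if (!chan)
		return -ENODEV;

	/* ... the backing PCI device may be unbound here ... */

	dma_release_channel(chan); /* ref dropped via device->owner */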

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/dmaengine.c   | 4 +++-
 include/linux/dmaengine.h | 2 ++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 03ac4b96117c..4b604086b1b3 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -179,7 +179,7 @@ __dma_device_satisfies_mask(struct dma_device *device,
 
 static struct module *dma_chan_to_owner(struct dma_chan *chan)
 {
-	return chan->device->dev->driver->owner;
+	return chan->device->owner;
 }
 
 /**
@@ -919,6 +919,8 @@ int dma_async_device_register(struct dma_device *device)
 		return -EIO;
 	}
 
+	device->owner = device->dev->driver->owner;
+
 	if (dma_has_cap(DMA_MEMCPY, device->cap_mask) && !device->device_prep_dma_memcpy) {
 		dev_err(device->dev,
 			"Device claims capability %s, but op is not defined\n",
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 8fcdee1c0cf9..13aa0abb71de 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -674,6 +674,7 @@ struct dma_filter {
  * @fill_align: alignment shift for memset operations
  * @dev_id: unique device ID
  * @dev: struct device reference for dma mapping api
+ * @owner: owner module (automatically set based on the provided dev)
  * @src_addr_widths: bit mask of src addr widths the device supports
  *	Width is specified in bytes, e.g. for a device supporting
  *	a width of 4 the mask should have BIT(4) set.
@@ -737,6 +738,7 @@ struct dma_device {
 
 	int dev_id;
 	struct device *dev;
+	struct module *owner;
 
 	u32 src_addr_widths;
 	u32 dst_addr_widths;
-- 
2.20.1


* [PATCH 2/5] dmaengine: Call module_put() after device_free_chan_resources()
  2019-10-22 21:46 [PATCH 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
  2019-10-22 21:46 ` [PATCH 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
@ 2019-10-22 21:46 ` Logan Gunthorpe
  2019-10-22 21:46 ` [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton Logan Gunthorpe
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 27+ messages in thread
From: Logan Gunthorpe @ 2019-10-22 21:46 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul; +Cc: Dan Williams, Logan Gunthorpe

The module reference is taken to ensure the callbacks still exist
when they are called. If the channel holds the last reference to the
module, the module can disappear before device_free_chan_resources()
is called, causing a call into freed memory.
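
Schematically, the problem and the fix look like this (simplified
sketch, not the actual diff):

	/* Before: the final module_put() can unload the module whose
	 * code device_free_chan_resources() points into. */
	module_put(dma_chan_to_owner(chan));
	chan->device->device_free_chan_resources(chan); /* use after free */

	/* After: free the channel resources first, then drop the ref. */
	chan->device->device_free_chan_resources(chan);
	module_put(dma_chan_to_owner(chan));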

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/dmaengine.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 4b604086b1b3..776fdf535a3a 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -250,7 +250,6 @@ static void dma_chan_put(struct dma_chan *chan)
 		return;
 
 	chan->client_count--;
-	module_put(dma_chan_to_owner(chan));
 
 	/* This channel is not in use anymore, free it */
 	if (!chan->client_count && chan->device->device_free_chan_resources) {
@@ -259,6 +258,8 @@ static void dma_chan_put(struct dma_chan *chan)
 		chan->device->device_free_chan_resources(chan);
 	}
 
+	module_put(dma_chan_to_owner(chan));
+
 	/* If the channel is used via a DMA request router, free the mapping */
 	if (chan->router && chan->router->route_free) {
 		chan->router->route_free(chan->router->dev, chan->route_data);
-- 
2.20.1


* [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  2019-10-22 21:46 [PATCH 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
  2019-10-22 21:46 ` [PATCH 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
  2019-10-22 21:46 ` [PATCH 2/5] dmaengine: Call module_put() after device_free_chan_resources() Logan Gunthorpe
@ 2019-10-22 21:46 ` Logan Gunthorpe
  2019-11-09 17:35   ` Vinod Koul
  2019-10-22 21:46 ` [PATCH 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup Logan Gunthorpe
  2019-10-22 21:46 ` [PATCH 5/5] dmaengine: plx-dma: Implement descriptor submission Logan Gunthorpe
  4 siblings, 1 reply; 27+ messages in thread
From: Logan Gunthorpe @ 2019-10-22 21:46 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul; +Cc: Dan Williams, Logan Gunthorpe

Some PLX Switches can expose DMA engines via extra PCI functions
on the upstream port. Each function will have one DMA channel.

This patch is just the core PCI driver skeleton and dma
engine registration.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 MAINTAINERS           |   5 +
 drivers/dma/Kconfig   |   9 ++
 drivers/dma/Makefile  |   1 +
 drivers/dma/plx_dma.c | 209 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 224 insertions(+)
 create mode 100644 drivers/dma/plx_dma.c

diff --git a/MAINTAINERS b/MAINTAINERS
index e51a68bf8ca8..07edc1ead75d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12952,6 +12952,11 @@ S:	Maintained
 F:	drivers/iio/chemical/pms7003.c
 F:	Documentation/devicetree/bindings/iio/chemical/plantower,pms7003.yaml
 
+PLX DMA DRIVER
+M:	Logan Gunthorpe <logang@deltatee.com>
+S:	Maintained
+F:	drivers/dma/plx_dma.c
+
 PMBUS HARDWARE MONITORING DRIVERS
 M:	Guenter Roeck <linux@roeck-us.net>
 L:	linux-hwmon@vger.kernel.org
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 7af874b69ffb..156f6e8f61f1 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -477,6 +477,15 @@ config PXA_DMA
 	  16 to 32 channels for peripheral to memory or memory to memory
 	  transfers.
 
+config PLX_DMA
+	tristate "PLX ExpressLane PEX Switch DMA Engine Support"
+	depends on PCI
+	select DMA_ENGINE
+	help
+	  Some PLX ExpressLane PCI Switches support additional DMA engines.
+	  These are exposed via extra functions on the switch's
+	  upstream port. Each function exposes one DMA channel.
+
 config SIRF_DMA
 	tristate "CSR SiRFprimaII/SiRFmarco DMA support"
 	depends on ARCH_SIRF
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index f5ce8665e944..ce03135c47fd 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -57,6 +57,7 @@ obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
 obj-$(CONFIG_OWL_DMA) += owl-dma.o
 obj-$(CONFIG_PCH_DMA) += pch_dma.o
 obj-$(CONFIG_PL330_DMA) += pl330.o
+obj-$(CONFIG_PLX_DMA) += plx_dma.o
 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
 obj-$(CONFIG_PXA_DMA) += pxa_dma.o
 obj-$(CONFIG_RENESAS_DMA) += sh/
diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
new file mode 100644
index 000000000000..8668dad790b6
--- /dev/null
+++ b/drivers/dma/plx_dma.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PLX ExpressLane PEX PCI Switch DMA Engine Driver
+ * Copyright (c) 2019, Logan Gunthorpe <logang@deltatee.com>
+ * Copyright (c) 2019, GigaIO Networks, Inc
+ */
+
+#include "dmaengine.h"
+
+#include <linux/dmaengine.h>
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+MODULE_DESCRIPTION("PLX ExpressLane PEX PCI Switch DMA Engine");
+MODULE_VERSION("0.1");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Logan Gunthorpe");
+
+struct plx_dma_dev {
+	struct dma_device dma_dev;
+	struct dma_chan dma_chan;
+	void __iomem *bar;
+
+	struct kref ref;
+	struct work_struct release_work;
+};
+
+static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
+{
+	return container_of(c, struct plx_dma_dev, dma_chan);
+}
+
+static irqreturn_t plx_dma_isr(int irq, void *devid)
+{
+	return IRQ_HANDLED;
+}
+
+static void plx_dma_release_work(struct work_struct *work)
+{
+	struct plx_dma_dev *plxdev = container_of(work, struct plx_dma_dev,
+						  release_work);
+
+	dma_async_device_unregister(&plxdev->dma_dev);
+	put_device(plxdev->dma_dev.dev);
+	kfree(plxdev);
+}
+
+static void plx_dma_release(struct kref *ref)
+{
+	struct plx_dma_dev *plxdev = container_of(ref, struct plx_dma_dev, ref);
+
+	/*
+	 * The dmaengine reference counting and locking is a bit of a
+	 * mess so we have to work around it a bit here. We might put
+	 * the reference while the dmaengine holds the dma_list_mutex
+	 * which means we can't call dma_async_device_unregister() directly
+	 * here and it must be delayed.
+	 */
+	schedule_work(&plxdev->release_work);
+}
+
+static void plx_dma_put(struct plx_dma_dev *plxdev)
+{
+	kref_put(&plxdev->ref, plx_dma_release);
+}
+
+static int plx_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+
+	kref_get(&plxdev->ref);
+
+	return 0;
+}
+
+static void plx_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+
+	plx_dma_put(plxdev);
+}
+
+static int plx_dma_create(struct pci_dev *pdev)
+{
+	struct plx_dma_dev *plxdev;
+	struct dma_device *dma;
+	struct dma_chan *chan;
+	int rc;
+
+	plxdev = kzalloc(sizeof(*plxdev), GFP_KERNEL);
+	if (!plxdev)
+		return -ENOMEM;
+
+	rc = request_irq(pci_irq_vector(pdev, 0), plx_dma_isr, 0,
+			 KBUILD_MODNAME, plxdev);
+	if (rc) {
+		kfree(plxdev);
+		return rc;
+	}
+
+	kref_init(&plxdev->ref);
+	INIT_WORK(&plxdev->release_work, plx_dma_release_work);
+
+	plxdev->bar = pcim_iomap_table(pdev)[0];
+
+	dma = &plxdev->dma_dev;
+	dma->chancnt = 1;
+	INIT_LIST_HEAD(&dma->channels);
+	dma->copy_align = DMAENGINE_ALIGN_1_BYTE;
+	dma->dev = get_device(&pdev->dev);
+
+	dma->device_alloc_chan_resources = plx_dma_alloc_chan_resources;
+	dma->device_free_chan_resources = plx_dma_free_chan_resources;
+
+	chan = &plxdev->dma_chan;
+	chan->device = dma;
+	dma_cookie_init(chan);
+	list_add_tail(&chan->device_node, &dma->channels);
+
+	rc = dma_async_device_register(dma);
+	if (rc) {
+		pci_err(pdev, "Failed to register dma device: %d\n", rc);
+		free_irq(pci_irq_vector(pdev, 0), plxdev);
+		return rc;
+	}
+
+	pci_set_drvdata(pdev, plxdev);
+
+	return 0;
+}
+
+static int plx_dma_probe(struct pci_dev *pdev,
+			 const struct pci_device_id *id)
+{
+	int rc;
+
+	rc = pcim_enable_device(pdev);
+	if (rc)
+		return rc;
+
+	rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(48));
+	if (rc)
+		rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+	if (rc)
+		return rc;
+
+	rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(48));
+	if (rc)
+		rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+	if (rc)
+		return rc;
+
+	rc = pcim_iomap_regions(pdev, 1, KBUILD_MODNAME);
+	if (rc)
+		return rc;
+
+	rc = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
+	if (rc <= 0)
+		return rc;
+
+	pci_set_master(pdev);
+
+	rc = plx_dma_create(pdev);
+	if (rc)
+		goto err_free_irq_vectors;
+
+	pci_info(pdev, "PLX DMA Channel Registered\n");
+
+	return 0;
+
+err_free_irq_vectors:
+	pci_free_irq_vectors(pdev);
+	return rc;
+}
+
+static void plx_dma_remove(struct pci_dev *pdev)
+{
+	struct plx_dma_dev *plxdev = pci_get_drvdata(pdev);
+
+	free_irq(pci_irq_vector(pdev, 0), plxdev);
+
+	plxdev->bar = NULL;
+	plx_dma_put(plxdev);
+
+	pci_free_irq_vectors(pdev);
+}
+
+static const struct pci_device_id plx_dma_pci_tbl[] = {
+	{
+		.vendor		= PCI_VENDOR_ID_PLX,
+		.device		= 0x87D0,
+		.subvendor	= PCI_ANY_ID,
+		.subdevice	= PCI_ANY_ID,
+		.class		= PCI_CLASS_SYSTEM_OTHER << 8,
+		.class_mask	= 0xFFFFFFFF,
+	},
+	{0}
+};
+MODULE_DEVICE_TABLE(pci, plx_dma_pci_tbl);
+
+static struct pci_driver plx_dma_pci_driver = {
+	.name           = KBUILD_MODNAME,
+	.id_table       = plx_dma_pci_tbl,
+	.probe          = plx_dma_probe,
+	.remove		= plx_dma_remove,
+};
+module_pci_driver(plx_dma_pci_driver);
-- 
2.20.1


* [PATCH 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup
  2019-10-22 21:46 [PATCH 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
                   ` (2 preceding siblings ...)
  2019-10-22 21:46 ` [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton Logan Gunthorpe
@ 2019-10-22 21:46 ` Logan Gunthorpe
  2019-10-22 21:46 ` [PATCH 5/5] dmaengine: plx-dma: Implement descriptor submission Logan Gunthorpe
  4 siblings, 0 replies; 27+ messages in thread
From: Logan Gunthorpe @ 2019-10-22 21:46 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul; +Cc: Dan Williams, Logan Gunthorpe

Allocate DMA coherent memory for the ring of DMA descriptors and
program the appropriate hardware registers.

A tasklet is created which is triggered on an interrupt to process
all the finished requests. Additionally, any remaining descriptors
are aborted when the hardware is removed or the resources freed.

Use an RCU pointer to synchronize PCI device unbind.
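
The RCU usage reduces to the following pattern (a condensed sketch of
the code below):

	/* every hardware access checks the pdev pointer under RCU */
	rcu_read_lock();
	if (!rcu_dereference(plxdev->pdev)) {
		rcu_read_unlock();
		return;		/* device is gone, fail the request */
	}
	/* ... touch registers through plxdev->bar ... */
	rcu_read_unlock();

	/* ... and remove() waits out all such readers before teardown */
	rcu_assign_pointer(plxdev->pdev, NULL);
	synchronize_rcu();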

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/plx_dma.c | 332 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 331 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
index 8668dad790b6..d3c2319e2fad 100644
--- a/drivers/dma/plx_dma.c
+++ b/drivers/dma/plx_dma.c
@@ -18,13 +18,103 @@ MODULE_VERSION("0.1");
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Logan Gunthorpe");
 
+#define PLX_REG_DESC_RING_ADDR			0x214
+#define PLX_REG_DESC_RING_ADDR_HI		0x218
+#define PLX_REG_DESC_RING_NEXT_ADDR		0x21C
+#define PLX_REG_DESC_RING_COUNT			0x220
+#define PLX_REG_DESC_RING_LAST_ADDR		0x224
+#define PLX_REG_DESC_RING_LAST_SIZE		0x228
+#define PLX_REG_PREF_LIMIT			0x234
+#define PLX_REG_CTRL				0x238
+#define PLX_REG_CTRL2				0x23A
+#define PLX_REG_INTR_CTRL			0x23C
+#define PLX_REG_INTR_STATUS			0x23E
+
+#define PLX_REG_PREF_LIMIT_PREF_FOUR		8
+
+#define PLX_REG_CTRL_GRACEFUL_PAUSE		BIT(0)
+#define PLX_REG_CTRL_ABORT			BIT(1)
+#define PLX_REG_CTRL_WRITE_BACK_EN		BIT(2)
+#define PLX_REG_CTRL_START			BIT(3)
+#define PLX_REG_CTRL_RING_STOP_MODE		BIT(4)
+#define PLX_REG_CTRL_DESC_MODE_BLOCK		(0 << 5)
+#define PLX_REG_CTRL_DESC_MODE_ON_CHIP		(1 << 5)
+#define PLX_REG_CTRL_DESC_MODE_OFF_CHIP		(2 << 5)
+#define PLX_REG_CTRL_DESC_INVALID		BIT(8)
+#define PLX_REG_CTRL_GRACEFUL_PAUSE_DONE	BIT(9)
+#define PLX_REG_CTRL_ABORT_DONE			BIT(10)
+#define PLX_REG_CTRL_IMM_PAUSE_DONE		BIT(12)
+#define PLX_REG_CTRL_IN_PROGRESS		BIT(30)
+
+#define PLX_REG_CTRL_RESET_VAL	(PLX_REG_CTRL_DESC_INVALID | \
+				 PLX_REG_CTRL_GRACEFUL_PAUSE_DONE | \
+				 PLX_REG_CTRL_ABORT_DONE | \
+				 PLX_REG_CTRL_IMM_PAUSE_DONE)
+
+#define PLX_REG_CTRL_START_VAL	(PLX_REG_CTRL_WRITE_BACK_EN | \
+				 PLX_REG_CTRL_DESC_MODE_OFF_CHIP | \
+				 PLX_REG_CTRL_START | \
+				 PLX_REG_CTRL_RESET_VAL)
+
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_64B		0
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_128B	1
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_256B	2
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_512B	3
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_1KB		4
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_2KB		5
+#define PLX_REG_CTRL2_MAX_TXFR_SIZE_4B		7
+
+#define PLX_REG_INTR_CTRL_ERROR_EN		BIT(0)
+#define PLX_REG_INTR_CTRL_INV_DESC_EN		BIT(1)
+#define PLX_REG_INTR_CTRL_ABORT_DONE_EN		BIT(3)
+#define PLX_REG_INTR_CTRL_PAUSE_DONE_EN	BIT(4)
+#define PLX_REG_INTR_CTRL_IMM_PAUSE_DONE_EN	BIT(5)
+
+#define PLX_REG_INTR_STATUS_ERROR		BIT(0)
+#define PLX_REG_INTR_STATUS_INV_DESC		BIT(1)
+#define PLX_REG_INTR_STATUS_DESC_DONE		BIT(2)
+#define PLX_REG_INTR_STATUS_ABORT_DONE		BIT(3)
+
+struct plx_dma_hw_std_desc {
+	__le32 flags_and_size;
+	__le16 dst_addr_hi;
+	__le16 src_addr_hi;
+	__le32 dst_addr_lo;
+	__le32 src_addr_lo;
+};
+
+#define PLX_DESC_SIZE_MASK		0x7ffffff
+#define PLX_DESC_FLAG_VALID		BIT(31)
+#define PLX_DESC_FLAG_INT_WHEN_DONE	BIT(30)
+
+#define PLX_DESC_WB_SUCCESS		BIT(30)
+#define PLX_DESC_WB_RD_FAIL		BIT(29)
+#define PLX_DESC_WB_WR_FAIL		BIT(28)
+
+#define PLX_DMA_RING_COUNT		2048
+
+struct plx_dma_desc {
+	struct dma_async_tx_descriptor txd;
+	struct plx_dma_hw_std_desc *hw;
+	u32 orig_size;
+};
+
 struct plx_dma_dev {
 	struct dma_device dma_dev;
 	struct dma_chan dma_chan;
+	struct pci_dev __rcu *pdev;
 	void __iomem *bar;
 
 	struct kref ref;
 	struct work_struct release_work;
+	struct tasklet_struct desc_task;
+
+	spinlock_t ring_lock;
+	bool ring_active;
+	int head;
+	int tail;
+	struct plx_dma_hw_std_desc *hw_ring;
+	struct plx_dma_desc **desc_ring;
 };
 
 static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
@@ -32,8 +122,143 @@ static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
 	return container_of(c, struct plx_dma_dev, dma_chan);
 }
 
+static struct plx_dma_desc *plx_dma_get_desc(struct plx_dma_dev *plxdev, int i)
+{
+	return plxdev->desc_ring[i & (PLX_DMA_RING_COUNT - 1)];
+}
+
+static void plx_dma_process_desc(struct plx_dma_dev *plxdev)
+{
+	struct dmaengine_result res;
+	struct plx_dma_desc *desc;
+	u32 flags;
+
+	spin_lock_bh(&plxdev->ring_lock);
+
+	while (plxdev->tail != plxdev->head) {
+		desc = plx_dma_get_desc(plxdev, plxdev->tail);
+
+		flags = le32_to_cpu(READ_ONCE(desc->hw->flags_and_size));
+
+		if (flags & PLX_DESC_FLAG_VALID)
+			break;
+
+		res.residue = desc->orig_size - (flags & PLX_DESC_SIZE_MASK);
+
+		if (flags & PLX_DESC_WB_SUCCESS)
+			res.result = DMA_TRANS_NOERROR;
+		else if (flags & PLX_DESC_WB_WR_FAIL)
+			res.result = DMA_TRANS_WRITE_FAILED;
+		else
+			res.result = DMA_TRANS_READ_FAILED;
+
+		dma_cookie_complete(&desc->txd);
+		dma_descriptor_unmap(&desc->txd);
+		dmaengine_desc_get_callback_invoke(&desc->txd, &res);
+		desc->txd.callback = NULL;
+		desc->txd.callback_result = NULL;
+
+		plxdev->tail++;
+	}
+
+	spin_unlock_bh(&plxdev->ring_lock);
+}
+
+static void plx_dma_abort_desc(struct plx_dma_dev *plxdev)
+{
+	struct dmaengine_result res;
+	struct plx_dma_desc *desc;
+
+	plx_dma_process_desc(plxdev);
+
+	spin_lock_bh(&plxdev->ring_lock);
+
+	while (plxdev->tail != plxdev->head) {
+		desc = plx_dma_get_desc(plxdev, plxdev->tail);
+
+		res.residue = desc->orig_size;
+		res.result = DMA_TRANS_ABORTED;
+
+		dma_cookie_complete(&desc->txd);
+		dma_descriptor_unmap(&desc->txd);
+		dmaengine_desc_get_callback_invoke(&desc->txd, &res);
+		desc->txd.callback = NULL;
+		desc->txd.callback_result = NULL;
+
+		plxdev->tail++;
+	}
+
+	spin_unlock_bh(&plxdev->ring_lock);
+}
+
+static void __plx_dma_stop(struct plx_dma_dev *plxdev)
+{
+	unsigned long timeout = jiffies + msecs_to_jiffies(1000);
+	u32 val;
+
+	val = readl(plxdev->bar + PLX_REG_CTRL);
+	if (!(val & ~PLX_REG_CTRL_GRACEFUL_PAUSE))
+		return;
+
+	writel(PLX_REG_CTRL_RESET_VAL | PLX_REG_CTRL_GRACEFUL_PAUSE,
+	       plxdev->bar + PLX_REG_CTRL);
+
+	while (!time_after(jiffies, timeout)) {
+		val = readl(plxdev->bar + PLX_REG_CTRL);
+		if (val & PLX_REG_CTRL_GRACEFUL_PAUSE_DONE)
+			break;
+
+		cpu_relax();
+	}
+
+	if (!(val & PLX_REG_CTRL_GRACEFUL_PAUSE_DONE))
+		dev_err(plxdev->dma_dev.dev,
+			"Timeout waiting for graceful pause!\n");
+
+	writel(PLX_REG_CTRL_RESET_VAL | PLX_REG_CTRL_GRACEFUL_PAUSE,
+	       plxdev->bar + PLX_REG_CTRL);
+
+	writel(0, plxdev->bar + PLX_REG_DESC_RING_COUNT);
+	writel(0, plxdev->bar + PLX_REG_DESC_RING_ADDR);
+	writel(0, plxdev->bar + PLX_REG_DESC_RING_ADDR_HI);
+	writel(0, plxdev->bar + PLX_REG_DESC_RING_NEXT_ADDR);
+}
+
+static void plx_dma_stop(struct plx_dma_dev *plxdev)
+{
+	rcu_read_lock();
+	if (!rcu_dereference(plxdev->pdev)) {
+		rcu_read_unlock();
+		return;
+	}
+
+	__plx_dma_stop(plxdev);
+
+	rcu_read_unlock();
+}
+
+static void plx_dma_desc_task(unsigned long data)
+{
+	struct plx_dma_dev *plxdev = (void *)data;
+
+	plx_dma_process_desc(plxdev);
+}
+
 static irqreturn_t plx_dma_isr(int irq, void *devid)
 {
+	struct plx_dma_dev *plxdev = devid;
+	u32 status;
+
+	status = readw(plxdev->bar + PLX_REG_INTR_STATUS);
+
+	if (!status)
+		return IRQ_NONE;
+
+	if (status & PLX_REG_INTR_STATUS_DESC_DONE && plxdev->ring_active)
+		tasklet_schedule(&plxdev->desc_task);
+
+	writew(status, plxdev->bar + PLX_REG_INTR_STATUS);
+
 	return IRQ_HANDLED;
 }
 
@@ -66,18 +291,109 @@ static void plx_dma_put(struct plx_dma_dev *plxdev)
 	kref_put(&plxdev->ref, plx_dma_release);
 }
 
+static int plx_dma_alloc_desc(struct plx_dma_dev *plxdev)
+{
+	struct plx_dma_desc *desc;
+	int i;
+
+	plxdev->desc_ring = kcalloc(PLX_DMA_RING_COUNT,
+				    sizeof(*plxdev->desc_ring), GFP_KERNEL);
+	if (!plxdev->desc_ring)
+		return -ENOMEM;
+
+	for (i = 0; i < PLX_DMA_RING_COUNT; i++) {
+		desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+		if (!desc)
+			goto free_and_exit;
+
+		dma_async_tx_descriptor_init(&desc->txd, &plxdev->dma_chan);
+		desc->hw = &plxdev->hw_ring[i];
+		plxdev->desc_ring[i] = desc;
+	}
+
+	return 0;
+
+free_and_exit:
+	for (i = 0; i < PLX_DMA_RING_COUNT; i++)
+		kfree(plxdev->desc_ring[i]);
+	kfree(plxdev->desc_ring);
+	return -ENOMEM;
+}
+
 static int plx_dma_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+	size_t ring_sz = PLX_DMA_RING_COUNT * sizeof(*plxdev->hw_ring);
+	dma_addr_t dma_addr;
+	int rc;
+
+	rcu_read_lock();
+	if (!rcu_dereference(plxdev->pdev)) {
+		rcu_read_unlock();
+		return -ENODEV;
+	}
 
 	kref_get(&plxdev->ref);
 
-	return 0;
+	writel(PLX_REG_CTRL_RESET_VAL, plxdev->bar + PLX_REG_CTRL);
+
+	plxdev->hw_ring = dmam_alloc_coherent(plxdev->dma_dev.dev, ring_sz,
+					      &dma_addr, GFP_KERNEL);
+	if (!plxdev->hw_ring) {
+		rcu_read_unlock();
+		return -ENOMEM;
+	}
+
+	plxdev->head = plxdev->tail = 0;
+
+	rc = plx_dma_alloc_desc(plxdev);
+	if (rc) {
+		plx_dma_put(plxdev);
+		rcu_read_unlock();
+		return rc;
+	}
+
+	writel(lower_32_bits(dma_addr), plxdev->bar + PLX_REG_DESC_RING_ADDR);
+	writel(upper_32_bits(dma_addr),
+	       plxdev->bar + PLX_REG_DESC_RING_ADDR_HI);
+	writel(lower_32_bits(dma_addr),
+	       plxdev->bar + PLX_REG_DESC_RING_NEXT_ADDR);
+	writel(PLX_DMA_RING_COUNT, plxdev->bar + PLX_REG_DESC_RING_COUNT);
+	writel(PLX_REG_PREF_LIMIT_PREF_FOUR, plxdev->bar + PLX_REG_PREF_LIMIT);
+
+	plxdev->ring_active = true;
+
+	rcu_read_unlock();
+
+	return PLX_DMA_RING_COUNT;
 }
 
 static void plx_dma_free_chan_resources(struct dma_chan *chan)
 {
 	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+	struct pci_dev *pdev;
+	int i;
+
+	spin_lock_bh(&plxdev->ring_lock);
+	plxdev->ring_active = false;
+	spin_unlock_bh(&plxdev->ring_lock);
+
+	plx_dma_stop(plxdev);
+
+	rcu_read_lock();
+	pdev = rcu_dereference(plxdev->pdev);
+	if (pdev)
+		synchronize_irq(pci_irq_vector(pdev, 0));
+	rcu_read_unlock();
+
+	tasklet_kill(&plxdev->desc_task);
+
+	plx_dma_abort_desc(plxdev);
+
+	for (i = 0; i < PLX_DMA_RING_COUNT; i++)
+		kfree(plxdev->desc_ring[i]);
+
+	kfree(plxdev->desc_ring);
 
 	plx_dma_put(plxdev);
 }
@@ -102,7 +418,11 @@ static int plx_dma_create(struct pci_dev *pdev)
 
 	kref_init(&plxdev->ref);
 	INIT_WORK(&plxdev->release_work, plx_dma_release_work);
+	spin_lock_init(&plxdev->ring_lock);
+	tasklet_init(&plxdev->desc_task, plx_dma_desc_task,
+		     (unsigned long)plxdev);
 
+	RCU_INIT_POINTER(plxdev->pdev, pdev);
 	plxdev->bar = pcim_iomap_table(pdev)[0];
 
 	dma = &plxdev->dma_dev;
@@ -181,6 +501,16 @@ static void plx_dma_remove(struct pci_dev *pdev)
 
 	free_irq(pci_irq_vector(pdev, 0), plxdev);
 
+	rcu_assign_pointer(plxdev->pdev, NULL);
+	synchronize_rcu();
+
+	spin_lock_bh(&plxdev->ring_lock);
+	plxdev->ring_active = false;
+	spin_unlock_bh(&plxdev->ring_lock);
+
+	__plx_dma_stop(plxdev);
+	plx_dma_abort_desc(plxdev);
+
 	plxdev->bar = NULL;
 	plx_dma_put(plxdev);
 
-- 
2.20.1


* [PATCH 5/5] dmaengine: plx-dma: Implement descriptor submission
  2019-10-22 21:46 [PATCH 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
                   ` (3 preceding siblings ...)
  2019-10-22 21:46 ` [PATCH 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup Logan Gunthorpe
@ 2019-10-22 21:46 ` Logan Gunthorpe
  2019-11-09 17:40   ` Vinod Koul
  4 siblings, 1 reply; 27+ messages in thread
From: Logan Gunthorpe @ 2019-10-22 21:46 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul; +Cc: Dan Williams, Logan Gunthorpe

On prep, a spin lock is taken and the next entry in the circular buffer
is filled. On submit, the valid bit is set in the hardware descriptor
and the lock is released.

The DMA engine is started (if it's not already running) when the client
calls dma_async_issue_pending().
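
From a client's perspective, this maps onto the standard dmaengine calls
as follows (illustrative sketch; the variable names are hypothetical):

	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	/* prep: takes ring_lock and fills the next ring entry */
	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;	/* ring full or channel inactive */

	/* submit: sets the valid bit and releases ring_lock */
	cookie = dmaengine_submit(tx);

	/* start the engine if it is not already running */
	dma_async_issue_pending(chan);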

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/plx_dma.c | 119 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 119 insertions(+)

diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
index d3c2319e2fad..21e4d7634eeb 100644
--- a/drivers/dma/plx_dma.c
+++ b/drivers/dma/plx_dma.c
@@ -7,6 +7,7 @@
 
 #include "dmaengine.h"
 
+#include <linux/circ_buf.h>
 #include <linux/dmaengine.h>
 #include <linux/kref.h>
 #include <linux/list.h>
@@ -122,6 +123,11 @@ static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
 	return container_of(c, struct plx_dma_dev, dma_chan);
 }
 
+static struct plx_dma_desc *to_plx_desc(struct dma_async_tx_descriptor *txd)
+{
+	return container_of(txd, struct plx_dma_desc, txd);
+}
+
 static struct plx_dma_desc *plx_dma_get_desc(struct plx_dma_dev *plxdev, int i)
 {
 	return plxdev->desc_ring[i & (PLX_DMA_RING_COUNT - 1)];
@@ -244,6 +250,113 @@ static void plx_dma_desc_task(unsigned long data)
 	plx_dma_process_desc(plxdev);
 }
 
+static struct dma_async_tx_descriptor *plx_dma_prep_memcpy(struct dma_chan *c,
+		dma_addr_t dma_dst, dma_addr_t dma_src, size_t len,
+		unsigned long flags)
+	__acquires(plxdev->ring_lock)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(c);
+	struct plx_dma_desc *plxdesc;
+
+	spin_lock_bh(&plxdev->ring_lock);
+	if (!plxdev->ring_active)
+		goto err_unlock;
+
+	if (!CIRC_SPACE(plxdev->head, plxdev->tail, PLX_DMA_RING_COUNT))
+		goto err_unlock;
+
+	if (len > PLX_DESC_SIZE_MASK)
+		goto err_unlock;
+
+	plxdesc = plx_dma_get_desc(plxdev, plxdev->head);
+	plxdev->head++;
+
+	plxdesc->hw->dst_addr_lo = cpu_to_le32(lower_32_bits(dma_dst));
+	plxdesc->hw->dst_addr_hi = cpu_to_le16(upper_32_bits(dma_dst));
+	plxdesc->hw->src_addr_lo = cpu_to_le32(lower_32_bits(dma_src));
+	plxdesc->hw->src_addr_hi = cpu_to_le16(upper_32_bits(dma_src));
+
+	plxdesc->orig_size = len;
+
+	if (flags & DMA_PREP_INTERRUPT)
+		len |= PLX_DESC_FLAG_INT_WHEN_DONE;
+
+	plxdesc->hw->flags_and_size = cpu_to_le32(len);
+	plxdesc->txd.flags = flags;
+
+	/* return with the lock held, it will be released in tx_submit */
+
+	return &plxdesc->txd;
+
+err_unlock:
+	/*
+	 * Keep sparse happy by restoring an even lock count on
+	 * this lock.
+	 */
+	__acquire(plxdev->ring_lock);
+
+	spin_unlock_bh(&plxdev->ring_lock);
+	return NULL;
+}
+
+static dma_cookie_t plx_dma_tx_submit(struct dma_async_tx_descriptor *desc)
+	__releases(plxdev->ring_lock)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(desc->chan);
+	struct plx_dma_desc *plxdesc = to_plx_desc(desc);
+	dma_cookie_t cookie;
+
+	cookie = dma_cookie_assign(desc);
+
+	/*
+	 * Ensure the descriptor updates are visible to the dma device
+	 * before setting the valid bit.
+	 */
+	wmb();
+
+	plxdesc->hw->flags_and_size |= cpu_to_le32(PLX_DESC_FLAG_VALID);
+
+	spin_unlock_bh(&plxdev->ring_lock);
+
+	return cookie;
+}
+
+static enum dma_status plx_dma_tx_status(struct dma_chan *chan,
+		dma_cookie_t cookie, struct dma_tx_state *txstate)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+	enum dma_status ret;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	plx_dma_process_desc(plxdev);
+
+	return dma_cookie_status(chan, cookie, txstate);
+}
+
+static void plx_dma_issue_pending(struct dma_chan *chan)
+{
+	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
+
+	rcu_read_lock();
+	if (!rcu_dereference(plxdev->pdev)) {
+		rcu_read_unlock();
+		return;
+	}
+
+	/*
+	 * Ensure the valid bits are visible before starting the
+	 * DMA engine.
+	 */
+	wmb();
+
+	writew(PLX_REG_CTRL_START_VAL, plxdev->bar + PLX_REG_CTRL);
+
+	rcu_read_unlock();
+}
+
 static irqreturn_t plx_dma_isr(int irq, void *devid)
 {
 	struct plx_dma_dev *plxdev = devid;
@@ -307,7 +420,9 @@ static int plx_dma_alloc_desc(struct plx_dma_dev *plxdev)
 			goto free_and_exit;
 
 		dma_async_tx_descriptor_init(&desc->txd, &plxdev->dma_chan);
+		desc->txd.tx_submit = plx_dma_tx_submit;
 		desc->hw = &plxdev->hw_ring[i];
+
 		plxdev->desc_ring[i] = desc;
 	}
 
@@ -428,11 +543,15 @@ static int plx_dma_create(struct pci_dev *pdev)
 	dma = &plxdev->dma_dev;
 	dma->chancnt = 1;
 	INIT_LIST_HEAD(&dma->channels);
+	dma_cap_set(DMA_MEMCPY, dma->cap_mask);
 	dma->copy_align = DMAENGINE_ALIGN_1_BYTE;
 	dma->dev = get_device(&pdev->dev);
 
 	dma->device_alloc_chan_resources = plx_dma_alloc_chan_resources;
 	dma->device_free_chan_resources = plx_dma_free_chan_resources;
+	dma->device_prep_dma_memcpy = plx_dma_prep_memcpy;
+	dma->device_issue_pending = plx_dma_issue_pending;
+	dma->device_tx_status = plx_dma_tx_status;
 
 	chan = &plxdev->dma_chan;
 	chan->device = dma;
-- 
2.20.1


* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-10-22 21:46 ` [PATCH 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
@ 2019-11-09 17:18   ` Vinod Koul
  2019-11-11 16:50     ` Logan Gunthorpe
  0 siblings, 1 reply; 27+ messages in thread
From: Vinod Koul @ 2019-11-09 17:18 UTC (permalink / raw)
  To: Logan Gunthorpe; +Cc: linux-kernel, dmaengine, Dan Williams

Hi Logan,

Sorry for the delay in reply!

On 22-10-19, 15:46, Logan Gunthorpe wrote:
> dma_chan_to_owner() dereferences the driver from the struct device to
> obtain the owner and call module_[get|put](). However, if the backing
> device is unbound before the dma_device is unregistered, the driver
> will be cleared and this will cause a NULL pointer dereference.

Have you been able to repro this? If so, how?

The expectation is that the driver shall unregister before being removed.
> 
> Instead, store a pointer to the owner module in the dma_device struct
> so the module reference can be properly put when the channel is put, even
> if the backing device was destroyed first.
> 
> This change helps to support a safer unbind of DMA engines.

These are error cases which should be fixed, so maybe this is the right
way to get things fixed :)

> If the dma_device is unregistered in the driver's remove function,
> there's no guarantee that there are no existing clients, and a user's
> action may trigger the WARN_ONCE in dma_async_device_unregister(),
> which is unlikely to leave the system in a consistent state.
> Instead, a better approach is to allow the backing driver to go away
> and fail any subsequent requests to it.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---
>  drivers/dma/dmaengine.c   | 4 +++-
>  include/linux/dmaengine.h | 2 ++
>  2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index 03ac4b96117c..4b604086b1b3 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -179,7 +179,7 @@ __dma_device_satisfies_mask(struct dma_device *device,
>  
>  static struct module *dma_chan_to_owner(struct dma_chan *chan)
>  {
> -	return chan->device->dev->driver->owner;
> +	return chan->device->owner;
>  }
>  
>  /**
> @@ -919,6 +919,8 @@ int dma_async_device_register(struct dma_device *device)
>  		return -EIO;
>  	}
>  
> +	device->owner = device->dev->driver->owner;
> +
>  	if (dma_has_cap(DMA_MEMCPY, device->cap_mask) && !device->device_prep_dma_memcpy) {
>  		dev_err(device->dev,
>  			"Device claims capability %s, but op is not defined\n",
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index 8fcdee1c0cf9..13aa0abb71de 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -674,6 +674,7 @@ struct dma_filter {
>   * @fill_align: alignment shift for memset operations
>   * @dev_id: unique device ID
>   * @dev: struct device reference for dma mapping api
> + * @owner: owner module (automatically set based on the provided dev)
>   * @src_addr_widths: bit mask of src addr widths the device supports
>   *	Width is specified in bytes, e.g. for a device supporting
>   *	a width of 4 the mask should have BIT(4) set.
> @@ -737,6 +738,7 @@ struct dma_device {
>  
>  	int dev_id;
>  	struct device *dev;
> +	struct module *owner;
>  
>  	u32 src_addr_widths;
>  	u32 dst_addr_widths;
> -- 
> 2.20.1

-- 
~Vinod

* Re: [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  2019-10-22 21:46 ` [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton Logan Gunthorpe
@ 2019-11-09 17:35   ` Vinod Koul
  2019-11-11 17:50     ` Logan Gunthorpe
  0 siblings, 1 reply; 27+ messages in thread
From: Vinod Koul @ 2019-11-09 17:35 UTC (permalink / raw)
  To: Logan Gunthorpe; +Cc: linux-kernel, dmaengine, Dan Williams

On 22-10-19, 15:46, Logan Gunthorpe wrote:
> Some PLX Switches can expose DMA engines via extra PCI functions
> on the upstream port. Each function will have one DMA channel.
> 
> This patch is just the core PCI driver skeleton and dma
> engine registration.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---
>  MAINTAINERS           |   5 +
>  drivers/dma/Kconfig   |   9 ++
>  drivers/dma/Makefile  |   1 +
>  drivers/dma/plx_dma.c | 209 ++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 224 insertions(+)
>  create mode 100644 drivers/dma/plx_dma.c
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index e51a68bf8ca8..07edc1ead75d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -12952,6 +12952,11 @@ S:	Maintained
>  F:	drivers/iio/chemical/pms7003.c
>  F:	Documentation/devicetree/bindings/iio/chemical/plantower,pms7003.yaml
>  
> +PLX DMA DRIVER
> +M:	Logan Gunthorpe <logang@deltatee.com>
> +S:	Maintained
> +F:	drivers/dma/plx_dma.c
> +
>  PMBUS HARDWARE MONITORING DRIVERS
>  M:	Guenter Roeck <linux@roeck-us.net>
>  L:	linux-hwmon@vger.kernel.org
> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> index 7af874b69ffb..156f6e8f61f1 100644
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -477,6 +477,15 @@ config PXA_DMA
>  	  16 to 32 channels for peripheral to memory or memory to memory
>  	  transfers.
>  
> +config PLX_DMA
> +	tristate "PLX ExpressLane PEX Switch DMA Engine Support"
> +	depends on PCI
> +	select DMA_ENGINE
> +	help
> +	  Some PLX ExpressLane PCI Switches support additional DMA engines.
> +	  These are exposed via extra functions on the switch's
> +	  upstream port. Each function exposes one DMA channel.
> +
>  config SIRF_DMA
>  	tristate "CSR SiRFprimaII/SiRFmarco DMA support"
>  	depends on ARCH_SIRF
> diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> index f5ce8665e944..ce03135c47fd 100644
> --- a/drivers/dma/Makefile
> +++ b/drivers/dma/Makefile
> @@ -57,6 +57,7 @@ obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
>  obj-$(CONFIG_OWL_DMA) += owl-dma.o
>  obj-$(CONFIG_PCH_DMA) += pch_dma.o
>  obj-$(CONFIG_PL330_DMA) += pl330.o
> +obj-$(CONFIG_PLX_DMA) += plx_dma.o
>  obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
>  obj-$(CONFIG_PXA_DMA) += pxa_dma.o
>  obj-$(CONFIG_RENESAS_DMA) += sh/
> diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
> new file mode 100644
> index 000000000000..8668dad790b6
> --- /dev/null
> +++ b/drivers/dma/plx_dma.c
> @@ -0,0 +1,209 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * PLX ExpressLane PEX PCI Switch DMA Engine Driver
> + * Copyright (c) 2019, Logan Gunthorpe <logang@deltatee.com>
> + * Copyright (c) 2019, GigaIO Networks, Inc
> + */
> +
> +#include "dmaengine.h"
> +
> +#include <linux/dmaengine.h>
> +#include <linux/kref.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +
> +MODULE_DESCRIPTION("PLX ExpressLane PEX PCI Switch DMA Engine");
> +MODULE_VERSION("0.1");
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Logan Gunthorpe");
> +
> +struct plx_dma_dev {
> +	struct dma_device dma_dev;
> +	struct dma_chan dma_chan;
> +	void __iomem *bar;
> +
> +	struct kref ref;
> +	struct work_struct release_work;
> +};
> +
> +static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
> +{
> +	return container_of(c, struct plx_dma_dev, dma_chan);
> +}
> +
> +static irqreturn_t plx_dma_isr(int irq, void *devid)
> +{
> +	return IRQ_HANDLED;

??

> +}
> +
> +static void plx_dma_release_work(struct work_struct *work)
> +{
> +	struct plx_dma_dev *plxdev = container_of(work, struct plx_dma_dev,
> +						  release_work);
> +
> +	dma_async_device_unregister(&plxdev->dma_dev);
> +	put_device(plxdev->dma_dev.dev);
> +	kfree(plxdev);
> +}
> +
> +static void plx_dma_release(struct kref *ref)
> +{
> +	struct plx_dma_dev *plxdev = container_of(ref, struct plx_dma_dev, ref);
> +
> +	/*
> +	 * The dmaengine reference counting and locking is a bit of a
> +	 * mess so we have to work around it a bit here. We might put
> +	 * the reference while the dmaengine holds the dma_list_mutex
> +	 * which means we can't call dma_async_device_unregister() directly
> +	 * here and it must be delayed.

Why is that? I have not heard any complaints about locking; can you
elaborate on why you need to do this?

> +	 */
> +	schedule_work(&plxdev->release_work);
> +}
> +
> +static void plx_dma_put(struct plx_dma_dev *plxdev)
> +{
> +	kref_put(&plxdev->ref, plx_dma_release);
> +}
> +
> +static int plx_dma_alloc_chan_resources(struct dma_chan *chan)
> +{
> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
> +
> +	kref_get(&plxdev->ref);

why do you need to do this?

> +
> +	return 0;
> +}
> +
> +static void plx_dma_free_chan_resources(struct dma_chan *chan)
> +{
> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
> +
> +	plx_dma_put(plxdev);
> +}
> +
> +static int plx_dma_create(struct pci_dev *pdev)
> +{
> +	struct plx_dma_dev *plxdev;
> +	struct dma_device *dma;
> +	struct dma_chan *chan;
> +	int rc;
> +
> +	plxdev = kzalloc(sizeof(*plxdev), GFP_KERNEL);
> +	if (!plxdev)
> +		return -ENOMEM;
> +
> +	rc = request_irq(pci_irq_vector(pdev, 0), plx_dma_isr, 0,
> +			 KBUILD_MODNAME, plxdev);
> +	if (rc) {
> +		kfree(plxdev);
> +		return rc;
> +	}
> +
> +	kref_init(&plxdev->ref);
> +	INIT_WORK(&plxdev->release_work, plx_dma_release_work);
> +
> +	plxdev->bar = pcim_iomap_table(pdev)[0];
> +
> +	dma = &plxdev->dma_dev;
> +	dma->chancnt = 1;
> +	INIT_LIST_HEAD(&dma->channels);
> +	dma->copy_align = DMAENGINE_ALIGN_1_BYTE;
> +	dma->dev = get_device(&pdev->dev);
> +
> +	dma->device_alloc_chan_resources = plx_dma_alloc_chan_resources;
> +	dma->device_free_chan_resources = plx_dma_free_chan_resources;
> +
> +	chan = &plxdev->dma_chan;
> +	chan->device = dma;
> +	dma_cookie_init(chan);
> +	list_add_tail(&chan->device_node, &dma->channels);
> +
> +	rc = dma_async_device_register(dma);
> +	if (rc) {
> +		pci_err(pdev, "Failed to register dma device: %d\n", rc);
> +		free_irq(pci_irq_vector(pdev, 0), plxdev);
> +		return rc;
> +	}
> +
> +	pci_set_drvdata(pdev, plxdev);
> +
> +	return 0;
> +}
> +
> +static int plx_dma_probe(struct pci_dev *pdev,
> +			 const struct pci_device_id *id)
> +{
> +	int rc;
> +
> +	rc = pcim_enable_device(pdev);
> +	if (rc)
> +		return rc;
> +
> +	rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(48));
> +	if (rc)
> +		rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
> +	if (rc)
> +		return rc;
> +
> +	rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(48));
> +	if (rc)
> +		rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
> +	if (rc)
> +		return rc;
> +
> +	rc = pcim_iomap_regions(pdev, 1, KBUILD_MODNAME);
> +	if (rc)
> +		return rc;
> +
> +	rc = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
> +	if (rc <= 0)
> +		return rc;
> +
> +	pci_set_master(pdev);
> +
> +	rc = plx_dma_create(pdev);
> +	if (rc)
> +		goto err_free_irq_vectors;
> +
> +	pci_info(pdev, "PLX DMA Channel Registered\n");
> +
> +	return 0;
> +
> +err_free_irq_vectors:
> +	pci_free_irq_vectors(pdev);
> +	return rc;
> +}
> +
> +static void plx_dma_remove(struct pci_dev *pdev)
> +{
> +	struct plx_dma_dev *plxdev = pci_get_drvdata(pdev);
> +
> +	free_irq(pci_irq_vector(pdev, 0), plxdev);
> +
> +	plxdev->bar = NULL;
> +	plx_dma_put(plxdev);
> +
> +	pci_free_irq_vectors(pdev);
> +}
> +
> +static const struct pci_device_id plx_dma_pci_tbl[] = {
> +	{
> +		.vendor		= PCI_VENDOR_ID_PLX,
> +		.device		= 0x87D0,
> +		.subvendor	= PCI_ANY_ID,
> +		.subdevice	= PCI_ANY_ID,
> +		.class		= PCI_CLASS_SYSTEM_OTHER << 8,

Can you explain this?

> +		.class_mask	= 0xFFFFFFFF,
> +	},

-- 
~Vinod

* Re: [PATCH 5/5] dmaengine: plx-dma: Implement descriptor submission
  2019-10-22 21:46 ` [PATCH 5/5] dmaengine: plx-dma: Implement descriptor submission Logan Gunthorpe
@ 2019-11-09 17:40   ` Vinod Koul
  2019-11-11 18:11     ` Logan Gunthorpe
  0 siblings, 1 reply; 27+ messages in thread
From: Vinod Koul @ 2019-11-09 17:40 UTC (permalink / raw)
  To: Logan Gunthorpe; +Cc: linux-kernel, dmaengine, Dan Williams

On 22-10-19, 15:46, Logan Gunthorpe wrote:
> On prep, a spin lock is taken and the next entry in the circular buffer
> is filled. On submit, the valid bit is set in the hardware descriptor
> and the lock is released.
> 
> The DMA engine is started (if it's not already running) when the client
> calls dma_async_issue_pending().
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---
>  drivers/dma/plx_dma.c | 119 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 119 insertions(+)
> 
> diff --git a/drivers/dma/plx_dma.c b/drivers/dma/plx_dma.c
> index d3c2319e2fad..21e4d7634eeb 100644
> --- a/drivers/dma/plx_dma.c
> +++ b/drivers/dma/plx_dma.c
> @@ -7,6 +7,7 @@
>  
>  #include "dmaengine.h"
>  
> +#include <linux/circ_buf.h>
>  #include <linux/dmaengine.h>
>  #include <linux/kref.h>
>  #include <linux/list.h>
> @@ -122,6 +123,11 @@ static struct plx_dma_dev *chan_to_plx_dma_dev(struct dma_chan *c)
>  	return container_of(c, struct plx_dma_dev, dma_chan);
>  }
>  
> +static struct plx_dma_desc *to_plx_desc(struct dma_async_tx_descriptor *txd)
> +{
> +	return container_of(txd, struct plx_dma_desc, txd);
> +}
> +
>  static struct plx_dma_desc *plx_dma_get_desc(struct plx_dma_dev *plxdev, int i)
>  {
>  	return plxdev->desc_ring[i & (PLX_DMA_RING_COUNT - 1)];
> @@ -244,6 +250,113 @@ static void plx_dma_desc_task(unsigned long data)
>  	plx_dma_process_desc(plxdev);
>  }
>  
> +static struct dma_async_tx_descriptor *plx_dma_prep_memcpy(struct dma_chan *c,
> +		dma_addr_t dma_dst, dma_addr_t dma_src, size_t len,
> +		unsigned long flags)
> +	__acquires(plxdev->ring_lock)
> +{
> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(c);
> +	struct plx_dma_desc *plxdesc;
> +
> +	spin_lock_bh(&plxdev->ring_lock);
> +	if (!plxdev->ring_active)
> +		goto err_unlock;
> +
> +	if (!CIRC_SPACE(plxdev->head, plxdev->tail, PLX_DMA_RING_COUNT))
> +		goto err_unlock;
> +
> +	if (len > PLX_DESC_SIZE_MASK)
> +		goto err_unlock;
> +
> +	plxdesc = plx_dma_get_desc(plxdev, plxdev->head);
> +	plxdev->head++;
> +
> +	plxdesc->hw->dst_addr_lo = cpu_to_le32(lower_32_bits(dma_dst));
> +	plxdesc->hw->dst_addr_hi = cpu_to_le16(upper_32_bits(dma_dst));
> +	plxdesc->hw->src_addr_lo = cpu_to_le32(lower_32_bits(dma_src));
> +	plxdesc->hw->src_addr_hi = cpu_to_le16(upper_32_bits(dma_src));
> +
> +	plxdesc->orig_size = len;
> +
> +	if (flags & DMA_PREP_INTERRUPT)
> +		len |= PLX_DESC_FLAG_INT_WHEN_DONE;
> +
> +	plxdesc->hw->flags_and_size = cpu_to_le32(len);
> +	plxdesc->txd.flags = flags;
> +
> +	/* return with the lock held, it will be released in tx_submit */
> +
> +	return &plxdesc->txd;
> +
> +err_unlock:
> +	/*
> +	 * Keep sparse happy by restoring an even lock count on
> +	 * this lock.
> +	 */
> +	__acquire(plxdev->ring_lock);
> +
> +	spin_unlock_bh(&plxdev->ring_lock);
> +	return NULL;
> +}
> +
> +static dma_cookie_t plx_dma_tx_submit(struct dma_async_tx_descriptor *desc)
> +	__releases(plxdev->ring_lock)
> +{
> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(desc->chan);
> +	struct plx_dma_desc *plxdesc = to_plx_desc(desc);
> +	dma_cookie_t cookie;
> +
> +	cookie = dma_cookie_assign(desc);
> +
> +	/*
> +	 * Ensure the descriptor updates are visible to the dma device
> +	 * before setting the valid bit.
> +	 */
> +	wmb();
> +
> +	plxdesc->hw->flags_and_size |= cpu_to_le32(PLX_DESC_FLAG_VALID);

so where do you store the submitted descriptor?

> +
> +	spin_unlock_bh(&plxdev->ring_lock);
> +
> +	return cookie;
> +}
> +
> +static enum dma_status plx_dma_tx_status(struct dma_chan *chan,
> +		dma_cookie_t cookie, struct dma_tx_state *txstate)
> +{
> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
> +	enum dma_status ret;
> +
> +	ret = dma_cookie_status(chan, cookie, txstate);
> +	if (ret == DMA_COMPLETE)
> +		return ret;
> +
> +	plx_dma_process_desc(plxdev);

Why is this done here? A query of status should not make you process
something!

> +
> +	return dma_cookie_status(chan, cookie, txstate);
> +}
> +
> +static void plx_dma_issue_pending(struct dma_chan *chan)
> +{
> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
> +
> +	rcu_read_lock();
> +	if (!rcu_dereference(plxdev->pdev)) {
> +		rcu_read_unlock();
> +		return;
> +	}
> +
> +	/*
> +	 * Ensure the valid bits are visible before starting the
> +	 * DMA engine.
> +	 */
> +	wmb();
> +
> +	writew(PLX_REG_CTRL_START_VAL, plxdev->bar + PLX_REG_CTRL);

Start what?

> +
> +	rcu_read_unlock();
> +}
> +
>  static irqreturn_t plx_dma_isr(int irq, void *devid)
>  {
>  	struct plx_dma_dev *plxdev = devid;
> @@ -307,7 +420,9 @@ static int plx_dma_alloc_desc(struct plx_dma_dev *plxdev)
>  			goto free_and_exit;
>  
>  		dma_async_tx_descriptor_init(&desc->txd, &plxdev->dma_chan);
> +		desc->txd.tx_submit = plx_dma_tx_submit;
>  		desc->hw = &plxdev->hw_ring[i];
> +
>  		plxdev->desc_ring[i] = desc;
>  	}
>  
> @@ -428,11 +543,15 @@ static int plx_dma_create(struct pci_dev *pdev)
>  	dma = &plxdev->dma_dev;
>  	dma->chancnt = 1;
>  	INIT_LIST_HEAD(&dma->channels);
> +	dma_cap_set(DMA_MEMCPY, dma->cap_mask);
>  	dma->copy_align = DMAENGINE_ALIGN_1_BYTE;
>  	dma->dev = get_device(&pdev->dev);
>  
>  	dma->device_alloc_chan_resources = plx_dma_alloc_chan_resources;
>  	dma->device_free_chan_resources = plx_dma_free_chan_resources;
> +	dma->device_prep_dma_memcpy = plx_dma_prep_memcpy;
> +	dma->device_issue_pending = plx_dma_issue_pending;
> +	dma->device_tx_status = plx_dma_tx_status;
>  
>  	chan = &plxdev->dma_chan;
>  	chan->device = dma;
> -- 
> 2.20.1

-- 
~Vinod

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-09 17:18   ` Vinod Koul
@ 2019-11-11 16:50     ` Logan Gunthorpe
  2019-11-12  5:56       ` Vinod Koul
  0 siblings, 1 reply; 27+ messages in thread
From: Logan Gunthorpe @ 2019-11-11 16:50 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, Dan Williams



On 2019-11-09 10:18 a.m., Vinod Koul wrote:
> Hi Logan,
> 
> Sorry for the delay in reply!
> 
> On 22-10-19, 15:46, Logan Gunthorpe wrote:
>> dma_chan_to_owner() dereferences the driver from the struct device to
>> obtain the owner and call module_[get|put](). However, if the backing
>> device is unbound before the dma_device is unregistered, the driver
>> will be cleared and this will cause a NULL pointer dereference.
> 
> Have you been able to repro this? If so, how?
> 
> The expectation is that the driver shall unregister before being removed.

Yes, with my new driver, if I do a PCI unbind (which unregisters) while
the DMA engine is in use, it panics. The point is the underlying driver
can go away before the channel is removed.

I suspect this is less of an issue for most devices as they wouldn't
normally be unbound while in use (for example, there's really no reason
to ever unbind IOAT, seeing as it's built into the system). Though, the
fact is, the user could unbind these devices at any time and we don't
want to panic if they do.
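
(For example, writing the function's PCI address to
/sys/bus/pci/drivers/plx_dma/unbind while a client is mid-transfer is
enough to trigger it; the exact address is system-specific.)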

>>
>> Instead, store a pointer to the owner module in the dma_device struct
>> so the module reference can be properly put when the channel is put, even
>> if the backing device was destroyed first.
>>
>> This change helps to support a safer unbind of DMA engines.
> 
> These are error cases which should be fixed, so maybe this is the right
> way to get things fixed :)

Yes, if you'd like to merge the first two patches ahead of the rest,
that would make sense to me.

Thanks,

Logan

* Re: [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  2019-11-09 17:35   ` Vinod Koul
@ 2019-11-11 17:50     ` Logan Gunthorpe
  2019-11-12  6:09       ` Vinod Koul
  0 siblings, 1 reply; 27+ messages in thread
From: Logan Gunthorpe @ 2019-11-11 17:50 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, Dan Williams



On 2019-11-09 10:35 a.m., Vinod Koul wrote:
> On 22-10-19, 15:46, Logan Gunthorpe wrote:
>> +static irqreturn_t plx_dma_isr(int irq, void *devid)
>> +{
>> +	return IRQ_HANDLED;
> 
> ??

Yes, sorry this is more of an artifact of how I chose to split the
patches up. The ISR is filled-in in patch 4.


>> +	 */
>> +	schedule_work(&plxdev->release_work);
>> +}
>> +
>> +static void plx_dma_put(struct plx_dma_dev *plxdev)
>> +{
>> +	kref_put(&plxdev->ref, plx_dma_release);
>> +}
>> +
>> +static int plx_dma_alloc_chan_resources(struct dma_chan *chan)
>> +{
>> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
>> +
>> +	kref_get(&plxdev->ref);
> 
> why do you need to do this?

This has to do with being able to properly unbind while a channel is in
use. If we don't hold a reference to the struct plx_dma_dev between
alloc_chan_resources() and free_chan_resources(), then the kernel will
panic if a callback is called after plx_dma_remove(). The way I've done
it, once a device is removed, subsequent calls to dma_prep_memcpy() will
fail (see ring_active).

struct plx_dma_dev needs to be alive between plx_dma_probe() and
plx_dma_remove(), and between calls to alloc_chan_resources() and
free_chan_resources(). So we use a reference count to ensure this.
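
Schematically (a simplified view of the driver's refcounting):

  plx_dma_probe()        -> kref_init()    ref = 1
  alloc_chan_resources() -> kref_get()     ref = 2
  plx_dma_remove()       -> plx_dma_put()  ref = 1 (struct stays alive)
  free_chan_resources()  -> plx_dma_put()  ref = 0 -> plx_dma_release()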

>> +}
>> +
>> +static void plx_dma_release_work(struct work_struct *work)
>> +{
>> +	struct plx_dma_dev *plxdev = container_of(work, struct plx_dma_dev,
>> +						  release_work);
>> +
>> +	dma_async_device_unregister(&plxdev->dma_dev);
>> +	put_device(plxdev->dma_dev.dev);
>> +	kfree(plxdev);
>> +}
>> +
>> +static void plx_dma_release(struct kref *ref)
>> +{
>> +	struct plx_dma_dev *plxdev = container_of(ref, struct plx_dma_dev, ref);
>> +
>> +	/*
>> +	 * The dmaengine reference counting and locking is a bit of a
>> +	 * mess so we have to work around it a bit here. We might put
>> +	 * the reference while the dmaengine holds the dma_list_mutex
>> +	 * which means we can't call dma_async_device_unregister() directly
>> +	 * here and it must be delayed.
>
> why is that, i have not heard any complaints about locking, can you
> elaborate on why you need to do this?

Per the above explanation, we need to call plx_dma_put() in
plx_dma_free_chan_resources(), and plx_dma_release() is when we can call
dma_async_device_unregister() (seeing as that's when we know there are no
longer any active channels).

However, dma_chan_put() (which calls device_free_chan_resources()) holds
the dma_list_mutex, and dma_async_device_unregister() tries to take the
dma_list_mutex, so if we called unregister inside free_chan_resources we
would deadlock.
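
That is, had unregister been called directly, the chain would have been:

  dma_release_channel()
    mutex_lock(&dma_list_mutex)
    -> dma_chan_put()
       -> device_free_chan_resources()   [plx_dma_free_chan_resources]
          -> plx_dma_put()
             -> plx_dma_release()
                -> dma_async_device_unregister()
                   mutex_lock(&dma_list_mutex)   <-- deadlock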

I suspect that most dmaengine drivers will WARN[1] and then panic if
they are unbound while in use. The locking, reference counting and
structure of the API didn't seem to consider this case, so extra
reference counts and workarounds are required to make it work correctly
at all.

This is the mess I'm referring to, and it would require some significant
restructuring to fix generally.

Logan

[1]
https://elixir.bootlin.com/linux/latest/source/drivers/dma/dmaengine.c#L1119



* Re: [PATCH 5/5] dmaengine: plx-dma: Implement descriptor submission
  2019-11-09 17:40   ` Vinod Koul
@ 2019-11-11 18:11     ` Logan Gunthorpe
  0 siblings, 0 replies; 27+ messages in thread
From: Logan Gunthorpe @ 2019-11-11 18:11 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, Dan Williams



On 2019-11-09 10:40 a.m., Vinod Koul wrote:
>> +static dma_cookie_t plx_dma_tx_submit(struct dma_async_tx_descriptor *desc)
>> +	__releases(plxdev->ring_lock)
>> +{
>> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(desc->chan);
>> +	struct plx_dma_desc *plxdesc = to_plx_desc(desc);
>> +	dma_cookie_t cookie;
>> +
>> +	cookie = dma_cookie_assign(desc);
>> +
>> +	/*
>> +	 * Ensure the descriptor updates are visible to the dma device
>> +	 * before setting the valid bit.
>> +	 */
>> +	wmb();
>> +
>> +	plxdesc->hw->flags_and_size |= cpu_to_le32(PLX_DESC_FLAG_VALID);
> 
> so where do you store the submitted descriptor?

The descriptors are stored in a ring in memory which the DMA engine
reads. The ring is at (struct plx_dma_dev)->hw_ring. plxdesc->hw is a
pointer to the descriptor's specific entry in the hardware's ring. The
hardware descriptor is populated in plx_dma_prep_memcpy(). Once the
valid flag is set, the hardware owns the descriptor and may start
processing it.
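
Roughly, ownership of each entry in the shared ring works like this
(simplified view):

  hw_ring[i].flags_and_size:
    PLX_DESC_FLAG_VALID set   -> entry owned by the hardware
    PLX_DESC_FLAG_VALID clear -> entry owned by the driver

  head: next entry filled by plx_dma_prep_memcpy()
  tail: next entry reaped by plx_dma_process_desc()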

>> +
>> +	spin_unlock_bh(&plxdev->ring_lock);
>> +
>> +	return cookie;
>> +}
>> +
>> +static enum dma_status plx_dma_tx_status(struct dma_chan *chan,
>> +		dma_cookie_t cookie, struct dma_tx_state *txstate)
>> +{
>> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
>> +	enum dma_status ret;
>> +
>> +	ret = dma_cookie_status(chan, cookie, txstate);
>> +	if (ret == DMA_COMPLETE)
>> +		return ret;
>> +
>> +	plx_dma_process_desc(plxdev);
> 
> why is this done here..? Query of status should not make you process
> something!

When descriptors are submitted without interrupts, something has to
clean up the completed descriptors, and this is the only sensible place to
do that. This is exactly what IOAT does[1] (but with a different name
and a bit more complexity).

>> +
>> +	return dma_cookie_status(chan, cookie, txstate);
>> +}
>> +
>> +static void plx_dma_issue_pending(struct dma_chan *chan)
>> +{
>> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
>> +
>> +	rcu_read_lock();
>> +	if (!rcu_dereference(plxdev->pdev)) {
>> +		rcu_read_unlock();
>> +		return;
>> +	}
>> +
>> +	/*
>> +	 * Ensure the valid bits are visible before starting the
>> +	 * DMA engine.
>> +	 */
>> +	wmb();
>> +
>> +	writew(PLX_REG_CTRL_START_VAL, plxdev->bar + PLX_REG_CTRL);
> 
> start what? 

The hardware processes entries in the ring and, once it reaches the end
of the submitted descriptors, it simply stops. Setting this bit starts
it processing entries again (or, if it is already running, nothing will
happen).

Logan

[1]
https://elixir.bootlin.com/linux/latest/source/drivers/dma/ioat/dma.c#L962

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-11 16:50     ` Logan Gunthorpe
@ 2019-11-12  5:56       ` Vinod Koul
  2019-11-12 16:45         ` Logan Gunthorpe
  0 siblings, 1 reply; 27+ messages in thread
From: Vinod Koul @ 2019-11-12  5:56 UTC (permalink / raw)
  To: Logan Gunthorpe; +Cc: linux-kernel, dmaengine, Dan Williams

On 11-11-19, 09:50, Logan Gunthorpe wrote:
> 
> 
> On 2019-11-09 10:18 a.m., Vinod Koul wrote:
> > Hi Logan,
> > 
> > Sorry for delay in reply!
> > 
> > On 22-10-19, 15:46, Logan Gunthorpe wrote:
> >> dma_chan_to_owner() dereferences the driver from the struct device to
> >> obtain the owner and call module_[get|put](). However, if the backing
> >> device is unbound before the dma_device is unregistered, the driver
> >> will be cleared and this will cause a NULL pointer dereference.
> > 
> > Have you been able to repro this? If so how..?
> > 
> > The expectation is that the driver shall unregister before removed.
> 
> Yes, with my new driver, if I do a PCI unbind (which unregisters) while
> the DMA engine is in use, it panics. The point is the underlying driver
> can go away before the channel is removed.

and in your driver remove you do not unregister? When unbind is invoked
the driver remove is invoked by core and you should unregister whatever
you have registered in your probe!

Said that, if someone is using the dmaengine at that point of time, it
is not a nice thing to do and can cause issues, but on idle it should
just work!

> I suspect this is less of an issue for most devices as they wouldn't
> normally be unbound while in use (for example there's really no reason
> to ever unbind IOAT seeing it's built into the system). Though, the fact
> is, the user could unbind these devices at anytime and we don't want to
> panic if they do.

There are many drivers which do modules so yes I am expecting unbind and
even a bind following that to work

-- 
~Vinod

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  2019-11-11 17:50     ` Logan Gunthorpe
@ 2019-11-12  6:09       ` Vinod Koul
  2019-11-12 17:22         ` Logan Gunthorpe
  0 siblings, 1 reply; 27+ messages in thread
From: Vinod Koul @ 2019-11-12  6:09 UTC (permalink / raw)
  To: Logan Gunthorpe; +Cc: linux-kernel, dmaengine, Dan Williams

On 11-11-19, 10:50, Logan Gunthorpe wrote:
> 
> 
> On 2019-11-09 10:35 a.m., Vinod Koul wrote:
> > On 22-10-19, 15:46, Logan Gunthorpe wrote:
> >> +static irqreturn_t plx_dma_isr(int irq, void *devid)
> >> +{
> >> +	return IRQ_HANDLED;
> > 
> > ??
> 
> Yes, sorry this is more of an artifact of how I chose to split the
> patches up. The ISR is filled-in in patch 4.

let's move all of this code, including the ISR registration, into patch 4 then :)

> >> +	 */
> >> +	schedule_work(&plxdev->release_work);
> >> +}
> >> +
> >> +static void plx_dma_put(struct plx_dma_dev *plxdev)
> >> +{
> >> +	kref_put(&plxdev->ref, plx_dma_release);
> >> +}
> >> +
> >> +static int plx_dma_alloc_chan_resources(struct dma_chan *chan)
> >> +{
> >> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
> >> +
> >> +	kref_get(&plxdev->ref);
> > 
> > why do you need to do this?
> 
> This has to do with being able to properly unbind while a channel is in
> use. If we don't hold a reference to the struct plx_dma_dev between
> alloc_chan_resources() and free_chan_resources() then it will panic if a
> callback is called after plx_dma_remove(). The way I've done it, once a

which callback?

> device is removed, subsequent calls to dma_prep_memcpy() will fail (see
> ring_active).
> 
> struct plx_dma_dev needs to be alive between plx_dma_probe() and
> plx_dma_remove(), and between calls to alloc_chan_resources() and
> free_chan_resources(). So we use a reference count to ensure this.

and that is why we hold a module reference, so we don't go away without
cleanup

> >> +static void plx_dma_release_work(struct work_struct *work)
> >> +{
> >> +	struct plx_dma_dev *plxdev = container_of(work, struct plx_dma_dev,
> >> +						  release_work);
> >> +
> >> +	dma_async_device_unregister(&plxdev->dma_dev);
> >> +	put_device(plxdev->dma_dev.dev);
> >> +	kfree(plxdev);
> >> +}
> >> +
> >> +static void plx_dma_release(struct kref *ref)
> >> +{
> >> +	struct plx_dma_dev *plxdev = container_of(ref, struct plx_dma_dev, ref);
> >> +
> >> +	/*
> >> +	 * The dmaengine reference counting and locking is a bit of a
> >> +	 * mess so we have to work around it a bit here. We might put
> >> +	 * the reference while the dmaengine holds the dma_list_mutex
> >> +	 * which means we can't call dma_async_device_unregister() directly
> >> +	 * here and it must be delayed.
> >
> > why is that, i have not heard any complaints about locking, can you
> > elaborate on why you need to do this?
> 
> Per the above explanation, we need to call plx_dma_put() in
> plx_dma_free_chan_resources(); and plx_dma_release() is when we can call
> dma_async_device_unregister() (seeing that's when we know there are no
> longer any active channels).
> 
> However, dma_chan_put() (which calls device_free_chan_resources()) holds
> the dma_list_mutex and dma_async_device_unregister() tries to take the
> dma_list_mutex so, if we call unregister inside free_chan_resources we
> would deadlock.

Yes, as we are not expecting someone to unregister in
device_free_chan_resources(); that is for freeing up resources.

You are expected to unregister in .remove!

Can you explain to me why unregister can't be done in remove? I think I
am still missing some detail for this case.

-- 
~Vinod

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-12  5:56       ` Vinod Koul
@ 2019-11-12 16:45         ` Logan Gunthorpe
  2019-11-14  4:55           ` Vinod Koul
  0 siblings, 1 reply; 27+ messages in thread
From: Logan Gunthorpe @ 2019-11-12 16:45 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, Dan Williams



On 2019-11-11 10:56 p.m., Vinod Koul wrote:
> On 11-11-19, 09:50, Logan Gunthorpe wrote:
>>
>>
>> On 2019-11-09 10:18 a.m., Vinod Koul wrote:
>>> Hi Logan,
>>>
>>> Sorry for delay in reply!
>>>
>>> On 22-10-19, 15:46, Logan Gunthorpe wrote:
>>>> dma_chan_to_owner() dereferences the driver from the struct device to
>>>> obtain the owner and call module_[get|put](). However, if the backing
>>>> device is unbound before the dma_device is unregistered, the driver
>>>> will be cleared and this will cause a NULL pointer dereference.
>>>
>>> Have you been able to repro this? If so how..?
>>>
>>> The expectation is that the driver shall unregister before removed.
>>
>> Yes, with my new driver, if I do a PCI unbind (which unregisters) while
>> the DMA engine is in use, it panics. The point is the underlying driver
>> can go away before the channel is removed.
> 
> and in your driver remove you do not unregister? When unbind is invoked
> the driver remove is invoked by core and you should unregister whatever
> you have registered in your probe!
>
> Said that, if someone is using the dmaengine at that point of time, it
> is not a nice thing to do and can cause issues, but on idle it should
> just work!

But that's the problem. We can't expect our users to be "nice" and not
unbind when the driver is in use. Killing the kernel if the user
unexpectedly unbinds is not acceptable.

>> I suspect this is less of an issue for most devices as they wouldn't
>> normally be unbound while in use (for example there's really no reason
>> to ever unbind IOAT seeing it's built into the system). Though, the fact
>> is, the user could unbind these devices at anytime and we don't want to
>> panic if they do.
> 
> There are many drivers which do modules so yes I am expecting unbind and
> even a bind following that to work

Except they will panic if they unbind while in use, so that's a
questionable definition of "work".

Logan


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton
  2019-11-12  6:09       ` Vinod Koul
@ 2019-11-12 17:22         ` Logan Gunthorpe
  0 siblings, 0 replies; 27+ messages in thread
From: Logan Gunthorpe @ 2019-11-12 17:22 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, Dan Williams



On 2019-11-11 11:09 p.m., Vinod Koul wrote:
> On 11-11-19, 10:50, Logan Gunthorpe wrote:
>>
>>
>> On 2019-11-09 10:35 a.m., Vinod Koul wrote:
>>> On 22-10-19, 15:46, Logan Gunthorpe wrote:
>>>> +static irqreturn_t plx_dma_isr(int irq, void *devid)
>>>> +{
>>>> +	return IRQ_HANDLED;
>>>
>>> ??
>>
>> Yes, sorry this is more of an artifact of how I chose to split the
>> patches up. The ISR is filled-in in patch 4.
> 
> let's move all of this code, including the ISR registration, into patch 4 then :)

Ok, I'll rework that for the next submission.

>>>> +	 */
>>>> +	schedule_work(&plxdev->release_work);
>>>> +}
>>>> +
>>>> +static void plx_dma_put(struct plx_dma_dev *plxdev)
>>>> +{
>>>> +	kref_put(&plxdev->ref, plx_dma_release);
>>>> +}
>>>> +
>>>> +static int plx_dma_alloc_chan_resources(struct dma_chan *chan)
>>>> +{
>>>> +	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);
>>>> +
>>>> +	kref_get(&plxdev->ref);
>>>
>>> why do you need to do this?
>>
>> This has to do with being able to properly unbind while a channel is in
>> use. If we don't hold a reference to the struct plx_dma_dev between
>> alloc_chan_resources() and free_chan_resources() then it will panic if a
>> callback is called after plx_dma_remove(). The way I've done it, once a
> 
> which callback?

Any callback that tries to obtain the freed plx_dma_dev structure (so
plx_dma_free_chan_resources(), plx_dma_prep_memcpy(),
plx_dma_issue_pending(), plx_dma_tx_status()).
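
As a sketch of how those callbacks can fail safely once the device is
gone (illustrative; the actual check lands in patch 5):

static struct dma_async_tx_descriptor *plx_dma_prep_memcpy(
		struct dma_chan *c, dma_addr_t dst, dma_addr_t src,
		size_t len, unsigned long flags)
{
	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(c);

	spin_lock_bh(&plxdev->ring_lock);
	if (!plxdev->ring_active) {
		/* device was unbound: fail the request gracefully */
		spin_unlock_bh(&plxdev->ring_lock);
		return NULL;
	}

	/* ... fill the next hw_ring entry (valid bit still clear) and
	 * return its txd; ring_lock stays held until tx_submit() ... */
}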

>> device is removed, subsequent calls to dma_prep_memcpy() will fail (see
>> ring_active).
>>
>> struct plx_dma_dev needs to be alive between plx_dma_probe() and
>> plx_dma_remove(), and between calls to alloc_chan_resources() and
>> free_chan_resources(). So we use a reference count to ensure this.
> 
> and that is why we hold a module reference, so we don't go away without
> cleanup

No, that's wrong. The module reference will prevent the module and the
functions within it from going away. It doesn't prevent the driver from
being unbound, which normally causes the device's structure to be
freed. Most drivers will free the structure containing the DMA engine on
the remove() call, so even if the module is still around, its functions
will still be called with a freed pointer. We're taking a reference on
the pointer to ensure it's not freed while dmaengine users still have a
reference to it.
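
For reference, a sketch of how the put side pairs up (simplified; the
real free_chan_resources also tears down the descriptor ring):

static void plx_dma_free_chan_resources(struct dma_chan *chan)
{
	struct plx_dma_dev *plxdev = chan_to_plx_dma_dev(chan);

	/*
	 * Drop the reference taken in plx_dma_alloc_chan_resources().
	 * If the device was unbound in the meantime, this is the last
	 * put and plx_dma_release() schedules the deferred unregister.
	 */
	plx_dma_put(plxdev);
}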

>>>> +static void plx_dma_release_work(struct work_struct *work)
>>>> +{
>>>> +	struct plx_dma_dev *plxdev = container_of(work, struct plx_dma_dev,
>>>> +						  release_work);
>>>> +
>>>> +	dma_async_device_unregister(&plxdev->dma_dev);
>>>> +	put_device(plxdev->dma_dev.dev);
>>>> +	kfree(plxdev);
>>>> +}
>>>> +
>>>> +static void plx_dma_release(struct kref *ref)
>>>> +{
>>>> +	struct plx_dma_dev *plxdev = container_of(ref, struct plx_dma_dev, ref);
>>>> +
>>>> +	/*
>>>> +	 * The dmaengine reference counting and locking is a bit of a
>>>> +	 * mess so we have to work around it a bit here. We might put
>>>> +	 * the reference while the dmaengine holds the dma_list_mutex
>>>> +	 * which means we can't call dma_async_device_unregister() directly
>>>> +	 * here and it must be delayed.
>>>
>>> why is that, i have not heard any complaints about locking, can you
>>> elaborate on why you need to do this?
>>
>> Per the above explanation, we need to call plx_dma_put() in
>> plx_dma_free_chan_resources(); and plx_dma_release() is when we can call
>> dma_async_device_unregister() (seeing that's when we know there are no
>> longer any active channels).
>>
>> However, dma_chan_put() (which calls device_free_chan_resources()) holds
>> the dma_list_mutex and dma_async_device_unregister() tries to take the
>> dma_list_mutex so, if we call unregister inside free_chan_resources we
>> would deadlock.
> 
> Yes, as we are not expecting someone to unregister in
> device_free_chan_resources(); that is for freeing up resources.
> 
> You are expected to unregister in .remove!
> 
> Can you explain to me why unregister can't be done in remove? I think I
> am still missing some detail for this case.

Because, if the user unbinds while there's a client of the dma channel,
then it panics the kernel. First there's the warning[1] I pointed out
previously, then the DMA channel users will cause a use after free
exception when they continue unaware that the memory they are using has
been freed.

For an example from a random driver:

1) owl_dma_probe() allocates its struct owl_dma with devm_kzalloc()
2) Another driver calls dma_find_channel() and obtains a reference to
one of the channels
3) Asynchronously, the user unbinds the owl_dma driver using the sysfs
interface
4) owl_dma_remove() is called which calls dma_async_device_unregister()
which produces a WARN_ON because a channel is in use
5) The devm stack for this driver instance unwinds and the struct
owl_dma is freed
6) The client driver then calls dmaengine_prep_dma_memcpy() which calls
owl_dma_prep_memcpy(). The first thing that driver does is convert the
now invalid channel reference to an invalid struct owl_dma reference and
shortly thereafter dereferences the now-freed memory. If KASAN is
enabled, the user will get a big use-after-free panic. If not, the
driver will read and write memory that may be in use by some other
random process, eventually causing other random fatal errors in the
system. The best-case scenario is that whoever next allocates the
already-freed memory zeroes it, in which case the client driver panics
on a NULL pointer dereference.
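
Condensed into code, the unsafe pattern from steps 1-6 looks something
like this (illustrative, not the actual owl_dma source):

/* probe: 'od' lives exactly as long as the *binding* does */
static int owl_dma_probe(struct platform_device *pdev)
{
	struct owl_dma *od = devm_kzalloc(&pdev->dev, sizeof(*od),
					  GFP_KERNEL);
	...
	return dma_async_device_register(&od->dma);  /* channels handed out */
}

/* client, running after an unbind has freed 'od' via devm: */
desc = dmaengine_prep_dma_memcpy(chan, dst, src, len, 0);
	/* -> owl_dma_prep_memcpy() -> container_of(chan->device, ...)
	 *    -> dereferences the freed 'od': use-after-free */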

I think this is unacceptable for a driver to have happen and that's why
I wrote the plx driver such that it is not possible. This is especially
important for the PLX driver because it is a PCI device which can be
hotplugged so users may actually be randomly trying to unbind it.

Logan


[1]
https://elixir.bootlin.com/linux/latest/source/drivers/dma/dmaengine.c#L1119

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-12 16:45         ` Logan Gunthorpe
@ 2019-11-14  4:55           ` Vinod Koul
  2019-11-14 17:03             ` Logan Gunthorpe
  0 siblings, 1 reply; 27+ messages in thread
From: Vinod Koul @ 2019-11-14  4:55 UTC (permalink / raw)
  To: Logan Gunthorpe; +Cc: linux-kernel, dmaengine, Dan Williams

On 12-11-19, 09:45, Logan Gunthorpe wrote:
> 
> 
> On 2019-11-11 10:56 p.m., Vinod Koul wrote:
> > On 11-11-19, 09:50, Logan Gunthorpe wrote:
> >>
> >>
> >> On 2019-11-09 10:18 a.m., Vinod Koul wrote:
> >>> Hi Logan,
> >>>
> >>> Sorry for delay in reply!
> >>>
> >>> On 22-10-19, 15:46, Logan Gunthorpe wrote:
> >>>> dma_chan_to_owner() dereferences the driver from the struct device to
> >>>> obtain the owner and call module_[get|put](). However, if the backing
> >>>> device is unbound before the dma_device is unregistered, the driver
> >>>> will be cleared and this will cause a NULL pointer dereference.
> >>>
> >>> Have you been able to repro this? If so how..?
> >>>
> >>> The expectation is that the driver shall unregister before removed.
> >>
> >> Yes, with my new driver, if I do a PCI unbind (which unregisters) while
> >> the DMA engine is in use, it panics. The point is the underlying driver
> >> can go away before the channel is removed.
> > 
> > and in your driver remove you do not unregister? When unbind is invoked
> > the driver remove is invoked by core and you should unregister whatever
> > you have registered in your probe!
> >
> > Said that, if someone is using the dmaengine at that point of time, it
> > is not a nice thing to do and can cause issues, but on idle it should
> > just work!
> 
> But that's the problem. We can't expect our users to be "nice" and not
> unbind when the driver is in use. Killing the kernel if the user
> unexpectedly unbinds is not acceptable.

And that is why we review the code and ensure this does not happen and
behaviour is as expected

> >> I suspect this is less of an issue for most devices as they wouldn't
> >> normally be unbound while in use (for example there's really no reason
> >> to ever unbind IOAT seeing it's built into the system). Though, the fact
> >> is, the user could unbind these devices at anytime and we don't want to
> >> panic if they do.
> > 
> > There are many drivers which do modules so yes I am expecting unbind and
> > even a bind following that to work
> 
> Except they will panic if they unbind while in use, so that's a
> questionable definition of "work".

dmaengine core has module reference so while they are being used they
won't be removed (unless I complete misread the driver core behaviour)

-- 
~Vinod

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-14  4:55           ` Vinod Koul
@ 2019-11-14 17:03             ` Logan Gunthorpe
  2019-11-22  5:20               ` Vinod Koul
  0 siblings, 1 reply; 27+ messages in thread
From: Logan Gunthorpe @ 2019-11-14 17:03 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, Dan Williams



On 2019-11-13 9:55 p.m., Vinod Koul wrote:
>> But that's the problem. We can't expect our users to be "nice" and not
>> unbind when the driver is in use. Killing the kernel if the user
>> unexpectedly unbinds is not acceptable.
> 
> And that is why we review the code and ensure this does not happen and
> behaviour is as expected

Yes, but the current code can kill the kernel when the driver is unbound.

>>>> I suspect this is less of an issue for most devices as they wouldn't
>>>> normally be unbound while in use (for example there's really no reason
>>>> to ever unbind IOAT seeing it's built into the system). Though, the fact
>>>> is, the user could unbind these devices at anytime and we don't want to
>>>> panic if they do.
>>>
>>> There are many drivers which do modules so yes I am expecting unbind and
>>> even a bind following that to work
>>
>> Except they will panic if they unbind while in use, so that's a
>> questionable definition of "work".
> 
> dmaengine core has module reference so while they are being used they
> won't be removed (unless I complete misread the driver core behaviour)

Yes, as I mentioned in my other email, holding a module reference does
not prevent the driver from being unbound. Any driver can be unbound by
the user at any time without the module being removed.

Essentially, at any time, a user can do this:

echo 0000:83:00.4 > /sys/bus/pci/drivers/plx_dma/unbind

Which will call plx_dma_remove() regardless of whether anyone has a
reference to the module, and regardless of whether the dma channel is
currently in use. I feel it is important that drivers support this
without crashing, and my plx_dma driver does the correct thing here.

Logan


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-14 17:03             ` Logan Gunthorpe
@ 2019-11-22  5:20               ` Vinod Koul
  2019-11-22 16:53                 ` Dave Jiang
  0 siblings, 1 reply; 27+ messages in thread
From: Vinod Koul @ 2019-11-22  5:20 UTC (permalink / raw)
  To: Logan Gunthorpe; +Cc: linux-kernel, dmaengine, Dan Williams

On 14-11-19, 10:03, Logan Gunthorpe wrote:
> 
> 
> On 2019-11-13 9:55 p.m., Vinod Koul wrote:
> >> But that's the problem. We can't expect our users to be "nice" and not
> >> unbind when the driver is in use. Killing the kernel if the user
> >> unexpectedly unbinds is not acceptable.
> > 
> > And that is why we review the code and ensure this does not happen and
> > behaviour is as expected
> 
> Yes, but the current code can kill the kernel when the driver is unbound.
> 
> >>>> I suspect this is less of an issue for most devices as they wouldn't
> >>>> normally be unbound while in use (for example there's really no reason
> >>>> to ever unbind IOAT seeing it's built into the system). Though, the fact
> >>>> is, the user could unbind these devices at anytime and we don't want to
> >>>> panic if they do.
> >>>
> >>> There are many drivers which do modules so yes I am expecting unbind and
> >>> even a bind following that to work
> >>
> >> Except they will panic if they unbind while in use, so that's a
> >> questionable definition of "work".
> > 
> > dmaengine core has module reference so while they are being used they
> > won't be removed (unless I complete misread the driver core behaviour)
> 
> Yes, as I mentioned in my other email, holding a module reference does
> not prevent the driver from being unbound. Any driver can be unbound by
> the user at any time without the module being removed.

That sounds okay then.
> 
> Essentially, at any time, a user can do this:
> 
> echo 0000:83:00.4 > /sys/bus/pci/drivers/plx_dma/unbind
> 
> Which will call plx_dma_remove() regardless of whether anyone has a
> reference to the module, and regardless of whether the dma channel is
> currently in use. I feel it is important that drivers support this
> without crashing, and my plx_dma driver does the correct thing here.
> 
> Logan

-- 
~Vinod

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-22  5:20               ` Vinod Koul
@ 2019-11-22 16:53                 ` Dave Jiang
  2019-11-22 20:50                   ` Dan Williams
  0 siblings, 1 reply; 27+ messages in thread
From: Dave Jiang @ 2019-11-22 16:53 UTC (permalink / raw)
  To: Vinod Koul, Logan Gunthorpe; +Cc: linux-kernel, dmaengine, Dan Williams



On 11/21/19 10:20 PM, Vinod Koul wrote:
> On 14-11-19, 10:03, Logan Gunthorpe wrote:
>>
>>
>> On 2019-11-13 9:55 p.m., Vinod Koul wrote:
>>>> But that's the problem. We can't expect our users to be "nice" and not
>>>> unbind when the driver is in use. Killing the kernel if the user
>>>> unexpectedly unbinds is not acceptable.
>>>
>>> And that is why we review the code and ensure this does not happen and
>>> behaviour is as expected
>>
>> Yes, but the current code can kill the kernel when the driver is unbound.
>>
>>>>>> I suspect this is less of an issue for most devices as they wouldn't
>>>>>> normally be unbound while in use (for example there's really no reason
>>>>>> to ever unbind IOAT seeing it's built into the system). Though, the fact
>>>>>> is, the user could unbind these devices at anytime and we don't want to
>>>>>> panic if they do.
>>>>>
>>>>> There are many drivers which do modules so yes I am expecting unbind and
>>>>> even a bind following that to work
>>>>
>>>> Except they will panic if they unbind while in use, so that's a
>>>> questionable definition of "work".
>>>
>>> dmaengine core has module reference so while they are being used they
>>> won't be removed (unless I complete misread the driver core behaviour)
>>
>> Yes, as I mentioned in my other email, holding a module reference does
>> not prevent the driver from being unbound. Any driver can be unbound by
>> the user at any time without the module being removed.
> 
> That sounds okay then.

I'm actually glad Logan is putting some work in addressing this. I also 
ran into the same issue as well dealing with unbinds on my new driver.

>>
>> Essentially, at any time, a user can do this:
>>
>> echo 0000:83:00.4 > /sys/bus/pci/drivers/plx_dma/unbind
>>
>> Which will call plx_dma_remove() regardless of whether anyone has a
>> reference to the module, and regardless of whether the dma channel is
>> currently in use. I feel it is important that drivers support this
>> without crashing, and my plx_dma driver does the correct thing here.
>>
>> Logan
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-22 16:53                 ` Dave Jiang
@ 2019-11-22 20:50                   ` Dan Williams
  2019-11-22 20:56                     ` Logan Gunthorpe
  0 siblings, 1 reply; 27+ messages in thread
From: Dan Williams @ 2019-11-22 20:50 UTC (permalink / raw)
  To: Dave Jiang
  Cc: Vinod Koul, Logan Gunthorpe, Linux Kernel Mailing List, dmaengine

On Fri, Nov 22, 2019 at 8:53 AM Dave Jiang <dave.jiang@intel.com> wrote:
>
>
>
> On 11/21/19 10:20 PM, Vinod Koul wrote:
> > On 14-11-19, 10:03, Logan Gunthorpe wrote:
> >>
> >>
> >> On 2019-11-13 9:55 p.m., Vinod Koul wrote:
> >>>> But that's the problem. We can't expect our users to be "nice" and not
> >>>> unbind when the driver is in use. Killing the kernel if the user
> >>>> unexpectedly unbinds is not acceptable.
> >>>
> >>> And that is why we review the code and ensure this does not happen and
> >>> behaviour is as expected
> >>
> >> Yes, but the current code can kill the kernel when the driver is unbound.
> >>
> >>>>>> I suspect this is less of an issue for most devices as they wouldn't
> >>>>>> normally be unbound while in use (for example there's really no reason
> >>>>>> to ever unbind IOAT seeing it's built into the system). Though, the fact
> >>>>>> is, the user could unbind these devices at anytime and we don't want to
> >>>>>> panic if they do.
> >>>>>
> >>>>> There are many drivers which do modules so yes I am expecting unbind and
> >>>>> even a bind following that to work
> >>>>
> >>>> Except they will panic if they unbind while in use, so that's a
> >>>> questionable definition of "work".
> >>>
> >>> dmaengine core has module reference so while they are being used they
> >>> won't be removed (unless I complete misread the driver core behaviour)
> >>
> >> Yes, as I mentioned in my other email, holding a module reference does
> >> not prevent the driver from being unbound. Any driver can be unbound by
> >> the user at any time without the module being removed.
> >
> > That sounds okay then.
>
> I'm actually glad Logan is putting some work in addressing this. I also
> ran into the same issue as well dealing with unbinds on my new driver.

This was an original mistake of the dmaengine implementation that
Vinod inherited. Module pinning is distinct from preventing device
unbind which ultimately can't be prevented. Longer term I think we
need to audit dmaengine consumers to make sure they are prepared for
the driver to be removed similar to how other request based drivers
can gracefully return an error status when the device goes away rather
than crashing.
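
On the consumer side, "prepared" could look roughly like this
(hypothetical client sketch):

struct dma_async_tx_descriptor *tx;
dma_cookie_t cookie;

tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_PREP_INTERRUPT);
if (!tx)
	return -ENODEV;	/* provider unbound: fall back or fail, don't crash */

cookie = dmaengine_submit(tx);
if (dma_submit_error(cookie))
	return -ENODEV;

dma_async_issue_pending(chan);
/* ... and treat DMA_ERROR in the completion path as "device went
 * away", not as a reason to crash ... */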

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-22 20:50                   ` Dan Williams
@ 2019-11-22 20:56                     ` Logan Gunthorpe
  2019-11-22 21:01                       ` Dan Williams
  0 siblings, 1 reply; 27+ messages in thread
From: Logan Gunthorpe @ 2019-11-22 20:56 UTC (permalink / raw)
  To: Dan Williams, Dave Jiang; +Cc: Vinod Koul, Linux Kernel Mailing List, dmaengine



On 2019-11-22 1:50 p.m., Dan Williams wrote:
> On Fri, Nov 22, 2019 at 8:53 AM Dave Jiang <dave.jiang@intel.com> wrote:
>>
>>
>>
>> On 11/21/19 10:20 PM, Vinod Koul wrote:
>>> On 14-11-19, 10:03, Logan Gunthorpe wrote:
>>>>
>>>>
>>>> On 2019-11-13 9:55 p.m., Vinod Koul wrote:
>>>>>> But that's the problem. We can't expect our users to be "nice" and not
>>>>>> unbind when the driver is in use. Killing the kernel if the user
>>>>>> unexpectedly unbinds is not acceptable.
>>>>>
>>>>> And that is why we review the code and ensure this does not happen and
>>>>> behaviour is as expected
>>>>
>>>> Yes, but the current code can kill the kernel when the driver is unbound.
>>>>
>>>>>>>> I suspect this is less of an issue for most devices as they wouldn't
>>>>>>>> normally be unbound while in use (for example there's really no reason
>>>>>>>> to ever unbind IOAT seeing it's built into the system). Though, the fact
>>>>>>>> is, the user could unbind these devices at anytime and we don't want to
>>>>>>>> panic if they do.
>>>>>>>
>>>>>>> There are many drivers which do modules so yes I am expecting unbind and
>>>>>>> even a bind following that to work
>>>>>>
>>>>>> Except they will panic if they unbind while in use, so that's a
>>>>>> questionable definition of "work".
>>>>>
>>>>> dmaengine core has module reference so while they are being used they
>>>>> won't be removed (unless I complete misread the driver core behaviour)
>>>>
>>>> Yes, as I mentioned in my other email, holding a module reference does
>>>> not prevent the driver from being unbound. Any driver can be unbound by
>>>> the user at any time without the module being removed.
>>>
>>> That sounds okay then.
>>
>> I'm actually glad Logan is putting some work in addressing this. I also
>> ran into the same issue as well dealing with unbinds on my new driver.
> 
> This was an original mistake of the dmaengine implementation that
> Vinod inherited. Module pinning is distinct from preventing device
> unbind which ultimately can't be prevented. Longer term I think we
> need to audit dmaengine consumers to make sure they are prepared for
> the driver to be removed similar to how other request based drivers
> can gracefully return an error status when the device goes away rather
> than crashing.

Yes, but that will be a big project because there are a lot of drivers.
But I think the dmaengine common code needs to support removal properly,
which essentially means changing how all the drivers allocate and free
their structures, among other things.
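
Concretely, that likely means moving away from binding-scoped
allocations, roughly in this direction (a sketch, not a drop-in patch):

/* common today: state dies with the binding, even if channels live on */
od = devm_kzalloc(&pdev->dev, sizeof(*od), GFP_KERNEL);

/* removal-safe direction: state dies with its last user */
od = kzalloc(sizeof(*od), GFP_KERNEL);
kref_init(&od->ref);
/*
 * alloc_chan_resources(): kref_get(&od->ref);
 * free_chan_resources():  kref_put(&od->ref, od_release);
 * remove():               mark dead, fail new requests, drop the
 *                         probe-time reference
 */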

The one saving grace is that most of the drivers are for SOCs which
can't be physically removed and there's really no use-case for the user
to call unbind.

Logan


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-22 20:56                     ` Logan Gunthorpe
@ 2019-11-22 21:01                       ` Dan Williams
  2019-11-22 21:42                         ` Dave Jiang
  0 siblings, 1 reply; 27+ messages in thread
From: Dan Williams @ 2019-11-22 21:01 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: Dave Jiang, Vinod Koul, Linux Kernel Mailing List, dmaengine

On Fri, Nov 22, 2019 at 12:56 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
>
> On 2019-11-22 1:50 p.m., Dan Williams wrote:
> > On Fri, Nov 22, 2019 at 8:53 AM Dave Jiang <dave.jiang@intel.com> wrote:
> >>
> >>
> >>
> >> On 11/21/19 10:20 PM, Vinod Koul wrote:
> >>> On 14-11-19, 10:03, Logan Gunthorpe wrote:
> >>>>
> >>>>
> >>>> On 2019-11-13 9:55 p.m., Vinod Koul wrote:
> >>>>>> But that's the problem. We can't expect our users to be "nice" and not
> >>>>>> unbind when the driver is in use. Killing the kernel if the user
> >>>>>> unexpectedly unbinds is not acceptable.
> >>>>>
> >>>>> And that is why we review the code and ensure this does not happen and
> >>>>> behaviour is as expected
> >>>>
> >>>> Yes, but the current code can kill the kernel when the driver is unbound.
> >>>>
> >>>>>>>> I suspect this is less of an issue for most devices as they wouldn't
> >>>>>>>> normally be unbound while in use (for example there's really no reason
> >>>>>>>> to ever unbind IOAT seeing it's built into the system). Though, the fact
> >>>>>>>> is, the user could unbind these devices at anytime and we don't want to
> >>>>>>>> panic if they do.
> >>>>>>>
> >>>>>>> There are many drivers which do modules so yes I am expecting unbind and
> >>>>>>> even a bind following that to work
> >>>>>>
> >>>>>> Except they will panic if they unbind while in use, so that's a
> >>>>>> questionable definition of "work".
> >>>>>
> >>>>> dmaengine core has module reference so while they are being used they
> >>>>> won't be removed (unless I complete misread the driver core behaviour)
> >>>>
> >>>> Yes, as I mentioned in my other email, holding a module reference does
> >>>> not prevent the driver from being unbound. Any driver can be unbound by
> >>>> the user at any time without the module being removed.
> >>>
> >>> That sounds okay then.
> >>
> >> I'm actually glad Logan is putting some work in addressing this. I also
> >> ran into the same issue as well dealing with unbinds on my new driver.
> >
> > This was an original mistake of the dmaengine implementation that
> > Vinod inherited. Module pinning is distinct from preventing device
> > unbind which ultimately can't be prevented. Longer term I think we
> > need to audit dmaengine consumers to make sure they are prepared for
> > the driver to be removed similar to how other request based drivers
> > can gracefully return an error status when the device goes away rather
> > than crashing.
>
> Yes, but that will be a big project because there are a lot of drivers.

Oh yes, in fact I think it's something that can only reasonably be
considered for new consumers.

> But I think the dmaengine common code needs to support removal properly,
> which essentially means changing how all the drivers allocate and free
> their structures, among other things.
>
> The one saving grace is that most of the drivers are for SOCs which
> can't be physically removed and there's really no use-case for the user
> to call unbind.

Yes, the SOC case is not so much my concern as the generic offload use
cases, especially if those offloads are in a similar hotplug domain as
a cpu.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-22 21:01                       ` Dan Williams
@ 2019-11-22 21:42                         ` Dave Jiang
  2019-12-10  9:53                           ` Vinod Koul
  0 siblings, 1 reply; 27+ messages in thread
From: Dave Jiang @ 2019-11-22 21:42 UTC (permalink / raw)
  To: Dan Williams, Logan Gunthorpe
  Cc: Vinod Koul, Linux Kernel Mailing List, dmaengine



On 11/22/19 2:01 PM, Dan Williams wrote:
> On Fri, Nov 22, 2019 at 12:56 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>>
>>
>>
>> On 2019-11-22 1:50 p.m., Dan Williams wrote:
>>> On Fri, Nov 22, 2019 at 8:53 AM Dave Jiang <dave.jiang@intel.com> wrote:
>>>>
>>>>
>>>>
>>>> On 11/21/19 10:20 PM, Vinod Koul wrote:
>>>>> On 14-11-19, 10:03, Logan Gunthorpe wrote:
>>>>>>
>>>>>>
>>>>>> On 2019-11-13 9:55 p.m., Vinod Koul wrote:
>>>>>>>> But that's the problem. We can't expect our users to be "nice" and not
>>>>>>>> unbind when the driver is in use. Killing the kernel if the user
>>>>>>>> unexpectedly unbinds is not acceptable.
>>>>>>>
>>>>>>> And that is why we review the code and ensure this does not happen and
>>>>>>> behaviour is as expected
>>>>>>
>>>>>> Yes, but the current code can kill the kernel when the driver is unbound.
>>>>>>
>>>>>>>>>> I suspect this is less of an issue for most devices as they wouldn't
>>>>>>>>>> normally be unbound while in use (for example there's really no reason
>>>>>>>>>> to ever unbind IOAT seeing it's built into the system). Though, the fact
>>>>>>>>>> is, the user could unbind these devices at anytime and we don't want to
>>>>>>>>>> panic if they do.
>>>>>>>>>
>>>>>>>>> There are many drivers which do modules so yes I am expecting unbind and
>>>>>>>>> even a bind following that to work
>>>>>>>>
>>>>>>>> Except they will panic if they unbind while in use, so that's a
>>>>>>>> questionable definition of "work".
>>>>>>>
>>>>>>> dmaengine core has module reference so while they are being used they
>>>>>>> won't be removed (unless I complete misread the driver core behaviour)
>>>>>>
>>>>>> Yes, as I mentioned in my other email, holding a module reference does
>>>>>> not prevent the driver from being unbound. Any driver can be unbound by
>>>>>> the user at any time without the module being removed.
>>>>>
>>>>> That sounds okay then.
>>>>
>>>> I'm actually glad Logan is putting some work in addressing this. I also
>>>> ran into the same issue as well dealing with unbinds on my new driver.
>>>
>>> This was an original mistake of the dmaengine implementation that
>>> Vinod inherited. Module pinning is distinct from preventing device
>>> unbind which ultimately can't be prevented. Longer term I think we
>>> need to audit dmaengine consumers to make sure they are prepared for
>>> the driver to be removed similar to how other request based drivers
>>> can gracefully return an error status when the device goes away rather
>>> than crashing.
>>
>> Yes, but that will be a big project because there are a lot of drivers.
> 
> Oh yes, in fact I think it's something that can only reasonably be
> considered for new consumers.
> 
>> But I think the dmaengine common code needs to support removal properly,
>> which essentially means changing how all the drivers allocate and free
>> their structures, among other things.
>>
>> The one saving grace is that most of the drivers are for SOCs which
>> can't be physically removed and there's really no use-case for the user
>> to call unbind.
> 
> Yes, the SOC case is not so much my concern as the generic offload use
> cases, especially if those offloads are in a similar hotplug domain as
> a cpu.
> 

It becomes a bigger issue when "channels" can be reconfigured and can 
come and go in a hot plug fashion.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-11-22 21:42                         ` Dave Jiang
@ 2019-12-10  9:53                           ` Vinod Koul
  2019-12-10 17:39                             ` Logan Gunthorpe
  0 siblings, 1 reply; 27+ messages in thread
From: Vinod Koul @ 2019-12-10  9:53 UTC (permalink / raw)
  To: Dave Jiang, Dan Williams, Logan Gunthorpe
  Cc: Linux Kernel Mailing List, dmaengine

On 22-11-19, 14:42, Dave Jiang wrote:
> 
> 
> On 11/22/19 2:01 PM, Dan Williams wrote:
> > On Fri, Nov 22, 2019 at 12:56 PM Logan Gunthorpe <logang@deltatee.com> wrote:
> > > 
> > > 
> > > 
> > > On 2019-11-22 1:50 p.m., Dan Williams wrote:
> > > > On Fri, Nov 22, 2019 at 8:53 AM Dave Jiang <dave.jiang@intel.com> wrote:
> > > > > 
> > > > > 
> > > > > 
> > > > > On 11/21/19 10:20 PM, Vinod Koul wrote:
> > > > > > On 14-11-19, 10:03, Logan Gunthorpe wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On 2019-11-13 9:55 p.m., Vinod Koul wrote:
> > > > > > > > > But that's the problem. We can't expect our users to be "nice" and not
> > > > > > > > > unbind when the driver is in use. Killing the kernel if the user
> > > > > > > > > unexpectedly unbinds is not acceptable.
> > > > > > > > 
> > > > > > > > And that is why we review the code and ensure this does not happen and
> > > > > > > > behaviour is as expected
> > > > > > > 
> > > > > > > Yes, but the current code can kill the kernel when the driver is unbound.
> > > > > > > 
> > > > > > > > > > > I suspect this is less of an issue for most devices as they wouldn't
> > > > > > > > > > > normally be unbound while in use (for example there's really no reason
> > > > > > > > > > > to ever unbind IOAT seeing it's built into the system). Though, the fact
> > > > > > > > > > > is, the user could unbind these devices at anytime and we don't want to
> > > > > > > > > > > panic if they do.
> > > > > > > > > > 
> > > > > > > > > > There are many drivers which do modules so yes I am expecting unbind and
> > > > > > > > > > even a bind following that to work
> > > > > > > > > 
> > > > > > > > > Except they will panic if they unbind while in use, so that's a
> > > > > > > > > questionable definition of "work".
> > > > > > > > 
> > > > > > > > dmaengine core has module reference so while they are being used they
> > > > > > > > won't be removed (unless I complete misread the driver core behaviour)
> > > > > > > 
> > > > > > > Yes, as I mentioned in my other email, holding a module reference does
> > > > > > > not prevent the driver from being unbound. Any driver can be unbound by
> > > > > > > the user at any time without the module being removed.
> > > > > > 
> > > > > > That sounds okay then.
> > > > > 
> > > > > I'm actually glad Logan is putting some work in addressing this. I also
> > > > > ran into the same issue as well dealing with unbinds on my new driver.
> > > > 
> > > > This was an original mistake of the dmaengine implementation that
> > > > Vinod inherited. Module pinning is distinct from preventing device
> > > > unbind which ultimately can't be prevented. Longer term I think we
> > > > need to audit dmaengine consumers to make sure they are prepared for
> > > > the driver to be removed similar to how other request based drivers
> > > > can gracefully return an error status when the device goes away rather
> > > > than crashing.

Right, finally wrapping my head around static dmaengine devices! We can
indeed have devices going away, but I am wondering why this should be
handled in subsystems. Should the driver core not be doing this instead?
Would it not be a safe behaviour for unplug to unload the driver and
thus give a chance to unbind from subsystems too...


> > > Yes, but that will be a big project because there are a lot of drivers.
> > 
> > Oh yes, in fact I think it's something that can only reasonably be
> > considered for new consumers.
> > 
> > > But I think the dmaengine common code needs to support removal properly,
> > > which essentially means changing how all the drivers allocate and free
> > > their structures, among other things.
> > > 
> > > The one saving grace is that most of the drivers are for SOCs which
> > > can't be physically removed and there's really no use-case for the user
> > > to call unbind.

yeah only a small set of drivers would need this for now!

> > Yes, the SOC case is not so much my concern as the generic offload use
> > cases, especially if those offloads are in a similar hotplug domain as
> > a cpu.
> 
> It becomes a bigger issue when "channels" can be reconfigured and can come
> and go in a hot plug fashion.

-- 
~Vinod

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-12-10  9:53                           ` Vinod Koul
@ 2019-12-10 17:39                             ` Logan Gunthorpe
  0 siblings, 0 replies; 27+ messages in thread
From: Logan Gunthorpe @ 2019-12-10 17:39 UTC (permalink / raw)
  To: Vinod Koul, Dave Jiang, Dan Williams; +Cc: Linux Kernel Mailing List, dmaengine



On 2019-12-10 2:53 a.m., Vinod Koul wrote:
> On 22-11-19, 14:42, Dave Jiang wrote:
>>
>>
>> On 11/22/19 2:01 PM, Dan Williams wrote:
>>> On Fri, Nov 22, 2019 at 12:56 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>>>>
>>>>
>>>>
>>>> On 2019-11-22 1:50 p.m., Dan Williams wrote:
>>>>> On Fri, Nov 22, 2019 at 8:53 AM Dave Jiang <dave.jiang@intel.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 11/21/19 10:20 PM, Vinod Koul wrote:
>>>>>>> On 14-11-19, 10:03, Logan Gunthorpe wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2019-11-13 9:55 p.m., Vinod Koul wrote:
>>>>>>>>>> But that's the problem. We can't expect our users to be "nice" and not
>>>>>>>>>> unbind when the driver is in use. Killing the kernel if the user
>>>>>>>>>> unexpectedly unbinds is not acceptable.
>>>>>>>>>
>>>>>>>>> And that is why we review the code and ensure this does not happen and
>>>>>>>>> behaviour is as expected
>>>>>>>>
>>>>>>>> Yes, but the current code can kill the kernel when the driver is unbound.
>>>>>>>>
>>>>>>>>>>>> I suspect this is less of an issue for most devices as they wouldn't
>>>>>>>>>>>> normally be unbound while in use (for example there's really no reason
>>>>>>>>>>>> to ever unbind IOAT seeing it's built into the system). Though, the fact
>>>>>>>>>>>> is, the user could unbind these devices at anytime and we don't want to
>>>>>>>>>>>> panic if they do.
>>>>>>>>>>>
>>>>>>>>>>> There are many drivers which do modules so yes I am expecting unbind and
>>>>>>>>>>> even a bind following that to work
>>>>>>>>>>
>>>>>>>>>> Except they will panic if they unbind while in use, so that's a
>>>>>>>>>> questionable definition of "work".
>>>>>>>>>
>>>>>>>>> dmaengine core has module reference so while they are being used they
>>>>>>>>> won't be removed (unless I complete misread the driver core behaviour)
>>>>>>>>
>>>>>>>> Yes, as I mentioned in my other email, holding a module reference does
>>>>>>>> not prevent the driver from being unbound. Any driver can be unbound by
>>>>>>>> the user at any time without the module being removed.
>>>>>>>
>>>>>>> That sounds okay then.
>>>>>>
>>>>>> I'm actually glad Logan is putting some work in addressing this. I also
>>>>>> ran into the same issue as well dealing with unbinds on my new driver.
>>>>>
>>>>> This was an original mistake of the dmaengine implementation that
>>>>> Vinod inherited. Module pinning is distinct from preventing device
>>>>> unbind which ultimately can't be prevented. Longer term I think we
>>>>> need to audit dmaengine consumers to make sure they are prepared for
>>>>> the driver to be removed similar to how other request based drivers
>>>>> can gracefully return an error status when the device goes away rather
>>>>> than crashing.
> 
> Right, finally wrapping my head around static dmaengine devices! We can
> indeed have devices going away, but I am wondering why this should be
> handled in subsystems. Should the driver core not be doing this instead?
> Would it not be a safe behaviour for unplug to unload the driver and
> thus give a chance to unbind from subsystems too...

Yes, I think it should be in the core. I was just worried about breaking
older drivers. But I'll see if I can move a bit more of the logic for a
v3 series.

Thanks,

Logan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH 1/5] dmaengine: Store module owner in dma_device struct
  2019-12-16 19:01 [PATCH 0/5] Support hot-unbind in IOAT Logan Gunthorpe
@ 2019-12-16 19:01 ` Logan Gunthorpe
  0 siblings, 0 replies; 27+ messages in thread
From: Logan Gunthorpe @ 2019-12-16 19:01 UTC (permalink / raw)
  To: linux-kernel, dmaengine, Vinod Koul
  Cc: Dan Williams, Dave Jiang, Kit Chow, Logan Gunthorpe

dma_chan_to_owner() dereferences the driver from the struct device to
obtain the owner and call module_[get|put](). However, if the backing
device is unbound before the dma_device is unregistered, the driver
will be cleared and this will cause a NULL pointer dereference.

Instead, store a pointer to the owner module in the dma_device struct
so the module reference can be properly put when the channel is put, even
if the backing device was destroyed first.

This change helps to support a safer unbind of DMA engines.
If the dma_device is unregistered in the driver's remove function,
there's no guarantee that there are no existing clients and a users
action may trigger the WARN_ONCE in dma_async_device_unregister()
which is unlikely to leave the system in a consistent state.
Instead, a better approach is to allow the backing driver to go away
and fail any subsequent requests to it.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/dma/dmaengine.c   | 4 +++-
 include/linux/dmaengine.h | 2 ++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 03ac4b96117c..4b604086b1b3 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -179,7 +179,7 @@ __dma_device_satisfies_mask(struct dma_device *device,
 
 static struct module *dma_chan_to_owner(struct dma_chan *chan)
 {
-	return chan->device->dev->driver->owner;
+	return chan->device->owner;
 }
 
 /**
@@ -919,6 +919,8 @@ int dma_async_device_register(struct dma_device *device)
 		return -EIO;
 	}
 
+	device->owner = device->dev->driver->owner;
+
 	if (dma_has_cap(DMA_MEMCPY, device->cap_mask) && !device->device_prep_dma_memcpy) {
 		dev_err(device->dev,
 			"Device claims capability %s, but op is not defined\n",
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 8fcdee1c0cf9..13aa0abb71de 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -674,6 +674,7 @@ struct dma_filter {
  * @fill_align: alignment shift for memset operations
  * @dev_id: unique device ID
  * @dev: struct device reference for dma mapping api
+ * @owner: owner module (automatically set based on the provided dev)
  * @src_addr_widths: bit mask of src addr widths the device supports
  *	Width is specified in bytes, e.g. for a device supporting
  *	a width of 4 the mask should have BIT(4) set.
@@ -737,6 +738,7 @@ struct dma_device {
 
 	int dev_id;
 	struct device *dev;
+	struct module *owner;
 
 	u32 src_addr_widths;
 	u32 dst_addr_widths;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

end of thread

Thread overview: 27+ messages
2019-10-22 21:46 [PATCH 0/5] PLX Switch DMA Engine Driver Logan Gunthorpe
2019-10-22 21:46 ` [PATCH 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
2019-11-09 17:18   ` Vinod Koul
2019-11-11 16:50     ` Logan Gunthorpe
2019-11-12  5:56       ` Vinod Koul
2019-11-12 16:45         ` Logan Gunthorpe
2019-11-14  4:55           ` Vinod Koul
2019-11-14 17:03             ` Logan Gunthorpe
2019-11-22  5:20               ` Vinod Koul
2019-11-22 16:53                 ` Dave Jiang
2019-11-22 20:50                   ` Dan Williams
2019-11-22 20:56                     ` Logan Gunthorpe
2019-11-22 21:01                       ` Dan Williams
2019-11-22 21:42                         ` Dave Jiang
2019-12-10  9:53                           ` Vinod Koul
2019-12-10 17:39                             ` Logan Gunthorpe
2019-10-22 21:46 ` [PATCH 2/5] dmaengine: Call module_put() after device_free_chan_resources() Logan Gunthorpe
2019-10-22 21:46 ` [PATCH 3/5] dmaengine: plx-dma: Introduce PLX DMA engine PCI driver skeleton Logan Gunthorpe
2019-11-09 17:35   ` Vinod Koul
2019-11-11 17:50     ` Logan Gunthorpe
2019-11-12  6:09       ` Vinod Koul
2019-11-12 17:22         ` Logan Gunthorpe
2019-10-22 21:46 ` [PATCH 4/5] dmaengine: plx-dma: Implement hardware initialization and cleanup Logan Gunthorpe
2019-10-22 21:46 ` [PATCH 5/5] dmaengine: plx-dma: Implement descriptor submission Logan Gunthorpe
2019-11-09 17:40   ` Vinod Koul
2019-11-11 18:11     ` Logan Gunthorpe
2019-12-16 19:01 [PATCH 0/5] Support hot-unbind in IOAT Logan Gunthorpe
2019-12-16 19:01 ` [PATCH 1/5] dmaengine: Store module owner in dma_device struct Logan Gunthorpe
