* [PATCH 00/13] dmaengine redux
@ 2008-11-14 21:34 Dan Williams
  2008-11-14 21:34 ` [PATCH 01/13] async_tx, dmaengine: document channel allocation and api rework Dan Williams
                   ` (13 more replies)
  0 siblings, 14 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

The dmaengine subsystem collects and advertises dma channels for two classes
of users in the kernel, memory-to-memory offload and traditional
device-to-memory DMA.  The original design was driven by the memory-to-memory
case and is starting to show its limitations now that more device-to-memory
DMA users are being planned.  The primary difference between the two classes
is that memory-to-memory offload is very amenable to channel sharing and is
tolerant of dynamic channel changes.  Compare this to the device-to-memory
case where a channel must be dedicated to a device and may have
platform-specific reasons why it cannot talk to a different device.
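
To make the split concrete, here is a rough sketch of the two usage models
(a sketch only: the API names follow the patches in this series, the helper
functions are hypothetical, and the mem-to-mem path assumes the caller
already holds a dmaengine reference):

#include <linux/dmaengine.h>
#include <linux/string.h>

/* mem-to-mem offload: channels stay in a shared public pool and are
 * looked up per operation
 */
static void copy_via_public_pool(void *dest, void *src, size_t len)
{
	struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

	if (chan)
		dma_async_memcpy_buf_to_buf(chan, dest, src, len);
	else
		memcpy(dest, src, len);	/* no public channel available */
}

/* dev-to-mem DMA: the channel is pulled out of the public pool and
 * dedicated to one device until dma_release_channel() is called
 */
static struct dma_chan *grab_private_channel(dma_filter_fn filter,
					     void *platform_data)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	return dma_request_channel(mask, filter, platform_data);
}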

This rework allows channels to be targeted to a public (mem-to-mem) pool or be
reserved for an exclusive private (dev-to-mem) allocation.  See [PATCH 1/13]
for documentation of the changes.  A nice side effect of the rework is:

 24 files changed, 679 insertions(+), 1108 deletions(-)

All review is welcome, especially around the dma_slave changes or the
performance impact of dma_find_channel.

These patches are currently on async_tx.git/upstream, and barring any
brown-paper-bag issues will move to linux-next via async_tx.git/next.

  git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx.git upstream

---
Dan Williams (13):
      dmaengine: kill enum dma_state_client
      dmaengine: remove 'bigref' infrastructure
      dmaengine: kill struct dma_client and supporting infrastructure
      dmaengine: replace dma_async_client_register with dmaengine_get
      atmel-mci: convert to dma_request_channel and down-level dma_slave
      dmatest: convert to dma_request_channel
      dmaengine: introduce dma_request_channel and private channels
      net_dma: convert to dma_find_channel
      dmaengine: provide a common 'issue_pending_all' implementation
      dmaengine: centralize channel allocation, introduce dma_find_channel
      dmaengine: up-level reference counting to the module level
      dmaengine: remove dependency on async_tx
      async_tx, dmaengine: document channel allocation and api rework

 Documentation/crypto/async-tx-api.txt |  135 +++----
 Documentation/dmaengine.txt           |    1 
 arch/avr32/include/asm/atmel-mci.h    |    6 
 arch/avr32/mach-at32ap/at32ap700x.c   |   15 -
 crypto/async_tx/async_tx.c            |  350 ------------------
 drivers/dma/Kconfig                   |    2 
 drivers/dma/dmaengine.c               |  637 +++++++++++++++++++++++----------
 drivers/dma/dmatest.c                 |  111 ++----
 drivers/dma/dw_dmac.c                 |   28 -
 drivers/dma/fsldma.c                  |    3 
 drivers/dma/ioat_dma.c                |    5 
 drivers/dma/iop-adma.c                |   11 -
 drivers/dma/mv_xor.c                  |   11 -
 drivers/mmc/host/atmel-mci.c          |  103 +----
 include/linux/async_tx.h              |   17 -
 include/linux/dmaengine.h             |  148 ++------
 include/linux/dw_dmac.h               |   31 +-
 include/linux/netdevice.h             |    3 
 include/net/netdma.h                  |   11 -
 net/core/dev.c                        |  148 --------
 net/ipv4/tcp.c                        |    5 
 net/ipv4/tcp_input.c                  |    2 
 net/ipv4/tcp_ipv4.c                   |    2 
 net/ipv6/tcp_ipv6.c                   |    2 
 24 files changed, 679 insertions(+), 1108 deletions(-)
 create mode 100644 Documentation/dmaengine.txt

-- 

Regards,
Dan


* [PATCH 01/13] async_tx, dmaengine: document channel allocation and api rework
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
@ 2008-11-14 21:34 ` Dan Williams
  2008-11-14 21:34 ` [PATCH 02/13] dmaengine: remove dependency on async_tx Dan Williams
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

"Wouldn't it be better if the dmaengine layer made sure it didn't pass
the same channel several times to a client?

I mean, you seem concerned that the memcpy() API should be transparent
and easy to use, but the whole registration interface is just
ridiculously complicated..."
	- Haavard

The dmaengine and async_tx registration/allocation interface is indeed
needlessly complicated.  This redesign has the following goals:

1/ Simplify reference counting: dma channels are not something one would
   expect to be hotplugged; removal should be an exceptional event handled
   by drivers, not something clients are mandated to handle in a callback.
   The common-case channel removal event is 'rmmod <dma driver>', which for
   simplicity should be disallowed while the channel is in use.
2/ Add an interface for requesting exclusive access to a channel, suitable
   for device-to-memory users (see the sketch below).
3/ Convert all memory-to-memory users over to a common allocator; the goal
   is to avoid competing channel allocation schemes.  The only competition
   should be between device-to-memory exclusive allocations and the
   memory-to-memory case where channels are shared between multiple
   "clients".
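
As a rough illustration of 2/, a filter for the exclusive-access interface
described in the documentation patch below might look like this (the
platform structure and its fields are made up for the example):

#include <linux/dmaengine.h>

/* hypothetical platform data naming the controller our device is wired to */
struct my_platform_dma {
	struct device *dma_controller;
};

static bool my_filter(struct dma_chan *chan, void *filter_param)
{
	struct my_platform_dma *plat = filter_param;

	/* only accept channels from the controller wired to this device */
	return chan->device->dev == plat->dma_controller;
}

static struct dma_chan *my_request_channel(struct my_platform_dma *plat)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	return dma_request_channel(mask, my_filter, plat);
}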

Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Cc: Neil Brown <neilb@suse.de>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 Documentation/crypto/async-tx-api.txt |  135 ++++++++++++++++-----------------
 Documentation/dmaengine.txt           |    1 
 2 files changed, 68 insertions(+), 68 deletions(-)
 create mode 100644 Documentation/dmaengine.txt

diff --git a/Documentation/crypto/async-tx-api.txt b/Documentation/crypto/async-tx-api.txt
index c1e9545..48e7b35 100644
--- a/Documentation/crypto/async-tx-api.txt
+++ b/Documentation/crypto/async-tx-api.txt
@@ -13,9 +13,9 @@
 3.6 Constraints
 3.7 Example
 
-4 DRIVER DEVELOPER NOTES
+4 DMAENGINE DRIVER DEVELOPER NOTES
 4.1 Conformance points
-4.2 "My application needs finer control of hardware channels"
+4.2 "My application needs exclusive control of hardware channels"
 
 5 SOURCE
 
@@ -80,9 +80,7 @@ acknowledged by the application before the offload engine driver is allowed to
 recycle (or free) the descriptor.  A descriptor can be acked by one of the
 following methods:
 1/ setting the ASYNC_TX_ACK flag if no child operations are to be submitted
-2/ setting the ASYNC_TX_DEP_ACK flag to acknowledge the parent
-   descriptor of a new operation.
-3/ calling async_tx_ack() on the descriptor.
+2/ calling async_tx_ack() on the descriptor.
 
 3.4 When does the operation execute?
 Operations do not immediately issue after return from the
@@ -101,12 +99,15 @@ of an operation.
    it polls for the completion of the operation.  It handles dependency
    chains and issuing pending operations.
 2/ Specify a completion callback.  The callback routine runs in tasklet
-   context if the offload engine driver supports interrupts, or it is
-   called in application context if the operation is carried out
-   synchronously in software.  The callback can be set in the call to
-   async_<operation>, or when the application needs to submit a chain of
-   unknown length it can use the async_trigger_callback() routine to set a
-   completion interrupt/callback at the end of the chain.
+   context if an offload engine performs the operation, or it is called
+   in process context if the operation is carried out synchronously in
+   software.  The callback can be specified by using the
+   async_trigger_callback() interface.
+   Note: if the application knows ahead of time that it wants a
+   completion callback on a particular operation it should specify
+   ASYNC_TX_DELAY_SUBMIT to the flags.  Otherwise, async_trigger_callback
+   will be required to allocate and queue a separate 'interrupt'
+   descriptor.
 
 3.6 Constraints:
 1/ Calls to async_<operation> are not permitted in IRQ context.  Other
@@ -133,14 +134,20 @@ int run_xor_copy_xor(struct page **xor_srcs,
 		     size_t copy_len)
 {
 	struct dma_async_tx_descriptor *tx;
+	/* async_xor overwrites input parameters to save stack space */
+	struct page *srcs[xor_src_cnt];
 
-	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
-		       ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL);
-	tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len,
-			  ASYNC_TX_DEP_ACK, tx, NULL, NULL);
-	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
-		       ASYNC_TX_XOR_DROP_DST | ASYNC_TX_DEP_ACK | ASYNC_TX_ACK,
-		       tx, complete_xor_copy_xor, NULL);
+	memcpy(srcs, xor_srcs, sizeof(struct page *) * xor_src_cnt);
+	tx = async_xor(xor_dest, srcs, 0, xor_src_cnt, xor_len,
+		       ASYNC_TX_XOR_DROP_DST, NULL);
+
+	tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len, 0, tx);
+
+	memcpy(srcs, xor_srcs, sizeof(struct page *) * xor_src_cnt);
+	tx = async_xor(xor_dest, srcs, 0, xor_src_cnt, xor_len,
+		       ASYNC_TX_XOR_DROP_DST | ASYNC_TX_DELAY_SUBMIT, tx);
+
+	tx = async_trigger_callback(tx, complete_xor_copy_xor, NULL);
 
 	async_tx_issue_pending_all();
 }
@@ -150,6 +157,7 @@ ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
 implementation examples.
 
 4 DRIVER DEVELOPMENT NOTES
+
 4.1 Conformance points:
 There are a few conformance points required in dmaengine drivers to
 accommodate assumptions made by applications using the async_tx API:
@@ -158,58 +166,49 @@ accommodate assumptions made by applications using the async_tx API:
 3/ Use async_tx_run_dependencies() in the descriptor clean up path to
    handle submission of dependent operations
 
-4.2 "My application needs finer control of hardware channels"
-This requirement seems to arise from cases where a DMA engine driver is
-trying to support device-to-memory DMA.  The dmaengine and async_tx
-implementations were designed for offloading memory-to-memory
-operations; however, there are some capabilities of the dmaengine layer
-that can be used for platform-specific channel management.
-Platform-specific constraints can be handled by registering the
-application as a 'dma_client' and implementing a 'dma_event_callback' to
-apply a filter to the available channels in the system.  Before showing
-how to implement a custom dma_event callback some background of
-dmaengine's client support is required.
-
-The following routines in dmaengine support multiple clients requesting
-use of a channel:
-- dma_async_client_register(struct dma_client *client)
-- dma_async_client_chan_request(struct dma_client *client)
-
-dma_async_client_register takes a pointer to an initialized dma_client
-structure.  It expects that the 'event_callback' and 'cap_mask' fields
-are already initialized.
-
-dma_async_client_chan_request triggers dmaengine to notify the client of
-all channels that satisfy the capability mask.  It is up to the client's
-event_callback routine to track how many channels the client needs and
-how many it is currently using.  The dma_event_callback routine returns a
-dma_state_client code to let dmaengine know the status of the
-allocation.
-
-Below is the example of how to extend this functionality for
-platform-specific filtering of the available channels beyond the
-standard capability mask:
-
-static enum dma_state_client
-my_dma_client_callback(struct dma_client *client,
-			struct dma_chan *chan, enum dma_state state)
-{
-	struct dma_device *dma_dev;
-	struct my_platform_specific_dma *plat_dma_dev;
-	
-	dma_dev = chan->device;
-	plat_dma_dev = container_of(dma_dev,
-				    struct my_platform_specific_dma,
-				    dma_dev);
-
-	if (!plat_dma_dev->platform_specific_capability)
-		return DMA_DUP;
-
-	. . .
-}
+4.2 "My application needs exclusive control of hardware channels"
+Primarily this requirement arises from cases where a DMA engine driver
+is being used to support device-to-memory operations.  A channel that is
+performing these operations cannot, for many platform specific reasons,
+be shared.  For these cases the dma_request_channel() interface is
+provided.
+
+The interface is:
+struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
+				     dma_filter_fn filter_fn,
+				     void *filter_param);
+
+Where dma_filter_fn is defined as:
+typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+
+When the optional 'filter_fn' parameter is set to NULL
+dma_request_channel simply returns the first channel that satisfies the
+capability mask.  Otherwise, when the mask parameter is insufficient for
+specifying the necessary channel, the filter_fn routine can be used to
+disposition the available channels in the system. The filter_fn routine
+is called once for each free channel in the system.  Upon seeing a
+suitable channel filter_fn returns DMA_ACK which flags that channel to
+be the return value from dma_request_channel.  A channel allocated via
+this interface is exclusive to the caller, until dma_release_channel()
+is called.
+
+The DMA_PRIVATE capability flag is used to tag dma devices that should
+not be used by the general-purpose allocator.  It can be set at
+initialization time if it is known that a channel will always be
+private.  Alternatively, it is set when dma_request_channel() finds an
+unused "public" channel.
+
+A couple caveats to note when implementing a driver and consumer:
+1/ Once a channel has been privately allocated it will no longer be
+   considered by the general-purpose allocator even after a call to
+   dma_release_channel().
+2/ Since capabilities are specified at the device level a dma_device
+   with multiple channels will either have all channels public, or all
+   channels private.
 
 5 SOURCE
-include/linux/dmaengine.h: core header file for DMA drivers and clients
+
+include/linux/dmaengine.h: core header file for DMA drivers and api users
 drivers/dma/dmaengine.c: offload engine channel management routines
 drivers/dma/: location for offload engine drivers
 include/linux/async_tx.h: core header file for the async_tx api
diff --git a/Documentation/dmaengine.txt b/Documentation/dmaengine.txt
new file mode 100644
index 0000000..0c1c2f6
--- /dev/null
+++ b/Documentation/dmaengine.txt
@@ -0,0 +1 @@
+See Documentation/crypto/async-tx-api.txt



* [PATCH 02/13] dmaengine: remove dependency on async_tx
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
  2008-11-14 21:34 ` [PATCH 01/13] async_tx, dmaengine: document channel allocation and api rework Dan Williams
@ 2008-11-14 21:34 ` Dan Williams
  2008-11-15  6:02   ` Andrew Morton
  2008-11-14 21:34 ` [PATCH 03/13] dmaengine: up-level reference counting to the module level Dan Williams
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

async_tx.ko is a consumer of dma channels.  A circular dependency arises
if modules in drivers/dma rely on common code in async_tx.ko.  It
prevents either module from being unloaded.

Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o
where they should have been from the beginning.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 crypto/async_tx/async_tx.c |   75 --------------------------------------------
 drivers/dma/Kconfig        |    2 -
 drivers/dma/dmaengine.c    |   75 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/iop-adma.c     |    3 +-
 drivers/dma/mv_xor.c       |    3 +-
 include/linux/async_tx.h   |   15 ---------
 include/linux/dmaengine.h  |    9 +++++
 7 files changed, 86 insertions(+), 96 deletions(-)

diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c
index dcbf1be..8cfac18 100644
--- a/crypto/async_tx/async_tx.c
+++ b/crypto/async_tx/async_tx.c
@@ -72,81 +72,6 @@ void async_tx_issue_pending_all(void)
 }
 EXPORT_SYMBOL_GPL(async_tx_issue_pending_all);
 
-/* dma_wait_for_async_tx - spin wait for a transcation to complete
- * @tx: transaction to wait on
- */
-enum dma_status
-dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
-{
-	enum dma_status status;
-	struct dma_async_tx_descriptor *iter;
-	struct dma_async_tx_descriptor *parent;
-
-	if (!tx)
-		return DMA_SUCCESS;
-
-	/* poll through the dependency chain, return when tx is complete */
-	do {
-		iter = tx;
-
-		/* find the root of the unsubmitted dependency chain */
-		do {
-			parent = iter->parent;
-			if (!parent)
-				break;
-			else
-				iter = parent;
-		} while (parent);
-
-		/* there is a small window for ->parent == NULL and
-		 * ->cookie == -EBUSY
-		 */
-		while (iter->cookie == -EBUSY)
-			cpu_relax();
-
-		status = dma_sync_wait(iter->chan, iter->cookie);
-	} while (status == DMA_IN_PROGRESS || (iter != tx));
-
-	return status;
-}
-EXPORT_SYMBOL_GPL(dma_wait_for_async_tx);
-
-/* async_tx_run_dependencies - helper routine for dma drivers to process
- *	(start) dependent operations on their target channel
- * @tx: transaction with dependencies
- */
-void async_tx_run_dependencies(struct dma_async_tx_descriptor *tx)
-{
-	struct dma_async_tx_descriptor *dep = tx->next;
-	struct dma_async_tx_descriptor *dep_next;
-	struct dma_chan *chan;
-
-	if (!dep)
-		return;
-
-	chan = dep->chan;
-
-	/* keep submitting up until a channel switch is detected
-	 * in that case we will be called again as a result of
-	 * processing the interrupt from async_tx_channel_switch
-	 */
-	for (; dep; dep = dep_next) {
-		spin_lock_bh(&dep->lock);
-		dep->parent = NULL;
-		dep_next = dep->next;
-		if (dep_next && dep_next->chan == chan)
-			dep->next = NULL; /* ->next will be submitted */
-		else
-			dep_next = NULL; /* submit current dep and terminate */
-		spin_unlock_bh(&dep->lock);
-
-		dep->tx_submit(dep);
-	}
-
-	chan->device->device_issue_pending(chan);
-}
-EXPORT_SYMBOL_GPL(async_tx_run_dependencies);
-
 static void
 free_dma_chan_ref(struct rcu_head *rcu)
 {
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 904e575..e34b064 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -33,7 +33,6 @@ config INTEL_IOATDMA
 config INTEL_IOP_ADMA
 	tristate "Intel IOP ADMA support"
 	depends on ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX
-	select ASYNC_CORE
 	select DMA_ENGINE
 	help
 	  Enable support for the Intel(R) IOP Series RAID engines.
@@ -59,7 +58,6 @@ config FSL_DMA
 config MV_XOR
 	bool "Marvell XOR engine support"
 	depends on PLAT_ORION
-	select ASYNC_CORE
 	select DMA_ENGINE
 	---help---
 	  Enable support for the Marvell XOR engine.
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 5317e08..5410c04 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -623,6 +623,81 @@ void dma_async_tx_descriptor_init(struct dma_async_tx_descriptor *tx,
 }
 EXPORT_SYMBOL(dma_async_tx_descriptor_init);
 
+/* dma_wait_for_async_tx - spin wait for a transcation to complete
+ * @tx: transaction to wait on
+ */
+enum dma_status
+dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
+{
+	enum dma_status status;
+	struct dma_async_tx_descriptor *iter;
+	struct dma_async_tx_descriptor *parent;
+
+	if (!tx)
+		return DMA_SUCCESS;
+
+	/* poll through the dependency chain, return when tx is complete */
+	do {
+		iter = tx;
+
+		/* find the root of the unsubmitted dependency chain */
+		do {
+			parent = iter->parent;
+			if (!parent)
+				break;
+			else
+				iter = parent;
+		} while (parent);
+
+		/* there is a small window for ->parent == NULL and
+		 * ->cookie == -EBUSY
+		 */
+		while (iter->cookie == -EBUSY)
+			cpu_relax();
+
+		status = dma_sync_wait(iter->chan, iter->cookie);
+	} while (status == DMA_IN_PROGRESS || (iter != tx));
+
+	return status;
+}
+EXPORT_SYMBOL_GPL(dma_wait_for_async_tx);
+
+/* dma_run_dependencies - helper routine for dma drivers to process
+ *	(start) dependent operations on their target channel
+ * @tx: transaction with dependencies
+ */
+void dma_run_dependencies(struct dma_async_tx_descriptor *tx)
+{
+	struct dma_async_tx_descriptor *dep = tx->next;
+	struct dma_async_tx_descriptor *dep_next;
+	struct dma_chan *chan;
+
+	if (!dep)
+		return;
+
+	chan = dep->chan;
+
+	/* keep submitting up until a channel switch is detected
+	 * in that case we will be called again as a result of
+	 * processing the interrupt from async_tx_channel_switch
+	 */
+	for (; dep; dep = dep_next) {
+		spin_lock_bh(&dep->lock);
+		dep->parent = NULL;
+		dep_next = dep->next;
+		if (dep_next && dep_next->chan == chan)
+			dep->next = NULL; /* ->next will be submitted */
+		else
+			dep_next = NULL; /* submit current dep and terminate */
+		spin_unlock_bh(&dep->lock);
+
+		dep->tx_submit(dep);
+	}
+
+	chan->device->device_issue_pending(chan);
+}
+EXPORT_SYMBOL_GPL(dma_run_dependencies);
+
 static int __init dma_bus_init(void)
 {
 	mutex_init(&dma_list_mutex);
diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
index c7a9306..f008ab2 100644
--- a/drivers/dma/iop-adma.c
+++ b/drivers/dma/iop-adma.c
@@ -24,7 +24,6 @@
 
 #include <linux/init.h>
 #include <linux/module.h>
-#include <linux/async_tx.h>
 #include <linux/delay.h>
 #include <linux/dma-mapping.h>
 #include <linux/spinlock.h>
@@ -106,7 +105,7 @@ iop_adma_run_tx_complete_actions(struct iop_adma_desc_slot *desc,
 	}
 
 	/* run dependent operations */
-	async_tx_run_dependencies(&desc->async_tx);
+	dma_run_dependencies(&desc->async_tx);
 
 	return cookie;
 }
diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 0328da0..c3fcaa8 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -18,7 +18,6 @@
 
 #include <linux/init.h>
 #include <linux/module.h>
-#include <linux/async_tx.h>
 #include <linux/delay.h>
 #include <linux/dma-mapping.h>
 #include <linux/spinlock.h>
@@ -331,7 +330,7 @@ mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
 	}
 
 	/* run dependent operations */
-	async_tx_run_dependencies(&desc->async_tx);
+	dma_run_dependencies(&desc->async_tx);
 
 	return cookie;
 }
diff --git a/include/linux/async_tx.h b/include/linux/async_tx.h
index 0f50d4c..1c81677 100644
--- a/include/linux/async_tx.h
+++ b/include/linux/async_tx.h
@@ -60,8 +60,6 @@ enum async_tx_flags {
 
 #ifdef CONFIG_DMA_ENGINE
 void async_tx_issue_pending_all(void);
-enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx);
-void async_tx_run_dependencies(struct dma_async_tx_descriptor *tx);
 #ifdef CONFIG_ARCH_HAS_ASYNC_TX_FIND_CHANNEL
 #include <asm/async_tx.h>
 #else
@@ -77,19 +75,6 @@ static inline void async_tx_issue_pending_all(void)
 	do { } while (0);
 }
 
-static inline enum dma_status
-dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
-{
-	return DMA_SUCCESS;
-}
-
-static inline void
-async_tx_run_dependencies(struct dma_async_tx_descriptor *tx,
-	struct dma_chan *host_chan)
-{
-	do { } while (0);
-}
-
 static inline struct dma_chan *
 async_tx_find_channel(struct dma_async_tx_descriptor *depend_tx,
 	enum dma_transaction_type tx_type, struct page **dst, int dst_count,
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index adb0b08..e4ec7e7 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -475,11 +475,20 @@ static inline enum dma_status dma_async_is_complete(dma_cookie_t cookie,
 }
 
 enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie);
+#ifdef CONFIG_DMA_ENGINE
+enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx);
+#else
+static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
+{
+	return DMA_SUCCESS;
+}
+#endif
 
 /* --- DMA device --- */
 
 int dma_async_device_register(struct dma_device *device);
 void dma_async_device_unregister(struct dma_device *device);
+void dma_run_dependencies(struct dma_async_tx_descriptor *tx);
 
 /* --- Helper iov-locking functions --- */
 



* [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
  2008-11-14 21:34 ` [PATCH 01/13] async_tx, dmaengine: document channel allocation and api rework Dan Williams
  2008-11-14 21:34 ` [PATCH 02/13] dmaengine: remove dependency on async_tx Dan Williams
@ 2008-11-14 21:34 ` Dan Williams
  2008-11-15  6:08   ` Andrew Morton
                     ` (2 more replies)
  2008-11-14 21:34 ` [PATCH 04/13] dmaengine: centralize channel allocation, introduce dma_find_channel Dan Williams
                   ` (10 subsequent siblings)
  13 siblings, 3 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

Simply put: if a client wants any dmaengine channel then prevent all
dmaengine modules from being removed.  Once the clients are done, re-enable
module removal.

Why?  Beyond reducing complication:
1/ Tracking reference counts per-transaction in an efficient manner, as
   is currently done, requires a complicated scheme to avoid cache-line
   bouncing effects.
2/ Per-transaction ref-counting gives the false impression that a
   dma-driver can be gracefully removed ahead of its user (net, md, or
   dma-slave)
3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but
   if such an engine were built one day we still would not need to notify
   clients of remove events.  The driver can simply return NULL to a
   ->prep() request, something that is much easier for a client to handle
   (see the sketch below).
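
For 3/, a hedged sketch of what handling a NULL ->prep() return looks like
from a client's point of view (the synchronous fallback and the surrounding
function are illustrative only):

#include <linux/dmaengine.h>
#include <linux/string.h>

static void client_copy(struct dma_chan *chan,
			dma_addr_t dma_dest, dma_addr_t dma_src,
			void *cpu_dest, void *cpu_src, size_t len)
{
	struct dma_async_tx_descriptor *tx;

	tx = chan->device->device_prep_dma_memcpy(chan, dma_dest, dma_src,
						   len, DMA_CTRL_ACK);
	if (!tx) {
		/* the engine has no descriptors (or is on its way out):
		 * do the copy on the cpu, no removal callback required
		 */
		memcpy(cpu_dest, cpu_src, len);
		return;
	}

	tx->tx_submit(tx);
	chan->device->device_issue_pending(chan);
}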

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 crypto/async_tx/async_tx.c   |    4 -
 drivers/dma/dmaengine.c      |  190 +++++++++++++++++++++++++-----------------
 drivers/dma/dmatest.c        |    2 
 drivers/dma/dw_dmac.c        |    2 
 drivers/mmc/host/atmel-mci.c |    4 -
 include/linux/dmaengine.h    |   21 -----
 include/net/netdma.h         |    4 -
 net/ipv4/tcp.c               |    1 
 8 files changed, 118 insertions(+), 110 deletions(-)

diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c
index 8cfac18..43fe4cb 100644
--- a/crypto/async_tx/async_tx.c
+++ b/crypto/async_tx/async_tx.c
@@ -198,8 +198,6 @@ dma_channel_add_remove(struct dma_client *client,
 		/* add the channel to the generic management list */
 		master_ref = kmalloc(sizeof(*master_ref), GFP_KERNEL);
 		if (master_ref) {
-			/* keep a reference until async_tx is unloaded */
-			dma_chan_get(chan);
 			init_dma_chan_ref(master_ref, chan);
 			spin_lock_irqsave(&async_tx_lock, flags);
 			list_add_tail_rcu(&master_ref->node,
@@ -221,8 +219,6 @@ dma_channel_add_remove(struct dma_client *client,
 		spin_lock_irqsave(&async_tx_lock, flags);
 		list_for_each_entry(ref, &async_tx_master_list, node)
 			if (ref->chan == chan) {
-				/* permit backing devices to go away */
-				dma_chan_put(ref->chan);
 				list_del_rcu(&ref->node);
 				call_rcu(&ref->rcu, free_dma_chan_ref);
 				found = 1;
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 5410c04..df37073 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -74,6 +74,7 @@
 static DEFINE_MUTEX(dma_list_mutex);
 static LIST_HEAD(dma_device_list);
 static LIST_HEAD(dma_client_list);
+static long dmaengine_ref_count;
 
 /* --- sysfs implementation --- */
 
@@ -105,19 +106,8 @@ static ssize_t show_bytes_transferred(struct device *dev, struct device_attribut
 static ssize_t show_in_use(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	struct dma_chan *chan = to_dma_chan(dev);
-	int in_use = 0;
-
-	if (unlikely(chan->slow_ref) &&
-		atomic_read(&chan->refcount.refcount) > 1)
-		in_use = 1;
-	else {
-		if (local_read(&(per_cpu_ptr(chan->local,
-			get_cpu())->refcount)) > 0)
-			in_use = 1;
-		put_cpu();
-	}
 
-	return sprintf(buf, "%d\n", in_use);
+	return sprintf(buf, "%d\n", chan->client_count);
 }
 
 static struct device_attribute dma_attrs[] = {
@@ -155,6 +145,67 @@ __dma_chan_satisfies_mask(struct dma_chan *chan, dma_cap_mask_t *want)
 	return bitmap_equal(want->bits, has.bits, DMA_TX_TYPE_END);
 }
 
+static struct module *dma_chan_to_owner(struct dma_chan *chan)
+{
+	return chan->device->dev->driver->owner;
+}
+
+/**
+ * balance_ref_count - catch up the channel reference count
+ */
+static void balance_ref_count(struct dma_chan *chan)
+{
+	struct module *owner = dma_chan_to_owner(chan);
+
+	while (chan->client_count < dmaengine_ref_count) {
+		__module_get(owner);
+		chan->client_count++;
+	}
+}
+
+/**
+ * dma_chan_get - try to grab a dma channel's parent driver module
+ * @chan - channel to grab
+ */
+static int dma_chan_get(struct dma_chan *chan)
+{
+	int err = -ENODEV;
+	struct module *owner = dma_chan_to_owner(chan);
+
+	if (chan->client_count) {
+		__module_get(owner);
+		err = 0;
+	} else if (try_module_get(owner))
+		err = 0;
+
+	if (err == 0)
+		chan->client_count++;
+
+	/* allocate upon first client reference */
+	if (chan->client_count == 1 && err == 0) {
+		int desc = chan->device->device_alloc_chan_resources(chan, NULL);
+
+		if (desc < 0) {
+			chan->client_count = 0;
+			module_put(owner);
+			err = -ENOMEM;
+		} else
+			balance_ref_count(chan);
+	}
+
+	return err;
+}
+
+static void dma_chan_put(struct dma_chan *chan)
+{
+	if (!chan->client_count)
+		return; /* this channel failed alloc_chan_resources */
+	chan->client_count--;
+	module_put(dma_chan_to_owner(chan));
+	if (chan->client_count == 0)
+		chan->device->device_free_chan_resources(chan);
+}
+
 /**
  * dma_client_chan_alloc - try to allocate channels to a client
  * @client: &dma_client
@@ -165,7 +216,6 @@ static void dma_client_chan_alloc(struct dma_client *client)
 {
 	struct dma_device *device;
 	struct dma_chan *chan;
-	int desc;	/* allocated descriptor count */
 	enum dma_state_client ack;
 
 	/* Find a channel */
@@ -178,23 +228,16 @@ static void dma_client_chan_alloc(struct dma_client *client)
 		list_for_each_entry(chan, &device->channels, device_node) {
 			if (!dma_chan_satisfies_mask(chan, client->cap_mask))
 				continue;
+			if (!chan->client_count)
+				continue;
+			ack = client->event_callback(client, chan,
+						     DMA_RESOURCE_AVAILABLE);
 
-			desc = chan->device->device_alloc_chan_resources(
-					chan, client);
-			if (desc >= 0) {
-				ack = client->event_callback(client,
-						chan,
-						DMA_RESOURCE_AVAILABLE);
-
-				/* we are done once this client rejects
-				 * an available resource
-				 */
-				if (ack == DMA_ACK) {
-					dma_chan_get(chan);
-					chan->client_count++;
-				} else if (ack == DMA_NAK)
-					return;
-			}
+			/* we are done once this client rejects
+			 * an available resource
+			 */
+			if (ack == DMA_NAK)
+				return;
 		}
 	}
 }
@@ -224,7 +267,6 @@ EXPORT_SYMBOL(dma_sync_wait);
 void dma_chan_cleanup(struct kref *kref)
 {
 	struct dma_chan *chan = container_of(kref, struct dma_chan, refcount);
-	chan->device->device_free_chan_resources(chan);
 	kref_put(&chan->device->refcount, dma_async_device_cleanup);
 }
 EXPORT_SYMBOL(dma_chan_cleanup);
@@ -232,18 +274,12 @@ EXPORT_SYMBOL(dma_chan_cleanup);
 static void dma_chan_free_rcu(struct rcu_head *rcu)
 {
 	struct dma_chan *chan = container_of(rcu, struct dma_chan, rcu);
-	int bias = 0x7FFFFFFF;
-	int i;
-	for_each_possible_cpu(i)
-		bias -= local_read(&per_cpu_ptr(chan->local, i)->refcount);
-	atomic_sub(bias, &chan->refcount.refcount);
+
 	kref_put(&chan->refcount, dma_chan_cleanup);
 }
 
 static void dma_chan_release(struct dma_chan *chan)
 {
-	atomic_add(0x7FFFFFFF, &chan->refcount.refcount);
-	chan->slow_ref = 1;
 	call_rcu(&chan->rcu, dma_chan_free_rcu);
 }
 
@@ -263,43 +299,36 @@ static void dma_clients_notify_available(void)
 }
 
 /**
- * dma_chans_notify_available - tell the clients that a channel is going away
- * @chan: channel on its way out
- */
-static void dma_clients_notify_removed(struct dma_chan *chan)
-{
-	struct dma_client *client;
-	enum dma_state_client ack;
-
-	mutex_lock(&dma_list_mutex);
-
-	list_for_each_entry(client, &dma_client_list, global_node) {
-		ack = client->event_callback(client, chan,
-				DMA_RESOURCE_REMOVED);
-
-		/* client was holding resources for this channel so
-		 * free it
-		 */
-		if (ack == DMA_ACK) {
-			dma_chan_put(chan);
-			chan->client_count--;
-		}
-	}
-
-	mutex_unlock(&dma_list_mutex);
-}
-
-/**
  * dma_async_client_register - register a &dma_client
  * @client: ptr to a client structure with valid 'event_callback' and 'cap_mask'
  */
 void dma_async_client_register(struct dma_client *client)
 {
+	struct dma_device *device, *_d;
+	struct dma_chan *chan;
+	int err;
+
 	/* validate client data */
 	BUG_ON(dma_has_cap(DMA_SLAVE, client->cap_mask) &&
 		!client->slave);
 
 	mutex_lock(&dma_list_mutex);
+	dmaengine_ref_count++;
+
+	/* try to grab channels */
+	list_for_each_entry_safe(device, _d, &dma_device_list, global_node)
+		list_for_each_entry(chan, &device->channels, device_node) {
+			err = dma_chan_get(chan);
+			if (err == -ENODEV) {
+				/* module removed before we could use it */
+				list_del_init(&device->global_node);
+				break;
+			} else if (err)
+				pr_err("dmaengine: failed to get %s: (%d)",
+				       dev_name(&chan->dev), err);
+		}
+
+
 	list_add_tail(&client->global_node, &dma_client_list);
 	mutex_unlock(&dma_list_mutex);
 }
@@ -315,23 +344,17 @@ void dma_async_client_unregister(struct dma_client *client)
 {
 	struct dma_device *device;
 	struct dma_chan *chan;
-	enum dma_state_client ack;
 
 	if (!client)
 		return;
 
 	mutex_lock(&dma_list_mutex);
-	/* free all channels the client is holding */
+	dmaengine_ref_count--;
+	BUG_ON(dmaengine_ref_count < 0);
+	/* drop channel references */
 	list_for_each_entry(device, &dma_device_list, global_node)
-		list_for_each_entry(chan, &device->channels, device_node) {
-			ack = client->event_callback(client, chan,
-				DMA_RESOURCE_REMOVED);
-
-			if (ack == DMA_ACK) {
-				dma_chan_put(chan);
-				chan->client_count--;
-			}
-		}
+		list_for_each_entry(chan, &device->channels, device_node)
+			dma_chan_put(chan);
 
 	list_del(&client->global_node);
 	mutex_unlock(&dma_list_mutex);
@@ -420,6 +443,19 @@ int dma_async_device_register(struct dma_device *device)
 	}
 
 	mutex_lock(&dma_list_mutex);
+	list_for_each_entry(chan, &device->channels, device_node) {
+		/* if clients are already waiting for channels we need to
+		 * take references on their behalf
+		 */
+		if (dmaengine_ref_count && dma_chan_get(chan) == -ENODEV) {
+			/* note we can only get here for the first
+			 * channel as the remaining channels are
+			 * guaranteed to get a reference
+			 */
+			rc = -ENODEV;
+			goto err_out;
+		}
+	}
 	list_add_tail(&device->global_node, &dma_device_list);
 	mutex_unlock(&dma_list_mutex);
 
@@ -465,7 +501,9 @@ void dma_async_device_unregister(struct dma_device *device)
 	mutex_unlock(&dma_list_mutex);
 
 	list_for_each_entry(chan, &device->channels, device_node) {
-		dma_clients_notify_removed(chan);
+		WARN_ONCE(chan->client_count,
+			  "%s called while %d clients hold a reference\n",
+			  __func__, chan->client_count);
 		device_unregister(&chan->dev);
 		dma_chan_release(chan);
 	}
diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index ed9636b..db40508 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -215,7 +215,6 @@ static int dmatest_func(void *data)
 
 	smp_rmb();
 	chan = thread->chan;
-	dma_chan_get(chan);
 
 	while (!kthread_should_stop()) {
 		total_tests++;
@@ -293,7 +292,6 @@ static int dmatest_func(void *data)
 	}
 
 	ret = 0;
-	dma_chan_put(chan);
 	kfree(thread->dstbuf);
 err_dstbuf:
 	kfree(thread->srcbuf);
diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c
index 0778d99..377dafa 100644
--- a/drivers/dma/dw_dmac.c
+++ b/drivers/dma/dw_dmac.c
@@ -773,7 +773,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
 	dev_vdbg(&chan->dev, "alloc_chan_resources\n");
 
 	/* Channels doing slave DMA can only handle one client. */
-	if (dwc->dws || client->slave) {
+	if (dwc->dws || (client && client->slave)) {
 		if (chan->client_count)
 			return -EBUSY;
 	}
diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
index 7a3f243..6c11f4d 100644
--- a/drivers/mmc/host/atmel-mci.c
+++ b/drivers/mmc/host/atmel-mci.c
@@ -593,10 +593,8 @@ atmci_submit_data_dma(struct atmel_mci *host, struct mmc_data *data)
 
 	/* If we don't have a channel, we can't do DMA */
 	chan = host->dma.chan;
-	if (chan) {
-		dma_chan_get(chan);
+	if (chan)
 		host->data_chan = chan;
-	}
 
 	if (!chan)
 		return -ENODEV;
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index e4ec7e7..d18d37d 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -165,7 +165,6 @@ struct dma_slave {
  */
 
 struct dma_chan_percpu {
-	local_t refcount;
 	/* stats */
 	unsigned long memcpy_count;
 	unsigned long bytes_transferred;
@@ -205,26 +204,6 @@ struct dma_chan {
 
 void dma_chan_cleanup(struct kref *kref);
 
-static inline void dma_chan_get(struct dma_chan *chan)
-{
-	if (unlikely(chan->slow_ref))
-		kref_get(&chan->refcount);
-	else {
-		local_inc(&(per_cpu_ptr(chan->local, get_cpu())->refcount));
-		put_cpu();
-	}
-}
-
-static inline void dma_chan_put(struct dma_chan *chan)
-{
-	if (unlikely(chan->slow_ref))
-		kref_put(&chan->refcount, dma_chan_cleanup);
-	else {
-		local_dec(&(per_cpu_ptr(chan->local, get_cpu())->refcount));
-		put_cpu();
-	}
-}
-
 /*
  * typedef dma_event_callback - function pointer to a DMA event callback
  * For each channel added to the system this routine is called for each client.
diff --git a/include/net/netdma.h b/include/net/netdma.h
index f28c6e0..cbe2737 100644
--- a/include/net/netdma.h
+++ b/include/net/netdma.h
@@ -27,11 +27,11 @@
 static inline struct dma_chan *get_softnet_dma(void)
 {
 	struct dma_chan *chan;
+
 	rcu_read_lock();
 	chan = rcu_dereference(__get_cpu_var(softnet_data).net_dma);
-	if (chan)
-		dma_chan_get(chan);
 	rcu_read_unlock();
+
 	return chan;
 }
 
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index c5aca0b..40ec69f 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1630,7 +1630,6 @@ skip_copy:
 
 		/* Safe to free early-copied skbs now */
 		__skb_queue_purge(&sk->sk_async_wait_queue);
-		dma_chan_put(tp->ucopy.dma_chan);
 		tp->ucopy.dma_chan = NULL;
 	}
 	if (tp->ucopy.pinned_list) {



* [PATCH 04/13] dmaengine: centralize channel allocation, introduce dma_find_channel
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (2 preceding siblings ...)
  2008-11-14 21:34 ` [PATCH 03/13] dmaengine: up-level reference counting to the module level Dan Williams
@ 2008-11-14 21:34 ` Dan Williams
  2008-11-15  6:14   ` Andrew Morton
  2008-11-14 21:34 ` [PATCH 05/13] dmaengine: provide a common 'issue_pending_all' implementation Dan Williams
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

Allowing multiple clients to each define their own channel allocation
scheme quickly leads to a pathological situation.  For memory-to-memory
offload all clients can share a central allocator.

This simply moves the existing async_tx allocator to dmaengine with
minimal fixups:
* async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
* async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
* split out common code from async_tx.c:__async_tx_find_channel -->
  dma_find_channel
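
For context, a hedged sketch of how a mem-to-mem user is expected to call
the new allocator: look the channel up per operation rather than caching it,
since dma_channel_rebalance() can repopulate the per-cpu table as channels
and clients come and go (the wrapper function is illustrative):

#include <linux/dmaengine.h>

static dma_cookie_t submit_page_copy(struct page *dest_pg, unsigned int dest_off,
				     struct page *src_pg, unsigned int src_off,
				     size_t len)
{
	/* fast path: a per-cpu table lookup, no locks taken */
	struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

	if (!chan)
		return -ENODEV;	/* caller falls back to a cpu copy */

	return dma_async_memcpy_pg_to_pg(chan, dest_pg, dest_off,
					 src_pg, src_off, len);
}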

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 crypto/async_tx/async_tx.c |  146 +---------------------------------------
 drivers/dma/dmaengine.c    |  159 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dmaengine.h  |    3 +
 3 files changed, 166 insertions(+), 142 deletions(-)

diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c
index 43fe4cb..b88bb1f 100644
--- a/crypto/async_tx/async_tx.c
+++ b/crypto/async_tx/async_tx.c
@@ -38,25 +38,10 @@ static struct dma_client async_tx_dma = {
 };
 
 /**
- * dma_cap_mask_all - enable iteration over all operation types
- */
-static dma_cap_mask_t dma_cap_mask_all;
-
-/**
- * chan_ref_percpu - tracks channel allocations per core/opertion
- */
-struct chan_ref_percpu {
-	struct dma_chan_ref *ref;
-};
-
-static int channel_table_initialized;
-static struct chan_ref_percpu *channel_table[DMA_TX_TYPE_END];
-
-/**
  * async_tx_lock - protect modification of async_tx_master_list and serialize
  *	rebalance operations
  */
-static spinlock_t async_tx_lock;
+static DEFINE_SPINLOCK(async_tx_lock);
 
 static LIST_HEAD(async_tx_master_list);
 
@@ -89,85 +74,6 @@ init_dma_chan_ref(struct dma_chan_ref *ref, struct dma_chan *chan)
 	atomic_set(&ref->count, 0);
 }
 
-/**
- * get_chan_ref_by_cap - returns the nth channel of the given capability
- * 	defaults to returning the channel with the desired capability and the
- * 	lowest reference count if the index can not be satisfied
- * @cap: capability to match
- * @index: nth channel desired, passing -1 has the effect of forcing the
- *  default return value
- */
-static struct dma_chan_ref *
-get_chan_ref_by_cap(enum dma_transaction_type cap, int index)
-{
-	struct dma_chan_ref *ret_ref = NULL, *min_ref = NULL, *ref;
-
-	rcu_read_lock();
-	list_for_each_entry_rcu(ref, &async_tx_master_list, node)
-		if (dma_has_cap(cap, ref->chan->device->cap_mask)) {
-			if (!min_ref)
-				min_ref = ref;
-			else if (atomic_read(&ref->count) <
-				atomic_read(&min_ref->count))
-				min_ref = ref;
-
-			if (index-- == 0) {
-				ret_ref = ref;
-				break;
-			}
-		}
-	rcu_read_unlock();
-
-	if (!ret_ref)
-		ret_ref = min_ref;
-
-	if (ret_ref)
-		atomic_inc(&ret_ref->count);
-
-	return ret_ref;
-}
-
-/**
- * async_tx_rebalance - redistribute the available channels, optimize
- * for cpu isolation in the SMP case, and opertaion isolation in the
- * uniprocessor case
- */
-static void async_tx_rebalance(void)
-{
-	int cpu, cap, cpu_idx = 0;
-	unsigned long flags;
-
-	if (!channel_table_initialized)
-		return;
-
-	spin_lock_irqsave(&async_tx_lock, flags);
-
-	/* undo the last distribution */
-	for_each_dma_cap_mask(cap, dma_cap_mask_all)
-		for_each_possible_cpu(cpu) {
-			struct dma_chan_ref *ref =
-				per_cpu_ptr(channel_table[cap], cpu)->ref;
-			if (ref) {
-				atomic_set(&ref->count, 0);
-				per_cpu_ptr(channel_table[cap], cpu)->ref =
-									NULL;
-			}
-		}
-
-	for_each_dma_cap_mask(cap, dma_cap_mask_all)
-		for_each_online_cpu(cpu) {
-			struct dma_chan_ref *new;
-			if (NR_CPUS > 1)
-				new = get_chan_ref_by_cap(cap, cpu_idx++);
-			else
-				new = get_chan_ref_by_cap(cap, -1);
-
-			per_cpu_ptr(channel_table[cap], cpu)->ref = new;
-		}
-
-	spin_unlock_irqrestore(&async_tx_lock, flags);
-}
-
 static enum dma_state_client
 dma_channel_add_remove(struct dma_client *client,
 	struct dma_chan *chan, enum dma_state state)
@@ -211,8 +117,6 @@ dma_channel_add_remove(struct dma_client *client,
 				" (-ENOMEM)\n");
 			return 0;
 		}
-
-		async_tx_rebalance();
 		break;
 	case DMA_RESOURCE_REMOVED:
 		found = 0;
@@ -233,8 +137,6 @@ dma_channel_add_remove(struct dma_client *client,
 			ack = DMA_ACK;
 		else
 			break;
-
-		async_tx_rebalance();
 		break;
 	case DMA_RESOURCE_SUSPEND:
 	case DMA_RESOURCE_RESUME:
@@ -248,51 +150,18 @@ dma_channel_add_remove(struct dma_client *client,
 	return ack;
 }
 
-static int __init
-async_tx_init(void)
+static int __init async_tx_init(void)
 {
-	enum dma_transaction_type cap;
-
-	spin_lock_init(&async_tx_lock);
-	bitmap_fill(dma_cap_mask_all.bits, DMA_TX_TYPE_END);
-
-	/* an interrupt will never be an explicit operation type.
-	 * clearing this bit prevents allocation to a slot in 'channel_table'
-	 */
-	clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits);
-
-	for_each_dma_cap_mask(cap, dma_cap_mask_all) {
-		channel_table[cap] = alloc_percpu(struct chan_ref_percpu);
-		if (!channel_table[cap])
-			goto err;
-	}
-
-	channel_table_initialized = 1;
 	dma_async_client_register(&async_tx_dma);
 	dma_async_client_chan_request(&async_tx_dma);
 
 	printk(KERN_INFO "async_tx: api initialized (async)\n");
 
 	return 0;
-err:
-	printk(KERN_ERR "async_tx: initialization failure\n");
-
-	while (--cap >= 0)
-		free_percpu(channel_table[cap]);
-
-	return 1;
 }
 
 static void __exit async_tx_exit(void)
 {
-	enum dma_transaction_type cap;
-
-	channel_table_initialized = 0;
-
-	for_each_dma_cap_mask(cap, dma_cap_mask_all)
-		if (channel_table[cap])
-			free_percpu(channel_table[cap]);
-
 	dma_async_client_unregister(&async_tx_dma);
 }
 
@@ -308,16 +177,9 @@ __async_tx_find_channel(struct dma_async_tx_descriptor *depend_tx,
 {
 	/* see if we can keep the chain on one channel */
 	if (depend_tx &&
-		dma_has_cap(tx_type, depend_tx->chan->device->cap_mask))
+	    dma_has_cap(tx_type, depend_tx->chan->device->cap_mask))
 		return depend_tx->chan;
-	else if (likely(channel_table_initialized)) {
-		struct dma_chan_ref *ref;
-		int cpu = get_cpu();
-		ref = per_cpu_ptr(channel_table[tx_type], cpu)->ref;
-		put_cpu();
-		return ref ? ref->chan : NULL;
-	} else
-		return NULL;
+	return dma_find_channel(tx_type);
 }
 EXPORT_SYMBOL_GPL(__async_tx_find_channel);
 #else
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index df37073..c3e6fbb 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -284,6 +284,162 @@ static void dma_chan_release(struct dma_chan *chan)
 }
 
 /**
+ * dma_cap_mask_all - enable iteration over all operation types
+ */
+static dma_cap_mask_t dma_cap_mask_all;
+
+/**
+ * dma_chan_tbl_ent - tracks channel allocations per core/opertion
+ */
+struct dma_chan_tbl_ent {
+	struct dma_chan *chan;
+};
+
+/**
+ * channel_table - percpu lookup table for memory-to-memory offload providers
+ */
+static struct dma_chan_tbl_ent *channel_table[DMA_TX_TYPE_END];
+
+static int __init dma_channel_table_init(void)
+{
+	enum dma_transaction_type cap;
+	int err = 0;
+
+	bitmap_fill(dma_cap_mask_all.bits, DMA_TX_TYPE_END);
+
+	/* 'interrupt' and 'slave' are channel capabilities, but are not
+	 * associated with an operation so they do not need an entry in the
+	 * channel_table
+	 */
+	clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits);
+	clear_bit(DMA_SLAVE, dma_cap_mask_all.bits);
+
+	for_each_dma_cap_mask(cap, dma_cap_mask_all) {
+		channel_table[cap] = alloc_percpu(struct dma_chan_tbl_ent);
+		if (!channel_table[cap]) {
+			err = 1;
+			break;
+		}
+	}
+
+	if (err) {
+		pr_err("dmaengine: initialization failure\n");
+		for_each_dma_cap_mask(cap, dma_cap_mask_all)
+			if (channel_table[cap])
+				free_percpu(channel_table[cap]);
+	}
+
+	return err;
+}
+subsys_initcall(dma_channel_table_init);
+
+/**
+ * dma_find_channel - find a channel to carry out the operation
+ * @tx_type: transaction type
+ */
+struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type)
+{
+	struct dma_chan *chan;
+	int cpu;
+
+	WARN_ONCE(dmaengine_ref_count == 0,
+		  "client called %s without a reference", __func__);
+
+	cpu = get_cpu();
+	chan = per_cpu_ptr(channel_table[tx_type], cpu)->chan;
+	put_cpu();
+
+	return chan;
+}
+EXPORT_SYMBOL(dma_find_channel);
+
+/**
+ * nth_chan - returns the nth channel of the given capability
+ * @cap: capability to match
+ * @n: nth channel desired
+ *
+ * Defaults to returning the channel with the desired capability and the
+ * lowest reference count when 'n' cannot be satisfied
+ */
+static struct dma_chan *nth_chan(enum dma_transaction_type cap, int n)
+{
+	struct dma_device *device;
+	struct dma_chan *chan;
+	struct dma_chan *ret = NULL;
+	struct dma_chan *min = NULL;
+
+	list_for_each_entry(device, &dma_device_list, global_node) {
+		if (!dma_has_cap(cap, device->cap_mask))
+			continue;
+		list_for_each_entry(chan, &device->channels, device_node) {
+			if (!chan->client_count)
+				continue;
+			if (!min)
+				min = chan;
+			else if (chan->table_count < min->table_count)
+				min = chan;
+
+			if (n-- == 0) {
+				ret = chan;
+				break; /* done */
+			}
+		}
+		if (ret)
+			break; /* done */
+	}
+
+	if (!ret)
+		ret = min;
+
+	if (ret)
+		ret->table_count++;
+
+	return ret;
+}
+
+/**
+ * dma_channel_rebalance - redistribute the available channels
+ *
+ * Optimize for cpu isolation (each cpu gets a dedicated channel for an
+ * operation type) in the SMP case,  and opertaion isolation (avoid
+ * multi-tasking channels) in the uniprocessor case.  Must be called
+ * under dma_list_mutex.
+ */
+static void dma_channel_rebalance(void)
+{
+	struct dma_chan *chan;
+	struct dma_device *device;
+	int cpu;
+	int cap;
+	int n;
+
+	/* undo the last distribution */
+	for_each_dma_cap_mask(cap, dma_cap_mask_all)
+		for_each_possible_cpu(cpu)
+			per_cpu_ptr(channel_table[cap], cpu)->chan = NULL;
+
+	list_for_each_entry(device, &dma_device_list, global_node)
+		list_for_each_entry(chan, &device->channels, device_node)
+			chan->table_count = 0;
+
+	/* don't populate the channel_table if no clients are available */
+	if (!dmaengine_ref_count)
+		return;
+
+	/* redistribute available channels */
+	n = 0;
+	for_each_dma_cap_mask(cap, dma_cap_mask_all)
+		for_each_online_cpu(cpu) {
+			if (num_possible_cpus() > 1)
+				chan = nth_chan(cap, n++);
+			else
+				chan = nth_chan(cap, -1);
+
+			per_cpu_ptr(channel_table[cap], cpu)->chan = chan;
+		}
+}
+
+/**
  * dma_chans_notify_available - broadcast available channels to the clients
  */
 static void dma_clients_notify_available(void)
@@ -457,6 +613,7 @@ int dma_async_device_register(struct dma_device *device)
 		}
 	}
 	list_add_tail(&device->global_node, &dma_device_list);
+	dma_channel_rebalance();
 	mutex_unlock(&dma_list_mutex);
 
 	dma_clients_notify_available();
@@ -498,6 +655,7 @@ void dma_async_device_unregister(struct dma_device *device)
 
 	mutex_lock(&dma_list_mutex);
 	list_del(&device->global_node);
+	dma_channel_rebalance();
 	mutex_unlock(&dma_list_mutex);
 
 	list_for_each_entry(chan, &device->channels, device_node) {
@@ -743,3 +901,4 @@ static int __init dma_bus_init(void)
 }
 subsys_initcall(dma_bus_init);
 
+
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index d18d37d..b466f02 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -182,6 +182,7 @@ struct dma_chan_percpu {
  * @device_node: used to add this to the device chan list
  * @local: per-cpu pointer to a struct dma_chan_percpu
  * @client-count: how many clients are using this channel
+ * @table_count: number of appearances in the mem-to-mem allocation table
  */
 struct dma_chan {
 	struct dma_device *device;
@@ -198,6 +199,7 @@ struct dma_chan {
 	struct list_head device_node;
 	struct dma_chan_percpu *local;
 	int client_count;
+	int table_count;
 };
 
 #define to_dma_chan(p) container_of(p, struct dma_chan, dev)
@@ -468,6 +470,7 @@ static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descript
 int dma_async_device_register(struct dma_device *device);
 void dma_async_device_unregister(struct dma_device *device);
 void dma_run_dependencies(struct dma_async_tx_descriptor *tx);
+struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type);
 
 /* --- Helper iov-locking functions --- */
 



* [PATCH 05/13] dmaengine: provide a common 'issue_pending_all' implementation
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (3 preceding siblings ...)
  2008-11-14 21:34 ` [PATCH 04/13] dmaengine: centralize channel allocation, introduce dma_find_channel Dan Williams
@ 2008-11-14 21:34 ` Dan Williams
  2008-11-14 21:34 ` [PATCH 06/13] net_dma: convert to dma_find_channel Dan Williams
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

async_tx and net_dma each have open-coded versions of issue_pending_all,
so provide a common routine in dmaengine.

The implementation needs to walk the global device list, so use rcu to
allow dma_issue_pending_all to run locklessly.  Clients protect themselves
from channel removal events by holding a dmaengine reference.
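
A hedged sketch of the intended client-side pattern (mirroring what
net_rx_action does after this patch): submit a batch of copies on whatever
public channels are handed out, then flush everything with one call (the
wrapper function is illustrative):

#include <linux/dmaengine.h>
#include <linux/string.h>

static void copy_batch(void **dst, void **src, size_t *len, int nr_bufs)
{
	int i;

	for (i = 0; i < nr_bufs; i++) {
		struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

		if (chan)
			dma_async_memcpy_buf_to_buf(chan, dst[i], src[i],
						    len[i]);
		else
			memcpy(dst[i], src[i], len[i]);
	}

	/* one lockless (rcu-protected) walk kicks every channel; holding a
	 * dmaengine reference is what keeps channels from vanishing here
	 */
	dma_issue_pending_all();
}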

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 crypto/async_tx/async_tx.c |   12 ------------
 drivers/dma/dmaengine.c    |   27 ++++++++++++++++++++++++---
 include/linux/async_tx.h   |    2 +-
 include/linux/dmaengine.h  |    1 +
 net/core/dev.c             |    9 +--------
 5 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c
index b88bb1f..2cdf7a0 100644
--- a/crypto/async_tx/async_tx.c
+++ b/crypto/async_tx/async_tx.c
@@ -45,18 +45,6 @@ static DEFINE_SPINLOCK(async_tx_lock);
 
 static LIST_HEAD(async_tx_master_list);
 
-/* async_tx_issue_pending_all - start all transactions on all channels */
-void async_tx_issue_pending_all(void)
-{
-	struct dma_chan_ref *ref;
-
-	rcu_read_lock();
-	list_for_each_entry_rcu(ref, &async_tx_master_list, node)
-		ref->chan->device->device_issue_pending(ref->chan);
-	rcu_read_unlock();
-}
-EXPORT_SYMBOL_GPL(async_tx_issue_pending_all);
-
 static void
 free_dma_chan_ref(struct rcu_head *rcu)
 {
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index c3e6fbb..ec483cc 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -70,6 +70,7 @@
 #include <linux/rcupdate.h>
 #include <linux/mutex.h>
 #include <linux/jiffies.h>
+#include <linux/rculist.h>
 
 static DEFINE_MUTEX(dma_list_mutex);
 static LIST_HEAD(dma_device_list);
@@ -354,6 +355,26 @@ struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type)
 EXPORT_SYMBOL(dma_find_channel);
 
 /**
+ * dma_issue_pending_all - flush all pending operations across all channels
+ */
+void dma_issue_pending_all(void)
+{
+	struct dma_device *device;
+	struct dma_chan *chan;
+
+	WARN_ONCE(dmaengine_ref_count == 0,
+		  "client called %s without a reference", __func__);
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(device, &dma_device_list, global_node)
+		list_for_each_entry(chan, &device->channels, device_node)
+			if (chan->client_count)
+				device->device_issue_pending(chan);
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(dma_issue_pending_all);
+
+/**
  * nth_chan - returns the nth channel of the given capability
  * @cap: capability to match
  * @n: nth channel desired
@@ -477,7 +498,7 @@ void dma_async_client_register(struct dma_client *client)
 			err = dma_chan_get(chan);
 			if (err == -ENODEV) {
 				/* module removed before we could use it */
-				list_del_init(&device->global_node);
+				list_del_rcu(&device->global_node);
 				break;
 			} else if (err)
 				pr_err("dmaengine: failed to get %s: (%d)",
@@ -612,7 +633,7 @@ int dma_async_device_register(struct dma_device *device)
 			goto err_out;
 		}
 	}
-	list_add_tail(&device->global_node, &dma_device_list);
+	list_add_tail_rcu(&device->global_node, &dma_device_list);
 	dma_channel_rebalance();
 	mutex_unlock(&dma_list_mutex);
 
@@ -654,7 +675,7 @@ void dma_async_device_unregister(struct dma_device *device)
 	struct dma_chan *chan;
 
 	mutex_lock(&dma_list_mutex);
-	list_del(&device->global_node);
+	list_del_rcu(&device->global_node);
 	dma_channel_rebalance();
 	mutex_unlock(&dma_list_mutex);
 
diff --git a/include/linux/async_tx.h b/include/linux/async_tx.h
index 1c81677..45f6297 100644
--- a/include/linux/async_tx.h
+++ b/include/linux/async_tx.h
@@ -59,7 +59,7 @@ enum async_tx_flags {
 };
 
 #ifdef CONFIG_DMA_ENGINE
-void async_tx_issue_pending_all(void);
+#define async_tx_issue_pending_all dma_issue_pending_all
 #ifdef CONFIG_ARCH_HAS_ASYNC_TX_FIND_CHANNEL
 #include <asm/async_tx.h>
 #else
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index b466f02..57a43ad 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -471,6 +471,7 @@ int dma_async_device_register(struct dma_device *device);
 void dma_async_device_unregister(struct dma_device *device);
 void dma_run_dependencies(struct dma_async_tx_descriptor *tx);
 struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type);
+void dma_issue_pending_all(void);
 
 /* --- Helper iov-locking functions --- */
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 9174c77..301a449 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2442,14 +2442,7 @@ out:
 	 * There may not be any more sk_buffs coming right now, so push
 	 * any pending DMA copies to hardware
 	 */
-	if (!cpus_empty(net_dma.channel_mask)) {
-		int chan_idx;
-		for_each_cpu_mask_nr(chan_idx, net_dma.channel_mask) {
-			struct dma_chan *chan = net_dma.channels[chan_idx];
-			if (chan)
-				dma_async_memcpy_issue_pending(chan);
-		}
-	}
+	dma_issue_pending_all();
 #endif
 
 	return;


* [PATCH 06/13] net_dma: convert to dma_find_channel
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (4 preceding siblings ...)
  2008-11-14 21:34 ` [PATCH 05/13] dmaengine: provide a common 'issue_pending_all' implementation Dan Williams
@ 2008-11-14 21:34 ` Dan Williams
  2008-11-14 21:34 ` [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels Dan Williams
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

Use the general-purpose channel allocation provided by dmaengine.
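
For reference, the calling pattern after this conversion looks roughly
like the sketch below.  This is not the exact tcp receive path; 'dst',
'src', 'len' and 'cookie' are placeholders, and the memcpy helper is
just one consumer of dma_find_channel():

        struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

        if (chan)
                /* hand the copy to the dma engine */
                cookie = dma_async_memcpy_buf_to_buf(chan, dst, src, len);
        else
                /* no channel (or no client reference held), copy on the cpu */
                memcpy(dst, src, len);

dma_find_channel() only returns a channel while at least one client
holds a dmaengine reference, so the cpu fallback path must remain.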

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 include/linux/netdevice.h |    3 ---
 include/net/netdma.h      |   11 -----------
 net/core/dev.c            |   40 ----------------------------------------
 net/ipv4/tcp.c            |    4 ++--
 net/ipv4/tcp_input.c      |    2 +-
 net/ipv4/tcp_ipv4.c       |    2 +-
 net/ipv6/tcp_ipv6.c       |    2 +-
 7 files changed, 5 insertions(+), 59 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 9d77b1d..800866e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1004,9 +1004,6 @@ struct softnet_data
 	struct sk_buff		*completion_queue;
 
 	struct napi_struct	backlog;
-#ifdef CONFIG_NET_DMA
-	struct dma_chan		*net_dma;
-#endif
 };
 
 DECLARE_PER_CPU(struct softnet_data,softnet_data);
diff --git a/include/net/netdma.h b/include/net/netdma.h
index cbe2737..8ba8ce2 100644
--- a/include/net/netdma.h
+++ b/include/net/netdma.h
@@ -24,17 +24,6 @@
 #include <linux/dmaengine.h>
 #include <linux/skbuff.h>
 
-static inline struct dma_chan *get_softnet_dma(void)
-{
-	struct dma_chan *chan;
-
-	rcu_read_lock();
-	chan = rcu_dereference(__get_cpu_var(softnet_data).net_dma);
-	rcu_read_unlock();
-
-	return chan;
-}
-
 int dma_skb_copy_datagram_iovec(struct dma_chan* chan,
 		struct sk_buff *skb, int offset, struct iovec *to,
 		size_t len, struct dma_pinned_list *pinned_list);
diff --git a/net/core/dev.c b/net/core/dev.c
index 301a449..56e6965 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4591,44 +4591,6 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 
 #ifdef CONFIG_NET_DMA
 /**
- * net_dma_rebalance - try to maintain one DMA channel per CPU
- * @net_dma: DMA client and associated data (lock, channels, channel_mask)
- *
- * This is called when the number of channels allocated to the net_dma client
- * changes.  The net_dma client tries to have one DMA channel per CPU.
- */
-
-static void net_dma_rebalance(struct net_dma *net_dma)
-{
-	unsigned int cpu, i, n, chan_idx;
-	struct dma_chan *chan;
-
-	if (cpus_empty(net_dma->channel_mask)) {
-		for_each_online_cpu(cpu)
-			rcu_assign_pointer(per_cpu(softnet_data, cpu).net_dma, NULL);
-		return;
-	}
-
-	i = 0;
-	cpu = first_cpu(cpu_online_map);
-
-	for_each_cpu_mask_nr(chan_idx, net_dma->channel_mask) {
-		chan = net_dma->channels[chan_idx];
-
-		n = ((num_online_cpus() / cpus_weight(net_dma->channel_mask))
-		   + (i < (num_online_cpus() %
-			cpus_weight(net_dma->channel_mask)) ? 1 : 0));
-
-		while(n) {
-			per_cpu(softnet_data, cpu).net_dma = chan;
-			cpu = next_cpu(cpu, cpu_online_map);
-			n--;
-		}
-		i++;
-	}
-}
-
-/**
  * netdev_dma_event - event callback for the net_dma_client
  * @client: should always be net_dma_client
  * @chan: DMA channel for the event
@@ -4657,7 +4619,6 @@ netdev_dma_event(struct dma_client *client, struct dma_chan *chan,
 			ack = DMA_ACK;
 			net_dma->channels[pos] = chan;
 			cpu_set(pos, net_dma->channel_mask);
-			net_dma_rebalance(net_dma);
 		}
 		break;
 	case DMA_RESOURCE_REMOVED:
@@ -4672,7 +4633,6 @@ netdev_dma_event(struct dma_client *client, struct dma_chan *chan,
 			ack = DMA_ACK;
 			cpu_clear(pos, net_dma->channel_mask);
 			net_dma->channels[i] = NULL;
-			net_dma_rebalance(net_dma);
 		}
 		break;
 	default:
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 40ec69f..5da9edb 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1315,7 +1315,7 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 		if ((available < target) &&
 		    (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) &&
 		    !sysctl_tcp_low_latency &&
-		    __get_cpu_var(softnet_data).net_dma) {
+		    dma_find_channel(DMA_MEMCPY)) {
 			preempt_enable_no_resched();
 			tp->ucopy.pinned_list =
 					dma_pin_iovec_pages(msg->msg_iov, len);
@@ -1525,7 +1525,7 @@ do_prequeue:
 		if (!(flags & MSG_TRUNC)) {
 #ifdef CONFIG_NET_DMA
 			if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-				tp->ucopy.dma_chan = get_softnet_dma();
+				tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY);
 
 			if (tp->ucopy.dma_chan) {
 				tp->ucopy.dma_cookie = dma_skb_copy_datagram_iovec(
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index d77c0d2..620ca4e 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4682,7 +4682,7 @@ static int tcp_dma_try_early_copy(struct sock *sk, struct sk_buff *skb,
 		return 0;
 
 	if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-		tp->ucopy.dma_chan = get_softnet_dma();
+		tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY);
 
 	if (tp->ucopy.dma_chan && skb_csum_unnecessary(skb)) {
 
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 5c8fa7f..b671d49 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1600,7 +1600,7 @@ process:
 #ifdef CONFIG_NET_DMA
 		struct tcp_sock *tp = tcp_sk(sk);
 		if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-			tp->ucopy.dma_chan = get_softnet_dma();
+			tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY);
 		if (tp->ucopy.dma_chan)
 			ret = tcp_v4_do_rcv(sk, skb);
 		else
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index b6b356b..d41a896 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1640,7 +1640,7 @@ process:
 #ifdef CONFIG_NET_DMA
 		struct tcp_sock *tp = tcp_sk(sk);
 		if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-			tp->ucopy.dma_chan = get_softnet_dma();
+			tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY);
 		if (tp->ucopy.dma_chan)
 			ret = tcp_v6_do_rcv(sk, skb);
 		else


* [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (5 preceding siblings ...)
  2008-11-14 21:34 ` [PATCH 06/13] net_dma: convert to dma_find_channel Dan Williams
@ 2008-11-14 21:34 ` Dan Williams
  2008-12-02 15:52   ` Guennadi Liakhovetski
                     ` (2 more replies)
  2008-11-14 21:34 ` [PATCH 08/13] dmatest: convert to dma_request_channel Dan Williams
                   ` (6 subsequent siblings)
  13 siblings, 3 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

This interface is primarily for device-to-memory clients which need to
search for dma channels with platform-specific characteristics.  The
prototype is:

struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                                     dma_filter_fn filter_fn,
                                     void *filter_param);

When the optional 'filter_fn' parameter is set to NULL,
dma_request_channel simply returns the first channel that satisfies the
capability mask.  Otherwise, when the mask parameter is insufficient for
specifying the necessary channel, the filter_fn routine can be used to
disposition the available channels in the system.  The filter_fn routine
is called once for each free channel in the system.  Upon seeing a
suitable channel, filter_fn returns DMA_ACK, which flags that channel to
be the return value from dma_request_channel.  A channel allocated via
this interface is exclusive to the caller until dma_release_channel()
is called.

To keep the general-purpose allocator from consuming every channel, the
DMA_PRIVATE capability is provided to exclude a dma_device from
general-purpose (memory-to-memory) consideration.
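
A minimal usage sketch follows.  This is hypothetical driver code:
'my_filter' and 'my_dma_dev' are made-up names, and dma_cap_zero() is a
helper that only appears later in this series:

static enum dma_state_client my_filter(struct dma_chan *chan, void *param)
{
        struct device *dma_dev = param;

        /* only accept a channel provided by the controller we expect */
        return chan->device->dev == dma_dev ? DMA_ACK : DMA_DUP;
}

...and in the probe path:

        dma_cap_mask_t mask;
        struct dma_chan *chan;

        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);
        chan = dma_request_channel(mask, my_filter, my_dma_dev);
        if (!chan)
                return -ENODEV; /* or fall back to PIO */
        ...
        dma_release_channel(chan);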

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 drivers/dma/dmaengine.c   |  161 ++++++++++++++++++++++++++++++++++++++++-----
 include/linux/dmaengine.h |   16 ++++
 2 files changed, 159 insertions(+), 18 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index ec483cc..46fd5fa 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -190,7 +190,7 @@ static int dma_chan_get(struct dma_chan *chan)
 			chan->client_count = 0;
 			module_put(owner);
 			err = -ENOMEM;
-		} else
+		} else if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
 			balance_ref_count(chan);
 	}
 
@@ -221,6 +221,8 @@ static void dma_client_chan_alloc(struct dma_client *client)
 
 	/* Find a channel */
 	list_for_each_entry(device, &dma_device_list, global_node) {
+		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
+			continue;
 		/* Does the client require a specific DMA controller? */
 		if (client->slave && client->slave->dma_dev
 				&& client->slave->dma_dev != device->dev)
@@ -313,6 +315,7 @@ static int __init dma_channel_table_init(void)
 	 * channel_table
 	 */
 	clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits);
+	clear_bit(DMA_PRIVATE, dma_cap_mask_all.bits);
 	clear_bit(DMA_SLAVE, dma_cap_mask_all.bits);
 
 	for_each_dma_cap_mask(cap, dma_cap_mask_all) {
@@ -366,10 +369,13 @@ void dma_issue_pending_all(void)
 		  "client called %s without a reference", __func__);
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(device, &dma_device_list, global_node)
+	list_for_each_entry_rcu(device, &dma_device_list, global_node) {
+		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
+			continue;
 		list_for_each_entry(chan, &device->channels, device_node)
 			if (chan->client_count)
 				device->device_issue_pending(chan);
+	}
 	rcu_read_unlock();
 }
 EXPORT_SYMBOL(dma_issue_pending_all);
@@ -390,7 +396,8 @@ static struct dma_chan *nth_chan(enum dma_transaction_type cap, int n)
 	struct dma_chan *min = NULL;
 
 	list_for_each_entry(device, &dma_device_list, global_node) {
-		if (!dma_has_cap(cap, device->cap_mask))
+		if (!dma_has_cap(cap, device->cap_mask) ||
+		    dma_has_cap(DMA_PRIVATE, device->cap_mask))
 			continue;
 		list_for_each_entry(chan, &device->channels, device_node) {
 			if (!chan->client_count)
@@ -439,9 +446,12 @@ static void dma_channel_rebalance(void)
 		for_each_possible_cpu(cpu)
 			per_cpu_ptr(channel_table[cap], cpu)->chan = NULL;
 
-	list_for_each_entry(device, &dma_device_list, global_node)
+	list_for_each_entry(device, &dma_device_list, global_node) {
+		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
+			continue;
 		list_for_each_entry(chan, &device->channels, device_node)
 			chan->table_count = 0;
+	}
 
 	/* don't populate the channel_table if no clients are available */
 	if (!dmaengine_ref_count)
@@ -460,6 +470,112 @@ static void dma_channel_rebalance(void)
 		}
 }
 
+static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev)
+{
+	struct dma_chan *chan;
+	struct dma_chan *ret = NULL;
+
+	/* devices with multiple channels need special handling as we need to
+	 * ensure that all channels are either private or public.
+	 */
+	if (dev->chancnt > 1 && !dma_has_cap(DMA_PRIVATE, dev->cap_mask))
+		list_for_each_entry(chan, &dev->channels, device_node) {
+			/* some channels are already publicly allocated */
+			if (chan->client_count)
+				return NULL;
+		}
+
+	list_for_each_entry(chan, &dev->channels, device_node) {
+		if (!__dma_chan_satisfies_mask(chan, mask)) {
+			pr_debug("%s: %s wrong capabilities\n",
+				 __func__, dev_name(&chan->dev));
+			continue;
+		}
+		if (chan->client_count) {
+			pr_debug("%s: %s busy\n",
+				 __func__, dev_name(&chan->dev));
+			continue;
+		}
+		ret = chan;
+		break;
+	}
+
+	return ret;
+}
+
+/**
+ * dma_request_channel - try to allocate an exclusive channel
+ * @mask: capabilities that the channel must satisfy
+ * @fn: optional callback to disposition available channels
+ * @fn_param: opaque parameter to pass to dma_filter_fn
+ */
+struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param)
+{
+	struct dma_device *device, *_d;
+	struct dma_chan *chan = NULL;
+	enum dma_state_client ack;
+	int err;
+
+	/* Find a channel */
+	mutex_lock(&dma_list_mutex);
+	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
+		chan = private_candidate(mask, device);
+		if (!chan)
+			continue;
+
+		if (fn)
+			ack = fn(chan, fn_param);
+		else
+			ack = DMA_ACK;
+
+		if (ack == DMA_ACK) {
+			/* Found a suitable channel, try to grab, prep, and
+			 * return it.  We first set DMA_PRIVATE to disable
+			 * balance_ref_count as this channel will not be
+			 * published in the general-purpose allocator
+			 */
+			dma_cap_set(DMA_PRIVATE, device->cap_mask);
+			err = dma_chan_get(chan);
+
+			if (err == -ENODEV) {
+				pr_debug("%s: %s module removed\n", __func__,
+					 dev_name(&chan->dev));
+				list_del_rcu(&device->global_node);
+			} else if (err)
+				pr_err("dmaengine: failed to get %s: (%d)",
+				       dev_name(&chan->dev), err);
+			else
+				break;
+		} else if (ack == DMA_DUP) {
+			pr_debug("%s: %s filter said DMA_DUP\n",
+				 __func__, dev_name(&chan->dev));
+		} else if (ack == DMA_NAK) {
+			pr_debug("%s: %s filter said DMA_NAK\n",
+				 __func__, dev_name(&chan->dev));
+			break;
+		} else
+			WARN_ONCE(1, "filter_fn: unknown response?\n");
+		chan = NULL;
+	}
+	mutex_unlock(&dma_list_mutex);
+
+	pr_debug("%s: %s (%s)\n", __func__, chan ? "success" : "fail",
+		 chan ? dev_name(&chan->dev) : NULL);
+
+	return chan;
+}
+EXPORT_SYMBOL_GPL(__dma_request_channel);
+
+void dma_release_channel(struct dma_chan *chan)
+{
+	mutex_lock(&dma_list_mutex);
+	WARN_ONCE(chan->client_count != 1,
+		  "chan reference count %d != 1\n", chan->client_count);
+	dma_chan_put(chan);
+	mutex_unlock(&dma_list_mutex);
+}
+EXPORT_SYMBOL_GPL(dma_release_channel);
+
 /**
  * dma_chans_notify_available - broadcast available channels to the clients
  */
@@ -493,7 +609,9 @@ void dma_async_client_register(struct dma_client *client)
 	dmaengine_ref_count++;
 
 	/* try to grab channels */
-	list_for_each_entry_safe(device, _d, &dma_device_list, global_node)
+	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
+		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
+			continue;
 		list_for_each_entry(chan, &device->channels, device_node) {
 			err = dma_chan_get(chan);
 			if (err == -ENODEV) {
@@ -504,7 +622,7 @@ void dma_async_client_register(struct dma_client *client)
 				pr_err("dmaengine: failed to get %s: (%d)",
 				       dev_name(&chan->dev), err);
 		}
-
+	}
 
 	list_add_tail(&client->global_node, &dma_client_list);
 	mutex_unlock(&dma_list_mutex);
@@ -529,9 +647,12 @@ void dma_async_client_unregister(struct dma_client *client)
 	dmaengine_ref_count--;
 	BUG_ON(dmaengine_ref_count < 0);
 	/* drop channel references */
-	list_for_each_entry(device, &dma_device_list, global_node)
+	list_for_each_entry(device, &dma_device_list, global_node) {
+		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
+			continue;
 		list_for_each_entry(chan, &device->channels, device_node)
 			dma_chan_put(chan);
+	}
 
 	list_del(&client->global_node);
 	mutex_unlock(&dma_list_mutex);
@@ -618,21 +739,25 @@ int dma_async_device_register(struct dma_device *device)
 		chan->slow_ref = 0;
 		INIT_RCU_HEAD(&chan->rcu);
 	}
+	device->chancnt = chancnt;
 
 	mutex_lock(&dma_list_mutex);
-	list_for_each_entry(chan, &device->channels, device_node) {
-		/* if clients are already waiting for channels we need to
-		 * take references on their behalf
-		 */
-		if (dmaengine_ref_count && dma_chan_get(chan) == -ENODEV) {
-			/* note we can only get here for the first
-			 * channel as the remaining channels are
-			 * guaranteed to get a reference
+	/* take references on public channels */
+	if (!dma_has_cap(DMA_PRIVATE, device->cap_mask))
+		list_for_each_entry(chan, &device->channels, device_node) {
+			/* if clients are already waiting for channels we need
+			 * to take references on their behalf
 			 */
-			rc = -ENODEV;
-			goto err_out;
+			if (dmaengine_ref_count &&
+			    dma_chan_get(chan) == -ENODEV) {
+				/* note we can only get here for the first
+				 * channel as the remaining channels are
+				 * guaranteed to get a reference
+				 */
+				rc = -ENODEV;
+				goto err_out;
+			}
 		}
-	}
 	list_add_tail_rcu(&device->global_node, &dma_device_list);
 	dma_channel_rebalance();
 	mutex_unlock(&dma_list_mutex);
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 57a43ad..fe40bc0 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -89,6 +89,7 @@ enum dma_transaction_type {
 	DMA_MEMSET,
 	DMA_MEMCPY_CRC32C,
 	DMA_INTERRUPT,
+	DMA_PRIVATE,
 	DMA_SLAVE,
 };
 
@@ -224,6 +225,18 @@ typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client,
 		struct dma_chan *chan, enum dma_state state);
 
 /**
+ * typedef dma_filter_fn - callback filter for dma_request_channel
+ * @chan: channel to be reviewed
+ * @filter_param: opaque parameter passed through dma_request_channel
+ *
+ * When this optional parameter is specified in a call to dma_request_channel a
+ * suitable channel is passed to this routine for further dispositioning before
+ * being returned.  Where 'suitable' indicates a non-busy channel that
+ * satisfies the given capability mask.
+ */
+typedef enum dma_state_client (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+
+/**
  * struct dma_client - info on the entity making use of DMA services
  * @event_callback: func ptr to call when something happens
  * @cap_mask: only return channels that satisfy the requested capabilities
@@ -472,6 +485,9 @@ void dma_async_device_unregister(struct dma_device *device);
 void dma_run_dependencies(struct dma_async_tx_descriptor *tx);
 struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type);
 void dma_issue_pending_all(void);
+#define dma_request_channel(mask, x, y) __dma_request_channel(&(mask), x, y)
+struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param);
+void dma_release_channel(struct dma_chan *chan);
 
 /* --- Helper iov-locking functions --- */
 


* [PATCH 08/13] dmatest: convert to dma_request_channel
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (6 preceding siblings ...)
  2008-11-14 21:34 ` [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels Dan Williams
@ 2008-11-14 21:34 ` Dan Williams
  2008-11-15  6:17   ` Andrew Morton
  2008-11-14 21:35 ` [PATCH 09/13] atmel-mci: convert to dma_request_channel and down-level dma_slave Dan Williams
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:34 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

Replace the client registration infrastructure with a custom loop to
poll for channels.  Once dma_request_channel returns NULL, stop asking
for channels.  A user-visible side effect of this change is that loading
the dmatest module before loading a dma driver will result in no
channels being found; previously dmatest would get a callback.  To
facilitate testing in the built-in case, dmatest_init is marked as a
late_initcall.  Another side effect is that channels under test cannot
be used for any other purpose.
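
The resulting pattern for any client that wants more than one exclusive
channel is to keep asking until dma_request_channel() returns NULL.  A
rough sketch, where 'setup_channel' is a placeholder rather than a
dmaengine interface:

        dma_cap_mask_t mask;
        struct dma_chan *chan;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);
        for (;;) {
                chan = dma_request_channel(mask, NULL, NULL);
                if (!chan)
                        break;  /* no more matching channels */
                if (setup_channel(chan) < 0) {
                        dma_release_channel(chan);
                        break;
                }
        }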

Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 drivers/dma/dmatest.c     |  109 ++++++++++++++++-----------------------------
 include/linux/dmaengine.h |    6 ++
 2 files changed, 44 insertions(+), 71 deletions(-)

diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index db40508..2a7724d 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -35,7 +35,7 @@ MODULE_PARM_DESC(threads_per_chan,
 
 static unsigned int max_channels;
 module_param(max_channels, uint, S_IRUGO);
-MODULE_PARM_DESC(nr_channels,
+MODULE_PARM_DESC(max_channels,
 		"Maximum number of channels to use (default: all)");
 
 /*
@@ -71,7 +71,7 @@ struct dmatest_chan {
 
 /*
  * These are protected by dma_list_mutex since they're only used by
- * the DMA client event callback
+ * the DMA filter function callback
  */
 static LIST_HEAD(dmatest_channels);
 static unsigned int nr_channels;
@@ -317,21 +317,16 @@ static void dmatest_cleanup_channel(struct dmatest_chan *dtc)
 	kfree(dtc);
 }
 
-static enum dma_state_client dmatest_add_channel(struct dma_chan *chan)
+static int dmatest_add_channel(struct dma_chan *chan)
 {
 	struct dmatest_chan	*dtc;
 	struct dmatest_thread	*thread;
 	unsigned int		i;
 
-	/* Have we already been told about this channel? */
-	list_for_each_entry(dtc, &dmatest_channels, node)
-		if (dtc->chan == chan)
-			return DMA_DUP;
-
 	dtc = kmalloc(sizeof(struct dmatest_chan), GFP_KERNEL);
 	if (!dtc) {
 		pr_warning("dmatest: No memory for %s\n", dev_name(&chan->dev));
-		return DMA_NAK;
+		return -ENOMEM;
 	}
 
 	dtc->chan = chan;
@@ -365,81 +360,53 @@ static enum dma_state_client dmatest_add_channel(struct dma_chan *chan)
 	list_add_tail(&dtc->node, &dmatest_channels);
 	nr_channels++;
 
-	return DMA_ACK;
-}
-
-static enum dma_state_client dmatest_remove_channel(struct dma_chan *chan)
-{
-	struct dmatest_chan	*dtc, *_dtc;
-
-	list_for_each_entry_safe(dtc, _dtc, &dmatest_channels, node) {
-		if (dtc->chan == chan) {
-			list_del(&dtc->node);
-			dmatest_cleanup_channel(dtc);
-			pr_debug("dmatest: lost channel %s\n",
-					dev_name(&chan->dev));
-			return DMA_ACK;
-		}
-	}
-
-	return DMA_DUP;
+	return 0;
 }
 
-/*
- * Start testing threads as new channels are assigned to us, and kill
- * them when the channels go away.
- *
- * When we unregister the client, all channels are removed so this
- * will also take care of cleaning things up when the module is
- * unloaded.
- */
-static enum dma_state_client
-dmatest_event(struct dma_client *client, struct dma_chan *chan,
-		enum dma_state state)
+static enum dma_state_client filter(struct dma_chan *chan, void *param)
 {
-	enum dma_state_client	ack = DMA_NAK;
-
-	switch (state) {
-	case DMA_RESOURCE_AVAILABLE:
-		if (!dmatest_match_channel(chan)
-				|| !dmatest_match_device(chan->device))
-			ack = DMA_DUP;
-		else if (max_channels && nr_channels >= max_channels)
-			ack = DMA_NAK;
-		else
-			ack = dmatest_add_channel(chan);
-		break;
-
-	case DMA_RESOURCE_REMOVED:
-		ack = dmatest_remove_channel(chan);
-		break;
-
-	default:
-		pr_info("dmatest: Unhandled event %u (%s)\n",
-				state, dev_name(&chan->dev));
-		break;
-	}
-
-	return ack;
+	if (!dmatest_match_channel(chan) || !dmatest_match_device(chan->device))
+		return DMA_DUP;
+	else
+		return DMA_ACK;
 }
 
-static struct dma_client dmatest_client = {
-	.event_callback	= dmatest_event,
-};
-
 static int __init dmatest_init(void)
 {
-	dma_cap_set(DMA_MEMCPY, dmatest_client.cap_mask);
-	dma_async_client_register(&dmatest_client);
-	dma_async_client_chan_request(&dmatest_client);
+	dma_cap_mask_t mask;
+	struct dma_chan *chan;
+
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_MEMCPY, mask);
+	for (;;) {
+		chan = dma_request_channel(mask, filter, NULL);
+		if (chan) {
+			if (dmatest_add_channel(chan) == 0)
+				continue;
+			else
+				dma_release_channel(chan);
+		} else
+			break; /* no more channels available */
+		if (max_channels && nr_channels >= max_channels)
+			break; /* we have all we need */
+	}
 
 	return 0;
 }
-module_init(dmatest_init);
+/* when compiled-in wait for drivers to load first */
+late_initcall(dmatest_init);
 
 static void __exit dmatest_exit(void)
 {
-	dma_async_client_unregister(&dmatest_client);
+	struct dmatest_chan *dtc, *_dtc;
+
+	list_for_each_entry_safe(dtc, _dtc, &dmatest_channels, node) {
+		list_del(&dtc->node);
+		dmatest_cleanup_channel(dtc);
+		pr_debug("dmatest: dropped channel %s\n",
+			 dev_name(&dtc->chan->dev));
+		dma_release_channel(dtc->chan);
+	}
 }
 module_exit(dmatest_exit);
 
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index fe40bc0..6f2d070 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -400,6 +400,12 @@ __dma_cap_set(enum dma_transaction_type tx_type, dma_cap_mask_t *dstp)
 	set_bit(tx_type, dstp->bits);
 }
 
+#define dma_cap_zero(mask) __dma_cap_zero(&(mask))
+static inline void __dma_cap_zero(dma_cap_mask_t *dstp)
+{
+	bitmap_zero(dstp->bits, DMA_TX_TYPE_END);
+}
+
 #define dma_has_cap(tx, mask) __dma_has_cap((tx), &(mask))
 static inline int
 __dma_has_cap(enum dma_transaction_type tx_type, dma_cap_mask_t *srcp)


* [PATCH 09/13] atmel-mci: convert to dma_request_channel and down-level dma_slave
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (7 preceding siblings ...)
  2008-11-14 21:34 ` [PATCH 08/13] dmatest: convert to dma_request_channel Dan Williams
@ 2008-11-14 21:35 ` Dan Williams
  2009-01-30 16:40   ` Atsushi Nemoto
  2008-11-14 21:35 ` [PATCH 10/13] dmaengine: replace dma_async_client_register with dmaengine_get Dan Williams
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:35 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

dma_request_channel provides an exclusive channel, so we no longer need to
pass slave data through dmaengine.
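
The controller-specific slave data now rides along as the filter
parameter instead of hanging off a dma_client.  Condensed from the
atmel-mci hunk below:

static enum dma_state_client filter(struct dma_chan *chan, void *slave)
{
        struct dw_dma_slave *dws = slave;

        /* only take a channel from the controller named in platform data */
        return dws->dma_dev == chan->device->dev ? DMA_ACK : DMA_DUP;
}

        ...
        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);
        host->dma.chan = dma_request_channel(mask, filter, dws);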

Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 arch/avr32/include/asm/atmel-mci.h  |    6 +-
 arch/avr32/mach-at32ap/at32ap700x.c |   15 +----
 drivers/dma/dmaengine.c             |    8 ---
 drivers/dma/dw_dmac.c               |   25 ++-------
 drivers/mmc/host/atmel-mci.c        |   98 ++++++++++-------------------------
 include/linux/dmaengine.h           |   38 --------------
 include/linux/dw_dmac.h             |   31 ++++++++---
 7 files changed, 62 insertions(+), 159 deletions(-)

diff --git a/arch/avr32/include/asm/atmel-mci.h b/arch/avr32/include/asm/atmel-mci.h
index 59f3fad..e5e54c6 100644
--- a/arch/avr32/include/asm/atmel-mci.h
+++ b/arch/avr32/include/asm/atmel-mci.h
@@ -3,7 +3,7 @@
 
 #define ATMEL_MCI_MAX_NR_SLOTS	2
 
-struct dma_slave;
+#include <linux/dw_dmac.h>
 
 /**
  * struct mci_slot_pdata - board-specific per-slot configuration
@@ -28,11 +28,11 @@ struct mci_slot_pdata {
 
 /**
  * struct mci_platform_data - board-specific MMC/SDcard configuration
- * @dma_slave: DMA slave interface to use in data transfers, or NULL.
+ * @dma_slave: DMA slave interface to use in data transfers.
  * @slot: Per-slot configuration data.
  */
 struct mci_platform_data {
-	struct dma_slave	*dma_slave;
+	struct dw_dma_slave	dma_slave;
 	struct mci_slot_pdata	slot[ATMEL_MCI_MAX_NR_SLOTS];
 };
 
diff --git a/arch/avr32/mach-at32ap/at32ap700x.c b/arch/avr32/mach-at32ap/at32ap700x.c
index 0c6e02f..359c01f 100644
--- a/arch/avr32/mach-at32ap/at32ap700x.c
+++ b/arch/avr32/mach-at32ap/at32ap700x.c
@@ -1305,7 +1305,7 @@ struct platform_device *__init
 at32_add_device_mci(unsigned int id, struct mci_platform_data *data)
 {
 	struct platform_device		*pdev;
-	struct dw_dma_slave		*dws;
+	struct dw_dma_slave		*dws = &data->dma_slave;
 	u32				pioa_mask;
 	u32				piob_mask;
 
@@ -1324,22 +1324,13 @@ at32_add_device_mci(unsigned int id, struct mci_platform_data *data)
 				ARRAY_SIZE(atmel_mci0_resource)))
 		goto fail;
 
-	if (data->dma_slave)
-		dws = kmemdup(to_dw_dma_slave(data->dma_slave),
-				sizeof(struct dw_dma_slave), GFP_KERNEL);
-	else
-		dws = kzalloc(sizeof(struct dw_dma_slave), GFP_KERNEL);
-
-	dws->slave.dev = &pdev->dev;
-	dws->slave.dma_dev = &dw_dmac0_device.dev;
-	dws->slave.reg_width = DMA_SLAVE_WIDTH_32BIT;
+	dws->dma_dev = &dw_dmac0_device.dev;
+	dws->reg_width = DW_DMA_SLAVE_WIDTH_32BIT;
 	dws->cfg_hi = (DWC_CFGH_SRC_PER(0)
 				| DWC_CFGH_DST_PER(1));
 	dws->cfg_lo &= ~(DWC_CFGL_HS_DST_POL
 				| DWC_CFGL_HS_SRC_POL);
 
-	data->dma_slave = &dws->slave;
-
 	if (platform_device_add_data(pdev, data,
 				sizeof(struct mci_platform_data)))
 		goto fail;
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 46fd5fa..1b694b8 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -223,10 +223,6 @@ static void dma_client_chan_alloc(struct dma_client *client)
 	list_for_each_entry(device, &dma_device_list, global_node) {
 		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
 			continue;
-		/* Does the client require a specific DMA controller? */
-		if (client->slave && client->slave->dma_dev
-				&& client->slave->dma_dev != device->dev)
-			continue;
 
 		list_for_each_entry(chan, &device->channels, device_node) {
 			if (!dma_chan_satisfies_mask(chan, client->cap_mask))
@@ -601,10 +597,6 @@ void dma_async_client_register(struct dma_client *client)
 	struct dma_chan *chan;
 	int err;
 
-	/* validate client data */
-	BUG_ON(dma_has_cap(DMA_SLAVE, client->cap_mask) &&
-		!client->slave);
-
 	mutex_lock(&dma_list_mutex);
 	dmaengine_ref_count++;
 
diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c
index 377dafa..dbd5080 100644
--- a/drivers/dma/dw_dmac.c
+++ b/drivers/dma/dw_dmac.c
@@ -567,7 +567,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	if (unlikely(!dws || !sg_len))
 		return NULL;
 
-	reg_width = dws->slave.reg_width;
+	reg_width = dws->reg_width;
 	prev = first = NULL;
 
 	sg_len = dma_map_sg(chan->dev.parent, sgl, sg_len, direction);
@@ -579,7 +579,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 				| DWC_CTLL_DST_FIX
 				| DWC_CTLL_SRC_INC
 				| DWC_CTLL_FC_M2P);
-		reg = dws->slave.tx_reg;
+		reg = dws->tx_reg;
 		for_each_sg(sgl, sg, sg_len, i) {
 			struct dw_desc	*desc;
 			u32		len;
@@ -625,7 +625,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 				| DWC_CTLL_SRC_FIX
 				| DWC_CTLL_FC_P2M);
 
-		reg = dws->slave.rx_reg;
+		reg = dws->rx_reg;
 		for_each_sg(sgl, sg, sg_len, i) {
 			struct dw_desc	*desc;
 			u32		len;
@@ -764,7 +764,6 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
 	struct dw_dma_chan	*dwc = to_dw_dma_chan(chan);
 	struct dw_dma		*dw = to_dw_dma(chan->device);
 	struct dw_desc		*desc;
-	struct dma_slave	*slave;
 	struct dw_dma_slave	*dws;
 	int			i;
 	u32			cfghi;
@@ -772,12 +771,6 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
 
 	dev_vdbg(&chan->dev, "alloc_chan_resources\n");
 
-	/* Channels doing slave DMA can only handle one client. */
-	if (dwc->dws || (client && client->slave)) {
-		if (chan->client_count)
-			return -EBUSY;
-	}
-
 	/* ASSERT:  channel is idle */
 	if (dma_readl(dw, CH_EN) & dwc->mask) {
 		dev_dbg(&chan->dev, "DMA channel not idle?\n");
@@ -789,23 +782,17 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
 	cfghi = DWC_CFGH_FIFO_MODE;
 	cfglo = 0;
 
-	slave = client->slave;
-	if (slave) {
+	dws = dwc->dws;
+	if (dws) {
 		/*
 		 * We need controller-specific data to set up slave
 		 * transfers.
 		 */
-		BUG_ON(!slave->dma_dev || slave->dma_dev != dw->dma.dev);
-
-		dws = container_of(slave, struct dw_dma_slave, slave);
+		BUG_ON(!dws->dma_dev || dws->dma_dev != dw->dma.dev);
 
-		dwc->dws = dws;
 		cfghi = dws->cfg_hi;
 		cfglo = dws->cfg_lo;
-	} else {
-		dwc->dws = NULL;
 	}
-
 	channel_writel(dwc, CFG_LO, cfglo);
 	channel_writel(dwc, CFG_HI, cfghi);
 
diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
index 6c11f4d..7a34118 100644
--- a/drivers/mmc/host/atmel-mci.c
+++ b/drivers/mmc/host/atmel-mci.c
@@ -1441,60 +1441,6 @@ static irqreturn_t atmci_detect_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-#ifdef CONFIG_MMC_ATMELMCI_DMA
-
-static inline struct atmel_mci *
-dma_client_to_atmel_mci(struct dma_client *client)
-{
-	return container_of(client, struct atmel_mci, dma.client);
-}
-
-static enum dma_state_client atmci_dma_event(struct dma_client *client,
-		struct dma_chan *chan, enum dma_state state)
-{
-	struct atmel_mci	*host;
-	enum dma_state_client	ret = DMA_NAK;
-
-	host = dma_client_to_atmel_mci(client);
-
-	switch (state) {
-	case DMA_RESOURCE_AVAILABLE:
-		spin_lock_bh(&host->lock);
-		if (!host->dma.chan) {
-			host->dma.chan = chan;
-			ret = DMA_ACK;
-		}
-		spin_unlock_bh(&host->lock);
-
-		if (ret == DMA_ACK)
-			dev_info(&host->pdev->dev,
-					"Using %s for DMA transfers\n",
-					chan->dev.bus_id);
-		break;
-
-	case DMA_RESOURCE_REMOVED:
-		spin_lock_bh(&host->lock);
-		if (host->dma.chan == chan) {
-			host->dma.chan = NULL;
-			ret = DMA_ACK;
-		}
-		spin_unlock_bh(&host->lock);
-
-		if (ret == DMA_ACK)
-			dev_info(&host->pdev->dev,
-					"Lost %s, falling back to PIO\n",
-					chan->dev.bus_id);
-		break;
-
-	default:
-		break;
-	}
-
-
-	return ret;
-}
-#endif /* CONFIG_MMC_ATMELMCI_DMA */
-
 static int __init atmci_init_slot(struct atmel_mci *host,
 		struct mci_slot_pdata *slot_data, unsigned int id,
 		u32 sdc_reg)
@@ -1598,6 +1544,18 @@ static void __exit atmci_cleanup_slot(struct atmel_mci_slot *slot,
 	mmc_free_host(slot->mmc);
 }
 
+#ifdef CONFIG_MMC_ATMELMCI_DMA
+static enum dma_state_client filter(struct dma_chan *chan, void *slave)
+{
+	struct dw_dma_slave *dws = slave;
+
+	if (dws->dma_dev == chan->device->dev)
+		return DMA_ACK;
+	else
+		return DMA_DUP;
+}
+#endif
+
 static int __init atmci_probe(struct platform_device *pdev)
 {
 	struct mci_platform_data	*pdata;
@@ -1650,22 +1608,20 @@ static int __init atmci_probe(struct platform_device *pdev)
 		goto err_request_irq;
 
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-	if (pdata->dma_slave) {
-		struct dma_slave *slave = pdata->dma_slave;
+	if (pdata->dma_slave.dma_dev) {
+		struct dw_dma_slave *dws = &pdata->dma_slave;
+		dma_cap_mask_t mask;
 
-		slave->tx_reg = regs->start + MCI_TDR;
-		slave->rx_reg = regs->start + MCI_RDR;
+		dws->tx_reg = regs->start + MCI_TDR;
+		dws->rx_reg = regs->start + MCI_RDR;
 
 		/* Try to grab a DMA channel */
-		host->dma.client.event_callback = atmci_dma_event;
-		dma_cap_set(DMA_SLAVE, host->dma.client.cap_mask);
-		host->dma.client.slave = slave;
-
-		dma_async_client_register(&host->dma.client);
-		dma_async_client_chan_request(&host->dma.client);
-	} else {
-		dev_notice(&pdev->dev, "DMA not available, using PIO\n");
+		dma_cap_zero(mask);
+		dma_cap_set(DMA_SLAVE, mask);
+		host->dma.chan = dma_request_channel(mask, filter, dws);
 	}
+	if (!host->dma.chan)
+		dev_notice(&pdev->dev, "DMA not available, using PIO\n");
 #endif /* CONFIG_MMC_ATMELMCI_DMA */
 
 	platform_set_drvdata(pdev, host);
@@ -1697,8 +1653,8 @@ static int __init atmci_probe(struct platform_device *pdev)
 
 err_init_slot:
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-	if (pdata->dma_slave)
-		dma_async_client_unregister(&host->dma.client);
+	if (host->dma.chan)
+		dma_release_channel(host->dma.chan);
 #endif
 	free_irq(irq, host);
 err_request_irq:
@@ -1729,8 +1685,8 @@ static int __exit atmci_remove(struct platform_device *pdev)
 	clk_disable(host->mck);
 
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-	if (host->dma.client.slave)
-		dma_async_client_unregister(&host->dma.client);
+	if (host->dma.chan)
+		dma_release_channel(host->dma.chan);
 #endif
 
 	free_irq(platform_get_irq(pdev, 0), host);
@@ -1759,7 +1715,7 @@ static void __exit atmci_exit(void)
 	platform_driver_unregister(&atmci_driver);
 }
 
-module_init(atmci_init);
+late_initcall(atmci_init); /* try to load after dma driver when built-in */
 module_exit(atmci_exit);
 
 MODULE_DESCRIPTION("Atmel Multimedia Card Interface driver");
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 6f2d070..d63544c 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -96,17 +96,6 @@ enum dma_transaction_type {
 /* last transaction type for creation of the capabilities mask */
 #define DMA_TX_TYPE_END (DMA_SLAVE + 1)
 
-/**
- * enum dma_slave_width - DMA slave register access width.
- * @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses
- * @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses
- * @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses
- */
-enum dma_slave_width {
-	DMA_SLAVE_WIDTH_8BIT,
-	DMA_SLAVE_WIDTH_16BIT,
-	DMA_SLAVE_WIDTH_32BIT,
-};
 
 /**
  * enum dma_ctrl_flags - DMA flags to augment operation preparation,
@@ -133,32 +122,6 @@ enum dma_ctrl_flags {
 typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t;
 
 /**
- * struct dma_slave - Information about a DMA slave
- * @dev: device acting as DMA slave
- * @dma_dev: required DMA master device. If non-NULL, the client can not be
- *	bound to other masters than this.
- * @tx_reg: physical address of data register used for
- *	memory-to-peripheral transfers
- * @rx_reg: physical address of data register used for
- *	peripheral-to-memory transfers
- * @reg_width: peripheral register width
- *
- * If dma_dev is non-NULL, the client can not be bound to other DMA
- * masters than the one corresponding to this device. The DMA master
- * driver may use this to determine if there is controller-specific
- * data wrapped around this struct. Drivers of platform code that sets
- * the dma_dev field must therefore make sure to use an appropriate
- * controller-specific dma slave structure wrapping this struct.
- */
-struct dma_slave {
-	struct device		*dev;
-	struct device		*dma_dev;
-	dma_addr_t		tx_reg;
-	dma_addr_t		rx_reg;
-	enum dma_slave_width	reg_width;
-};
-
-/**
  * struct dma_chan_percpu - the per-CPU part of struct dma_chan
  * @refcount: local_t used for open-coded "bigref" counting
  * @memcpy_count: transaction counter
@@ -248,7 +211,6 @@ typedef enum dma_state_client (*dma_filter_fn)(struct dma_chan *chan, void *filt
 struct dma_client {
 	dma_event_callback	event_callback;
 	dma_cap_mask_t		cap_mask;
-	struct dma_slave	*slave;
 	struct list_head	global_node;
 };
 
diff --git a/include/linux/dw_dmac.h b/include/linux/dw_dmac.h
index 04d217b..d797dde 100644
--- a/include/linux/dw_dmac.h
+++ b/include/linux/dw_dmac.h
@@ -22,14 +22,34 @@ struct dw_dma_platform_data {
 };
 
 /**
+ * enum dw_dma_slave_width - DMA slave register access width.
+ * @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses
+ * @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses
+ * @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses
+ */
+enum dw_dma_slave_width {
+	DW_DMA_SLAVE_WIDTH_8BIT,
+	DW_DMA_SLAVE_WIDTH_16BIT,
+	DW_DMA_SLAVE_WIDTH_32BIT,
+};
+
+/**
  * struct dw_dma_slave - Controller-specific information about a slave
- * @slave: Generic information about the slave
- * @ctl_lo: Platform-specific initializer for the CTL_LO register
+ *
+ * @dma_dev: required DMA master device
+ * @tx_reg: physical address of data register used for
+ *	memory-to-peripheral transfers
+ * @rx_reg: physical address of data register used for
+ *	peripheral-to-memory transfers
+ * @reg_width: peripheral register width
  * @cfg_hi: Platform-specific initializer for the CFG_HI register
  * @cfg_lo: Platform-specific initializer for the CFG_LO register
  */
 struct dw_dma_slave {
-	struct dma_slave	slave;
+	struct device		*dma_dev;
+	dma_addr_t		tx_reg;
+	dma_addr_t		rx_reg;
+	enum dw_dma_slave_width	reg_width;
 	u32			cfg_hi;
 	u32			cfg_lo;
 };
@@ -54,9 +74,4 @@ struct dw_dma_slave {
 #define DWC_CFGL_HS_DST_POL	(1 << 18)	/* dst handshake active low */
 #define DWC_CFGL_HS_SRC_POL	(1 << 19)	/* src handshake active low */
 
-static inline struct dw_dma_slave *to_dw_dma_slave(struct dma_slave *slave)
-{
-	return container_of(slave, struct dw_dma_slave, slave);
-}
-
 #endif /* DW_DMAC_H */


* [PATCH 10/13] dmaengine: replace dma_async_client_register with dmaengine_get
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (8 preceding siblings ...)
  2008-11-14 21:35 ` [PATCH 09/13] atmel-mci: convert to dma_request_channel and down-level dma_slave Dan Williams
@ 2008-11-14 21:35 ` Dan Williams
  2008-11-14 21:35 ` [PATCH 11/13] dmaengine: kill struct dma_client and supporting infrastructure Dan Williams
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:35 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

Now that clients no longer need to be notified of channel arrival,
dma_async_client_register can simply increment dmaengine_ref_count.
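
A mem-to-mem client now simply brackets its use of the channel pool
with a get/put pair; the async_tx hunk below does exactly this in its
init/exit routines.  Sketch of a hypothetical module doing the same:

static int __init my_offload_init(void)
{
        dmaengine_get();        /* keep public channels allocated */
        return 0;
}

static void __exit my_offload_exit(void)
{
        dmaengine_put();        /* let dma drivers unload again */
}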

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 crypto/async_tx/async_tx.c |  115 +-------------------------------------------
 drivers/dma/dmaengine.c    |   23 ++-------
 include/linux/dmaengine.h  |    4 +-
 net/core/dev.c             |    3 -
 4 files changed, 11 insertions(+), 134 deletions(-)

diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c
index 2cdf7a0..f21147f 100644
--- a/crypto/async_tx/async_tx.c
+++ b/crypto/async_tx/async_tx.c
@@ -28,120 +28,9 @@
 #include <linux/async_tx.h>
 
 #ifdef CONFIG_DMA_ENGINE
-static enum dma_state_client
-dma_channel_add_remove(struct dma_client *client,
-	struct dma_chan *chan, enum dma_state state);
-
-static struct dma_client async_tx_dma = {
-	.event_callback = dma_channel_add_remove,
-	/* .cap_mask == 0 defaults to all channels */
-};
-
-/**
- * async_tx_lock - protect modification of async_tx_master_list and serialize
- *	rebalance operations
- */
-static DEFINE_SPINLOCK(async_tx_lock);
-
-static LIST_HEAD(async_tx_master_list);
-
-static void
-free_dma_chan_ref(struct rcu_head *rcu)
-{
-	struct dma_chan_ref *ref;
-	ref = container_of(rcu, struct dma_chan_ref, rcu);
-	kfree(ref);
-}
-
-static void
-init_dma_chan_ref(struct dma_chan_ref *ref, struct dma_chan *chan)
-{
-	INIT_LIST_HEAD(&ref->node);
-	INIT_RCU_HEAD(&ref->rcu);
-	ref->chan = chan;
-	atomic_set(&ref->count, 0);
-}
-
-static enum dma_state_client
-dma_channel_add_remove(struct dma_client *client,
-	struct dma_chan *chan, enum dma_state state)
-{
-	unsigned long found, flags;
-	struct dma_chan_ref *master_ref, *ref;
-	enum dma_state_client ack = DMA_DUP; /* default: take no action */
-
-	switch (state) {
-	case DMA_RESOURCE_AVAILABLE:
-		found = 0;
-		rcu_read_lock();
-		list_for_each_entry_rcu(ref, &async_tx_master_list, node)
-			if (ref->chan == chan) {
-				found = 1;
-				break;
-			}
-		rcu_read_unlock();
-
-		pr_debug("async_tx: dma resource available [%s]\n",
-			found ? "old" : "new");
-
-		if (!found)
-			ack = DMA_ACK;
-		else
-			break;
-
-		/* add the channel to the generic management list */
-		master_ref = kmalloc(sizeof(*master_ref), GFP_KERNEL);
-		if (master_ref) {
-			init_dma_chan_ref(master_ref, chan);
-			spin_lock_irqsave(&async_tx_lock, flags);
-			list_add_tail_rcu(&master_ref->node,
-				&async_tx_master_list);
-			spin_unlock_irqrestore(&async_tx_lock,
-				flags);
-		} else {
-			printk(KERN_WARNING "async_tx: unable to create"
-				" new master entry in response to"
-				" a DMA_RESOURCE_ADDED event"
-				" (-ENOMEM)\n");
-			return 0;
-		}
-		break;
-	case DMA_RESOURCE_REMOVED:
-		found = 0;
-		spin_lock_irqsave(&async_tx_lock, flags);
-		list_for_each_entry(ref, &async_tx_master_list, node)
-			if (ref->chan == chan) {
-				list_del_rcu(&ref->node);
-				call_rcu(&ref->rcu, free_dma_chan_ref);
-				found = 1;
-				break;
-			}
-		spin_unlock_irqrestore(&async_tx_lock, flags);
-
-		pr_debug("async_tx: dma resource removed [%s]\n",
-			found ? "ours" : "not ours");
-
-		if (found)
-			ack = DMA_ACK;
-		else
-			break;
-		break;
-	case DMA_RESOURCE_SUSPEND:
-	case DMA_RESOURCE_RESUME:
-		printk(KERN_WARNING "async_tx: does not support dma channel"
-			" suspend/resume\n");
-		break;
-	default:
-		BUG();
-	}
-
-	return ack;
-}
-
 static int __init async_tx_init(void)
 {
-	dma_async_client_register(&async_tx_dma);
-	dma_async_client_chan_request(&async_tx_dma);
+	dmaengine_get();
 
 	printk(KERN_INFO "async_tx: api initialized (async)\n");
 
@@ -150,7 +39,7 @@ static int __init async_tx_init(void)
 
 static void __exit async_tx_exit(void)
 {
-	dma_async_client_unregister(&async_tx_dma);
+	dmaengine_put();
 }
 
 /**
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 1b694b8..78456f5 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -588,10 +588,9 @@ static void dma_clients_notify_available(void)
 }
 
 /**
- * dma_async_client_register - register a &dma_client
- * @client: ptr to a client structure with valid 'event_callback' and 'cap_mask'
+ * dmaengine_get - register interest in dma_channels
  */
-void dma_async_client_register(struct dma_client *client)
+void dmaengine_get(void)
 {
 	struct dma_device *device, *_d;
 	struct dma_chan *chan;
@@ -615,26 +614,18 @@ void dma_async_client_register(struct dma_client *client)
 				       dev_name(&chan->dev), err);
 		}
 	}
-
-	list_add_tail(&client->global_node, &dma_client_list);
 	mutex_unlock(&dma_list_mutex);
 }
-EXPORT_SYMBOL(dma_async_client_register);
+EXPORT_SYMBOL(dmaengine_get);
 
 /**
- * dma_async_client_unregister - unregister a client and free the &dma_client
- * @client: &dma_client to free
- *
- * Force frees any allocated DMA channels, frees the &dma_client memory
+ * dmaengine_put - let dma drivers be removed when ref_count == 0
  */
-void dma_async_client_unregister(struct dma_client *client)
+void dmaengine_put(void)
 {
 	struct dma_device *device;
 	struct dma_chan *chan;
 
-	if (!client)
-		return;
-
 	mutex_lock(&dma_list_mutex);
 	dmaengine_ref_count--;
 	BUG_ON(dmaengine_ref_count < 0);
@@ -645,11 +636,9 @@ void dma_async_client_unregister(struct dma_client *client)
 		list_for_each_entry(chan, &device->channels, device_node)
 			dma_chan_put(chan);
 	}
-
-	list_del(&client->global_node);
 	mutex_unlock(&dma_list_mutex);
 }
-EXPORT_SYMBOL(dma_async_client_unregister);
+EXPORT_SYMBOL(dmaengine_put);
 
 /**
  * dma_async_client_chan_request - send all available channels to the
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index d63544c..37d95db 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -318,8 +318,8 @@ struct dma_device {
 
 /* --- public DMA engine API --- */
 
-void dma_async_client_register(struct dma_client *client);
-void dma_async_client_unregister(struct dma_client *client);
+void dmaengine_get(void);
+void dmaengine_put(void);
 void dma_async_client_chan_request(struct dma_client *client);
 dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
 	void *dest, void *src, size_t len);
diff --git a/net/core/dev.c b/net/core/dev.c
index 56e6965..4b3b7d3 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4657,8 +4657,7 @@ static int __init netdev_dma_register(void)
 	}
 	spin_lock_init(&net_dma.lock);
 	dma_cap_set(DMA_MEMCPY, net_dma.client.cap_mask);
-	dma_async_client_register(&net_dma.client);
-	dma_async_client_chan_request(&net_dma.client);
+	dmaengine_get();
 	return 0;
 }
 


* [PATCH 11/13] dmaengine: kill struct dma_client and supporting infrastructure
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (9 preceding siblings ...)
  2008-11-14 21:35 ` [PATCH 10/13] dmaengine: replace dma_async_client_register with dmaengine_get Dan Williams
@ 2008-11-14 21:35 ` Dan Williams
  2008-12-12 14:29   ` Sosnowski, Maciej
  2008-11-14 21:35 ` [PATCH 12/13] dmaengine: remove 'bigref' infrastructure Dan Williams
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:35 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

All users have been converted to either the general-purpose allocator,
dma_find_channel, or dma_request_channel.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 drivers/dma/dmaengine.c      |   74 +-------------------------------
 drivers/dma/dw_dmac.c        |    3 -
 drivers/dma/fsldma.c         |    3 -
 drivers/dma/ioat_dma.c       |    5 +-
 drivers/dma/iop-adma.c       |    7 +--
 drivers/dma/mv_xor.c         |    7 +--
 drivers/mmc/host/atmel-mci.c |    1 
 include/linux/dmaengine.h    |   36 ---------------
 net/core/dev.c               |   98 +-----------------------------------------
 9 files changed, 17 insertions(+), 217 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 78456f5..26f862c 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -31,15 +31,12 @@
  *
  * LOCKING:
  *
- * The subsystem keeps two global lists, dma_device_list and dma_client_list.
- * Both of these are protected by a mutex, dma_list_mutex.
+ * The subsystem keeps a global list of dma_device structs it is protected by a
+ * mutex, dma_list_mutex.
  *
  * Each device has a channels list, which runs unlocked but is never modified
  * once the device is registered, it's just setup by the driver.
  *
- * Each client is responsible for keeping track of the channels it uses.  See
- * the definition of dma_event_callback in dmaengine.h.
- *
  * Each device has a kref, which is initialized to 1 when the device is
  * registered. A kref_get is done for each device registered.  When the
  * device is released, the corresponding kref_put is done in the release
@@ -74,7 +71,6 @@
 
 static DEFINE_MUTEX(dma_list_mutex);
 static LIST_HEAD(dma_device_list);
-static LIST_HEAD(dma_client_list);
 static long dmaengine_ref_count;
 
 /* --- sysfs implementation --- */
@@ -184,7 +180,7 @@ static int dma_chan_get(struct dma_chan *chan)
 
 	/* allocate upon first client reference */
 	if (chan->client_count == 1 && err == 0) {
-		int desc = chan->device->device_alloc_chan_resources(chan, NULL);
+		int desc = chan->device->device_alloc_chan_resources(chan);
 
 		if (desc < 0) {
 			chan->client_count = 0;
@@ -207,40 +203,6 @@ static void dma_chan_put(struct dma_chan *chan)
 		chan->device->device_free_chan_resources(chan);
 }
 
-/**
- * dma_client_chan_alloc - try to allocate channels to a client
- * @client: &dma_client
- *
- * Called with dma_list_mutex held.
- */
-static void dma_client_chan_alloc(struct dma_client *client)
-{
-	struct dma_device *device;
-	struct dma_chan *chan;
-	enum dma_state_client ack;
-
-	/* Find a channel */
-	list_for_each_entry(device, &dma_device_list, global_node) {
-		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
-			continue;
-
-		list_for_each_entry(chan, &device->channels, device_node) {
-			if (!dma_chan_satisfies_mask(chan, client->cap_mask))
-				continue;
-			if (!chan->client_count)
-				continue;
-			ack = client->event_callback(client, chan,
-						     DMA_RESOURCE_AVAILABLE);
-
-			/* we are done once this client rejects
-			 * an available resource
-			 */
-			if (ack == DMA_NAK)
-				return;
-		}
-	}
-}
-
 enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
 {
 	enum dma_status status;
@@ -573,21 +535,6 @@ void dma_release_channel(struct dma_chan *chan)
 EXPORT_SYMBOL_GPL(dma_release_channel);
 
 /**
- * dma_chans_notify_available - broadcast available channels to the clients
- */
-static void dma_clients_notify_available(void)
-{
-	struct dma_client *client;
-
-	mutex_lock(&dma_list_mutex);
-
-	list_for_each_entry(client, &dma_client_list, global_node)
-		dma_client_chan_alloc(client);
-
-	mutex_unlock(&dma_list_mutex);
-}
-
-/**
  * dmaengine_get - register interest in dma_channels
  */
 void dmaengine_get(void)
@@ -641,19 +588,6 @@ void dmaengine_put(void)
 EXPORT_SYMBOL(dmaengine_put);
 
 /**
- * dma_async_client_chan_request - send all available channels to the
- * client that satisfy the capability mask
- * @client - requester
- */
-void dma_async_client_chan_request(struct dma_client *client)
-{
-	mutex_lock(&dma_list_mutex);
-	dma_client_chan_alloc(client);
-	mutex_unlock(&dma_list_mutex);
-}
-EXPORT_SYMBOL(dma_async_client_chan_request);
-
-/**
  * dma_async_device_register - registers DMA devices found
  * @device: &dma_device
  */
@@ -743,8 +677,6 @@ int dma_async_device_register(struct dma_device *device)
 	dma_channel_rebalance();
 	mutex_unlock(&dma_list_mutex);
 
-	dma_clients_notify_available();
-
 	return 0;
 
 err_out:
diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c
index dbd5080..a29dda8 100644
--- a/drivers/dma/dw_dmac.c
+++ b/drivers/dma/dw_dmac.c
@@ -758,8 +758,7 @@ static void dwc_issue_pending(struct dma_chan *chan)
 	spin_unlock_bh(&dwc->lock);
 }
 
-static int dwc_alloc_chan_resources(struct dma_chan *chan,
-		struct dma_client *client)
+static int dwc_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct dw_dma_chan	*dwc = to_dw_dma_chan(chan);
 	struct dw_dma		*dw = to_dw_dma(chan->device);
diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c
index 0b95dcc..46e0128 100644
--- a/drivers/dma/fsldma.c
+++ b/drivers/dma/fsldma.c
@@ -366,8 +366,7 @@ static struct fsl_desc_sw *fsl_dma_alloc_descriptor(
  *
  * Return - The number of descriptors allocated.
  */
-static int fsl_dma_alloc_chan_resources(struct dma_chan *chan,
-					struct dma_client *client)
+static int fsl_dma_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct fsl_dma_chan *fsl_chan = to_fsl_chan(chan);
 
diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c
index ecd743f..c746fb7 100644
--- a/drivers/dma/ioat_dma.c
+++ b/drivers/dma/ioat_dma.c
@@ -734,8 +734,7 @@ static void ioat2_dma_massage_chan_desc(struct ioat_dma_chan *ioat_chan)
  * ioat_dma_alloc_chan_resources - returns the number of allocated descriptors
  * @chan: the channel to be filled out
  */
-static int ioat_dma_alloc_chan_resources(struct dma_chan *chan,
-					 struct dma_client *client)
+static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
 	struct ioat_desc_sw *desc;
@@ -1379,7 +1378,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device)
 	dma_chan = container_of(device->common.channels.next,
 				struct dma_chan,
 				device_node);
-	if (device->common.device_alloc_chan_resources(dma_chan, NULL) < 1) {
+	if (device->common.device_alloc_chan_resources(dma_chan) < 1) {
 		dev_err(&device->pdev->dev,
 			"selftest cannot allocate chan resource\n");
 		err = -ENODEV;
diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
index f008ab2..44c483e 100644
--- a/drivers/dma/iop-adma.c
+++ b/drivers/dma/iop-adma.c
@@ -460,8 +460,7 @@ static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan);
  * greater than 2x the number slots needed to satisfy a device->max_xor
  * request.
  * */
-static int iop_adma_alloc_chan_resources(struct dma_chan *chan,
-					 struct dma_client *client)
+static int iop_adma_alloc_chan_resources(struct dma_chan *chan)
 {
 	char *hw_desc;
 	int idx;
@@ -855,7 +854,7 @@ static int __devinit iop_adma_memcpy_self_test(struct iop_adma_device *device)
 	dma_chan = container_of(device->common.channels.next,
 				struct dma_chan,
 				device_node);
-	if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) {
+	if (iop_adma_alloc_chan_resources(dma_chan) < 1) {
 		err = -ENODEV;
 		goto out;
 	}
@@ -953,7 +952,7 @@ iop_adma_xor_zero_sum_self_test(struct iop_adma_device *device)
 	dma_chan = container_of(device->common.channels.next,
 				struct dma_chan,
 				device_node);
-	if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) {
+	if (iop_adma_alloc_chan_resources(dma_chan) < 1) {
 		err = -ENODEV;
 		goto out;
 	}
diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index c3fcaa8..dabdc36 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -597,8 +597,7 @@ submit_done:
 }
 
 /* returns the number of allocated descriptors */
-static int mv_xor_alloc_chan_resources(struct dma_chan *chan,
-				       struct dma_client *client)
+static int mv_xor_alloc_chan_resources(struct dma_chan *chan)
 {
 	char *hw_desc;
 	int idx;
@@ -948,7 +947,7 @@ static int __devinit mv_xor_memcpy_self_test(struct mv_xor_device *device)
 	dma_chan = container_of(device->common.channels.next,
 				struct dma_chan,
 				device_node);
-	if (mv_xor_alloc_chan_resources(dma_chan, NULL) < 1) {
+	if (mv_xor_alloc_chan_resources(dma_chan) < 1) {
 		err = -ENODEV;
 		goto out;
 	}
@@ -1043,7 +1042,7 @@ mv_xor_xor_self_test(struct mv_xor_device *device)
 	dma_chan = container_of(device->common.channels.next,
 				struct dma_chan,
 				device_node);
-	if (mv_xor_alloc_chan_resources(dma_chan, NULL) < 1) {
+	if (mv_xor_alloc_chan_resources(dma_chan) < 1) {
 		err = -ENODEV;
 		goto out;
 	}
diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
index 7a34118..4b567a0 100644
--- a/drivers/mmc/host/atmel-mci.c
+++ b/drivers/mmc/host/atmel-mci.c
@@ -55,7 +55,6 @@ enum atmel_mci_state {
 
 struct atmel_mci_dma {
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-	struct dma_client		client;
 	struct dma_chan			*chan;
 	struct dma_async_tx_descriptor	*data_desc;
 #endif
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 37d95db..ad9fd55 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -170,23 +170,6 @@ struct dma_chan {
 
 void dma_chan_cleanup(struct kref *kref);
 
-/*
- * typedef dma_event_callback - function pointer to a DMA event callback
- * For each channel added to the system this routine is called for each client.
- * If the client would like to use the channel it returns '1' to signal (ack)
- * the dmaengine core to take out a reference on the channel and its
- * corresponding device.  A client must not 'ack' an available channel more
- * than once.  When a channel is removed all clients are notified.  If a client
- * is using the channel it must 'ack' the removal.  A client must not 'ack' a
- * removed channel more than once.
- * @client - 'this' pointer for the client context
- * @chan - channel to be acted upon
- * @state - available or removed
- */
-struct dma_client;
-typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client,
-		struct dma_chan *chan, enum dma_state state);
-
 /**
  * typedef dma_filter_fn - callback filter for dma_request_channel
  * @chan: channel to be reviewed
@@ -199,21 +182,6 @@ typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client,
  */
 typedef enum dma_state_client (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
 
-/**
- * struct dma_client - info on the entity making use of DMA services
- * @event_callback: func ptr to call when something happens
- * @cap_mask: only return channels that satisfy the requested capabilities
- *  a value of zero corresponds to any capability
- * @slave: data for preparing slave transfer. Must be non-NULL iff the
- *  DMA_SLAVE capability is requested.
- * @global_node: list_head for global dma_client_list
- */
-struct dma_client {
-	dma_event_callback	event_callback;
-	dma_cap_mask_t		cap_mask;
-	struct list_head	global_node;
-};
-
 typedef void (*dma_async_tx_callback)(void *dma_async_param);
 /**
  * struct dma_async_tx_descriptor - async transaction descriptor
@@ -285,8 +253,7 @@ struct dma_device {
 	int dev_id;
 	struct device *dev;
 
-	int (*device_alloc_chan_resources)(struct dma_chan *chan,
-			struct dma_client *client);
+	int (*device_alloc_chan_resources)(struct dma_chan *chan);
 	void (*device_free_chan_resources)(struct dma_chan *chan);
 
 	struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
@@ -320,7 +287,6 @@ struct dma_device {
 
 void dmaengine_get(void);
 void dmaengine_put(void);
-void dma_async_client_chan_request(struct dma_client *client);
 dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
 	void *dest, void *src, size_t len);
 dma_cookie_t dma_async_memcpy_buf_to_pg(struct dma_chan *chan,
diff --git a/net/core/dev.c b/net/core/dev.c
index 4b3b7d3..cf1abd1 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -165,25 +165,6 @@ static DEFINE_SPINLOCK(ptype_lock);
 static struct list_head ptype_base[PTYPE_HASH_SIZE] __read_mostly;
 static struct list_head ptype_all __read_mostly;	/* Taps */
 
-#ifdef CONFIG_NET_DMA
-struct net_dma {
-	struct dma_client client;
-	spinlock_t lock;
-	cpumask_t channel_mask;
-	struct dma_chan **channels;
-};
-
-static enum dma_state_client
-netdev_dma_event(struct dma_client *client, struct dma_chan *chan,
-	enum dma_state state);
-
-static struct net_dma net_dma = {
-	.client = {
-		.event_callback = netdev_dma_event,
-	},
-};
-#endif
-
 /*
  * The @dev_base_head list is protected by @dev_base_lock and the rtnl
  * semaphore.
@@ -4589,81 +4570,6 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 	return NOTIFY_OK;
 }
 
-#ifdef CONFIG_NET_DMA
-/**
- * netdev_dma_event - event callback for the net_dma_client
- * @client: should always be net_dma_client
- * @chan: DMA channel for the event
- * @state: DMA state to be handled
- */
-static enum dma_state_client
-netdev_dma_event(struct dma_client *client, struct dma_chan *chan,
-	enum dma_state state)
-{
-	int i, found = 0, pos = -1;
-	struct net_dma *net_dma =
-		container_of(client, struct net_dma, client);
-	enum dma_state_client ack = DMA_DUP; /* default: take no action */
-
-	spin_lock(&net_dma->lock);
-	switch (state) {
-	case DMA_RESOURCE_AVAILABLE:
-		for (i = 0; i < nr_cpu_ids; i++)
-			if (net_dma->channels[i] == chan) {
-				found = 1;
-				break;
-			} else if (net_dma->channels[i] == NULL && pos < 0)
-				pos = i;
-
-		if (!found && pos >= 0) {
-			ack = DMA_ACK;
-			net_dma->channels[pos] = chan;
-			cpu_set(pos, net_dma->channel_mask);
-		}
-		break;
-	case DMA_RESOURCE_REMOVED:
-		for (i = 0; i < nr_cpu_ids; i++)
-			if (net_dma->channels[i] == chan) {
-				found = 1;
-				pos = i;
-				break;
-			}
-
-		if (found) {
-			ack = DMA_ACK;
-			cpu_clear(pos, net_dma->channel_mask);
-			net_dma->channels[i] = NULL;
-		}
-		break;
-	default:
-		break;
-	}
-	spin_unlock(&net_dma->lock);
-
-	return ack;
-}
-
-/**
- * netdev_dma_register - register the networking subsystem as a DMA client
- */
-static int __init netdev_dma_register(void)
-{
-	net_dma.channels = kzalloc(nr_cpu_ids * sizeof(struct net_dma),
-								GFP_KERNEL);
-	if (unlikely(!net_dma.channels)) {
-		printk(KERN_NOTICE
-				"netdev_dma: no memory for net_dma.channels\n");
-		return -ENOMEM;
-	}
-	spin_lock_init(&net_dma.lock);
-	dma_cap_set(DMA_MEMCPY, net_dma.client.cap_mask);
-	dmaengine_get();
-	return 0;
-}
-
-#else
-static int __init netdev_dma_register(void) { return -ENODEV; }
-#endif /* CONFIG_NET_DMA */
 
 /**
  *	netdev_increment_features - increment feature set by one
@@ -4860,7 +4766,9 @@ static int __init net_dev_init(void)
 		queue->backlog.weight = weight_p;
 	}
 
-	netdev_dma_register();
+	#ifdef CONFIG_NET_DMA
+	dmaengine_get();
+	#endif
 
 	dev_boot_phase = 0;
 


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [PATCH 12/13] dmaengine: remove 'bigref' infrastructure
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (10 preceding siblings ...)
  2008-11-14 21:35 ` [PATCH 11/13] dmaengine: kill struct dma_client and supporting infrastructure Dan Williams
@ 2008-11-14 21:35 ` Dan Williams
  2008-11-14 21:35 ` [PATCH 13/13] dmaengine: kill enum dma_state_client Dan Williams
  2008-12-12 14:27 ` [PATCH 00/13] dmaengine redux Sosnowski, Maciej
  13 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:35 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

Reference counting is now done at the module level, so clients need not worry
that a channel will disappear while they are actively using dmaengine.
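
For illustration only, under this scheme a mem-to-mem client's lifetime
handling reduces to roughly the sketch below (function names are
hypothetical, error handling trimmed):

static int __init example_client_init(void)
{
	/* pin every registered dma driver module; channels are then
	 * looked up per operation with dma_find_channel() or reserved
	 * exclusively with dma_request_channel()
	 */
	dmaengine_get();
	return 0;
}

static void __exit example_client_exit(void)
{
	/* drop the per-driver module references again */
	dmaengine_put();
}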

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 drivers/dma/dmaengine.c   |   86 +++++----------------------------------------
 drivers/dma/iop-adma.c    |    1 -
 drivers/dma/mv_xor.c      |    1 -
 include/linux/dmaengine.h |    7 ----
 4 files changed, 9 insertions(+), 86 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 26f862c..a6c0d72 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -34,26 +34,15 @@
  * The subsystem keeps a global list of dma_device structs it is protected by a
  * mutex, dma_list_mutex.
  *
+ * A subsystem can get access to a channel by calling dmaengine_get() followed
+ * by dma_find_channel(), or if it has need for an exclusive channel it can call
+ * dma_request_channel().  Once a channel is allocated a reference is taken
+ * against its corresponding driver to disable removal.
+ *
  * Each device has a channels list, which runs unlocked but is never modified
  * once the device is registered, it's just setup by the driver.
  *
- * Each device has a kref, which is initialized to 1 when the device is
- * registered. A kref_get is done for each device registered.  When the
- * device is released, the corresponding kref_put is done in the release
- * method. Every time one of the device's channels is allocated to a client,
- * a kref_get occurs.  When the channel is freed, the corresponding kref_put
- * happens. The device's release function does a completion, so
- * unregister_device does a remove event, device_unregister, a kref_put
- * for the first reference, then waits on the completion for all other
- * references to finish.
- *
- * Each channel has an open-coded implementation of Rusty Russell's "bigref,"
- * with a kref and a per_cpu local_t.  A dma_chan_get is called when a client
- * signals that it wants to use a channel, and dma_chan_put is called when
- * a channel is removed or a client using it is unregistered.  A client can
- * take extra references per outstanding transaction, as is the case with
- * the NET DMA client.  The release function does a kref_put on the device.
- *	-ChrisL, DanW
+ * See Documentation/dmaengine.txt for more details
  */
 
 #include <linux/init.h>
@@ -114,18 +103,9 @@ static struct device_attribute dma_attrs[] = {
 	__ATTR_NULL
 };
 
-static void dma_async_device_cleanup(struct kref *kref);
-
-static void dma_dev_release(struct device *dev)
-{
-	struct dma_chan *chan = to_dma_chan(dev);
-	kref_put(&chan->device->refcount, dma_async_device_cleanup);
-}
-
 static struct class dma_devclass = {
 	.name		= "dma",
 	.dev_attrs	= dma_attrs,
-	.dev_release	= dma_dev_release,
 };
 
 /* --- client and device registration --- */
@@ -222,29 +202,6 @@ enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
 EXPORT_SYMBOL(dma_sync_wait);
 
 /**
- * dma_chan_cleanup - release a DMA channel's resources
- * @kref: kernel reference structure that contains the DMA channel device
- */
-void dma_chan_cleanup(struct kref *kref)
-{
-	struct dma_chan *chan = container_of(kref, struct dma_chan, refcount);
-	kref_put(&chan->device->refcount, dma_async_device_cleanup);
-}
-EXPORT_SYMBOL(dma_chan_cleanup);
-
-static void dma_chan_free_rcu(struct rcu_head *rcu)
-{
-	struct dma_chan *chan = container_of(rcu, struct dma_chan, rcu);
-
-	kref_put(&chan->refcount, dma_chan_cleanup);
-}
-
-static void dma_chan_release(struct dma_chan *chan)
-{
-	call_rcu(&chan->rcu, dma_chan_free_rcu);
-}
-
-/**
  * dma_cap_mask_all - enable iteration over all operation types
  */
 static dma_cap_mask_t dma_cap_mask_all;
@@ -622,8 +579,6 @@ int dma_async_device_register(struct dma_device *device)
 	BUG_ON(!device->device_issue_pending);
 	BUG_ON(!device->dev);
 
-	init_completion(&device->done);
-	kref_init(&device->refcount);
 	device->dev_id = id++;
 
 	/* represent channels in sysfs. Probably want devs too */
@@ -640,19 +595,11 @@ int dma_async_device_register(struct dma_device *device)
 
 		rc = device_register(&chan->dev);
 		if (rc) {
-			chancnt--;
 			free_percpu(chan->local);
 			chan->local = NULL;
 			goto err_out;
 		}
-
-		/* One for the channel, one of the class device */
-		kref_get(&device->refcount);
-		kref_get(&device->refcount);
-		kref_init(&chan->refcount);
 		chan->client_count = 0;
-		chan->slow_ref = 0;
-		INIT_RCU_HEAD(&chan->rcu);
 	}
 	device->chancnt = chancnt;
 
@@ -683,9 +630,7 @@ err_out:
 	list_for_each_entry(chan, &device->channels, device_node) {
 		if (chan->local == NULL)
 			continue;
-		kref_put(&device->refcount, dma_async_device_cleanup);
 		device_unregister(&chan->dev);
-		chancnt--;
 		free_percpu(chan->local);
 	}
 	return rc;
@@ -693,20 +638,11 @@ err_out:
 EXPORT_SYMBOL(dma_async_device_register);
 
 /**
- * dma_async_device_cleanup - function called when all references are released
- * @kref: kernel reference object
- */
-static void dma_async_device_cleanup(struct kref *kref)
-{
-	struct dma_device *device;
-
-	device = container_of(kref, struct dma_device, refcount);
-	complete(&device->done);
-}
-
-/**
  * dma_async_device_unregister - unregisters DMA devices
  * @device: &dma_device
+ *
+ * This routine is called by dma driver exit routines, dmaengine holds module
+ * references to prevent it being called while channels are in use.
  */
 void dma_async_device_unregister(struct dma_device *device)
 {
@@ -722,11 +658,7 @@ void dma_async_device_unregister(struct dma_device *device)
 			  "%s called while %d clients hold a reference\n",
 			  __func__, chan->client_count);
 		device_unregister(&chan->dev);
-		dma_chan_release(chan);
 	}
-
-	kref_put(&device->refcount, dma_async_device_cleanup);
-	wait_for_completion(&device->done);
 }
 EXPORT_SYMBOL(dma_async_device_unregister);
 
diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
index 44c483e..6963400 100644
--- a/drivers/dma/iop-adma.c
+++ b/drivers/dma/iop-adma.c
@@ -1243,7 +1243,6 @@ static int __devinit iop_adma_probe(struct platform_device *pdev)
 	spin_lock_init(&iop_chan->lock);
 	INIT_LIST_HEAD(&iop_chan->chain);
 	INIT_LIST_HEAD(&iop_chan->all_slots);
-	INIT_RCU_HEAD(&iop_chan->common.rcu);
 	iop_chan->common.device = dma_dev;
 	list_add_tail(&iop_chan->common.device_node, &dma_dev->channels);
 
diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index dabdc36..acb271d 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -1210,7 +1210,6 @@ static int __devinit mv_xor_probe(struct platform_device *pdev)
 	INIT_LIST_HEAD(&mv_chan->chain);
 	INIT_LIST_HEAD(&mv_chan->completed_slots);
 	INIT_LIST_HEAD(&mv_chan->all_slots);
-	INIT_RCU_HEAD(&mv_chan->common.rcu);
 	mv_chan->common.device = dma_dev;
 
 	list_add_tail(&mv_chan->common.device_node, &dma_dev->channels);
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index ad9fd55..32edaf4 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -156,10 +156,6 @@ struct dma_chan {
 	int chan_id;
 	struct device dev;
 
-	struct kref refcount;
-	int slow_ref;
-	struct rcu_head rcu;
-
 	struct list_head device_node;
 	struct dma_chan_percpu *local;
 	int client_count;
@@ -247,9 +243,6 @@ struct dma_device {
 	dma_cap_mask_t  cap_mask;
 	int max_xor;
 
-	struct kref refcount;
-	struct completion done;
-
 	int dev_id;
 	struct device *dev;
 


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [PATCH 13/13] dmaengine: kill enum dma_state_client
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (11 preceding siblings ...)
  2008-11-14 21:35 ` [PATCH 12/13] dmaengine: remove 'bigref' infrastructure Dan Williams
@ 2008-11-14 21:35 ` Dan Williams
  2008-12-12 14:27 ` [PATCH 00/13] dmaengine redux Sosnowski, Maciej
  13 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-14 21:35 UTC (permalink / raw)
  To: linux-kernel, netdev
  Cc: maciej.sosnowski, hskinnemoen, g.liakhovetski, nicolas.ferre

DMA_NAK is now useless.  We can just use a bool instead.
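
For illustration only, a filter under the bool convention is just a
predicate handed to dma_request_channel(); a rough sketch (the
example_slave structure and the example_* names are hypothetical):

struct example_slave {
	struct device *dma_dev;	/* controller this client must pair with */
};

static bool example_filter(struct dma_chan *chan, void *param)
{
	struct example_slave *slave = param;

	/* true: take this channel; false: keep looking */
	return chan->device->dev == slave->dma_dev;
}

static struct dma_chan *example_request(struct example_slave *slave)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	return dma_request_channel(mask, example_filter, slave);
}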

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 drivers/dma/dmaengine.c      |   16 +++++-----------
 drivers/dma/dmatest.c        |    6 +++---
 drivers/mmc/host/atmel-mci.c |    6 +++---
 include/linux/dmaengine.h    |   17 +++--------------
 4 files changed, 14 insertions(+), 31 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index a6c0d72..dae88f2 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -428,7 +428,7 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
 {
 	struct dma_device *device, *_d;
 	struct dma_chan *chan = NULL;
-	enum dma_state_client ack;
+	bool ack;
 	int err;
 
 	/* Find a channel */
@@ -441,9 +441,9 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
 		if (fn)
 			ack = fn(chan, fn_param);
 		else
-			ack = DMA_ACK;
+			ack = true;
 
-		if (ack == DMA_ACK) {
+		if (ack) {
 			/* Found a suitable channel, try to grab, prep, and
 			 * return it.  We first set DMA_PRIVATE to disable
 			 * balance_ref_count as this channel will not be
@@ -461,15 +461,9 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
 				       dev_name(&chan->dev), err);
 			else
 				break;
-		} else if (ack == DMA_DUP) {
-			pr_debug("%s: %s filter said DMA_DUP\n",
-				 __func__, dev_name(&chan->dev));
-		} else if (ack == DMA_NAK) {
-			pr_debug("%s: %s filter said DMA_NAK\n",
-				 __func__, dev_name(&chan->dev));
-			break;
 		} else
-			WARN_ONCE(1, "filter_fn: unknown response?\n");
+			pr_debug("%s: %s filter said false\n",
+				 __func__, dev_name(&chan->dev));
 		chan = NULL;
 	}
 	mutex_unlock(&dma_list_mutex);
diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index 2a7724d..e0f9139 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -363,12 +363,12 @@ static int dmatest_add_channel(struct dma_chan *chan)
 	return 0;
 }
 
-static enum dma_state_client filter(struct dma_chan *chan, void *param)
+static bool filter(struct dma_chan *chan, void *param)
 {
 	if (!dmatest_match_channel(chan) || !dmatest_match_device(chan->device))
-		return DMA_DUP;
+		return false;
 	else
-		return DMA_ACK;
+		return true;
 }
 
 static int __init dmatest_init(void)
diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
index 4b567a0..b0042d0 100644
--- a/drivers/mmc/host/atmel-mci.c
+++ b/drivers/mmc/host/atmel-mci.c
@@ -1544,14 +1544,14 @@ static void __exit atmci_cleanup_slot(struct atmel_mci_slot *slot,
 }
 
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-static enum dma_state_client filter(struct dma_chan *chan, void *slave)
+static bool filter(struct dma_chan *chan, void *slave)
 {
 	struct dw_dma_slave *dws = slave;
 
 	if (dws->dma_dev == chan->device->dev)
-		return DMA_ACK;
+		return true;
 	else
-		return DMA_DUP;
+		return false;
 }
 #endif
 
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 32edaf4..48112cb 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -43,18 +43,6 @@ enum dma_state {
 };
 
 /**
- * enum dma_state_client - state of the channel in the client
- * @DMA_ACK: client would like to use, or was using this channel
- * @DMA_DUP: client has already seen this channel, or is not using this channel
- * @DMA_NAK: client does not want to see any more channels
- */
-enum dma_state_client {
-	DMA_ACK,
-	DMA_DUP,
-	DMA_NAK,
-};
-
-/**
  * typedef dma_cookie_t - an opaque DMA cookie
  *
  * if dma_cookie_t is >0 it's a DMA request cookie, <0 it's an error code
@@ -174,9 +162,10 @@ void dma_chan_cleanup(struct kref *kref);
  * When this optional parameter is specified in a call to dma_request_channel a
  * suitable channel is passed to this routine for further dispositioning before
  * being returned.  Where 'suitable' indicates a non-busy channel that
- * satisfies the given capability mask.
+ * satisfies the given capability mask.  It returns 'true' to indicate that the
+ * channel is suitable.
  */
-typedef enum dma_state_client (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
 
 typedef void (*dma_async_tx_callback)(void *dma_async_param);
 /**


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH 02/13] dmaengine: remove dependency on async_tx
  2008-11-14 21:34 ` [PATCH 02/13] dmaengine: remove dependency on async_tx Dan Williams
@ 2008-11-15  6:02   ` Andrew Morton
  2008-11-17 23:44     ` Dan Williams
  0 siblings, 1 reply; 55+ messages in thread
From: Andrew Morton @ 2008-11-15  6:02 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, 14 Nov 2008 14:34:27 -0700 Dan Williams <dan.j.williams@intel.com> wrote:

> async_tx.ko is a consumer of dma channels.  A circular dependency arises
> if modules in drivers/dma rely on common code in async_tx.ko.  It
> prevents either module from being unloaded.
> 
> Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaeninge.o
> where they should have been from the beginning.
> 
>
> ...
>
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -623,6 +623,81 @@ void dma_async_tx_descriptor_init(struct dma_async_tx_descriptor *tx,
>  }
>  EXPORT_SYMBOL(dma_async_tx_descriptor_init);
>  
> +/* dma_wait_for_async_tx - spin wait for a transcation to complete

yuo cnat sepll

> + * @tx: transaction to wait on
> + */
> +enum dma_status
> +dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
> +{
> +	enum dma_status status;
> +	struct dma_async_tx_descriptor *iter;
> +	struct dma_async_tx_descriptor *parent;
> +
> +	if (!tx)
> +		return DMA_SUCCESS;
> +
> +	/* poll through the dependency chain, return when tx is complete */
> +	do {
> +		iter = tx;
> +
> +		/* find the root of the unsubmitted dependency chain */
> +		do {
> +			parent = iter->parent;
> +			if (!parent)
> +				break;
> +			else
> +				iter = parent;
> +		} while (parent);
> +
> +		/* there is a small window for ->parent == NULL and
> +		 * ->cookie == -EBUSY
> +		 */
> +		while (iter->cookie == -EBUSY)
> +			cpu_relax();
> +
> +		status = dma_sync_wait(iter->chan, iter->cookie);
> +	} while (status == DMA_IN_PROGRESS || (iter != tx));
> +
> +	return status;
> +}
> +EXPORT_SYMBOL_GPL(dma_wait_for_async_tx);

hm, strange.

The unlocked list walk assumes that this thread of control has
exclusive access to *tx (somehow??), but this thread of control doesn't
end up freeing *tx.  I guess the caller does that.

> +/* dma_run_dependencies - helper routine for dma drivers to process
> + *	(start) dependent operations on their target channel
> + * @tx: transaction with dependencies
> + */
> +void dma_run_dependencies(struct dma_async_tx_descriptor *tx)
> +{
> +	struct dma_async_tx_descriptor *dep = tx->next;
> +	struct dma_async_tx_descriptor *dep_next;
> +	struct dma_chan *chan;
> +
> +	if (!dep)
> +		return;
> +
> +	chan = dep->chan;
> +
> +	/* keep submitting up until a channel switch is detected
> +	 * in that case we will be called again as a result of
> +	 * processing the interrupt from async_tx_channel_switch
> +	 */
> +	for (; dep; dep = dep_next) {
> +		spin_lock_bh(&dep->lock);
> +		dep->parent = NULL;
> +		dep_next = dep->next;
> +		if (dep_next && dep_next->chan == chan)
> +			dep->next = NULL; /* ->next will be submitted */
> +		else
> +			dep_next = NULL; /* submit current dep and terminate */
> +		spin_unlock_bh(&dep->lock);
> +
> +		dep->tx_submit(dep);
> +	}
> +
> +	chan->device->device_issue_pending(chan);
> +}
> +EXPORT_SYMBOL_GPL(dma_run_dependencies);

Now there's a lock.



^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-11-14 21:34 ` [PATCH 03/13] dmaengine: up-level reference counting to the module level Dan Williams
@ 2008-11-15  6:08   ` Andrew Morton
  2008-11-18  3:42     ` Dan Williams
  2008-12-04 16:56   ` Guennadi Liakhovetski
  2008-12-12 14:28   ` Sosnowski, Maciej
  2 siblings, 1 reply; 55+ messages in thread
From: Andrew Morton @ 2008-11-15  6:08 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, 14 Nov 2008 14:34:32 -0700 Dan Williams <dan.j.williams@intel.com> wrote:

> Simply, if a client wants any dmaengine channel then prevent all dmaengine
> modules from being removed.  Once the clients are done re-enable module
> removal.
> 
> Why?, beyond reducing complication:
> 1/ Tracking reference counts per-transaction in an efficient manner, as
>    is currently done, requires a complicated scheme to avoid cache-line
>    bouncing effects.
> 2/ Per-transaction ref-counting gives the false impression that a
>    dma-driver can be gracefully removed ahead of its user (net, md, or
>    dma-slave)
> 3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but
>    if such an engine were built one day we still would not need to notify
>    clients of remove events.  The driver can simply return NULL to a
>    ->prep() request, something that is much easier for a client to handle.
> 
> ...
>  
> +static struct module *dma_chan_to_owner(struct dma_chan *chan)
> +{
> +	return chan->device->dev->driver->owner;
> +}

Has this all been tested with CONFIG_MODULES=n?

It looks like we have a lot of unneeded code if CONFIG_MODULES=n. 
However that might not be a case which is worth bothering about.

> +/**
> + * balance_ref_count - catch up the channel reference count
> + */
> +static void balance_ref_count(struct dma_chan *chan)

Forgot to kerneldocument the argument.

> +{
> +	struct module *owner = dma_chan_to_owner(chan);
> +
> +	while (chan->client_count < dmaengine_ref_count) {
> +		__module_get(owner);
> +		chan->client_count++;
> +	}
> +}

The locking for ->client_count is undocumented.

> +/**
> + * dma_chan_get - try to grab a dma channel's parent driver module
> + * @chan - channel to grab
> + */
> +static int dma_chan_get(struct dma_chan *chan)
> +{
> +	int err = -ENODEV;
> +	struct module *owner = dma_chan_to_owner(chan);
> +
> +	if (chan->client_count) {
> +		__module_get(owner);
> +		err = 0;
> +	} else if (try_module_get(owner))
> +		err = 0;

I wonder if try_module_get() could be used in both cases (might not make
sense to do so though).

> +	if (err == 0)
> +		chan->client_count++;

Locking for this?

> +	/* allocate upon first client reference */
> +	if (chan->client_count == 1 && err == 0) {
> +		int desc = chan->device->device_alloc_chan_resources(chan, NULL);
> +
> +		if (desc < 0) {
> +			chan->client_count = 0;
> +			module_put(owner);
> +			err = -ENOMEM;

Shouldn't we just propagate the ->device_alloc_chan_resources() return value?

> +		} else
> +			balance_ref_count(chan);
> +	}
> +
> +	return err;
> +}
> +
> +static void dma_chan_put(struct dma_chan *chan)
> +{
> +	if (!chan->client_count)
> +		return; /* this channel failed alloc_chan_resources */

Or we had a bug ;)

> +	chan->client_count--;

Undocumented locking..

>
> ...
>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 04/13] dmaengine: centralize channel allocation, introduce dma_find_channel
  2008-11-14 21:34 ` [PATCH 04/13] dmaengine: centralize channel allocation, introduce dma_find_channel Dan Williams
@ 2008-11-15  6:14   ` Andrew Morton
  2008-11-18  5:59     ` Dan Williams
  0 siblings, 1 reply; 55+ messages in thread
From: Andrew Morton @ 2008-11-15  6:14 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, 14 Nov 2008 14:34:37 -0700 Dan Williams <dan.j.williams@intel.com> wrote:

> Allowing multiple clients to each define their own channel allocation
> scheme quickly leads to a pathological situation.  For memory-to-memory
> offload all clients can share a central allocator.
> 
> This simply moves the existing async_tx allocator to dmaengine with
> minimal fixups:
> * async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
> * async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
> * split out common code from async_tx.c:__async_tx_find_channel -->
>   dma_find_channel
> 
>  /**
> + * dma_cap_mask_all - enable iteration over all operation types
> + */
> +static dma_cap_mask_t dma_cap_mask_all;
> +
> +/**
> + * dma_chan_tbl_ent - tracks channel allocations per core/opertion
> + */

Would be conventional to document the fields as well.

> +struct dma_chan_tbl_ent {
> +	struct dma_chan *chan;
> +};
> +
> +/**
> + * channel_table - percpu lookup table for memory-to-memory offload providers
> + */
> +static struct dma_chan_tbl_ent *channel_table[DMA_TX_TYPE_END];
> +
> +static int __init dma_channel_table_init(void)
> +{
> +	enum dma_transaction_type cap;
> +	int err = 0;
> +
> +	bitmap_fill(dma_cap_mask_all.bits, DMA_TX_TYPE_END);
> +
> +	/* 'interrupt' and 'slave' are channel capabilities, but are not
> +	 * associated with an operation so they do not need an entry in the
> +	 * channel_table
> +	 */
> +	clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits);
> +	clear_bit(DMA_SLAVE, dma_cap_mask_all.bits);
> +
> +	for_each_dma_cap_mask(cap, dma_cap_mask_all) {
> +		channel_table[cap] = alloc_percpu(struct dma_chan_tbl_ent);
> +		if (!channel_table[cap]) {
> +			err = 1;

initcalls can return -ve errnos, and that at least would make the
message in do_one_initcall() more meaningful.

> +			break;
> +		}
> +	}
> +
> +	if (err) {
> +		pr_err("dmaengine: initialization failure\n");
> +		for_each_dma_cap_mask(cap, dma_cap_mask_all)
> +			if (channel_table[cap])
> +				free_percpu(channel_table[cap]);
> +	}
> +
> +	return err;
> +}
> +subsys_initcall(dma_channel_table_init);
> +
> +/**
> + * dma_find_channel - find a channel to carry out the operation
> + * @tx_type: transaction type
> + */
> +struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type)
> +{
> +	struct dma_chan *chan;
> +	int cpu;
> +
> +	WARN_ONCE(dmaengine_ref_count == 0,
> +		  "client called %s without a reference", __func__);
> +
> +	cpu = get_cpu();
> +	chan = per_cpu_ptr(channel_table[tx_type], cpu)->chan;
> +	put_cpu();
> +
> +	return chan;
> +}
> +EXPORT_SYMBOL(dma_find_channel);

Strange.  We return the address of a per-cpu variable, but we've
reenabled preemption so this thread can now switch CPUs.  Hence there
must be spinlocking on *chan as well?

> +/**
> + * nth_chan - returns the nth channel of the given capability
> + * @cap: capability to match
> + * @n: nth channel desired
> + *
> + * Defaults to returning the channel with the desired capability and the
> + * lowest reference count when 'n' cannot be satisfied
> + */
> +static struct dma_chan *nth_chan(enum dma_transaction_type cap, int n)
> +{
> +	struct dma_device *device;
> +	struct dma_chan *chan;
> +	struct dma_chan *ret = NULL;
> +	struct dma_chan *min = NULL;
> +
> +	list_for_each_entry(device, &dma_device_list, global_node) {
> +		if (!dma_has_cap(cap, device->cap_mask))
> +			continue;
> +		list_for_each_entry(chan, &device->channels, device_node) {
> +			if (!chan->client_count)
> +				continue;
> +			if (!min)
> +				min = chan;
> +			else if (chan->table_count < min->table_count)
> +				min = chan;
> +
> +			if (n-- == 0) {
> +				ret = chan;
> +				break; /* done */
> +			}
> +		}
> +		if (ret)
> +			break; /* done */
> +	}
> +
> +	if (!ret)
> +		ret = min;
> +
> +	if (ret)
> +		ret->table_count++;

Undocumented locking for ->table_count.

> +	return ret;
> +}
> +
> +/**
> + * dma_channel_rebalance - redistribute the available channels
> + *
> + * Optimize for cpu isolation (each cpu gets a dedicated channel for an
> + * operation type) in the SMP case,  and opertaion isolation (avoid
> + * multi-tasking channels) in the uniprocessor case.  Must be called
> + * under dma_list_mutex.
> + */
> +static void dma_channel_rebalance(void)
> +{
> +	struct dma_chan *chan;
> +	struct dma_device *device;
> +	int cpu;
> +	int cap;
> +	int n;
> +
> +	/* undo the last distribution */
> +	for_each_dma_cap_mask(cap, dma_cap_mask_all)
> +		for_each_possible_cpu(cpu)
> +			per_cpu_ptr(channel_table[cap], cpu)->chan = NULL;

The number of possible cpus can be larger than the number of online
CPUs.  Is it worth making this code hotplug-aware?

> +	list_for_each_entry(device, &dma_device_list, global_node)
> +		list_for_each_entry(chan, &device->channels, device_node)
> +			chan->table_count = 0;
> +
> +	/* don't populate the channel_table if no clients are available */
> +	if (!dmaengine_ref_count)
> +		return;
> +
> +	/* redistribute available channels */
> +	n = 0;
> +	for_each_dma_cap_mask(cap, dma_cap_mask_all)
> +		for_each_online_cpu(cpu) {
> +			if (num_possible_cpus() > 1)
> +				chan = nth_chan(cap, n++);
> +			else
> +				chan = nth_chan(cap, -1);
> +
> +			per_cpu_ptr(channel_table[cap], cpu)->chan = chan;
> +		}
> +}
> +
>
> ...
>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 08/13] dmatest: convert to dma_request_channel
  2008-11-14 21:34 ` [PATCH 08/13] dmatest: convert to dma_request_channel Dan Williams
@ 2008-11-15  6:17   ` Andrew Morton
  2008-11-18 18:24     ` Dan Williams
  0 siblings, 1 reply; 55+ messages in thread
From: Andrew Morton @ 2008-11-15  6:17 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, 14 Nov 2008 14:34:58 -0700 Dan Williams <dan.j.williams@intel.com> wrote:

>  static int __init dmatest_init(void)
>  {
> -	dma_cap_set(DMA_MEMCPY, dmatest_client.cap_mask);
> -	dma_async_client_register(&dmatest_client);
> -	dma_async_client_chan_request(&dmatest_client);
> +	dma_cap_mask_t mask;
> +	struct dma_chan *chan;
> +
> +	dma_cap_zero(mask);
> +	dma_cap_set(DMA_MEMCPY, mask);
> +	for (;;) {
> +		chan = dma_request_channel(mask, filter, NULL);
> +		if (chan) {
> +			if (dmatest_add_channel(chan) == 0)
> +				continue;

hrm.  I trust this loop can't lock up the box.

> +			else
> +				dma_release_channel(chan);
> +		} else
> +			break; /* no more channels available */
> +		if (max_channels && nr_channels >= max_channels)
> +			break; /* we have all we need */
> +	}

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 02/13] dmaengine: remove dependency on async_tx
  2008-11-15  6:02   ` Andrew Morton
@ 2008-11-17 23:44     ` Dan Williams
  0 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-17 23:44 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, netdev, Sosnowski, Maciej, hskinnemoen,
	g.liakhovetski, nicolas.ferre

Thanks for the review.

On Fri, 2008-11-14 at 23:02 -0700, Andrew Morton wrote:
> > +/* dma_wait_for_async_tx - spin wait for a transcation to complete
> 
> yuo cnat sepll

Noted... and a few more:

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index dae88f2..d5d60de 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -344,9 +344,9 @@ static struct dma_chan *nth_chan(enum dma_transaction_type cap, int n)
  * dma_channel_rebalance - redistribute the available channels
  *
  * Optimize for cpu isolation (each cpu gets a dedicated channel for an
- * operation type) in the SMP case,  and opertaion isolation (avoid
- * multi-tasking channels) in the uniprocessor case.  Must be called
- * under dma_list_mutex.
+ * operation type) in the SMP case,  and operation isolation (avoid
+ * multi-tasking channels) in the non-SMP case.  Must be called under
+ * dma_list_mutex.
  */
 static void dma_channel_rebalance(void)
 {
@@ -632,7 +632,7 @@ err_out:
 EXPORT_SYMBOL(dma_async_device_register);
 
 /**
- * dma_async_device_unregister - unregisters DMA devices
+ * dma_async_device_unregister - unregister a DMA device
  * @device: &dma_device
  *
  * This routine is called by dma driver exit routines, dmaengine holds module
@@ -804,7 +804,7 @@ void dma_async_tx_descriptor_init(struct dma_async_tx_descriptor *tx,
 }
 EXPORT_SYMBOL(dma_async_tx_descriptor_init);
 
-/* dma_wait_for_async_tx - spin wait for a transcation to complete
+/* dma_wait_for_async_tx - spin wait for a transaction to complete
  * @tx: transaction to wait on
  */
 enum dma_status


> 
> > + * @tx: transaction to wait on
> > + */
> > +enum dma_status
> > +dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
> > +{
> > +	enum dma_status status;
> > +	struct dma_async_tx_descriptor *iter;
> > +	struct dma_async_tx_descriptor *parent;
> > +
> > +	if (!tx)
> > +		return DMA_SUCCESS;
> > +
> > +	/* poll through the dependency chain, return when tx is complete */
> > +	do {
> > +		iter = tx;
> > +
> > +		/* find the root of the unsubmitted dependency chain */
> > +		do {
> > +			parent = iter->parent;
> > +			if (!parent)
> > +				break;
> > +			else
> > +				iter = parent;
> > +		} while (parent);
> > +
> > +		/* there is a small window for ->parent == NULL and
> > +		 * ->cookie == -EBUSY
> > +		 */
> > +		while (iter->cookie == -EBUSY)
> > +			cpu_relax();
> > +
> > +		status = dma_sync_wait(iter->chan, iter->cookie);
> > +	} while (status == DMA_IN_PROGRESS || (iter != tx));
> > +
> > +	return status;
> > +}
> > +EXPORT_SYMBOL_GPL(dma_wait_for_async_tx);
> 
> hm, strange.
> 
> The unlocked list walk assumes that this thread of control has
> exclusive access to *tx (somehow??), but this thread of control doesn't
> end up freeing *tx.  I guess the caller does that.

This routine was created to cover cases where the backing device driver
did not support "interrupt" descriptors for notification of operation
completion.  All drivers should be fixed up now, so how about the
following:

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index d5d60de..dce6d00 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -805,7 +805,13 @@ void dma_async_tx_descriptor_init(struct dma_async_tx_descriptor *tx,
 EXPORT_SYMBOL(dma_async_tx_descriptor_init);
 
 /* dma_wait_for_async_tx - spin wait for a transaction to complete
- * @tx: transaction to wait on
+ * @tx: in-flight transaction to wait on
+ *
+ * This routine assumes that tx was obtained from a call to async_memcpy,
+ * async_xor, async_memset, etc which ensures that tx is "in-flight" (prepped
+ * and submitted).  Walking the parent chain is only meant to cover for DMA
+ * drivers that do not implement the DMA_INTERRUPT capability and may race with
+ * the driver's descriptor cleanup routine.
  */
 enum dma_status
 dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
@@ -817,6 +823,9 @@ dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
 	if (!tx)
 		return DMA_SUCCESS;
 
+	WARN_ONCE(tx->parent, "%s: speculatively walking dependency chain for"
+		  " %s\n", __func__, dev_name(&tx->chan->dev));
+
 	/* poll through the dependency chain, return when tx is complete */
 	do {
 		iter = tx;


Regards,
Dan


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-11-15  6:08   ` Andrew Morton
@ 2008-11-18  3:42     ` Dan Williams
  0 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-18  3:42 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, netdev, Sosnowski, Maciej, hskinnemoen,
	g.liakhovetski, nicolas.ferre

Thanks for the review.

On Fri, 2008-11-14 at 23:08 -0700, Andrew Morton wrote: 
> > +static struct module *dma_chan_to_owner(struct dma_chan *chan)
> > +{
> > +	return chan->device->dev->driver->owner;
> > +}
> 
> Has this all been tested with CONFIG_MODULES=n?
> 

It works; the only thing that changes is that ->owner is always NULL.

> It looks like we have a lot of unneeded code if CONFIG_MODULES=n. 
> However that might not be a case which is worth bothering about.

We still need all the other reference counting machinery to identify
busy channels.

> 
> > +/**
> > + * balance_ref_count - catch up the channel reference count
> > + */
> > +static void balance_ref_count(struct dma_chan *chan)
> 
> Forgot to kerneldocument the argument.

yup

> 
> > +{
> > +	struct module *owner = dma_chan_to_owner(chan);
> > +
> > +	while (chan->client_count < dmaengine_ref_count) {
> > +		__module_get(owner);
> > +		chan->client_count++;
> > +	}
> > +}
> 
> The locking for ->client_count is undocumented.

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index dce6d00..9396891 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -129,6 +129,9 @@ static struct module *dma_chan_to_owner(struct dma_chan *chan)
 
 /**
  * balance_ref_count - catch up the channel reference count
+ * @chan - channel to balance ->client_count versus dmaengine_ref_count
+ *
+ * balance_ref_count must be called under dma_list_mutex
  */
 static void balance_ref_count(struct dma_chan *chan)
 {


> > +/**
> > + * dma_chan_get - try to grab a dma channel's parent driver module
> > + * @chan - channel to grab
> > + */
> > +static int dma_chan_get(struct dma_chan *chan)
> > +{
> > +	int err = -ENODEV;
> > +	struct module *owner = dma_chan_to_owner(chan);
> > +
> > +	if (chan->client_count) {
> > +		__module_get(owner);
> > +		err = 0;
> > +	} else if (try_module_get(owner))
> > +		err = 0;
> 
> I wonder if try_module_get() could be used in both cases (might not make
> sense to do so though).

Yes, but I like how it documents the assumption that the backing module
has been referenced when client_count is non-zero.

> 
> > +	if (err == 0)
> > +		chan->client_count++;
> 
> Locking for this?

@@ -146,6 +146,8 @@ static void balance_ref_count(struct dma_chan *chan)
 /**
  * dma_chan_get - try to grab a dma channel's parent driver module
  * @chan - channel to grab
+ *
+ * Must be called under dma_list_mutex
  */
 static int dma_chan_get(struct dma_chan *chan)
 {

> 
> > +	/* allocate upon first client reference */
> > +	if (chan->client_count == 1 && err == 0) {
> > +		int desc = chan->device->device_alloc_chan_resources(chan, NULL);
> > +
> > +		if (desc < 0) {
> > +			chan->client_count = 0;
> > +			module_put(owner);
> > +			err = -ENOMEM;
> 
> Shouldn't we just propagate the ->device_alloc_chan_resources() return value?
> 

Yes, this is broken.

@@ -165,12 +165,11 @@ static int dma_chan_get(struct dma_chan *chan)
 
 	/* allocate upon first client reference */
 	if (chan->client_count == 1 && err == 0) {
-		int desc = chan->device->device_alloc_chan_resources(chan);
+		err = chan->device->device_alloc_chan_resources(chan);
 
-		if (desc < 0) {
+		if (err < 0) {
 			chan->client_count = 0;
 			module_put(owner);
-			err = -ENOMEM;
 		} else if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
 			balance_ref_count(chan);
 	}


> > +		} else
> > +			balance_ref_count(chan);
> > +	}
> > +
> > +	return err;
> > +}
> > +
> > +static void dma_chan_put(struct dma_chan *chan)
> > +{
> > +	if (!chan->client_count)
> > +		return; /* this channel failed alloc_chan_resources */
> 
> Or we had a bug ;)

...hopefully caught by the BUG_ON(dmaengine_ref_count < 0) in
dmaengine_put().

> 
> > +	chan->client_count--;
> 
> Undocumented locking..

@@ -178,6 +178,12 @@ static int dma_chan_get(struct dma_chan *chan)
 	return err;
 }
 
+/**
+ * dma_chan_put - drop a reference to a dma channel's parent driver module
+ * @chan - channel to release
+ *
+ * Must be called under dma_list_mutex
+ */
 static void dma_chan_put(struct dma_chan *chan)
 {
 	if (!chan->client_count)


Regards,
Dan


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH 04/13] dmaengine: centralize channel allocation, introduce dma_find_channel
  2008-11-15  6:14   ` Andrew Morton
@ 2008-11-18  5:59     ` Dan Williams
  0 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-18  5:59 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, netdev, Sosnowski, Maciej, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, 2008-11-14 at 23:14 -0700, Andrew Morton wrote:
> > +/**
> > + * dma_chan_tbl_ent - tracks channel allocations per core/opertion
> > + */
> 
> Would be conventional to document the fields as well.
> 

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 644a8c8..ebbfe2d 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -218,7 +218,8 @@ EXPORT_SYMBOL(dma_sync_wait);
 static dma_cap_mask_t dma_cap_mask_all;
 
 /**
- * dma_chan_tbl_ent - tracks channel allocations per core/opertion
+ * dma_chan_tbl_ent - tracks channel allocations per core/operation
+ * @chan - associated channel for this entry
  */
 struct dma_chan_tbl_ent {
 	struct dma_chan *chan;

> > +	for_each_dma_cap_mask(cap, dma_cap_mask_all) {
> > +		channel_table[cap] = alloc_percpu(struct dma_chan_tbl_ent);
> > +		if (!channel_table[cap]) {
> > +			err = 1;
> 
> initcalls can return -ve errnos, and that at least would make the
> message in do_one_initcall() more meaningful.
> 

@@ -247,7 +247,7 @@ static int __init dma_channel_table_init(void)
 	for_each_dma_cap_mask(cap, dma_cap_mask_all) {
 		channel_table[cap] = alloc_percpu(struct dma_chan_tbl_ent);
 		if (!channel_table[cap]) {
-			err = 1;
+			err = -ENOMEM;
 			break;
 		}
 	}

> > +/**
> > + * dma_find_channel - find a channel to carry out the operation
> > + * @tx_type: transaction type
> > + */
> > +struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type)
> > +{
> > +	struct dma_chan *chan;
> > +	int cpu;
> > +
> > +	WARN_ONCE(dmaengine_ref_count == 0,
> > +		  "client called %s without a reference", __func__);
> > +
> > +	cpu = get_cpu();
> > +	chan = per_cpu_ptr(channel_table[tx_type], cpu)->chan;
> > +	put_cpu();
> > +
> > +	return chan;
> > +}
> > +EXPORT_SYMBOL(dma_find_channel);
> 
> Strange.  We return the address of a per-cpu variable, but we've
> reenabled preemption so this thread can now switch CPUs.  Hence there
> must be spinlocking on *chan as well?

There is locking when we call into the driver.  dmaengine is using the
cpuid as a load balancing hint and nothing more.  We loosely try to
maintain cpu-channel affinity, but if preemption occasionally jumbles
these associations, clients can tolerate it.  dma_find_channel is called
relatively frequently, so these misassociations should be short-lived.
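
Concretely, the expected pattern is to redo the lookup for every
operation and fall back to the cpu when no channel is available; a rough
sketch (example_copy is a hypothetical name, completion handling elided):

static void example_copy(void *dest, void *src, size_t len)
{
	struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

	/* if preemption moves us after the lookup this copy is simply
	 * issued on a channel "assigned" to the previous cpu; the next
	 * call re-reads the per-cpu table
	 */
	if (chan)
		dma_async_memcpy_buf_to_buf(chan, dest, src, len);
	else
		memcpy(dest, src, len);	/* no channel: copy on the cpu */
}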

> > +	if (ret)
> > +		ret->table_count++;
> 
> Undocumented locking for ->table_count.
> 

@@ -313,7 +313,8 @@ EXPORT_SYMBOL(dma_issue_pending_all);
  * @n: nth channel desired
  *
  * Defaults to returning the channel with the desired capability and the
- * lowest reference count when 'n' cannot be satisfied
+ * lowest reference count when 'n' cannot be satisfied.  Must be called
+ * under dma_list_mutex.
  */
 static struct dma_chan *nth_chan(enum dma_transaction_type cap, int n)
 {

> >
> > +	/* undo the last distribution */
> > +	for_each_dma_cap_mask(cap, dma_cap_mask_all)
> > +		for_each_possible_cpu(cpu)
> > +			per_cpu_ptr(channel_table[cap], cpu)->chan = NULL;
> 
> The number of possible cpus can be larger than the number of online
> CPUs.  Is it worth making this code hotplug-aware?
> 

...nobody has missed it from the NET_DMA case so far, but yes, at some
future date.

Regards,
Dan


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH 08/13] dmatest: convert to dma_request_channel
  2008-11-15  6:17   ` Andrew Morton
@ 2008-11-18 18:24     ` Dan Williams
  2008-11-18 20:58       ` Andrew Morton
  0 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-11-18 18:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, netdev, Sosnowski, Maciej, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, 2008-11-14 at 23:17 -0700, Andrew Morton wrote:
> On Fri, 14 Nov 2008 14:34:58 -0700 Dan Williams <dan.j.williams@intel.com> wrote:
> 
> >  static int __init dmatest_init(void)
> >  {
> > -	dma_cap_set(DMA_MEMCPY, dmatest_client.cap_mask);
> > -	dma_async_client_register(&dmatest_client);
> > -	dma_async_client_chan_request(&dmatest_client);
> > +	dma_cap_mask_t mask;
> > +	struct dma_chan *chan;
> > +
> > +	dma_cap_zero(mask);
> > +	dma_cap_set(DMA_MEMCPY, mask);
> > +	for (;;) {
> > +		chan = dma_request_channel(mask, filter, NULL);
> > +		if (chan) {
> > +			if (dmatest_add_channel(chan) == 0)
> > +				continue;
> 
> hrm.  I trust this loop can't lock up the box.
> 

Only if the system has infinite channels, but we should break if
dmatest_add_channel fails.

diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index e0f9139..c77d47c 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -375,23 +375,27 @@ static int __init dmatest_init(void)
 {
 	dma_cap_mask_t mask;
 	struct dma_chan *chan;
+	int err = 0;
 
 	dma_cap_zero(mask);
 	dma_cap_set(DMA_MEMCPY, mask);
 	for (;;) {
 		chan = dma_request_channel(mask, filter, NULL);
 		if (chan) {
-			if (dmatest_add_channel(chan) == 0)
+			err = dmatest_add_channel(chan);
+			if (err == 0)
 				continue;
-			else
+			else {
 				dma_release_channel(chan);
+				break; /* add_channel failed, punt */
+			}
 		} else
 			break; /* no more channels available */
 		if (max_channels && nr_channels >= max_channels)
 			break; /* we have all we need */
 	}
 
-	return 0;
+	return err;
 }
 /* when compiled-in wait for drivers to load first */
 late_initcall(dmatest_init);

---

Ok, so I now have:
# git shortlog `stg id akpm-review-spell-fixes.patch`^..
Dan Williams (6):
      dmaengine review: spelling/grammar fixes
      dmaengine review: dma_wait_for_async_tx clarification
      dmaengine review: fix up kernel doc for dma_list_mutex usage
      dmaengine review: propagate alloc_chan_resources error code
      dmaengine review: return meaningful error code from initcall
      dmaengine review: dmatest stop on dmatest_add_channel failure

May I add your Reviewed-by for these and the original 13.  I could fold
these into the originals but perhaps it is better to document fixups
when they are the result of review [1].

Thanks,
Dan

[1] http://lwn.net/Articles/306690/


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH 08/13] dmatest: convert to dma_request_channel
  2008-11-18 18:24     ` Dan Williams
@ 2008-11-18 20:58       ` Andrew Morton
  2008-11-18 22:19         ` Dan Williams
  0 siblings, 1 reply; 55+ messages in thread
From: Andrew Morton @ 2008-11-18 20:58 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Tue, 18 Nov 2008 11:24:21 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> Ok, so I now have:
> # git shortlog `stg id akpm-review-spell-fixes.patch`^..
> Dan Williams (6):
>       dmaengine review: spelling/grammar fixes
>       dmaengine review: dma_wait_for_async_tx clarification
>       dmaengine review: fix up kernel doc for dma_list_mutex usage
>       dmaengine review: propagate alloc_chan_resources error code
>       dmaengine review: return meaningful error code from initcall
>       dmaengine review: dmatest stop on dmatest_add_channel failure
> 
> May I add your Reviewed-by for these and the original 13.

Sure.  Ignorantly-stared-at-by: would be more appropriate though.

>  I could fold
> these into the originals but perhaps it is better to document fixups
> when they are the result of review [1].

I think that a clean commit trail is more important.  One can always
retain documentation/attributions/etc in the changelog when folding.


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 08/13] dmatest: convert to dma_request_channel
  2008-11-18 20:58       ` Andrew Morton
@ 2008-11-18 22:19         ` Dan Williams
  0 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-11-18 22:19 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, netdev, Sosnowski, Maciej, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Tue, 2008-11-18 at 13:58 -0700, Andrew Morton wrote:
> On Tue, 18 Nov 2008 11:24:21 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > Ok, so I now have:
> > # git shortlog `stg id akpm-review-spell-fixes.patch`^..
> > Dan Williams (6):
> >       dmaengine review: spelling/grammar fixes
> >       dmaengine review: dma_wait_for_async_tx clarification
> >       dmaengine review: fix up kernel doc for dma_list_mutex usage
> >       dmaengine review: propagate alloc_chan_resources error code
> >       dmaengine review: return meaningful error code from initcall
> >       dmaengine review: dmatest stop on dmatest_add_channel failure
> > 
> > May I add your Reviewed-by for these and the original 13.
> 
> Sure.  Ignorantly-stared-at-by: would be more appropriate though.

This is the problem with the Reviewed-by tag: it implies some kind of
reputation-injuring side effect when a bug is found, rather than an
author-driven "thank-you for taking the time to make some helpful
comments".

--
Dan


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-11-14 21:34 ` [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels Dan Williams
@ 2008-12-02 15:52   ` Guennadi Liakhovetski
  2008-12-02 17:16     ` Dan Williams
  2008-12-02 17:26     ` Nicolas Ferre
  2008-12-12 14:29   ` Sosnowski, Maciej
  2009-02-06 16:58   ` Atsushi Nemoto
  2 siblings, 2 replies; 55+ messages in thread
From: Guennadi Liakhovetski @ 2008-12-02 15:52 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen, nicolas.ferre

Hi Dan,

I think there is a problem with your dma_request_channel() /
private_candidate() implementation: the current version only tries one
channel from a dma device's list, the first one that matches the
requested capabilities. If that channel is not accepted by the client,
you do not try the other channels from this device and just go on to the
next one...

Another problem I encountered with my framebuffer is the initialisation
order. You initialise dmaengine via subsys_initcall(), but when all three
are linked into the kernel the only way I found to guarantee the order

dmaengine
dma-device driver
framebuffer

was to switch dmaengine to arch_initcall, put the dma driver under
subsys_initcall, and the framebuffer under a normal module_init /
device_initcall (see the sketch below).
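
For illustration only, that ordering maps onto initcall levels roughly
like this (my_dmac_init and my_fb_init are hypothetical names for the
existing driver init routines):

/* dmaengine core first: arch_initcall runs before subsys_initcall */
arch_initcall(dma_bus_init);

/* the platform dma controller driver next */
subsys_initcall(my_dmac_init);

/* the framebuffer client last; module_init() becomes device_initcall()
 * when the driver is built in
 */
module_init(my_fb_init);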

Below is a naive patch; adding two more arguments to private_candidate()
is not very elegant, so this is not an official submission. Feel free to
propose a better fix. If you like, I can make two patches out of this,
separating out the initialisation-priority change.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer

dmaengine: scan all channels if one is rejected, initialise earlier.

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 94dcc64..677b0c8 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -404,10 +404,12 @@ static void dma_channel_rebalance(void)
 		}
 }
 
-static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev)
+static struct dma_chan *private_candidate(dma_cap_mask_t *mask,
+		struct dma_device *dev, dma_filter_fn fn, void *fn_param)
 {
 	struct dma_chan *chan;
 	struct dma_chan *ret = NULL;
+	bool ack;
 
 	/* devices with multiple channels need special handling as we need to
 	 * ensure that all channels are either private or public.
@@ -430,8 +432,18 @@ static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_devic
 				 __func__, dev_name(&chan->dev));
 			continue;
 		}
-		ret = chan;
-		break;
+
+		if (fn)
+			ack = fn(chan, fn_param);
+		else
+			ack = true;
+
+		if (ack) {
+			ret = chan;
+			break;
+		} else
+			pr_debug("%s: %s filter said false\n",
+				 __func__, dev_name(&chan->dev));
 	}
 
 	return ret;
@@ -447,42 +459,33 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
 {
 	struct dma_device *device, *_d;
 	struct dma_chan *chan = NULL;
-	bool ack;
 	int err;
 
 	/* Find a channel */
 	mutex_lock(&dma_list_mutex);
 	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
-		chan = private_candidate(mask, device);
+		chan = private_candidate(mask, device, fn, fn_param);
 		if (!chan)
 			continue;
 
-		if (fn)
-			ack = fn(chan, fn_param);
+		/* Found a suitable channel, try to grab, prep, and
+		 * return it.  We first set DMA_PRIVATE to disable
+		 * balance_ref_count as this channel will not be
+		 * published in the general-purpose allocator
+		 */
+		dma_cap_set(DMA_PRIVATE, device->cap_mask);
+		err = dma_chan_get(chan);
+
+		if (err == -ENODEV) {
+			pr_debug("%s: %s module removed\n", __func__,
+				 dev_name(&chan->dev));
+			list_del_rcu(&device->global_node);
+		} else if (err)
+			pr_err("dmaengine: failed to get %s: (%d)",
+			       dev_name(&chan->dev), err);
 		else
-			ack = true;
-
-		if (ack) {
-			/* Found a suitable channel, try to grab, prep, and
-			 * return it.  We first set DMA_PRIVATE to disable
-			 * balance_ref_count as this channel will not be
-			 * published in the general-purpose allocator
-			 */
-			dma_cap_set(DMA_PRIVATE, device->cap_mask);
-			err = dma_chan_get(chan);
+			break;
 
-			if (err == -ENODEV) {
-				pr_debug("%s: %s module removed\n", __func__,
-					 dev_name(&chan->dev));
-				list_del_rcu(&device->global_node);
-			} else if (err)
-				pr_err("dmaengine: failed to get %s: (%d)",
-				       dev_name(&chan->dev), err);
-			else
-				break;
-		} else
-			pr_debug("%s: %s filter said false\n",
-				 __func__, dev_name(&chan->dev));
 		chan = NULL;
 	}
 	mutex_unlock(&dma_list_mutex);
@@ -912,6 +915,4 @@ static int __init dma_bus_init(void)
 	mutex_init(&dma_list_mutex);
 	return class_register(&dma_devclass);
 }
-subsys_initcall(dma_bus_init);
-
-
+arch_initcall(dma_bus_init);

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-02 15:52   ` Guennadi Liakhovetski
@ 2008-12-02 17:16     ` Dan Williams
  2008-12-02 17:27       ` Guennadi Liakhovetski
  2009-01-30 17:03       ` Atsushi Nemoto
  2008-12-02 17:26     ` Nicolas Ferre
  1 sibling, 2 replies; 55+ messages in thread
From: Dan Williams @ 2008-12-02 17:16 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen, nicolas.ferre

On Tue, Dec 2, 2008 at 8:52 AM, Guennadi Liakhovetski
<g.liakhovetski@gmx.de> wrote:
> Hi Dan,
>
> I think, there is a problem with your dma_request_channel() /
> private_candidate() implementation: your current version only tries one
> channel from a dma device list, which matched capabilities. If this
> channel is not accepted by the client, you do not try other channels from
> this device and just go to the next one...
>

Which dma driver are you using?  The dmaengine code assumes that all
channels on a device are equal.  It sounds like there are differences
between peer-channels on the device in this case.  If the driver
registers a device per channel, that should give the flexibility you 
want.

> Another problem I encountered with my framebuffer is the initialisation
> order. You initialise dmaengine per subsys_initcall(), whereas the only
> way to guarantee the order:
>
> dmaengine
> dma-device driver
> framebuffer
>

hmm... can the framebuffer be moved to late_initcall?

Regards,
Dan

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-02 15:52   ` Guennadi Liakhovetski
  2008-12-02 17:16     ` Dan Williams
@ 2008-12-02 17:26     ` Nicolas Ferre
  1 sibling, 0 replies; 55+ messages in thread
From: Nicolas Ferre @ 2008-12-02 17:26 UTC (permalink / raw)
  To: Guennadi Liakhovetski, Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen

Guennadi Liakhovetski :
> Another problem I encountered with my framebuffer is the initialisation 
> order. You initialise dmaengine per subsys_initcall(), whereas the only 
> way to guarantee the order:
> 
> dmaengine
> dma-device driver
> framebuffer
> 
> when they are all linked into the kernel was to switch dmaengine to 
> arch_initcall, put the dma driver under subsys_initcall, and the 
> framebuffer under a normal module_init / device_initcall.

I did not dig much into this, but I also feel that dmaengine is initialized 
a bit late: it seems to be initialized after atmel-mci is started.
It works, but it seems to me that dmaengine has to be initialized very 
early...

...my 2 cents.

Regards,
-- 
Nicolas Ferre


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-02 17:16     ` Dan Williams
@ 2008-12-02 17:27       ` Guennadi Liakhovetski
  2008-12-02 19:10         ` Dan Williams
  2009-01-30 17:03       ` Atsushi Nemoto
  1 sibling, 1 reply; 55+ messages in thread
From: Guennadi Liakhovetski @ 2008-12-02 17:27 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen, nicolas.ferre

On Tue, 2 Dec 2008, Dan Williams wrote:

> On Tue, Dec 2, 2008 at 8:52 AM, Guennadi Liakhovetski
> <g.liakhovetski@gmx.de> wrote:
> > Hi Dan,
> >
> > I think, there is a problem with your dma_request_channel() /
> > private_candidate() implementation: your current version only tries one
> > channel from a dma device list, which matched capabilities. If this
> > channel is not accepted by the client, you do not try other channels from
> > this device and just go to the next one...
> >
> 
> Which dma driver are you using?

This is the idmac dmaengine driver I submitted a few weeks ago, which I am 
porting to your modified dmaengine framework. Initial version:

http://marc.info/?l=linux-arm-kernel&m=122607472721145&w=2

BTW - it does look nicer and simpler now, so, in general, I like the 
change.

> The dmaengine code assumes that all
> channels on a device are equal.  It sounds like there are differences
> between peer-channels on the device in this case.  If the driver
> registers a device per channel that should give the flexibility you
> want.

Ooh... Do you really think registering 32 dma-devices is a better solution 
than allowing non-equal dma-channels? As I explained in the commit 
comment, this is a specialised Image Processing DMA Controller, and each 
of its channels has a fixed role. So, each client has to get a specific 
channel.

> > Another problem I encountered with my framebuffer is the initialisation
> > order. You initialise dmaengine per subsys_initcall(), whereas the only
> > way to guarantee the order:
> >
> > dmaengine
> > dma-device driver
> > framebuffer
> 
> hmm... can the framebuffer be moved to late_initcall?

I assumed that one wants to register the framebuffer as early as 
possible...

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-02 17:27       ` Guennadi Liakhovetski
@ 2008-12-02 19:10         ` Dan Williams
  2008-12-02 21:28           ` Guennadi Liakhovetski
  0 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-12-02 19:10 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: linux-kernel, netdev, Sosnowski, Maciej, hskinnemoen, nicolas.ferre

On Tue, 2008-12-02 at 10:27 -0700, Guennadi Liakhovetski wrote:
> Ooh... Do you really think registering 32 dma-devices is a better solution 
> than allowing non-equal dma-channels? As I explained in the commit 
> comment, this is a specialised Image Processing DMA Controller, and each 
> its channel has a fixed role. So, each client has to get a specific 
> channel.

I see your point.  Rather than doing driver gymnastics we can simply
have dmaengine do the following (basically your patch reformatted a
bit):

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index e2ccfd0..66d0ae7 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -445,10 +445,10 @@ static void dma_channel_rebalance(void)
 		}
 }
 
-static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev)
+static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev,
+					  dma_filter_fn fn, void *fn_param)
 {
 	struct dma_chan *chan;
-	struct dma_chan *ret = NULL;
 
 	/* devices with multiple channels need special handling as we need to
 	 * ensure that all channels are either private or public.
@@ -471,11 +471,15 @@ static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_devic
 				 __func__, dma_chan_name(chan));
 			continue;
 		}
-		ret = chan;
-		break;
+		if (fn && !fn(chan, fn_param)) {
+			pr_debug("%s: %s filter said false\n",
+				 __func__, dma_chan_name(chan));
+			continue;
+		}
+		return chan;
 	}
 
-	return ret;
+	return NULL;
 }
 
 /**
@@ -488,22 +492,13 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
 {
 	struct dma_device *device, *_d;
 	struct dma_chan *chan = NULL;
-	bool ack;
 	int err;
 
 	/* Find a channel */
 	mutex_lock(&dma_list_mutex);
 	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
-		chan = private_candidate(mask, device);
-		if (!chan)
-			continue;
-
-		if (fn)
-			ack = fn(chan, fn_param);
-		else
-			ack = true;
-
-		if (ack) {
+		chan = private_candidate(mask, device, fn, fn_param);
+		if (chan) {
 			/* Found a suitable channel, try to grab, prep, and
 			 * return it.  We first set DMA_PRIVATE to disable
 			 * balance_ref_count as this channel will not be
@@ -521,10 +516,8 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
 				       dma_chan_name(chan), err);
 			else
 				break;
-		} else
-			pr_debug("%s: %s filter said false\n",
-				 __func__, dma_chan_name(chan));
-		chan = NULL;
+			chan = NULL;
+		}
 	}
 	mutex_unlock(&dma_list_mutex);
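
For illustration only (not part of the patch): a client like the idmac
framebuffer case could then pin its one specific channel with a filter
roughly like the sketch below.  The names idmac_filter/grab_fb_channel and
the chan_id criterion are hypothetical, and error handling is omitted.

#include <linux/dmaengine.h>

/* accept only the one channel wired to this client */
static bool idmac_filter(struct dma_chan *chan, void *param)
{
	int wanted = *(int *)param;

	return chan->chan_id == wanted;
}

static struct dma_chan *grab_fb_channel(int chan_id)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	/* walks every channel of every registered device, asking the filter */
	return dma_request_channel(mask, idmac_filter, &chan_id);
}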
 


> > > Another problem I encountered with my framebuffer is the initialisation
> > > order. You initialise dmaengine per subsys_initcall(), whereas the only
> > > way to guarantee the order:
> > >
> > > dmaengine
> > > dma-device driver
> > > framebuffer
> > 
> > hmm... can the framebuffer be moved to late_initcall?
> 
> I assumed, that one wants to register the framebuffer as early as 
> possible...

Yes, but I'm hesitant to escalate the initcall level.  It sounds like
the channel(s?) for the framebuffer are an integral part of the
framebuffer device so maybe they should not be registered separately?
But that runs into issues if you want the channels to return to the
general pool when the framebuffer driver is unloaded.

--
Dan


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-02 19:10         ` Dan Williams
@ 2008-12-02 21:28           ` Guennadi Liakhovetski
  0 siblings, 0 replies; 55+ messages in thread
From: Guennadi Liakhovetski @ 2008-12-02 21:28 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, Sosnowski, Maciej, hskinnemoen, nicolas.ferre

On Tue, 2 Dec 2008, Dan Williams wrote:

> On Tue, 2008-12-02 at 10:27 -0700, Guennadi Liakhovetski wrote:
> > Ooh... Do you really think registering 32 dma-devices is a better solution 
> > than allowing non-equal dma-channels? As I explained in the commit 
> > comment, this is a specialised Image Processing DMA Controller, and each 
> > its channel has a fixed role. So, each client has to get a specific 
> > channel.
> 
> I see your point.  Rather than doing driver gymnastics we can simply
> have dmaengine do the following (basically your patch reformatted a
> bit):

Good, so, would you commit it?

> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index e2ccfd0..66d0ae7 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -445,10 +445,10 @@ static void dma_channel_rebalance(void)
>  		}
>  }
>  
> -static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev)
> +static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev,
> +					  dma_filter_fn fn, void *fn_param)
>  {
>  	struct dma_chan *chan;
> -	struct dma_chan *ret = NULL;
>  
>  	/* devices with multiple channels need special handling as we need to
>  	 * ensure that all channels are either private or public.
> @@ -471,11 +471,15 @@ static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_devic
>  				 __func__, dma_chan_name(chan));
>  			continue;
>  		}
> -		ret = chan;
> -		break;
> +		if (fn && !fn(chan, fn_param)) {
> +			pr_debug("%s: %s filter said false\n",
> +				 __func__, dma_chan_name(chan));
> +			continue;
> +		}
> +		return chan;
>  	}
>  
> -	return ret;
> +	return NULL;
>  }
>  
>  /**
> @@ -488,22 +492,13 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
>  {
>  	struct dma_device *device, *_d;
>  	struct dma_chan *chan = NULL;
> -	bool ack;
>  	int err;
>  
>  	/* Find a channel */
>  	mutex_lock(&dma_list_mutex);
>  	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
> -		chan = private_candidate(mask, device);
> -		if (!chan)
> -			continue;
> -
> -		if (fn)
> -			ack = fn(chan, fn_param);
> -		else
> -			ack = true;
> -
> -		if (ack) {
> +		chan = private_candidate(mask, device, fn, fn_param);
> +		if (chan) {
>  			/* Found a suitable channel, try to grab, prep, and
>  			 * return it.  We first set DMA_PRIVATE to disable
>  			 * balance_ref_count as this channel will not be
> @@ -521,10 +516,8 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v
>  				       dma_chan_name(chan), err);
>  			else
>  				break;
> -		} else
> -			pr_debug("%s: %s filter said false\n",
> -				 __func__, dma_chan_name(chan));
> -		chan = NULL;
> +			chan = NULL;
> +		}
>  	}
>  	mutex_unlock(&dma_list_mutex);
>  
> 
> 
> > > > Another problem I encountered with my framebuffer is the initialisation
> > > > order. You initialise dmaengine per subsys_initcall(), whereas the only
> > > > way to guarantee the order:
> > > >
> > > > dmaengine
> > > > dma-device driver
> > > > framebuffer
> > > 
> > > hmm... can the framebuffer be moved to late_initcall?
> > 
> > I assumed, that one wants to register the framebuffer as early as 
> > possible...
> 
> Yes, but I'm hesitant to escalate the initcall level.  It sounds like
> the channel(s?) for the framebuffer are an integral part of the
> framebuffer device so maybe they should not be registered separately?
> But that runs into issues if you want the channels to return to the
> general pool when the framebuffer driver is unloaded.

You mean whether it makes sense at all to manage these framebuffer 
channels outside of the framebuffer driver in a dma driver? I think yes. 
Simply because these 2 channels used by the fb share code and _registers_ 
with the remaining 30 channels, which are all also quite specialised.

As for escalating the initcall level - the dmaengine init function doesn't 
do much - it just registers a device class and initialises a mutex - 
so it shouldn't be a problem to do it earlier?

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-11-14 21:34 ` [PATCH 03/13] dmaengine: up-level reference counting to the module level Dan Williams
  2008-11-15  6:08   ` Andrew Morton
@ 2008-12-04 16:56   ` Guennadi Liakhovetski
  2008-12-04 18:51     ` Dan Williams
  2008-12-12 14:28   ` Sosnowski, Maciej
  2 siblings, 1 reply; 55+ messages in thread
From: Guennadi Liakhovetski @ 2008-12-04 16:56 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen, nicolas.ferre

On Fri, 14 Nov 2008, Dan Williams wrote:

> -/**
>   * dma_async_client_register - register a &dma_client
>   * @client: ptr to a client structure with valid 'event_callback' and 'cap_mask'
>   */
>  void dma_async_client_register(struct dma_client *client)
>  {
> +	struct dma_device *device, *_d;
> +	struct dma_chan *chan;
> +	int err;
> +
>  	/* validate client data */
>  	BUG_ON(dma_has_cap(DMA_SLAVE, client->cap_mask) &&
>  		!client->slave);
>  
>  	mutex_lock(&dma_list_mutex);
> +	dmaengine_ref_count++;
> +
> +	/* try to grab channels */
> +	list_for_each_entry_safe(device, _d, &dma_device_list, global_node)
> +		list_for_each_entry(chan, &device->channels, device_node) {
> +			err = dma_chan_get(chan);

Dan, could you please explain this: dma_chan_get() takes a reference on 
the channel _and_ calls .device_alloc_chan_resources() on first invocation 
for a specific channel. I now see three locations in dmaengine.c, where 
dma_chan_get() is called: in dma_request_channel() - logical, but also in 
dmaengine_get() and dma_async_device_register(), and these latter two I 
don't understand. I do not understand why we have to grab references and 
allocate resources for all (public) channels on all controllers in the 
system if someone just called dmaengine_get()?

> @@ -420,6 +443,19 @@ int dma_async_device_register(struct dma_device *device)
>  	}
>  
>  	mutex_lock(&dma_list_mutex);
> +	list_for_each_entry(chan, &device->channels, device_node) {
> +		/* if clients are already waiting for channels we need to
> +		 * take references on their behalf
> +		 */
> +		if (dmaengine_ref_count && dma_chan_get(chan) == -ENODEV) {
> +			/* note we can only get here for the first
> +			 * channel as the remaining channels are
> +			 * guaranteed to get a reference
> +			 */

This is the second location - where and how are clients waiting for 
channels? In the old implementation clients had notification callbacks, 
which were called as new channels became available. Now clients are gone, 
so, what is meant here?

Confused
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-12-04 16:56   ` Guennadi Liakhovetski
@ 2008-12-04 18:51     ` Dan Williams
  2008-12-04 19:28       ` Guennadi Liakhovetski
  0 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-12-04 18:51 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen, nicolas.ferre

On Thu, Dec 4, 2008 at 9:56 AM, Guennadi Liakhovetski
<g.liakhovetski@gmx.de> wrote:
>> +
>> +     /* try to grab channels */
>> +     list_for_each_entry_safe(device, _d, &dma_device_list, global_node)
>> +             list_for_each_entry(chan, &device->channels, device_node) {
>> +                     err = dma_chan_get(chan);
>
> Dan, could you please explain this: dma_chan_get() takes a reference on
> the channel _and_ calls .device_alloc_chan_resources() on first invocation
> for a specific channel. I now see three locations in dmaengine.c, where
> dma_chan_get() is called: in dma_request_channel() - logical, but also in
> dmaengine_get() and dma_async_device_register(), and these latter two I
> don't understand. I do not understand why we have to grab references and
> allocate resources for all (public) channels on all controllers in the
> system if someone just called dmaengine_get()?

Consider the case where a subsystem that consumes engines like
async_tx or net_dma loads before engines are available in the system.
They will have taken a reference against dmaengine, but calls to
dma_find_channel will return NULL.  Once a channel driver is loaded
dma_find_channel() can start returning results, and it is at this
point that resources must be allocated and the backing module pinned.
So dmaengine_get() means "I am interested in offloading stuff, if you
see an offload resource grab it and prep it so I can discover it with
dma_find_channel()".
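
(A rough sketch of that lifecycle, purely for illustration - only
dmaengine_get()/dmaengine_put()/dma_find_channel() are names from the
patch set, the rest is made up:)

#include <linux/module.h>
#include <linux/dmaengine.h>

static int __init my_offload_init(void)
{
	/* "grab and prep any public channel you see"; note that
	 * dma_find_channel() may still return NULL until a dma
	 * driver has actually been loaded.
	 */
	dmaengine_get();
	return 0;
}

static void __exit my_offload_exit(void)
{
	/* drop the module-level reference so dma drivers can unload */
	dmaengine_put();
}

module_init(my_offload_init);
module_exit(my_offload_exit);
MODULE_LICENSE("GPL");
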

>
>> @@ -420,6 +443,19 @@ int dma_async_device_register(struct dma_device *device)
>>       }
>>
>>       mutex_lock(&dma_list_mutex);
>> +     list_for_each_entry(chan, &device->channels, device_node) {
>> +             /* if clients are already waiting for channels we need to
>> +              * take references on their behalf
>> +              */
>> +             if (dmaengine_ref_count && dma_chan_get(chan) == -ENODEV) {
>> +                     /* note we can only get here for the first
>> +                      * channel as the remaining channels are
>> +                      * guaranteed to get a reference
>> +                      */
>
> This is the second location - where and how are clients waiting for
> channels? In the old implementation clients had notification callbacks,
> which were called as new channels became available. Now clients are gone,
> so, what is meant here?
>

The assumption is that mem-to-mem offload clients poll dma_find_channel().

Regards,
Dan

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-12-04 18:51     ` Dan Williams
@ 2008-12-04 19:28       ` Guennadi Liakhovetski
  2008-12-08 22:39         ` Dan Williams
  0 siblings, 1 reply; 55+ messages in thread
From: Guennadi Liakhovetski @ 2008-12-04 19:28 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen, nicolas.ferre

On Thu, 4 Dec 2008, Dan Williams wrote:

> On Thu, Dec 4, 2008 at 9:56 AM, Guennadi Liakhovetski
> <g.liakhovetski@gmx.de> wrote:
> >> +
> >> +     /* try to grab channels */
> >> +     list_for_each_entry_safe(device, _d, &dma_device_list, global_node)
> >> +             list_for_each_entry(chan, &device->channels, device_node) {
> >> +                     err = dma_chan_get(chan);
> >
> > Dan, could you please explain this: dma_chan_get() takes a reference on
> > the channel _and_ calls .device_alloc_chan_resources() on first invocation
> > for a specific channel. I now see three locations in dmaengine.c, where
> > dma_chan_get() is called: in dma_request_channel() - logical, but also in
> > dmaengine_get() and dma_async_device_register(), and these latter two I
> > don't understand. I do not understand why we have to grab references and
> > allocate resources for all (public) channels on all controllers in the
> > system if someone just called dmaengine_get()?
> 
> Consider the case where a subsystem that consumes engines like
> async_tx or net_dma loads before engines are available in the system.
> They will have taken a reference against dmaengine, but calls to
> dma_find_channel will return NULL.  Once a channel driver is loaded
> dma_find_channel() can start returning results, and it is at this
> point that resources must be allocated and the backing module pinned.
> So dmaengine_get() means "I am interested in offloading stuff, if you
> see an offload resource grab it and prep it so I can discover it with
> dma_find_channel()".

Ok, but why not postpone calling .device_alloc_chan_resources() until a 
channel is _found_? Calling it for all channels just because one client 
has once called dmaengine_get() and will _probably_ poll to see whether any 
channels become available - doesn't that look like overkill?

> >> @@ -420,6 +443,19 @@ int dma_async_device_register(struct dma_device *device)
> >>       }
> >>
> >>       mutex_lock(&dma_list_mutex);
> >> +     list_for_each_entry(chan, &device->channels, device_node) {
> >> +             /* if clients are already waiting for channels we need to
> >> +              * take references on their behalf
> >> +              */
> >> +             if (dmaengine_ref_count && dma_chan_get(chan) == -ENODEV) {
> >> +                     /* note we can only get here for the first
> >> +                      * channel as the remaining channels are
> >> +                      * guaranteed to get a reference
> >> +                      */
> >
> > This is the second location - where and how are clients waiting for
> > channels? In the old implementation clients had notification callbacks,
> > which were called as new channels became available. Now clients are gone,
> > so, what is meant here?
> >
> 
> The assumption is that mem-to-mem offload clients poll dma_find_channel().

Hm, interesting, so that's why you want dma_find_channel() to be as 
lightweight as possible?

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-12-04 19:28       ` Guennadi Liakhovetski
@ 2008-12-08 22:39         ` Dan Williams
  0 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-12-08 22:39 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen, nicolas.ferre

On Thu, Dec 4, 2008 at 12:28 PM, Guennadi Liakhovetski
<g.liakhovetski@gmx.de> wrote:
> Ok, but why not postpone calling .device_alloc_chan_resources() until a
> channel is _found_?

dma_find_channel() is called in fast paths (atomic contexts, I/O
writeout), so we need to allocate ahead of time.

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [PATCH 00/13] dmaengine redux
  2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
                   ` (12 preceding siblings ...)
  2008-11-14 21:35 ` [PATCH 13/13] dmaengine: kill enum dma_state_client Dan Williams
@ 2008-12-12 14:27 ` Sosnowski, Maciej
  13 siblings, 0 replies; 55+ messages in thread
From: Sosnowski, Maciej @ 2008-12-12 14:27 UTC (permalink / raw)
  To: Williams, Dan J, linux-kernel, netdev
  Cc: hskinnemoen, g.liakhovetski, nicolas.ferre

Williams, Dan J wrote:
> The dmaengine subsystem collects and advertises dma channels for two
> classes of users in the kernel, memory-to-memory offload and
> traditional device-to-memory DMA.  The original design was driven by
> the memory-to-memory case and is starting to show its limitations now
> that more device-to-memory DMA users are being planned.  The primary
> difference between the two classes is that memory-to-memory offload
> is very amenable to channel sharing and is tolerant of dynamic
> channel changes.  Compare this to the device-to-memory case where a
> channel must be dedicated to a device and may have platform-specific
> reasons why it cannot talk to a different device. 
> 
> This rework allows channels to be targeted to a public (mem-to-mem)
> pool or be reserved for an exclusive private (dev-to-mem) allocation.
> See [PATCH 1/13] for documentation of the changes.  A nice side
> effect of the rework is: 
> 
>  24 files changed, 679 insertions(+), 1108 deletions(-)
> 
> All review welcomed, especially around the dma_slave changes, or
> performance impacts of dma_find_channel.
> 
> These patches are currently on async_tx.git/upstream, and barring any
> brown-paper-bag issues will move to linux-next via async_tx.git/next.
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx.git
> upstream 
> 
> ---
> Dan Williams (13):
>       dmaengine: kill enum dma_state_client
>       dmaengine: remove 'bigref' infrastructure
>       dmaengine: kill struct dma_client and supporting infrastructure
>       dmaengine: replace dma_async_client_register with dmaengine_get
>       atmel-mci: convert to dma_request_channel and down-level
>       dma_slave dmatest: convert to dma_request_channel
>       dmaengine: introduce dma_request_channel and private channels
>       net_dma: convert to dma_find_channel
>       dmaengine: provide a common 'issue_pending_all' implementation
>       dmaengine: centralize channel allocation, introduce
>       dma_find_channel dmaengine: up-level reference counting to the
>       module level dmaengine: remove dependency on async_tx
>       async_tx, dmaengine: document channel allocation and api rework
> 
>  Documentation/crypto/async-tx-api.txt |  135 +++----
>  Documentation/dmaengine.txt           |    1
>  arch/avr32/include/asm/atmel-mci.h    |    6
>  arch/avr32/mach-at32ap/at32ap700x.c   |   15 -
>  crypto/async_tx/async_tx.c            |  350 ------------------
>  drivers/dma/Kconfig                   |    2
>  drivers/dma/dmaengine.c               |  637
>  +++++++++++++++++++++++---------- drivers/dma/dmatest.c             
>  |  111 ++---- drivers/dma/dw_dmac.c                 |   28 -
>  drivers/dma/fsldma.c                  |    3
>  drivers/dma/ioat_dma.c                |    5
>  drivers/dma/iop-adma.c                |   11 -
>  drivers/dma/mv_xor.c                  |   11 -
>  drivers/mmc/host/atmel-mci.c          |  103 +----
>  include/linux/async_tx.h              |   17 -
>  include/linux/dmaengine.h             |  148 ++------
>  include/linux/dw_dmac.h               |   31 +-
>  include/linux/netdevice.h             |    3
>  include/net/netdma.h                  |   11 -
>  net/core/dev.c                        |  148 --------
>  net/ipv4/tcp.c                        |    5
>  net/ipv4/tcp_input.c                  |    2
>  net/ipv4/tcp_ipv4.c                   |    2
>  net/ipv6/tcp_ipv6.c                   |    2
>  24 files changed, 679 insertions(+), 1108 deletions(-)
>  create mode 100644 Documentation/dmaengine.txt

Dan,

First of all, sorry for the delay in my feedback on this.
I have been absent for a couple of weeks, and in the last few days
I was struggling with some hot ioatdma issues.

I have walked through these patches and generally
I like the whole redesign/reduction idea.
There are a couple of concerns/questions (plus minor comments) from my side
on three of these patches: 3, 7 and 11.
I will send them in a minute.
The rest of them look good to me at this point.

Next week I am planning to run some tests on my I/OAT setup,
to see how this new dmaengine would behave in this environment.
I will let you know.

Regards,
Maciej

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-11-14 21:34 ` [PATCH 03/13] dmaengine: up-level reference counting to the module level Dan Williams
  2008-11-15  6:08   ` Andrew Morton
  2008-12-04 16:56   ` Guennadi Liakhovetski
@ 2008-12-12 14:28   ` Sosnowski, Maciej
  2008-12-15 22:12     ` Dan Williams
  2 siblings, 1 reply; 55+ messages in thread
From: Sosnowski, Maciej @ 2008-12-12 14:28 UTC (permalink / raw)
  To: Williams, Dan J, linux-kernel, netdev
  Cc: hskinnemoen, g.liakhovetski, nicolas.ferre

Williams, Dan J wrote:
> Simply, if a client wants any dmaengine channel then prevent all
> dmaengine 
> modules from being removed.  Once the clients are done re-enable
> module 
> removal.
> 
> Why?, beyond reducing complication:
> 1/ Tracking reference counts per-transaction in an efficient manner,
>    as is currently done, requires a complicated scheme to avoid
>    cache-line bouncing effects.
> 2/ Per-transaction ref-counting gives the false impression that a
>    dma-driver can be gracefully removed ahead of its user (net, md, or
>    dma-slave)
> 3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but
>    if such an engine were built one day we still would not need to
>    notify clients of remove events.  The driver can simply return
>    NULL to a ->prep() request, something that is much easier for a
> client to handle. 
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---

Dan,

The general concern I see about this change is that it makes it impossible 
to rmmod ioatdma when NET_DMA is enabled.
This is a specific case of the situation where a client is permanently registered in dmaengine,
making it impossible to rmmod any device with "public" channels.
Ioatdma is just an example here.
However, in the ioatdma case it would now be problematic to switch between CPU and DMA copy modes.
It seems that the only way to disable DMA after loading ioatdma would be raising tcp_dma_copybreak
to a high enough value to direct all buffers to the CPU copy path.
That is more of a hack than a "normal" way of using it (like the "rmmod ioatdma" used so far).

Another issue I find problematic in this solution is that _any_ client 
declaring its presence in dmaengine causes a reference to be held on _all_ channels (blocking them),
no matter whether they are used or not.
I agree with Guennadi's doubts here.
Why not at least hold a reference only for channels whose capabilities match the client's?

> @@ -105,19 +106,8 @@ static ssize_t show_bytes_transferred(struct device *dev, struct device_attribut
>  static ssize_t show_in_use(struct device *dev, struct device_attribute *attr, char *buf)
>  {
>  	struct dma_chan *chan = to_dma_chan(dev);
> -	int in_use = 0;
> -
> -	if (unlikely(chan->slow_ref) &&
> -		atomic_read(&chan->refcount.refcount) > 1)
> -		in_use = 1;
> -	else {
> -		if (local_read(&(per_cpu_ptr(chan->local,
> -			get_cpu())->refcount)) > 0)
> -			in_use = 1;
> -		put_cpu();
> -	}
> 
> -	return sprintf(buf, "%d\n", in_use);
> +	return sprintf(buf, "%d\n", chan->client_count);
>  }

In this case show_in_use will not show whether the channel is really in use, 
but rather how many clients are present, including those with different capabilities.
Thus this number does not even show the number of "potential" users of the channel...
Again, maybe it would be better to limit chan->client_count 
to at least the number of potential users ( = matching capabilities)?

> 
>  	/* Find a channel */
> @@ -178,23 +228,16 @@ static void dma_client_chan_alloc(struct
>  		dma_client *client)
>		list_for_each_entry(chan, &device->channels, device_node) {
>			if (!dma_chan_satisfies_mask(chan, client->cap_mask))
>				continue;
> +			if (!chan->client_count)
> +				continue;

As cap_mask is per device, not per channel, the capabilities check 
does not need to be repeated for every channel.
It would be more efficient to do it once, before 
list_for_each_entry(chan, &device->channels, device_node).

> @@ -420,6 +443,19 @@ int dma_async_device_register(struct dma_device *device)
>  	}
> 
>  	mutex_lock(&dma_list_mutex);
> +	list_for_each_entry(chan, &device->channels, device_node) {
> +		/* if clients are already waiting for channels we need to
> +		 * take references on their behalf
> +		 */
> +		if (dmaengine_ref_count && dma_chan_get(chan) == -ENODEV) {
> +			/* note we can only get here for the first
> +			 * channel as the remaining channels are
> +			 * guaranteed to get a reference
> +			 */
> +			rc = -ENODEV;
> +			goto err_out;
> +		}
> +	}

Going through this list_for_each_entry() loop makes sense only if there are any clients,
so maybe it would be more efficient to move "if (dmaengine_ref_count)" check before 
list_for_each_entry(chan, &device->channels, device_node)?

Regards,
Maciej

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-11-14 21:34 ` [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels Dan Williams
  2008-12-02 15:52   ` Guennadi Liakhovetski
@ 2008-12-12 14:29   ` Sosnowski, Maciej
  2008-12-15 23:55     ` Dan Williams
  2009-02-06 16:58   ` Atsushi Nemoto
  2 siblings, 1 reply; 55+ messages in thread
From: Sosnowski, Maciej @ 2008-12-12 14:29 UTC (permalink / raw)
  To: Williams, Dan J, linux-kernel, netdev
  Cc: hskinnemoen, g.liakhovetski, nicolas.ferre

Williams, Dan J wrote:
> This interface is primarily for device-to-memory clients which need to
> search for dma channels with platform-specific characteristics.  The
> prototype is:
> 
> struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
>                                      dma_filter_fn filter_fn,
>                                      void *filter_param);
> 
> When the optional 'filter_fn' parameter is set to NULL
> dma_request_channel simply returns the first channel that satisfies
> the 
> capability mask.  Otherwise, when the mask parameter is insufficient
> for 
> specifying the necessary channel, the filter_fn routine can be used to
> disposition the available channels in the system. The filter_fn
> routine 
> is called once for each free channel in the system.  Upon seeing a
> suitable channel filter_fn returns DMA_ACK which flags that channel to
> be the return value from dma_request_channel.  A channel allocated via
> this interface is exclusive to the caller, until dma_release_channel()
> is called.
> 
> To ensure that all channels are not consumed by the general-purpose
> allocator the DMA_PRIVATE capability is provided to exclude a
> dma_device 
> from general-purpose (memory-to-memory) consideration.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
> 
>  drivers/dma/dmaengine.c   |  161 ++++++++++++++++++++++++++++++++++++++++-----
>  include/linux/dmaengine.h |   16 ++++
>  2 files changed, 159 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index ec483cc..46fd5fa 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -190,7 +190,7 @@ static int dma_chan_get(struct dma_chan *chan)
>  			chan->client_count = 0;
>  			module_put(owner);
>  			err = -ENOMEM;
> -		} else
> +		} else if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
>  			balance_ref_count(chan);
>  	}
> 
> @@ -221,6 +221,8 @@ static void dma_client_chan_alloc(struct dma_client *client)
> 
>  	/* Find a channel */
>  	list_for_each_entry(device, &dma_device_list, global_node) {
> +		if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
> +			continue;
>  		/* Does the client require a specific DMA controller? */
>  		if (client->slave && client->slave->dma_dev
>  				&& client->slave->dma_dev != device->dev)
> @@ -313,6 +315,7 @@ static int __init dma_channel_table_init(void)
>  	 * channel_table
>  	 */
>  	clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits);
> +	clear_bit(DMA_PRIVATE, dma_cap_mask_all.bits);
>  	clear_bit(DMA_SLAVE, dma_cap_mask_all.bits);

The comment above should be updated according to this change:
-/* 'interrupt' and 'slave' are channel capabilities, but are not
+/* 'interrupt', 'private' and 'slave' are channel capabilities, but are not

> +static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev)
> +{
> +	struct dma_chan *chan;
> +	struct dma_chan *ret = NULL;
> +
> +	/* devices with multiple channels need special handling as we need to
> +	 * ensure that all channels are either private or public.
> +	 */
> +	if (dev->chancnt > 1 && !dma_has_cap(DMA_PRIVATE, dev->cap_mask))
> +		list_for_each_entry(chan, &dev->channels, device_node) {
> +			/* some channels are already publicly allocated */
> +			if (chan->client_count)
> +				return NULL;
> +		}

Isn't it a dangerous approach to let clients consume, for their exclusive usage, channels
meant for general purpose ("public" ones)?
Why not limit private_candidate to devices with the DMA_PRIVATE capability only?

> +
> +	list_for_each_entry(chan, &dev->channels, device_node) {
> +		if (!__dma_chan_satisfies_mask(chan, mask)) {
> +			pr_debug("%s: %s wrong capabilities\n",
> +				 __func__, dev_name(&chan->dev));
> +			continue;
> +		}

As capabilities are per device, this check could be performed just once
before list_for_each_entry(chan, &dev->channels, device_node).

Regards,
Maciej

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [PATCH 11/13] dmaengine: kill struct dma_client and supporting infrastructure
  2008-11-14 21:35 ` [PATCH 11/13] dmaengine: kill struct dma_client and supporting infrastructure Dan Williams
@ 2008-12-12 14:29   ` Sosnowski, Maciej
  2008-12-16  0:09     ` Dan Williams
  0 siblings, 1 reply; 55+ messages in thread
From: Sosnowski, Maciej @ 2008-12-12 14:29 UTC (permalink / raw)
  To: Williams, Dan J, linux-kernel, netdev
  Cc: hskinnemoen, g.liakhovetski, nicolas.ferre

Williams, Dan J wrote:
> All users have been converted to either the general-purpose allocator,
> dma_find_channel, or dma_request_channel.
> 
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
(...)
>  /**
> - * dma_chans_notify_available - broadcast available channels to the clients
> - */
> -static void dma_clients_notify_available(void)
> -{
> -	struct dma_client *client;
> -
> -	mutex_lock(&dma_list_mutex);
> -
> -	list_for_each_entry(client, &dma_client_list, global_node)
> -		dma_client_chan_alloc(client);
> -
> -	mutex_unlock(&dma_list_mutex);
> -}

I agree with Guennadi's concern about removing clients' notification
of new devices becoming available in the system.
I understand that this design is based on polling instead;
however, polling is always a less efficient approach.
Do you think that restoring notifications in this redesigned dmaengine
would be more painful than limiting clients to a polling solution?

Regards,
Maciej

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-12-12 14:28   ` Sosnowski, Maciej
@ 2008-12-15 22:12     ` Dan Williams
  2008-12-18 14:26       ` Sosnowski, Maciej
  0 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-12-15 22:12 UTC (permalink / raw)
  To: Sosnowski, Maciej
  Cc: linux-kernel, netdev, hskinnemoen, g.liakhovetski, nicolas.ferre

On Fri, Dec 12, 2008 at 7:28 AM, Sosnowski, Maciej
<maciej.sosnowski@intel.com> wrote:
> Dan,
>
> General concern I see about this change is that it makes impossible
> to rmmod ioatdma in case of NET_DMA enabled.
> This is a specific case of situation when client is permanently registered in dmaengine,
> making it impossible to rmmod any device with "public" channels.
> Ioatdma is just an example here.
> However in ioatdma case it would be problematic now to switch between CPU and DMA copy modes.
> It seems that the only way to disable DMA after loading ioatdma would be raising tcp_dma_copybreak
> to some high enough value to direct all buffers to CPU copy path.
> This way is yet rather more like hacking than "normal" usage way (like "rmmod ioatdma" used so far).
>

Hi Maciej,

I have been waiting for you to point this out because I believe it
shows where more work is needed in net_dma.  The problem with net_dma
is that it never says "I am done with dma channels".  The result was
the old/complicated per-operation reference counting in dmaengine that
led to the now-deleted comment for ioat_remove():

/*
 * It is unsafe to remove this module: if removed while a requested
 * dma is outstanding, esp. from tcp, it is possible to hang while
 * waiting for something that will never finish.  However, if you're
 * feeling lucky, this usually works just fine.
 */

This also required the complexity of each client needing to handle
asynchronous channel removal events.  In all other cases in the kernel
a module is pinned as long as it has users, so dmaengine was backwards
in this regard.

Putting the end-node principle to work here means that the
complexity/effort of determining when net_dma is done with channels
should reside in net_dma, not dmaengine.  Since we cannot hook into a
"rmmod net" event to drop the dmaengine reference we will need some
out-of-band signal to enable "rmmod ioatdma" like detecting
network-idle, or having an explicit "dma disable" sysctl.

> Another issue I find problematic in this solution is that _any_ client
> declaring its presence in dmaengine causes holding reference for _all_ channels (and blocking them),
> does not matter if they are used or not.
> I agree with Guennadi's doubts here.
> Why not at least hold a reference only for channels with capabilities matching client's ones?
>

All channels share the same parent module, so taking a reference on
one will always involve blocking all channels.  Private channels have
this granularity, and this is a primary difference between public and
private channels.  I eventually want dmaengine to hook into cpu
hotplug notifications to guarantee that resources are only allocated
for channels with a non-zero ->table_count, but this is not there
today.

>> @@ -105,19 +106,8 @@ static ssize_t show_bytes_transferred(struct device *dev, struct device_attribut
>>  static ssize_t show_in_use(struct device *dev, struct device_attribute *attr, char *buf)
>>  {
>>       struct dma_chan *chan = to_dma_chan(dev);
>> -     int in_use = 0;
>> -
>> -     if (unlikely(chan->slow_ref) &&
>> -             atomic_read(&chan->refcount.refcount) > 1)
>> -             in_use = 1;
>> -     else {
>> -             if (local_read(&(per_cpu_ptr(chan->local,
>> -                     get_cpu())->refcount)) > 0)
>> -                     in_use = 1;
>> -             put_cpu();
>> -     }
>>
>> -     return sprintf(buf, "%d\n", in_use);
>> +     return sprintf(buf, "%d\n", chan->client_count);
>>  }
>
> In this case show_in_use will not show if the channel is really in use
> but rather how many clients are present, including these with different capabilities.
> Thus this number does not even show number of "potential" users of the channel...
> Again, maybe it would be better to limit chan->client_count
> to number of at least potential users ( = matching capabilities)?

To be clear, show_in_use was broken before because it only looked at a
per-cpu variable[1]; the situation is better now because it indeed
shows "potential" users in the public case and actual users in the
private case.  I.e. if a public channel client has the potential to
get this channel via dma_find_channel() then that will be reflected in
->client_count.

>
>>
>>       /* Find a channel */
>> @@ -178,23 +228,16 @@ static void dma_client_chan_alloc(struct dma_client *client)
>>               list_for_each_entry(chan, &device->channels, device_node) {
>>                       if (!dma_chan_satisfies_mask(chan, client->cap_mask))
>>                               continue;
>> +                     if (!chan->client_count)
>> +                             continue;
>
> As cap_mask is per device not per channel, checking capabilites matching
> is not necessary to be repeated for every channel.
> It would be more efficient to do it once yet before
> list_for_each_entry(chan, &device->channels, device_node).

This is changed in later patches [2].  We repeat for each channel in
the dma_request_channel() case under the assumption that the client
may be smart enough to disambiguate the channels on other criteria.

>
>> @@ -420,6 +443,19 @@ int dma_async_device_register(struct dma_device *device)
>>       }
>>
>>       mutex_lock(&dma_list_mutex);
>> +     list_for_each_entry(chan, &device->channels, device_node) {
>> +             /* if clients are already waiting for channels we need to
>> +              * take references on their behalf
>> +              */
>> +             if (dmaengine_ref_count && dma_chan_get(chan) == -ENODEV) {
>> +                     /* note we can only get here for the first
>> +                      * channel as the remaining channels are
>> +                      * guaranteed to get a reference
>> +                      */
>> +                     rc = -ENODEV;
>> +                     goto err_out;
>> +             }
>> +     }
>
> Going through this list_for_each_entry() loop makes sense only if there are any clients,
> so maybe it would be more efficient to move "if (dmaengine_ref_count)" check before
> list_for_each_entry(chan, &device->channels, device_node)?

Good point... will fix.

Thanks,
Dan

[1] http://marc.info/?l=linux-kernel&m=121448921721509&w=2
[2] http://marc.info/?l=linux-kernel&m=122835195303216&w=2

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-12 14:29   ` Sosnowski, Maciej
@ 2008-12-15 23:55     ` Dan Williams
  2008-12-18 14:33       ` Sosnowski, Maciej
  0 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-12-15 23:55 UTC (permalink / raw)
  To: Sosnowski, Maciej
  Cc: linux-kernel, netdev, hskinnemoen, g.liakhovetski, nicolas.ferre

On Fri, Dec 12, 2008 at 7:29 AM, Sosnowski, Maciej
<maciej.sosnowski@intel.com> wrote:
>>       clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits);
>> +     clear_bit(DMA_PRIVATE, dma_cap_mask_all.bits);
>>       clear_bit(DMA_SLAVE, dma_cap_mask_all.bits);
>
> The comment above should be updated according to this change:
> -/* 'interrupt' and 'slave' are channel capabilities, but are not
> +/* 'interrupt', 'private' and 'slave' are channel capabilities, but are not
>

ok.

>> +static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev)
>> +{
>> +     struct dma_chan *chan;
>> +     struct dma_chan *ret = NULL;
>> +
>> +     /* devices with multiple channels need special handling as we need to
>> +      * ensure that all channels are either private or public.
>> +      */
>> +     if (dev->chancnt > 1 && !dma_has_cap(DMA_PRIVATE, dev->cap_mask))
>> +             list_for_each_entry(chan, &dev->channels, device_node) {
>> +                     /* some channels are already publicly allocated */
>> +                     if (chan->client_count)
>> +                             return NULL;
>> +             }
>
> Isn't it a dangerous approach to let clients consume for their exclusive usage channels
> meant for general-purpose ("pubilc" ones)?
> Why not to limit private_candidate to devices with DMA_PRIVATE capability only?
>

This allows unused channels to be claimed by dma_request_channel().
It is not dangerous as long as ->client_count is zero.

>> +
>> +     list_for_each_entry(chan, &dev->channels, device_node) {
>> +             if (!__dma_chan_satisfies_mask(chan, mask)) {
>> +                     pr_debug("%s: %s wrong capabilities\n",
>> +                              __func__, dev_name(&chan->dev));
>> +                     continue;
>> +             }
>
> As capabilities are per device, this check could be performed just once
> before list_for_each_entry(chan, &dev->channels, device_node).
>

Yes, changed.

Thanks,
Dan

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 11/13] dmaengine: kill struct dma_client and supporting infrastructure
  2008-12-12 14:29   ` Sosnowski, Maciej
@ 2008-12-16  0:09     ` Dan Williams
  2008-12-18 14:34       ` Sosnowski, Maciej
  0 siblings, 1 reply; 55+ messages in thread
From: Dan Williams @ 2008-12-16  0:09 UTC (permalink / raw)
  To: Sosnowski, Maciej
  Cc: linux-kernel, netdev, hskinnemoen, g.liakhovetski, nicolas.ferre

On Fri, Dec 12, 2008 at 7:29 AM, Sosnowski, Maciej
<maciej.sosnowski@intel.com> wrote:
> Williams, Dan J wrote:
>> All users have been converted to either the general-purpose allocator,
>> dma_find_channel, or dma_request_channel.
>>
>> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>> ---
> (...)
>>  /**
>> - * dma_chans_notify_available - broadcast available channels to the clients
>> - */
>> -static void dma_clients_notify_available(void)
>> -{
>> -     struct dma_client *client;
>> -
>> -     mutex_lock(&dma_list_mutex);
>> -
>> -     list_for_each_entry(client, &dma_client_list, global_node)
>> -             dma_client_chan_alloc(client);
>> -
>> -     mutex_unlock(&dma_list_mutex);
>> -}
>
> I agree with Guennadi's concern about removing clients' notification
> of new devices available in the system.
> I understand that this design is based on polling instead,
> however polling is always less efficient approach.
> Do you think that restoring notifications in this redesigned dmaengine
> would be more painful than limiting clients to polling solution?
>

You are missing that net_dma has always "polled".  Consider the case
of how net_dma currently operates prior to ioatdma.ko being loaded.
It periodically calls get_softnet_dma() to see if a channel is
available; if it does not find one, it falls back to a cpu copy.  All
that has changed is replacing this custom channel allocation routine
with the unified dma_find_channel() interface.  Channel notifications
are not required.  Either everything is built-in to guarantee that an
engine is available upon request[1], or the client is smart enough to
run without a channel for a while, like net_dma and raid offload.
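
(Sketch of that poll-and-fall-back pattern, for illustration only -
offload_or_copy is a made-up helper, the caller is assumed to have
called dmaengine_get(), and completion handling is omitted:)

#include <linux/dmaengine.h>
#include <linux/string.h>

static void offload_or_copy(void *dst, void *src, size_t len)
{
	struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

	if (!chan) {
		/* no engine present (yet): fall back to the cpu */
		memcpy(dst, src, len);
		return;
	}

	dma_async_memcpy_buf_to_buf(chan, dst, src, len);
	dma_async_memcpy_issue_pending(chan);
}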

Regards,
Dan

[1] http://marc.info/?l=linux-kernel&m=122835195303213&w=2

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [PATCH 03/13] dmaengine: up-level reference counting to the module level
  2008-12-15 22:12     ` Dan Williams
@ 2008-12-18 14:26       ` Sosnowski, Maciej
  0 siblings, 0 replies; 55+ messages in thread
From: Sosnowski, Maciej @ 2008-12-18 14:26 UTC (permalink / raw)
  To: Williams, Dan J
  Cc: linux-kernel, netdev, hskinnemoen, g.liakhovetski, nicolas.ferre

Dan Williams wrote:
> On Fri, Dec 12, 2008 at 7:28 AM, Sosnowski, Maciej
> <maciej.sosnowski@intel.com> wrote:
>> Dan,
>>
>> General concern I see about this change is that it makes impossible
>> to rmmod ioatdma in case of NET_DMA enabled.
>> This is a specific case of situation when client is permanently
>> registered in dmaengine,
>> making it impossible to rmmod any device with "public" channels.
>> Ioatdma is just an example here.
>> However in ioatdma case it would be problematic now to switch
>> between CPU and DMA copy modes.
>> It seems that the only way to disable DMA after loading ioatdma
>> would be raising tcp_dma_copybreak to some high enough value to
>> direct all buffers to CPU copy path.
>> This way is yet rather more like hacking than "normal" usage way
>> (like "rmmod ioatdma" used so far).
>>
>
> Hi Maciej,
>
> I have been waiting for you to point this out because I believe it
> shows where more work is needed in net_dma.  The problem with net_dma
> is that it never says "I am done with dma channels".  The result was
> the old/complicated per-operation reference counting in dmaengine that
> lead to the, now deleted, comment for ioat_remove():
>
> /*
>  * It is unsafe to remove this module: if removed while a requested
>  * dma is outstanding, esp. from tcp, it is possible to hang while
>  * waiting for something that will never finish.  However, if you're
>  * feeling lucky, this usually works just fine.
>  */
>
> This also required the complexity of each client needing to handle
> asynchronous channel removal events.  In all other cases in the kernel
> a module is pinned as long as it has users, so dmaengine was backwards
> in this regard.
>
> Putting the end-node principal to work here means that the
> complexity/effort of determining when net_dma is done with channels
> should reside in net_dma, not dmaengine.  Since we cannot hook into a
> "rmmod net" event to drop the dmaengine reference we will need some
> out-of-band signal to enable "rmmod ioatdma" like detecting
> network-idle, or having an explicit "dma disable" sysctl.

Hi Dan,
I agree that net_dma needs to be modified
so that it releases dma channels when they are not in use.
However, in the meantime, as long as net_dma is not ready to do this,
the problem of "rmmod ioatdma" in the redesigned dmaengine remains...

>
>> Another issue I find problematic in this solution is that _any_
>> client
>> declaring its presence in dmaengine causes holding reference for
>> _all_ channels (and blocking them), does not matter if they are used
>> or not.
>> I agree with Guennadi's doubts here.
>> Why not at least hold a reference only for channels with
>> capabilities matching client's ones?
>>
>
> All channels share the same parent module, so taking a reference on
> one will always involve blocking all channels.

This is true for channels of the same device/module.
However, if we have two different devices with different capabilities,
should they both be blocked by a single client?
For example, what is the use of blocking the channels of a device
with only the DMA_MEMCPY capability
by a client interested in DMA_XOR or DMA_MEMSET?

Thanks,
Maciej

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-15 23:55     ` Dan Williams
@ 2008-12-18 14:33       ` Sosnowski, Maciej
  2008-12-18 17:27         ` Dan Williams
  0 siblings, 1 reply; 55+ messages in thread
From: Sosnowski, Maciej @ 2008-12-18 14:33 UTC (permalink / raw)
  To: Williams, Dan J
  Cc: linux-kernel, netdev, hskinnemoen, g.liakhovetski, nicolas.ferre

Dan Williams wrote:
> On Fri, Dec 12, 2008 at 7:29 AM, Sosnowski, Maciej
> <maciej.sosnowski@intel.com> wrote:
>>> +static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev)
>>> +{
>>> +     struct dma_chan *chan;
>>> +     struct dma_chan *ret = NULL;
>>> +
>>> +     /* devices with multiple channels need special handling as we need to
>>> +      * ensure that all channels are either private or public.
>>> +      */
>>> +     if (dev->chancnt > 1 && !dma_has_cap(DMA_PRIVATE, dev->cap_mask))
>>> +             list_for_each_entry(chan, &dev->channels, device_node) {
>>> +                     /* some channels are already publicly allocated */
>>> +                     if (chan->client_count)
>>> +                             return NULL;
>>> +             }
>>
>> Isn't it a dangerous approach to let clients consume for their
>> exclusive usage channels meant for general-purpose ("pubilc" ones)?
>> Why not to limit private_candidate to devices with DMA_PRIVATE
>> capability only?
>>
>
> This allows unused channels to be claimed by dma_request_channel().
> It is not dangerous as long as ->client_count is zero.

What about the situation where some or all "public" channels in the system
have been claimed by one client for its exclusive usage
before another client appears and tries to use the available "public" channels?
Despite the presence in the system of channels that are supposed to be "public",
the second client finds that the channels are no longer available at all,
or at least are limited...
Doesn't this contradict the general idea of "public" (general-purpose) channels?

Regards,
Maciej

^ permalink raw reply	[flat|nested] 55+ messages in thread

* RE: [PATCH 11/13] dmaengine: kill struct dma_client and supporting infrastructure
  2008-12-16  0:09     ` Dan Williams
@ 2008-12-18 14:34       ` Sosnowski, Maciej
  0 siblings, 0 replies; 55+ messages in thread
From: Sosnowski, Maciej @ 2008-12-18 14:34 UTC (permalink / raw)
  To: Williams, Dan J
  Cc: linux-kernel, netdev, hskinnemoen, g.liakhovetski, nicolas.ferre

Dan Williams wrote:
> On Fri, Dec 12, 2008 at 7:29 AM, Sosnowski, Maciej
> <maciej.sosnowski@intel.com> wrote:
>> Williams, Dan J wrote:
>>> All users have been converted to either the general-purpose
>>> allocator, dma_find_channel, or dma_request_channel.
>>> 
>>> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>>> ---
>> (...)
>>>  /**
>>> - * dma_chans_notify_available - broadcast available channels to the clients
>>> - */
>>> -static void dma_clients_notify_available(void)
>>> -{
>>> -     struct dma_client *client;
>>> -
>>> -     mutex_lock(&dma_list_mutex);
>>> -
>>> -     list_for_each_entry(client, &dma_client_list, global_node)
>>> -             dma_client_chan_alloc(client);
>>> -
>>> -     mutex_unlock(&dma_list_mutex);
>>> -}
>> 
>> I agree with Guennadi's concern about removing clients' notification
>> of new devices becoming available in the system.
>> I understand that this design is based on polling instead,
>> however polling is always a less efficient approach.
>> Do you think that restoring notifications in this redesigned
>> dmaengine would be more painful than limiting clients to a polling
>> solution?
>> 
> 
> You are missing that net_dma has always "polled".  Consider the case
> of how net_dma currently operates prior to ioatdma.ko being loaded.
> It periodically calls get_softnet_dma() to see if a channel is
> available, if it does not find one it falls back to a cpu copy.  All
> that has changed is replacing this custom channel allocation routine
> with the unified dma_find_channel() interface.  Channel notifications
> are not required.  Either everything is built-in to guarantee that an
> engine is available upon request[1], or the client is smart enough to
> run without a channel for a while, like net_dma and raid offload.

My comment here was rather more general than ioatdma/net_dma-specific.
You are of course right that net_dma has always been based on polling.
I am just a bit worried that this change limits all current and future clients
to a polling-only approach, no longer letting them rely on notifications.

> 
> Regards,
> Dan
> 
> [1] http://marc.info/?l=linux-kernel&m=122835195303213&w=2

Regards,
Maciej

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-18 14:33       ` Sosnowski, Maciej
@ 2008-12-18 17:27         ` Dan Williams
  0 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2008-12-18 17:27 UTC (permalink / raw)
  To: Sosnowski, Maciej
  Cc: linux-kernel, netdev, hskinnemoen, g.liakhovetski, nicolas.ferre

On Thu, Dec 18, 2008 at 7:33 AM, Sosnowski, Maciej
<maciej.sosnowski@intel.com> wrote:
> What about a situation where some or all "public" channels in the system
> have been claimed by one client for its exclusive usage
> before another client comes along trying to use the available "public" channels?
> Despite the presence in the system of channels that are supposed to be "public",
> the second client finds that the channels are no longer available at all,
> or are at least limited...
> Doesn't this contradict the general idea of "public" (general-purpose) channels?
>

If a greedy module comes along and grabs all the channels via
dma_request_channel() then yes, there will be nothing left for the
public pool.  So, there is a requirement to "play nice".  If this
becomes an issue in practice we could add a DMA_NO_PRIVATE flag to
reserve a channel for public-only usage.
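
Something along these lines (untested sketch; DMA_NO_PRIVATE does not
exist today, the flag name and the check are hypothetical):

	/* in private_candidate(): skip devices reserved for the
	 * general-purpose pool, never hand their channels out for
	 * exclusive use
	 */
	if (dma_has_cap(DMA_NO_PRIVATE, dev->cap_mask))
		return NULL;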

Regards,
Dan

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 09/13] atmel-mci: convert to dma_request_channel and down-level dma_slave
  2008-11-14 21:35 ` [PATCH 09/13] atmel-mci: convert to dma_request_channel and down-level dma_slave Dan Williams
@ 2009-01-30 16:40   ` Atsushi Nemoto
  2009-01-30 23:02       ` Dan Williams
  0 siblings, 1 reply; 55+ messages in thread
From: Atsushi Nemoto @ 2009-01-30 16:40 UTC (permalink / raw)
  To: dan.j.williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, 14 Nov 2008 14:35:03 -0700, Dan Williams <dan.j.williams@intel.com> wrote:
> dma_request_channel provides an exclusive channel, so we no longer need to
> pass slave data through dmaengine.
> 
> Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Sorry for the late review... I hope it is not too late.

> @@ -789,23 +782,17 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
>  	cfghi = DWC_CFGH_FIFO_MODE;
>  	cfglo = 0;
>  
> -	slave = client->slave;
> -	if (slave) {
> +	dws = dwc->dws;
> +	if (dws) {
>  		/*
>  		 * We need controller-specific data to set up slave
>  		 * transfers.
>  		 */
> -		BUG_ON(!slave->dma_dev || slave->dma_dev != dw->dma.dev);
> -
> -		dws = container_of(slave, struct dw_dma_slave, slave);
> +		BUG_ON(!dws->dma_dev || dws->dma_dev != dw->dma.dev);
>  
> -		dwc->dws = dws;

How is dw_dma_slave passed to the dw_dmac driver?  I cannot see where
dwc->dws is initialized now.  Or am I missing something?

---
Atsushi Nemoto

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-12-02 17:16     ` Dan Williams
  2008-12-02 17:27       ` Guennadi Liakhovetski
@ 2009-01-30 17:03       ` Atsushi Nemoto
  2009-01-30 23:13           ` Dan Williams
  1 sibling, 1 reply; 55+ messages in thread
From: Atsushi Nemoto @ 2009-01-30 17:03 UTC (permalink / raw)
  To: dan.j.williams
  Cc: g.liakhovetski, linux-kernel, netdev, maciej.sosnowski,
	hskinnemoen, nicolas.ferre

On Tue, 2 Dec 2008 10:16:05 -0700, "Dan Williams" <dan.j.williams@intel.com> wrote:
> > I think there is a problem with your dma_request_channel() /
> > private_candidate() implementation: your current version only tries one
> > channel from a dma device that matched the capabilities. If this
> > channel is not accepted by the client, you do not try other channels from
> > this device and just go to the next device...
> 
> Which dma driver are you using?  The dmaengine code assumes that all
> channels on a device are equal.  It sounds like there are differences
> between peer-channels on the device in this case.  If the driver
> registers a device per channel that should give the flexibility you
> want.

I'm writing a new dma driver.  My DMAC has multiple channels; only
one channel is capable of generic memcpy and the other channels have
fixed roles.

Does the new framework allow a dma driver to make only one channel public?
Or should I register two dma_devices, one for DMA_MEMCPY and one for DMA_SLAVE?
Could you give me some advice?

---
Atsushi Nemoto

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 09/13] atmel-mci: convert to dma_request_channel and  down-level dma_slave
  2009-01-30 16:40   ` Atsushi Nemoto
@ 2009-01-30 23:02       ` Dan Williams
  0 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2009-01-30 23:02 UTC (permalink / raw)
  To: Atsushi Nemoto
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, Jan 30, 2009 at 9:40 AM, Atsushi Nemoto <anemo@mba.ocn.ne.jp> wrote:
> On Fri, 14 Nov 2008 14:35:03 -0700, Dan Williams <dan.j.williams@intel.com> wrote:
> How is dw_dma_slave passed to the dw_dmac driver?  I cannot see where
> dwc->dws is initialized now.  Or am I missing something?
>

No, the conversion is incomplete.  After finding a channel that meets
the slave requirements it is up to the driver to set the slave data.
I will send a fix in a separate message.
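
Roughly, the idea is something like the following (untested sketch; the
filter callback name and the use of a per-channel private pointer are
illustrative, the actual fix may look different):

	static bool atmci_filter(struct dma_chan *chan, void *slave)
	{
		struct dw_dma_slave *dws = slave;

		/* only accept channels of the dma controller named in
		 * the platform data, then stash the slave data where
		 * dw_dmac can find it
		 */
		if (dws->dma_dev != chan->device->dev)
			return false;

		chan->private = dws;
		return true;
	}

	...
	chan = dma_request_channel(mask, atmci_filter, dws);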

Thanks for the review,
Dan

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and  private channels
  2009-01-30 17:03       ` Atsushi Nemoto
@ 2009-01-30 23:13           ` Dan Williams
  0 siblings, 0 replies; 55+ messages in thread
From: Dan Williams @ 2009-01-30 23:13 UTC (permalink / raw)
  To: Atsushi Nemoto
  Cc: g.liakhovetski, linux-kernel, netdev, maciej.sosnowski,
	hskinnemoen, nicolas.ferre

On Fri, Jan 30, 2009 at 10:03 AM, Atsushi Nemoto <anemo@mba.ocn.ne.jp> wrote:
> I'm writing a new dma driver.  My DMAC has multiple channels; only
> one channel is capable of generic memcpy and the other channels have
> fixed roles.
>
> Does the new framework allow a dma driver to make only one channel public?

Yes, if the driver registers a dma_device for each channel.

> Or should I register two dma_devices, one for DMA_MEMCPY and one for DMA_SLAVE?
> Could you give me some advice?

Register multiple dma_devices, the public one with a DMA_MEMCPY, and
the fixed role devices with DMA_PRIVATE, DMA_MEMCPY, and DMA_SLAVE
capabilities.

DMA_PRIVATE ensures that a channel is never considered for public consumption.
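
A rough sketch of what the registration could look like (illustrative
only; the device variable names are made up and error handling is
omitted):

	/* public device: the memcpy-capable channel */
	dma_cap_set(DMA_MEMCPY, memcpy_dev->cap_mask);
	dma_async_device_register(memcpy_dev);

	/* fixed-role device: never offered to the public pool */
	dma_cap_set(DMA_PRIVATE, slave_dev->cap_mask);
	dma_cap_set(DMA_MEMCPY, slave_dev->cap_mask);
	dma_cap_set(DMA_SLAVE, slave_dev->cap_mask);
	dma_async_device_register(slave_dev);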

--
Dan

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and  private channels
  2009-01-30 23:13           ` Dan Williams
@ 2009-01-30 23:27             ` Guennadi Liakhovetski
  -1 siblings, 0 replies; 55+ messages in thread
From: Guennadi Liakhovetski @ 2009-01-30 23:27 UTC (permalink / raw)
  To: Dan Williams
  Cc: Atsushi Nemoto, linux-kernel, netdev, maciej.sosnowski,
	hskinnemoen, nicolas.ferre

On Fri, 30 Jan 2009, Dan Williams wrote:

> On Fri, Jan 30, 2009 at 10:03 AM, Atsushi Nemoto <anemo@mba.ocn.ne.jp> wrote:
> > I'm writing a new dma driver.  My DMAC has multiple channels; only
> > one channel is capable of generic memcpy and the other channels have
> > fixed roles.
> >
> > Does the new framework allow a dma driver to make only one channel public?
> 
> Yes, if the driver registers a dma_device for each channel.
> 
> > Or should I register two dma_devices, one for DMA_MEMCPY and one for DMA_SLAVE?
> > Could you give me some advice?
> 
> Register multiple dma_devices, the public one with a DMA_MEMCPY, and
> the fixed role devices with DMA_PRIVATE, DMA_MEMCPY, and DMA_SLAVE
> capabilities.
> 
> DMA_PRIVATE ensures that a channel is never considered for public consumption.

Maybe just two dma-devices would suffice: one with the public memcpy 
channel, and one with the rest private slave channels?

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2009-01-30 23:27             ` Guennadi Liakhovetski
@ 2009-01-31 12:18             ` Atsushi Nemoto
  -1 siblings, 0 replies; 55+ messages in thread
From: Atsushi Nemoto @ 2009-01-31 12:18 UTC (permalink / raw)
  To: g.liakhovetski
  Cc: dan.j.williams, linux-kernel, netdev, maciej.sosnowski,
	hskinnemoen, nicolas.ferre

On Sat, 31 Jan 2009 00:27:48 +0100 (CET), Guennadi Liakhovetski <g.liakhovetski@gmx.de> wrote:
> > Register multiple dma_devices, the public one with a DMA_MEMCPY, and
> > the fixed role devices with DMA_PRIVATE, DMA_MEMCPY, and DMA_SLAVE
> > capabilities.
> > 
> > DMA_PRIVATE ensures that a channel is never considered for public consumption.
> 
> Maybe just two dma-devices would suffice: one with the public memcpy 
> channel, and one with the rest private slave channels?

Thank you, I will try this.

---
Atsushi Nemoto

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels
  2008-11-14 21:34 ` [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels Dan Williams
  2008-12-02 15:52   ` Guennadi Liakhovetski
  2008-12-12 14:29   ` Sosnowski, Maciej
@ 2009-02-06 16:58   ` Atsushi Nemoto
  2 siblings, 0 replies; 55+ messages in thread
From: Atsushi Nemoto @ 2009-02-06 16:58 UTC (permalink / raw)
  To: dan.j.williams
  Cc: linux-kernel, netdev, maciej.sosnowski, hskinnemoen,
	g.liakhovetski, nicolas.ferre

On Fri, 14 Nov 2008 14:34:53 -0700, Dan Williams <dan.j.williams@intel.com> wrote:
> To ensure that all channels are not consumed by the general-purpose
> allocator the DMA_PRIVATE capability is provided to exclude a dma_device
> from general-purpose (memory-to-memory) consideration.
...
> +			/* Found a suitable channel, try to grab, prep, and
> +			 * return it.  We first set DMA_PRIVATE to disable
> +			 * balance_ref_count as this channel will not be
> +			 * published in the general-purpose allocator
> +			 */
> +			dma_cap_set(DMA_PRIVATE, device->cap_mask);
> +			err = dma_chan_get(chan);

dma_request_channel() sets DMA_PRIVATE but dma_release_channel()
does not clear it.

If a client (such as dmatest) grabs a public channel via
dma_request_channel(), the channel's device will stay private forever.

Since a dma driver might set DMA_PRIVATE initially for private-only
channels, dma_release_channel() cannot simply clear it unconditionally.

How can this be solved?  Will DMA_ALWAYS_PRIVATE, a device->is_private
flag, or something similar be needed?
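
One conceivable direction (just a sketch, the field name is made up
here, and a device that is private by design would still need special
handling, e.g. by biasing the count at registration time):

	/* struct dma_device: channels currently held via
	 * dma_request_channel()
	 */
	int privatecnt;

	/* dma_request_channel() success path */
	dma_cap_set(DMA_PRIVATE, device->cap_mask);
	device->privatecnt++;

	/* dma_release_channel() */
	if (--chan->device->privatecnt == 0)
		dma_cap_clear(DMA_PRIVATE, chan->device->cap_mask);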

---
Atsushi Nemoto

^ permalink raw reply	[flat|nested] 55+ messages in thread

end of thread, other threads:[~2009-02-06 16:58 UTC | newest]

Thread overview: 55+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-11-14 21:34 [PATCH 00/13] dmaengine redux Dan Williams
2008-11-14 21:34 ` [PATCH 01/13] async_tx, dmaengine: document channel allocation and api rework Dan Williams
2008-11-14 21:34 ` [PATCH 02/13] dmaengine: remove dependency on async_tx Dan Williams
2008-11-15  6:02   ` Andrew Morton
2008-11-17 23:44     ` Dan Williams
2008-11-14 21:34 ` [PATCH 03/13] dmaengine: up-level reference counting to the module level Dan Williams
2008-11-15  6:08   ` Andrew Morton
2008-11-18  3:42     ` Dan Williams
2008-12-04 16:56   ` Guennadi Liakhovetski
2008-12-04 18:51     ` Dan Williams
2008-12-04 19:28       ` Guennadi Liakhovetski
2008-12-08 22:39         ` Dan Williams
2008-12-12 14:28   ` Sosnowski, Maciej
2008-12-15 22:12     ` Dan Williams
2008-12-18 14:26       ` Sosnowski, Maciej
2008-11-14 21:34 ` [PATCH 04/13] dmaengine: centralize channel allocation, introduce dma_find_channel Dan Williams
2008-11-15  6:14   ` Andrew Morton
2008-11-18  5:59     ` Dan Williams
2008-11-14 21:34 ` [PATCH 05/13] dmaengine: provide a common 'issue_pending_all' implementation Dan Williams
2008-11-14 21:34 ` [PATCH 06/13] net_dma: convert to dma_find_channel Dan Williams
2008-11-14 21:34 ` [PATCH 07/13] dmaengine: introduce dma_request_channel and private channels Dan Williams
2008-12-02 15:52   ` Guennadi Liakhovetski
2008-12-02 17:16     ` Dan Williams
2008-12-02 17:27       ` Guennadi Liakhovetski
2008-12-02 19:10         ` Dan Williams
2008-12-02 21:28           ` Guennadi Liakhovetski
2009-01-30 17:03       ` Atsushi Nemoto
2009-01-30 23:13         ` Dan Williams
2009-01-30 23:27           ` Guennadi Liakhovetski
2009-01-31 12:18             ` Atsushi Nemoto
2008-12-02 17:26     ` Nicolas Ferre
2008-12-12 14:29   ` Sosnowski, Maciej
2008-12-15 23:55     ` Dan Williams
2008-12-18 14:33       ` Sosnowski, Maciej
2008-12-18 17:27         ` Dan Williams
2009-02-06 16:58   ` Atsushi Nemoto
2008-11-14 21:34 ` [PATCH 08/13] dmatest: convert to dma_request_channel Dan Williams
2008-11-15  6:17   ` Andrew Morton
2008-11-18 18:24     ` Dan Williams
2008-11-18 20:58       ` Andrew Morton
2008-11-18 22:19         ` Dan Williams
2008-11-14 21:35 ` [PATCH 09/13] atmel-mci: convert to dma_request_channel and down-level dma_slave Dan Williams
2009-01-30 16:40   ` Atsushi Nemoto
2009-01-30 23:02     ` Dan Williams
2008-11-14 21:35 ` [PATCH 10/13] dmaengine: replace dma_async_client_register with dmaengine_get Dan Williams
2008-11-14 21:35 ` [PATCH 11/13] dmaengine: kill struct dma_client and supporting infrastructure Dan Williams
2008-12-12 14:29   ` Sosnowski, Maciej
2008-12-16  0:09     ` Dan Williams
2008-12-18 14:34       ` Sosnowski, Maciej
2008-11-14 21:35 ` [PATCH 12/13] dmaengine: remove 'bigref' infrastructure Dan Williams
2008-11-14 21:35 ` [PATCH 13/13] dmaengine: kill enum dma_state_client Dan Williams
2008-12-12 14:27 ` [PATCH 00/13] dmaengine redux Sosnowski, Maciej
