* [PATCH v2 0/3] iio: Refactor __iio_update_buffers()
@ 2015-05-18 11:34 Lars-Peter Clausen
  2015-05-18 11:34 ` [PATCH v2 1/3] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Lars-Peter Clausen @ 2015-05-18 11:34 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

Changes since v1:
	* Dropped 'iio: __iio_update_buffers: Update mode before
	  preenable/after postdisable' for now

- Lars

Original cover letter below:

__iio_update_buffers() is quite the beast: it does quite a few different
things and, as a result, has a couple of separate error paths. This series
refactors the function, starting with moving the configuration verification
out of the configuration path itself. This means we can verify that the
configuration will be accepted before we make any changes to the device, so
we don't have to implement rollback functionality. The function is also
broken down into a couple of smaller helper functions, each with only a
single error path. This should hopefully make it easier to comprehend what
is going on. The last step of the refactoring process is to make sure that
we always leave the device in a sane and consistent state even if we
encounter a fatal error. This makes it possible to recover from the fatal
error and also makes sure that we don't trigger unexpected behavior in
drivers.

The main motivation for this series is to get __iio_update_buffers() ready
for future extensions, like the ones that are needed to add DMA buffer
support as well as better support for hardware buffers.

The last patch in this series is not meant to be applied; it implements
simple random error injection into the various error paths of
__iio_update_buffers(). This was used to test the patch series for
regressions and to make sure that we can indeed recover from fatal errors
now.

Lars-Peter Clausen (3):
  iio: __iio_update_buffers: Verify configuration before starting to
    apply it
  iio: __iio_update_buffers: Split enable and disable path into helper
    functions
  iio: __iio_update_buffers: Leave device in sane state on error

 drivers/iio/industrialio-buffer.c | 287 +++++++++++++++++++++++---------------
 1 file changed, 175 insertions(+), 112 deletions(-)

-- 
1.8.0



* [PATCH v2 1/3] iio: __iio_update_buffers: Verify configuration before starting to apply it
  2015-05-18 11:34 [PATCH v2 0/3] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
@ 2015-05-18 11:34 ` Lars-Peter Clausen
  2015-05-23 11:45   ` Jonathan Cameron
  2015-05-18 11:34 ` [PATCH v2 2/3] iio: __iio_update_buffers: Split enable and disable path into helper functions Lars-Peter Clausen
  2015-05-18 11:34 ` [PATCH v2 3/3] iio: __iio_update_buffers: Leave device in sane state on error Lars-Peter Clausen
  2 siblings, 1 reply; 7+ messages in thread
From: Lars-Peter Clausen @ 2015-05-18 11:34 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

Currently __iio_update_buffers() verifies whether the new configuration
will work in the middle of the update sequence. This means that if the new
configuration is invalid, we need to roll back the changes already made.
This patch moves the validation of the new configuration to the beginning
of __iio_update_buffers(), so that no changes are made at all if the new
configuration is invalid.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 164 +++++++++++++++++++++++---------------
 1 file changed, 101 insertions(+), 63 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 21ed316..0b4fe63 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -602,20 +602,104 @@ static void iio_free_scan_mask(struct iio_dev *indio_dev,
 		kfree(mask);
 }
 
+struct iio_device_config {
+	unsigned int mode;
+	const unsigned long *scan_mask;
+	unsigned int scan_bytes;
+	bool scan_timestamp;
+};
+
+static int iio_verify_update(struct iio_dev *indio_dev,
+	struct iio_buffer *insert_buffer, struct iio_buffer *remove_buffer,
+	struct iio_device_config *config)
+{
+	unsigned long *compound_mask;
+	const unsigned long *scan_mask;
+	struct iio_buffer *buffer;
+	bool scan_timestamp;
+
+	memset(config, 0, sizeof(*config));
+
+	/*
+	 * If there is just one buffer and we are removing it there is nothing
+	 * to verify.
+	 */
+	if (remove_buffer && !insert_buffer &&
+		list_is_singular(&indio_dev->buffer_list))
+			return 0;
+
+	/* Definitely possible for devices to support both of these. */
+	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
+		config->mode = INDIO_BUFFER_TRIGGERED;
+	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
+		config->mode = INDIO_BUFFER_HARDWARE;
+	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
+		config->mode = INDIO_BUFFER_SOFTWARE;
+	} else {
+		/* Can only occur on first buffer */
+		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
+			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
+		return -EINVAL;
+	}
+
+	/* What scan mask do we actually have? */
+	compound_mask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
+				sizeof(long), GFP_KERNEL);
+	if (compound_mask == NULL)
+		return -ENOMEM;
+
+	scan_timestamp = false;
+
+	list_for_each_entry(buffer, &indio_dev->buffer_list, buffer_list) {
+		if (buffer == remove_buffer)
+			continue;
+		bitmap_or(compound_mask, compound_mask, buffer->scan_mask,
+			  indio_dev->masklength);
+		scan_timestamp |= buffer->scan_timestamp;
+	}
+
+	if (insert_buffer) {
+		bitmap_or(compound_mask, compound_mask,
+			  insert_buffer->scan_mask, indio_dev->masklength);
+		scan_timestamp |= insert_buffer->scan_timestamp;
+	}
+
+	if (indio_dev->available_scan_masks) {
+		scan_mask = iio_scan_mask_match(indio_dev->available_scan_masks,
+				    indio_dev->masklength,
+				    compound_mask);
+		kfree(compound_mask);
+		if (scan_mask == NULL)
+			return -EINVAL;
+	} else {
+	    scan_mask = compound_mask;
+	}
+
+	config->scan_bytes = iio_compute_scan_bytes(indio_dev,
+				    scan_mask, scan_timestamp);
+	config->scan_mask = scan_mask;
+	config->scan_timestamp = scan_timestamp;
+
+	return 0;
+}
+
 static int __iio_update_buffers(struct iio_dev *indio_dev,
 		       struct iio_buffer *insert_buffer,
 		       struct iio_buffer *remove_buffer)
 {
 	int ret;
-	int success = 0;
-	struct iio_buffer *buffer;
-	unsigned long *compound_mask;
 	const unsigned long *old_mask;
+	struct iio_device_config new_config;
+
+	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
+		&new_config);
+	if (ret)
+		return ret;
 
 	if (insert_buffer) {
 		ret = iio_buffer_request_update(indio_dev, insert_buffer);
 		if (ret)
-			return ret;
+			goto err_free_config;
 	}
 
 	/* Wind down existing buffers - iff there are any */
@@ -623,13 +707,13 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		if (indio_dev->setup_ops->predisable) {
 			ret = indio_dev->setup_ops->predisable(indio_dev);
 			if (ret)
-				return ret;
+				goto err_free_config;
 		}
 		indio_dev->currentmode = INDIO_DIRECT_MODE;
 		if (indio_dev->setup_ops->postdisable) {
 			ret = indio_dev->setup_ops->postdisable(indio_dev);
 			if (ret)
-				return ret;
+				goto err_free_config;
 		}
 	}
 	/* Keep a copy of current setup to allow roll back */
@@ -649,44 +733,9 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		return 0;
 	}
 
-	/* What scan mask do we actually have? */
-	compound_mask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
-				sizeof(long), GFP_KERNEL);
-	if (compound_mask == NULL) {
-		iio_free_scan_mask(indio_dev, old_mask);
-		return -ENOMEM;
-	}
-	indio_dev->scan_timestamp = 0;
-
-	list_for_each_entry(buffer, &indio_dev->buffer_list, buffer_list) {
-		bitmap_or(compound_mask, compound_mask, buffer->scan_mask,
-			  indio_dev->masklength);
-		indio_dev->scan_timestamp |= buffer->scan_timestamp;
-	}
-	if (indio_dev->available_scan_masks) {
-		indio_dev->active_scan_mask =
-			iio_scan_mask_match(indio_dev->available_scan_masks,
-					    indio_dev->masklength,
-					    compound_mask);
-		kfree(compound_mask);
-		if (indio_dev->active_scan_mask == NULL) {
-			/*
-			 * Roll back.
-			 * Note can only occur when adding a buffer.
-			 */
-			iio_buffer_deactivate(insert_buffer);
-			if (old_mask) {
-				indio_dev->active_scan_mask = old_mask;
-				success = -EINVAL;
-			}
-			else {
-				ret = -EINVAL;
-				return ret;
-			}
-		}
-	} else {
-		indio_dev->active_scan_mask = compound_mask;
-	}
+	indio_dev->active_scan_mask = new_config.scan_mask;
+	indio_dev->scan_timestamp = new_config.scan_timestamp;
+	indio_dev->scan_bytes = new_config.scan_bytes;
 
 	iio_update_demux(indio_dev);
 
@@ -699,10 +748,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			goto error_remove_inserted;
 		}
 	}
-	indio_dev->scan_bytes =
-		iio_compute_scan_bytes(indio_dev,
-				       indio_dev->active_scan_mask,
-				       indio_dev->scan_timestamp);
+
 	if (indio_dev->info->update_scan_mode) {
 		ret = indio_dev->info
 			->update_scan_mode(indio_dev,
@@ -714,20 +760,8 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			goto error_run_postdisable;
 		}
 	}
-	/* Definitely possible for devices to support both of these. */
-	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
-		indio_dev->currentmode = INDIO_BUFFER_TRIGGERED;
-	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
-		indio_dev->currentmode = INDIO_BUFFER_HARDWARE;
-	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
-		indio_dev->currentmode = INDIO_BUFFER_SOFTWARE;
-	} else { /* Should never be reached */
-		/* Can only occur on first buffer */
-		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
-			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
-		ret = -EINVAL;
-		goto error_run_postdisable;
-	}
+
+	indio_dev->currentmode = new_config.mode;
 
 	if (indio_dev->setup_ops->postenable) {
 		ret = indio_dev->setup_ops->postenable(indio_dev);
@@ -743,7 +777,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 
 	iio_free_scan_mask(indio_dev, old_mask);
 
-	return success;
+	return 0;
 
 error_disable_all_buffers:
 	indio_dev->currentmode = INDIO_DIRECT_MODE;
@@ -756,6 +790,10 @@ error_remove_inserted:
 	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
 	indio_dev->active_scan_mask = old_mask;
 	return ret;
+
+err_free_config:
+	iio_free_scan_mask(indio_dev, new_config.scan_mask);
+	return ret;
 }
 
 int iio_update_buffers(struct iio_dev *indio_dev,
-- 
1.8.0



* [PATCH v2 2/3] iio: __iio_update_buffers: Split enable and disable path into helper functions
  2015-05-18 11:34 [PATCH v2 0/3] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
  2015-05-18 11:34 ` [PATCH v2 1/3] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
@ 2015-05-18 11:34 ` Lars-Peter Clausen
  2015-05-23 11:45   ` Jonathan Cameron
  2015-05-18 11:34 ` [PATCH v2 3/3] iio: __iio_update_buffers: Leave device in sane state on error Lars-Peter Clausen
  2 siblings, 1 reply; 7+ messages in thread
From: Lars-Peter Clausen @ 2015-05-18 11:34 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

__iio_update_buffers() is already a rather large function with many
different error paths, and it is going to get even larger. This patch
factors out the device enable and device disable paths into separate helper
functions.

The patch also re-implements iio_disable_all_buffers() using the new
iio_disable_buffers() function removing a fair bit of redundant code.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 191 ++++++++++++++++++++------------------
 1 file changed, 103 insertions(+), 88 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 0b4fe63..b4d7dba 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -539,28 +539,6 @@ static void iio_buffer_deactivate(struct iio_buffer *buffer)
 	iio_buffer_put(buffer);
 }
 
-void iio_disable_all_buffers(struct iio_dev *indio_dev)
-{
-	struct iio_buffer *buffer, *_buffer;
-
-	if (list_empty(&indio_dev->buffer_list))
-		return;
-
-	if (indio_dev->setup_ops->predisable)
-		indio_dev->setup_ops->predisable(indio_dev);
-
-	list_for_each_entry_safe(buffer, _buffer,
-			&indio_dev->buffer_list, buffer_list)
-		iio_buffer_deactivate(buffer);
-
-	indio_dev->currentmode = INDIO_DIRECT_MODE;
-	if (indio_dev->setup_ops->postdisable)
-		indio_dev->setup_ops->postdisable(indio_dev);
-
-	if (indio_dev->available_scan_masks == NULL)
-		kfree(indio_dev->active_scan_mask);
-}
-
 static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
 	struct iio_buffer *buffer)
 {
@@ -683,59 +661,14 @@ static int iio_verify_update(struct iio_dev *indio_dev,
 	return 0;
 }
 
-static int __iio_update_buffers(struct iio_dev *indio_dev,
-		       struct iio_buffer *insert_buffer,
-		       struct iio_buffer *remove_buffer)
+static int iio_enable_buffers(struct iio_dev *indio_dev,
+	struct iio_device_config *config)
 {
 	int ret;
-	const unsigned long *old_mask;
-	struct iio_device_config new_config;
-
-	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
-		&new_config);
-	if (ret)
-		return ret;
-
-	if (insert_buffer) {
-		ret = iio_buffer_request_update(indio_dev, insert_buffer);
-		if (ret)
-			goto err_free_config;
-	}
 
-	/* Wind down existing buffers - iff there are any */
-	if (!list_empty(&indio_dev->buffer_list)) {
-		if (indio_dev->setup_ops->predisable) {
-			ret = indio_dev->setup_ops->predisable(indio_dev);
-			if (ret)
-				goto err_free_config;
-		}
-		indio_dev->currentmode = INDIO_DIRECT_MODE;
-		if (indio_dev->setup_ops->postdisable) {
-			ret = indio_dev->setup_ops->postdisable(indio_dev);
-			if (ret)
-				goto err_free_config;
-		}
-	}
-	/* Keep a copy of current setup to allow roll back */
-	old_mask = indio_dev->active_scan_mask;
-	if (!indio_dev->available_scan_masks)
-		indio_dev->active_scan_mask = NULL;
-
-	if (remove_buffer)
-		iio_buffer_deactivate(remove_buffer);
-	if (insert_buffer)
-		iio_buffer_activate(indio_dev, insert_buffer);
-
-	/* If no buffers in list, we are done */
-	if (list_empty(&indio_dev->buffer_list)) {
-		indio_dev->currentmode = INDIO_DIRECT_MODE;
-		iio_free_scan_mask(indio_dev, old_mask);
-		return 0;
-	}
-
-	indio_dev->active_scan_mask = new_config.scan_mask;
-	indio_dev->scan_timestamp = new_config.scan_timestamp;
-	indio_dev->scan_bytes = new_config.scan_bytes;
+	indio_dev->active_scan_mask = config->scan_mask;
+	indio_dev->scan_timestamp = config->scan_timestamp;
+	indio_dev->scan_bytes = config->scan_bytes;
 
 	iio_update_demux(indio_dev);
 
@@ -745,7 +678,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		if (ret) {
 			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: buffer preenable failed (%d)\n", ret);
-			goto error_remove_inserted;
+			goto err_undo_config;
 		}
 	}
 
@@ -757,39 +690,108 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			dev_dbg(&indio_dev->dev,
 				"Buffer not started: update scan mode failed (%d)\n",
 				ret);
-			goto error_run_postdisable;
+			goto err_run_postdisable;
 		}
 	}
 
-	indio_dev->currentmode = new_config.mode;
+	indio_dev->currentmode = config->mode;
 
 	if (indio_dev->setup_ops->postenable) {
 		ret = indio_dev->setup_ops->postenable(indio_dev);
 		if (ret) {
 			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: postenable failed (%d)\n", ret);
-			indio_dev->currentmode = INDIO_DIRECT_MODE;
-			if (indio_dev->setup_ops->postdisable)
-				indio_dev->setup_ops->postdisable(indio_dev);
-			goto error_disable_all_buffers;
+			goto err_run_postdisable;
 		}
 	}
 
-	iio_free_scan_mask(indio_dev, old_mask);
-
 	return 0;
 
-error_disable_all_buffers:
+err_run_postdisable:
 	indio_dev->currentmode = INDIO_DIRECT_MODE;
-error_run_postdisable:
 	if (indio_dev->setup_ops->postdisable)
 		indio_dev->setup_ops->postdisable(indio_dev);
-error_remove_inserted:
-	if (insert_buffer)
-		iio_buffer_deactivate(insert_buffer);
-	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
-	indio_dev->active_scan_mask = old_mask;
+err_undo_config:
+	indio_dev->active_scan_mask = NULL;
+
 	return ret;
+}
+
+static int iio_disable_buffers(struct iio_dev *indio_dev)
+{
+	int ret;
+
+	/* Wind down existing buffers - iff there are any */
+	if (list_empty(&indio_dev->buffer_list))
+		return 0;
+
+	if (indio_dev->setup_ops->predisable) {
+		ret = indio_dev->setup_ops->predisable(indio_dev);
+		if (ret)
+			return ret;
+	}
+
+	indio_dev->currentmode = INDIO_DIRECT_MODE;
+
+	if (indio_dev->setup_ops->postdisable) {
+		ret = indio_dev->setup_ops->postdisable(indio_dev);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int __iio_update_buffers(struct iio_dev *indio_dev,
+		       struct iio_buffer *insert_buffer,
+		       struct iio_buffer *remove_buffer)
+{
+	int ret;
+	const unsigned long *old_mask;
+	struct iio_device_config new_config;
+
+	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
+		&new_config);
+	if (ret)
+		return ret;
+
+	if (insert_buffer) {
+		ret = iio_buffer_request_update(indio_dev, insert_buffer);
+		if (ret)
+			goto err_free_config;
+	}
+
+	/* Keep a copy of current setup to allow roll back */
+	old_mask = indio_dev->active_scan_mask;
+	indio_dev->active_scan_mask = NULL;
+
+	ret = iio_disable_buffers(indio_dev);
+	if (ret) {
+		iio_free_scan_mask(indio_dev, old_mask);
+		goto err_free_config;
+	}
+
+	if (remove_buffer)
+		iio_buffer_deactivate(remove_buffer);
+	if (insert_buffer)
+		iio_buffer_activate(indio_dev, insert_buffer);
+
+	/* If no buffers in list, we are done */
+	if (list_empty(&indio_dev->buffer_list)) {
+		iio_free_scan_mask(indio_dev, old_mask);
+		return 0;
+	}
+
+	ret = iio_enable_buffers(indio_dev, &new_config);
+	if (ret) {
+		if (insert_buffer)
+			iio_buffer_deactivate(insert_buffer);
+		indio_dev->active_scan_mask = old_mask;
+		goto err_free_config;
+	}
+
+	iio_free_scan_mask(indio_dev, old_mask);
+	return 0;
 
 err_free_config:
 	iio_free_scan_mask(indio_dev, new_config.scan_mask);
@@ -834,6 +836,19 @@ out_unlock:
 }
 EXPORT_SYMBOL_GPL(iio_update_buffers);
 
+void iio_disable_all_buffers(struct iio_dev *indio_dev)
+{
+	struct iio_buffer *buffer, *_buffer;
+
+	iio_disable_buffers(indio_dev);
+	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
+	indio_dev->active_scan_mask = NULL;
+
+	list_for_each_entry_safe(buffer, _buffer,
+			&indio_dev->buffer_list, buffer_list)
+		iio_buffer_deactivate(buffer);
+}
+
 static ssize_t iio_buffer_store_enable(struct device *dev,
 				       struct device_attribute *attr,
 				       const char *buf,
-- 
1.8.0



* [PATCH v2 3/3] iio: __iio_update_buffers: Leave device in sane state on error
  2015-05-18 11:34 [PATCH v2 0/3] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
  2015-05-18 11:34 ` [PATCH v2 1/3] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
  2015-05-18 11:34 ` [PATCH v2 2/3] iio: __iio_update_buffers: Split enable and disable path into helper functions Lars-Peter Clausen
@ 2015-05-18 11:34 ` Lars-Peter Clausen
  2015-05-23 11:48   ` Jonathan Cameron
  2 siblings, 1 reply; 7+ messages in thread
From: Lars-Peter Clausen @ 2015-05-18 11:34 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

Currently, when something goes wrong at some step while disabling the
buffers, we immediately abort. This has the effect that the enable/disable
calls are no longer balanced. So make sure that even if one step in the
disable sequence fails, the other steps are still executed.

The other issue is that when either enable or disable fails, buffers that
were active at the time stay active while the device itself is disabled.
This leaves things in an inconsistent state and can cause unbalanced
enable/disable calls. Furthermore, when enable fails we restore the old
scan mask, but still keep things disabled.

Given that the configuration was verified earlier and is valid at the point
where we try to enable/disable, the most likely reasons for failure are a
communication failure with the device or maybe an out-of-memory situation.
There is no good recovery strategy in such a case, so it makes sense to
leave the device disabled, but we should still leave it in a consistent
state.

What the patch does if disable/enable fails is to deactivate all buffers
and make sure that the device will be in the same state as if all buffers
had been manually disabled.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 82 ++++++++++++++++++++++-----------------
 1 file changed, 46 insertions(+), 36 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index b4d7dba..1129125 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -539,6 +539,15 @@ static void iio_buffer_deactivate(struct iio_buffer *buffer)
 	iio_buffer_put(buffer);
 }
 
+static void iio_buffer_deactivate_all(struct iio_dev *indio_dev)
+{
+	struct iio_buffer *buffer, *_buffer;
+
+	list_for_each_entry_safe(buffer, _buffer,
+			&indio_dev->buffer_list, buffer_list)
+		iio_buffer_deactivate(buffer);
+}
+
 static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
 	struct iio_buffer *buffer)
 {
@@ -719,36 +728,46 @@ err_undo_config:
 
 static int iio_disable_buffers(struct iio_dev *indio_dev)
 {
-	int ret;
+	int ret = 0;
+	int ret2;
 
 	/* Wind down existing buffers - iff there are any */
 	if (list_empty(&indio_dev->buffer_list))
 		return 0;
 
+	/*
+	 * If things go wrong at some step in disable we still need to continue
+	 * to perform the other steps, otherwise we leave the device in a
+	 * inconsistent state. We return the error code for the first error we
+	 * encountered.
+	 */
+
 	if (indio_dev->setup_ops->predisable) {
-		ret = indio_dev->setup_ops->predisable(indio_dev);
-		if (ret)
-			return ret;
+		ret2 = indio_dev->setup_ops->predisable(indio_dev);
+		if (ret2 && !ret)
+			ret = ret2;
 	}
 
 	indio_dev->currentmode = INDIO_DIRECT_MODE;
 
 	if (indio_dev->setup_ops->postdisable) {
-		ret = indio_dev->setup_ops->postdisable(indio_dev);
-		if (ret)
-			return ret;
+		ret2 = indio_dev->setup_ops->postdisable(indio_dev);
+		if (ret2 && !ret)
+			ret = ret2;
 	}
 
-	return 0;
+	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
+	indio_dev->active_scan_mask = NULL;
+
+	return ret;
 }
 
 static int __iio_update_buffers(struct iio_dev *indio_dev,
 		       struct iio_buffer *insert_buffer,
 		       struct iio_buffer *remove_buffer)
 {
-	int ret;
-	const unsigned long *old_mask;
 	struct iio_device_config new_config;
+	int ret;
 
 	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
 		&new_config);
@@ -761,15 +780,9 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			goto err_free_config;
 	}
 
-	/* Keep a copy of current setup to allow roll back */
-	old_mask = indio_dev->active_scan_mask;
-	indio_dev->active_scan_mask = NULL;
-
 	ret = iio_disable_buffers(indio_dev);
-	if (ret) {
-		iio_free_scan_mask(indio_dev, old_mask);
-		goto err_free_config;
-	}
+	if (ret)
+		goto err_deactivate_all;
 
 	if (remove_buffer)
 		iio_buffer_deactivate(remove_buffer);
@@ -777,22 +790,26 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		iio_buffer_activate(indio_dev, insert_buffer);
 
 	/* If no buffers in list, we are done */
-	if (list_empty(&indio_dev->buffer_list)) {
-		iio_free_scan_mask(indio_dev, old_mask);
+	if (list_empty(&indio_dev->buffer_list))
 		return 0;
-	}
 
 	ret = iio_enable_buffers(indio_dev, &new_config);
-	if (ret) {
-		if (insert_buffer)
-			iio_buffer_deactivate(insert_buffer);
-		indio_dev->active_scan_mask = old_mask;
-		goto err_free_config;
-	}
+	if (ret)
+		goto err_deactivate_all;
 
-	iio_free_scan_mask(indio_dev, old_mask);
 	return 0;
 
+err_deactivate_all:
+	/*
+	 * We've already verified that the config is valid earlier. If things go
+	 * wrong in either enable or disable the most likely reason is an IO
+	 * error from the device. In this case there is no good recovery
+	 * strategy. Just make sure to disable everything and leave the device
+	 * in a sane state.  With a bit of luck the device might come back to
+	 * life again later and userspace can try again.
+	 */
+	iio_buffer_deactivate_all(indio_dev);
+
 err_free_config:
 	iio_free_scan_mask(indio_dev, new_config.scan_mask);
 	return ret;
@@ -838,15 +855,8 @@ EXPORT_SYMBOL_GPL(iio_update_buffers);
 
 void iio_disable_all_buffers(struct iio_dev *indio_dev)
 {
-	struct iio_buffer *buffer, *_buffer;
-
 	iio_disable_buffers(indio_dev);
-	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
-	indio_dev->active_scan_mask = NULL;
-
-	list_for_each_entry_safe(buffer, _buffer,
-			&indio_dev->buffer_list, buffer_list)
-		iio_buffer_deactivate(buffer);
+	iio_buffer_deactivate_all(indio_dev);
 }
 
 static ssize_t iio_buffer_store_enable(struct device *dev,
-- 
1.8.0



* Re: [PATCH v2 1/3] iio: __iio_update_buffers: Verify configuration before starting to apply it
  2015-05-18 11:34 ` [PATCH v2 1/3] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
@ 2015-05-23 11:45   ` Jonathan Cameron
  0 siblings, 0 replies; 7+ messages in thread
From: Jonathan Cameron @ 2015-05-23 11:45 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 18/05/15 12:34, Lars-Peter Clausen wrote:
> Currently __iio_update_buffers() verifies whether the new configuration
> will work in the middle of the update sequence. This means that if the new
> configuration is invalid, we need to roll back the changes already made.
> This patch moves the validation of the new configuration to the beginning
> of __iio_update_buffers(), so that no changes are made at all if the new
> configuration is invalid.
> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Applied to the togreg branch of iio.git and pushed out as testing
for the autobuilders to work their magic.

Thanks,

Jonathan
> +	return 0;
>  
>  error_disable_all_buffers:
>  	indio_dev->currentmode = INDIO_DIRECT_MODE;
> @@ -756,6 +790,10 @@ error_remove_inserted:
>  	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
>  	indio_dev->active_scan_mask = old_mask;
>  	return ret;
> +
> +err_free_config:
> +	iio_free_scan_mask(indio_dev, new_config.scan_mask);
> +	return ret;
>  }
>  
>  int iio_update_buffers(struct iio_dev *indio_dev,
> 


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v2 2/3] iio: __iio_update_buffers: Split enable and disable path into helper functions
  2015-05-18 11:34 ` [PATCH v2 2/3] iio: __iio_update_buffers: Split enable and disable path into helper functions Lars-Peter Clausen
@ 2015-05-23 11:45   ` Jonathan Cameron
  0 siblings, 0 replies; 7+ messages in thread
From: Jonathan Cameron @ 2015-05-23 11:45 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 18/05/15 12:34, Lars-Peter Clausen wrote:
> __iio_update_buffers is already a rather large function with many
> different error paths, and it is going to get even larger. This patch
> factors out the device enable and device disable paths into separate
> helper functions.
> 
> The patch also re-implements iio_disable_all_buffers() using the new
> iio_disable_buffers() function removing a fair bit of redundant code.
> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>

Applied.

> ---
>  drivers/iio/industrialio-buffer.c | 191 ++++++++++++++++++++------------------
>  1 file changed, 103 insertions(+), 88 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index 0b4fe63..b4d7dba 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -539,28 +539,6 @@ static void iio_buffer_deactivate(struct iio_buffer *buffer)
>  	iio_buffer_put(buffer);
>  }
>  
> -void iio_disable_all_buffers(struct iio_dev *indio_dev)
> -{
> -	struct iio_buffer *buffer, *_buffer;
> -
> -	if (list_empty(&indio_dev->buffer_list))
> -		return;
> -
> -	if (indio_dev->setup_ops->predisable)
> -		indio_dev->setup_ops->predisable(indio_dev);
> -
> -	list_for_each_entry_safe(buffer, _buffer,
> -			&indio_dev->buffer_list, buffer_list)
> -		iio_buffer_deactivate(buffer);
> -
> -	indio_dev->currentmode = INDIO_DIRECT_MODE;
> -	if (indio_dev->setup_ops->postdisable)
> -		indio_dev->setup_ops->postdisable(indio_dev);
> -
> -	if (indio_dev->available_scan_masks == NULL)
> -		kfree(indio_dev->active_scan_mask);
> -}
> -
>  static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
>  	struct iio_buffer *buffer)
>  {
> @@ -683,59 +661,14 @@ static int iio_verify_update(struct iio_dev *indio_dev,
>  	return 0;
>  }
>  
> -static int __iio_update_buffers(struct iio_dev *indio_dev,
> -		       struct iio_buffer *insert_buffer,
> -		       struct iio_buffer *remove_buffer)
> +static int iio_enable_buffers(struct iio_dev *indio_dev,
> +	struct iio_device_config *config)
>  {
>  	int ret;
> -	const unsigned long *old_mask;
> -	struct iio_device_config new_config;
> -
> -	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
> -		&new_config);
> -	if (ret)
> -		return ret;
> -
> -	if (insert_buffer) {
> -		ret = iio_buffer_request_update(indio_dev, insert_buffer);
> -		if (ret)
> -			goto err_free_config;
> -	}
>  
> -	/* Wind down existing buffers - iff there are any */
> -	if (!list_empty(&indio_dev->buffer_list)) {
> -		if (indio_dev->setup_ops->predisable) {
> -			ret = indio_dev->setup_ops->predisable(indio_dev);
> -			if (ret)
> -				goto err_free_config;
> -		}
> -		indio_dev->currentmode = INDIO_DIRECT_MODE;
> -		if (indio_dev->setup_ops->postdisable) {
> -			ret = indio_dev->setup_ops->postdisable(indio_dev);
> -			if (ret)
> -				goto err_free_config;
> -		}
> -	}
> -	/* Keep a copy of current setup to allow roll back */
> -	old_mask = indio_dev->active_scan_mask;
> -	if (!indio_dev->available_scan_masks)
> -		indio_dev->active_scan_mask = NULL;
> -
> -	if (remove_buffer)
> -		iio_buffer_deactivate(remove_buffer);
> -	if (insert_buffer)
> -		iio_buffer_activate(indio_dev, insert_buffer);
> -
> -	/* If no buffers in list, we are done */
> -	if (list_empty(&indio_dev->buffer_list)) {
> -		indio_dev->currentmode = INDIO_DIRECT_MODE;
> -		iio_free_scan_mask(indio_dev, old_mask);
> -		return 0;
> -	}
> -
> -	indio_dev->active_scan_mask = new_config.scan_mask;
> -	indio_dev->scan_timestamp = new_config.scan_timestamp;
> -	indio_dev->scan_bytes = new_config.scan_bytes;
> +	indio_dev->active_scan_mask = config->scan_mask;
> +	indio_dev->scan_timestamp = config->scan_timestamp;
> +	indio_dev->scan_bytes = config->scan_bytes;
>  
>  	iio_update_demux(indio_dev);
>  
> @@ -745,7 +678,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		if (ret) {
>  			dev_dbg(&indio_dev->dev,
>  			       "Buffer not started: buffer preenable failed (%d)\n", ret);
> -			goto error_remove_inserted;
> +			goto err_undo_config;
>  		}
>  	}
>  
> @@ -757,39 +690,108 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			dev_dbg(&indio_dev->dev,
>  				"Buffer not started: update scan mode failed (%d)\n",
>  				ret);
> -			goto error_run_postdisable;
> +			goto err_run_postdisable;
>  		}
>  	}
>  
> -	indio_dev->currentmode = new_config.mode;
> +	indio_dev->currentmode = config->mode;
>  
>  	if (indio_dev->setup_ops->postenable) {
>  		ret = indio_dev->setup_ops->postenable(indio_dev);
>  		if (ret) {
>  			dev_dbg(&indio_dev->dev,
>  			       "Buffer not started: postenable failed (%d)\n", ret);
> -			indio_dev->currentmode = INDIO_DIRECT_MODE;
> -			if (indio_dev->setup_ops->postdisable)
> -				indio_dev->setup_ops->postdisable(indio_dev);
> -			goto error_disable_all_buffers;
> +			goto err_run_postdisable;
>  		}
>  	}
>  
> -	iio_free_scan_mask(indio_dev, old_mask);
> -
>  	return 0;
>  
> -error_disable_all_buffers:
> +err_run_postdisable:
>  	indio_dev->currentmode = INDIO_DIRECT_MODE;
> -error_run_postdisable:
>  	if (indio_dev->setup_ops->postdisable)
>  		indio_dev->setup_ops->postdisable(indio_dev);
> -error_remove_inserted:
> -	if (insert_buffer)
> -		iio_buffer_deactivate(insert_buffer);
> -	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> -	indio_dev->active_scan_mask = old_mask;
> +err_undo_config:
> +	indio_dev->active_scan_mask = NULL;
> +
>  	return ret;
> +}
> +
> +static int iio_disable_buffers(struct iio_dev *indio_dev)
> +{
> +	int ret;
> +
> +	/* Wind down existing buffers - iff there are any */
> +	if (list_empty(&indio_dev->buffer_list))
> +		return 0;
> +
> +	if (indio_dev->setup_ops->predisable) {
> +		ret = indio_dev->setup_ops->predisable(indio_dev);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	indio_dev->currentmode = INDIO_DIRECT_MODE;
> +
> +	if (indio_dev->setup_ops->postdisable) {
> +		ret = indio_dev->setup_ops->postdisable(indio_dev);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int __iio_update_buffers(struct iio_dev *indio_dev,
> +		       struct iio_buffer *insert_buffer,
> +		       struct iio_buffer *remove_buffer)
> +{
> +	int ret;
> +	const unsigned long *old_mask;
> +	struct iio_device_config new_config;
> +
> +	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
> +		&new_config);
> +	if (ret)
> +		return ret;
> +
> +	if (insert_buffer) {
> +		ret = iio_buffer_request_update(indio_dev, insert_buffer);
> +		if (ret)
> +			goto err_free_config;
> +	}
> +
> +	/* Keep a copy of current setup to allow roll back */
> +	old_mask = indio_dev->active_scan_mask;
> +	indio_dev->active_scan_mask = NULL;
> +
> +	ret = iio_disable_buffers(indio_dev);
> +	if (ret) {
> +		iio_free_scan_mask(indio_dev, old_mask);
> +		goto err_free_config;
> +	}
> +
> +	if (remove_buffer)
> +		iio_buffer_deactivate(remove_buffer);
> +	if (insert_buffer)
> +		iio_buffer_activate(indio_dev, insert_buffer);
> +
> +	/* If no buffers in list, we are done */
> +	if (list_empty(&indio_dev->buffer_list)) {
> +		iio_free_scan_mask(indio_dev, old_mask);
> +		return 0;
> +	}
> +
> +	ret = iio_enable_buffers(indio_dev, &new_config);
> +	if (ret) {
> +		if (insert_buffer)
> +			iio_buffer_deactivate(insert_buffer);
> +		indio_dev->active_scan_mask = old_mask;
> +		goto err_free_config;
> +	}
> +
> +	iio_free_scan_mask(indio_dev, old_mask);
> +	return 0;
>  
>  err_free_config:
>  	iio_free_scan_mask(indio_dev, new_config.scan_mask);
> @@ -834,6 +836,19 @@ out_unlock:
>  }
>  EXPORT_SYMBOL_GPL(iio_update_buffers);
>  
> +void iio_disable_all_buffers(struct iio_dev *indio_dev)
> +{
> +	struct iio_buffer *buffer, *_buffer;
> +
> +	iio_disable_buffers(indio_dev);
> +	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> +	indio_dev->active_scan_mask = NULL;
> +
> +	list_for_each_entry_safe(buffer, _buffer,
> +			&indio_dev->buffer_list, buffer_list)
> +		iio_buffer_deactivate(buffer);
> +}
> +
>  static ssize_t iio_buffer_store_enable(struct device *dev,
>  				       struct device_attribute *attr,
>  				       const char *buf,
> 



* Re: [PATCH v2 3/3] iio: __iio_update_buffers: Leave device in sane state on error
  2015-05-18 11:34 ` [PATCH v2 3/3] iio: __iio_update_buffers: Leave device in sane state on error Lars-Peter Clausen
@ 2015-05-23 11:48   ` Jonathan Cameron
  0 siblings, 0 replies; 7+ messages in thread
From: Jonathan Cameron @ 2015-05-23 11:48 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 18/05/15 12:34, Lars-Peter Clausen wrote:
> Currently, when something goes wrong at some step while disabling the
> buffers, we immediately abort. This has the effect that the
> enable/disable calls are no longer balanced. So make sure that even if
> one step in the disable sequence fails, the other steps are still
> executed.
> 
> The other issue is that when either enable or disable fails, buffers
> that were active at that time stay active while the device itself is
> disabled. This leaves things in an inconsistent state and can cause
> unbalanced enable/disable calls. Furthermore, when enable fails we
> restore the old scan mask, but still keep things disabled.
> 
> Given that the configuration was verified earlier and is valid at the
> point where we try to enable/disable, the most likely reason for
> failure is a communication failure with the device or perhaps an
> out-of-memory situation. There is not really a good recovery strategy
> in such a case, so it makes sense to leave the device disabled, but we
> should still leave it in a consistent state.
> 
> What the patch does if disable/enable fails is to deactivate all buffers
> and make sure that the device will be in the same state as if all buffers
> had been manually disabled.
> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Applied.

Thanks for doing this, Lars. I suspect we have quite a few 'evolved' bits
of rubbish code hiding in there that need this treatment, but this was
definitely one of the worst!

I guess I should find some time to clean up the iio_input driver so we
actually have a common use case for all this complexity.

Jonathan
> ---
>  drivers/iio/industrialio-buffer.c | 82 ++++++++++++++++++++++-----------------
>  1 file changed, 46 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index b4d7dba..1129125 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -539,6 +539,15 @@ static void iio_buffer_deactivate(struct iio_buffer *buffer)
>  	iio_buffer_put(buffer);
>  }
>  
> +static void iio_buffer_deactivate_all(struct iio_dev *indio_dev)
> +{
> +	struct iio_buffer *buffer, *_buffer;
> +
> +	list_for_each_entry_safe(buffer, _buffer,
> +			&indio_dev->buffer_list, buffer_list)
> +		iio_buffer_deactivate(buffer);
> +}
> +
>  static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
>  	struct iio_buffer *buffer)
>  {
> @@ -719,36 +728,46 @@ err_undo_config:
>  
>  static int iio_disable_buffers(struct iio_dev *indio_dev)
>  {
> -	int ret;
> +	int ret = 0;
> +	int ret2;
>  
>  	/* Wind down existing buffers - iff there are any */
>  	if (list_empty(&indio_dev->buffer_list))
>  		return 0;
>  
> +	/*
> +	 * If things go wrong at some step in disable we still need to continue
> +	 * to perform the other steps, otherwise we leave the device in an
> +	 * inconsistent state. We return the error code for the first error we
> +	 * encountered.
> +	 */
> +
>  	if (indio_dev->setup_ops->predisable) {
> -		ret = indio_dev->setup_ops->predisable(indio_dev);
> -		if (ret)
> -			return ret;
> +		ret2 = indio_dev->setup_ops->predisable(indio_dev);
> +		if (ret2 && !ret)
> +			ret = ret2;
>  	}
>  
>  	indio_dev->currentmode = INDIO_DIRECT_MODE;
>  
>  	if (indio_dev->setup_ops->postdisable) {
> -		ret = indio_dev->setup_ops->postdisable(indio_dev);
> -		if (ret)
> -			return ret;
> +		ret2 = indio_dev->setup_ops->postdisable(indio_dev);
> +		if (ret2 && !ret)
> +			ret = ret2;
>  	}
>  
> -	return 0;
> +	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> +	indio_dev->active_scan_mask = NULL;
> +
> +	return ret;
>  }
>  
>  static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		       struct iio_buffer *insert_buffer,
>  		       struct iio_buffer *remove_buffer)
>  {
> -	int ret;
> -	const unsigned long *old_mask;
>  	struct iio_device_config new_config;
> +	int ret;
>  
>  	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
>  		&new_config);
> @@ -761,15 +780,9 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			goto err_free_config;
>  	}
>  
> -	/* Keep a copy of current setup to allow roll back */
> -	old_mask = indio_dev->active_scan_mask;
> -	indio_dev->active_scan_mask = NULL;
> -
>  	ret = iio_disable_buffers(indio_dev);
> -	if (ret) {
> -		iio_free_scan_mask(indio_dev, old_mask);
> -		goto err_free_config;
> -	}
> +	if (ret)
> +		goto err_deactivate_all;
>  
>  	if (remove_buffer)
>  		iio_buffer_deactivate(remove_buffer);
> @@ -777,22 +790,26 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		iio_buffer_activate(indio_dev, insert_buffer);
>  
>  	/* If no buffers in list, we are done */
> -	if (list_empty(&indio_dev->buffer_list)) {
> -		iio_free_scan_mask(indio_dev, old_mask);
> +	if (list_empty(&indio_dev->buffer_list))
>  		return 0;
> -	}
>  
>  	ret = iio_enable_buffers(indio_dev, &new_config);
> -	if (ret) {
> -		if (insert_buffer)
> -			iio_buffer_deactivate(insert_buffer);
> -		indio_dev->active_scan_mask = old_mask;
> -		goto err_free_config;
> -	}
> +	if (ret)
> +		goto err_deactivate_all;
>  
> -	iio_free_scan_mask(indio_dev, old_mask);
>  	return 0;
>  
> +err_deactivate_all:
> +	/*
> +	 * We've already verified that the config is valid earlier. If things go
> +	 * wrong in either enable or disable the most likely reason is an IO
> +	 * error from the device. In this case there is no good recovery
> +	 * strategy. Just make sure to disable everything and leave the device
> +	 * in a sane state.  With a bit of luck the device might come back to
> +	 * life again later and userspace can try again.
> +	 */
> +	iio_buffer_deactivate_all(indio_dev);
> +
>  err_free_config:
>  	iio_free_scan_mask(indio_dev, new_config.scan_mask);
>  	return ret;
> @@ -838,15 +855,8 @@ EXPORT_SYMBOL_GPL(iio_update_buffers);
>  
>  void iio_disable_all_buffers(struct iio_dev *indio_dev)
>  {
> -	struct iio_buffer *buffer, *_buffer;
> -
>  	iio_disable_buffers(indio_dev);
> -	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> -	indio_dev->active_scan_mask = NULL;
> -
> -	list_for_each_entry_safe(buffer, _buffer,
> -			&indio_dev->buffer_list, buffer_list)
> -		iio_buffer_deactivate(buffer);
> +	iio_buffer_deactivate_all(indio_dev);
>  }
>  
>  static ssize_t iio_buffer_store_enable(struct device *dev,
> 



end of thread, other threads:[~2015-05-23 11:48 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-18 11:34 [PATCH v2 0/3] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
2015-05-18 11:34 ` [PATCH v2 1/3] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
2015-05-23 11:45   ` Jonathan Cameron
2015-05-18 11:34 ` [PATCH v2 2/3] iio: __iio_update_buffers: Split enable and disable path into helper functions Lars-Peter Clausen
2015-05-23 11:45   ` Jonathan Cameron
2015-05-18 11:34 ` [PATCH v2 3/3] iio: __iio_update_buffers: Leave device in sane state on error Lars-Peter Clausen
2015-05-23 11:48   ` Jonathan Cameron
