* [PATCH v3 0/4] iio: move IIO to the cleanup.h magic
@ 2024-02-29 15:10 Nuno Sa
  2024-02-29 15:10 ` [PATCH v3 1/4] iio: core: move to " Nuno Sa
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Nuno Sa @ 2024-02-29 15:10 UTC (permalink / raw)
  To: linux-iio; +Cc: Jonathan Cameron, Lars-Peter Clausen

Hi,

In v3, I decided to drop the scope_guard_cond() usage (which meant
dropping the event patch). Given the discussion on the cond_guard()
patches, I think it's better to hold off on those conversions for a bit,
since Linus came up with a suggestion that will potentially change
scope_guard_cond() - and for the better, IMO.

Another significant change is the use of __free(kfree) for all allocations
in inkern.c. I have to admit that I'm not a fan of the in-place declaration
(too much time looking at kernel code, I guess), but it makes the style
more consistent (as we are already doing the automatic cleanup for the locks).
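
For reference, the pattern (as it ends up in patch 4) looks roughly like
this:

	struct iio_channel *channel __free(kfree) =
			kzalloc(sizeof(*channel), GFP_KERNEL);
	if (!channel)
		return ERR_PTR(-ENOMEM);

	err = __fwnode_iio_channel_get(channel, fwnode, index);
	if (err)
		return ERR_PTR(err);	/* channel is freed automatically */

	return_ptr(channel);		/* hand ownership to the caller */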

v1:
 * https://lore.kernel.org/all/20240221-iio-use-cleanup-magic-v1-0-f9c292666f26@analog.com/

v2:
 * https://lore.kernel.org/r/20240223-iio-use-cleanup-magic-v2-0-f6b4848c1f34@analog.com

v3:
 - Patch 1:
  * Do not use scope_guard_cond();
  * Directly return -ENODEV and don't initialize ret.
 - Patch 2:
  * Removed bonus space;
  * Used the normal error check flow in iio_trigger_get_irq().
 - Patch 3:
  * Refactored the state and ret conditions in iio_scan_mask_query();
  * Removed ternary operator check in enable_store().
 - Patch 4:
  * Still use goto at the error path in iio_map_array_unregister_locked();
  * Used __free(kfree) + return_ptr() in places where allocations are
    done;
  * Keep the else branch in iio_read_channel_processed_scale();
  * Removed the comment in iio_channel_read_min().
  

---
Nuno Sa (4):
      iio: core: move to cleanup.h magic
      iio: trigger: move to the cleanup.h magic
      iio: buffer: iio: core: move to the cleanup.h magic
      iio: inkern: move to the cleanup.h magic

 drivers/iio/industrialio-buffer.c  | 122 +++++++-----------
 drivers/iio/industrialio-core.c    |  34 ++---
 drivers/iio/industrialio-trigger.c |  71 +++++------
 drivers/iio/inkern.c               | 255 +++++++++++++------------------------
 4 files changed, 176 insertions(+), 306 deletions(-)
---
base-commit: 3cc5ebd3a2d6247aeba81873d6b040d5d87f7db1
change-id: 20240223-iio-use-cleanup-magic-9efe3b0a073d
--

Thanks!
- Nuno Sá


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v3 1/4] iio: core: move to cleanup.h magic
  2024-02-29 15:10 [PATCH v3 0/4] iio: move IIO to the cleanup.h magic Nuno Sa
@ 2024-02-29 15:10 ` Nuno Sa
  2024-02-29 15:10 ` [PATCH v3 2/4] iio: trigger: move to the " Nuno Sa
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 22+ messages in thread
From: Nuno Sa @ 2024-02-29 15:10 UTC (permalink / raw)
  To: linux-iio; +Cc: Jonathan Cameron, Lars-Peter Clausen

Use the new cleanup magic for handling mutexes in IIO. This allows us to
greatly simplify some code paths.

Note that we keep the plain mutex calls in the
iio_device_release|acquire() APIs, since the macros would likely not
help much there (we want to keep the lock acquired when we leave these
APIs).
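
For illustration, a guard drops the lock as soon as it goes out of scope,
so it cannot express "return with the lock still held" (a minimal sketch
with a made-up helper name):

	int example_acquire(struct mutex *lock)
	{
		guard(mutex)(lock);
		/* ... */
		return 0;	/* the guard unlocks here, which is not what we want */
	}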

Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
 drivers/iio/industrialio-core.c | 34 +++++++++++++---------------------
 1 file changed, 13 insertions(+), 21 deletions(-)

diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
index 9b2877fe8689..e80a8065b7b9 100644
--- a/drivers/iio/industrialio-core.c
+++ b/drivers/iio/industrialio-core.c
@@ -11,6 +11,7 @@
 
 #include <linux/anon_inodes.h>
 #include <linux/cdev.h>
+#include <linux/cleanup.h>
 #include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/err.h>
@@ -1806,31 +1807,24 @@ static long iio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	struct iio_dev *indio_dev = ib->indio_dev;
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
 	struct iio_ioctl_handler *h;
-	int ret = -ENODEV;
-
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
+	int ret;
 
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
 	/*
 	 * The NULL check here is required to prevent crashing when a device
 	 * is being removed while userspace would still have open file handles
 	 * to try to access this device.
 	 */
 	if (!indio_dev->info)
-		goto out_unlock;
+		return -ENODEV;
 
 	list_for_each_entry(h, &iio_dev_opaque->ioctl_handlers, entry) {
 		ret = h->ioctl(indio_dev, filp, cmd, arg);
 		if (ret != IIO_IOCTL_UNHANDLED)
-			break;
+			return ret;
 	}
 
-	if (ret == IIO_IOCTL_UNHANDLED)
-		ret = -ENODEV;
-
-out_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return -ENODEV;
 }
 
 static const struct file_operations iio_buffer_fileops = {
@@ -2058,18 +2052,16 @@ void iio_device_unregister(struct iio_dev *indio_dev)
 
 	cdev_device_del(&iio_dev_opaque->chrdev, &indio_dev->dev);
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
+	scoped_guard(mutex, &iio_dev_opaque->info_exist_lock) {
+		iio_device_unregister_debugfs(indio_dev);
 
-	iio_device_unregister_debugfs(indio_dev);
+		iio_disable_all_buffers(indio_dev);
 
-	iio_disable_all_buffers(indio_dev);
+		indio_dev->info = NULL;
 
-	indio_dev->info = NULL;
-
-	iio_device_wakeup_eventset(indio_dev);
-	iio_buffer_wakeup_poll(indio_dev);
-
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
+		iio_device_wakeup_eventset(indio_dev);
+		iio_buffer_wakeup_poll(indio_dev);
+	}
 
 	iio_buffers_free_sysfs_and_mask(indio_dev);
 }

-- 
2.44.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 2/4] iio: trigger: move to the cleanup.h magic
  2024-02-29 15:10 [PATCH v3 0/4] iio: move IIO to the cleanup.h magic Nuno Sa
  2024-02-29 15:10 ` [PATCH v3 1/4] iio: core: move to " Nuno Sa
@ 2024-02-29 15:10 ` Nuno Sa
  2024-03-16 19:32   ` Andy Shevchenko
  2024-03-16 19:39   ` Andy Shevchenko
  2024-02-29 15:10 ` [PATCH v3 3/4] iio: buffer: iio: core: " Nuno Sa
  2024-02-29 15:10 ` [PATCH v3 4/4] iio: inkern: " Nuno Sa
  3 siblings, 2 replies; 22+ messages in thread
From: Nuno Sa @ 2024-02-29 15:10 UTC (permalink / raw)
  To: linux-iio; +Cc: Jonathan Cameron, Lars-Peter Clausen

Use the new cleanup magic for handling mutexes in IIO. This allows us to
greatly simplify some code paths.

Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
 drivers/iio/industrialio-trigger.c | 71 ++++++++++++++++----------------------
 1 file changed, 30 insertions(+), 41 deletions(-)

diff --git a/drivers/iio/industrialio-trigger.c b/drivers/iio/industrialio-trigger.c
index 18f83158f637..16de57846bd9 100644
--- a/drivers/iio/industrialio-trigger.c
+++ b/drivers/iio/industrialio-trigger.c
@@ -4,6 +4,7 @@
  * Copyright (c) 2008 Jonathan Cameron
  */
 
+#include <linux/cleanup.h>
 #include <linux/kernel.h>
 #include <linux/idr.h>
 #include <linux/err.h>
@@ -80,19 +81,18 @@ int iio_trigger_register(struct iio_trigger *trig_info)
 		goto error_unregister_id;
 
 	/* Add to list of available triggers held by the IIO core */
-	mutex_lock(&iio_trigger_list_lock);
-	if (__iio_trigger_find_by_name(trig_info->name)) {
-		pr_err("Duplicate trigger name '%s'\n", trig_info->name);
-		ret = -EEXIST;
-		goto error_device_del;
+	scoped_guard(mutex, &iio_trigger_list_lock) {
+		if (__iio_trigger_find_by_name(trig_info->name)) {
+			pr_err("Duplicate trigger name '%s'\n", trig_info->name);
+			ret = -EEXIST;
+			goto error_device_del;
+		}
+		list_add_tail(&trig_info->list, &iio_trigger_list);
 	}
-	list_add_tail(&trig_info->list, &iio_trigger_list);
-	mutex_unlock(&iio_trigger_list_lock);
 
 	return 0;
 
 error_device_del:
-	mutex_unlock(&iio_trigger_list_lock);
 	device_del(&trig_info->dev);
 error_unregister_id:
 	ida_free(&iio_trigger_ida, trig_info->id);
@@ -102,9 +102,8 @@ EXPORT_SYMBOL(iio_trigger_register);
 
 void iio_trigger_unregister(struct iio_trigger *trig_info)
 {
-	mutex_lock(&iio_trigger_list_lock);
-	list_del(&trig_info->list);
-	mutex_unlock(&iio_trigger_list_lock);
+	scoped_guard(mutex, &iio_trigger_list_lock)
+		list_del(&trig_info->list);
 
 	ida_free(&iio_trigger_ida, trig_info->id);
 	/* Possible issue in here */
@@ -120,12 +119,11 @@ int iio_trigger_set_immutable(struct iio_dev *indio_dev, struct iio_trigger *tri
 		return -EINVAL;
 
 	iio_dev_opaque = to_iio_dev_opaque(indio_dev);
-	mutex_lock(&iio_dev_opaque->mlock);
+	guard(mutex)(&iio_dev_opaque->mlock);
 	WARN_ON(iio_dev_opaque->trig_readonly);
 
 	indio_dev->trig = iio_trigger_get(trig);
 	iio_dev_opaque->trig_readonly = true;
-	mutex_unlock(&iio_dev_opaque->mlock);
 
 	return 0;
 }
@@ -145,18 +143,14 @@ static struct iio_trigger *__iio_trigger_find_by_name(const char *name)
 
 static struct iio_trigger *iio_trigger_acquire_by_name(const char *name)
 {
-	struct iio_trigger *trig = NULL, *iter;
+	struct iio_trigger *iter;
 
-	mutex_lock(&iio_trigger_list_lock);
+	guard(mutex)(&iio_trigger_list_lock);
 	list_for_each_entry(iter, &iio_trigger_list, list)
-		if (sysfs_streq(iter->name, name)) {
-			trig = iter;
-			iio_trigger_get(trig);
-			break;
-		}
-	mutex_unlock(&iio_trigger_list_lock);
+		if (sysfs_streq(iter->name, name))
+			return iio_trigger_get(iter);
 
-	return trig;
+	return NULL;
 }
 
 static void iio_reenable_work_fn(struct work_struct *work)
@@ -259,22 +253,21 @@ static int iio_trigger_get_irq(struct iio_trigger *trig)
 {
 	int ret;
 
-	mutex_lock(&trig->pool_lock);
-	ret = bitmap_find_free_region(trig->pool,
-				      CONFIG_IIO_CONSUMERS_PER_TRIGGER,
-				      ilog2(1));
-	mutex_unlock(&trig->pool_lock);
-	if (ret >= 0)
-		ret += trig->subirq_base;
+	scoped_guard(mutex, &trig->pool_lock) {
+		ret = bitmap_find_free_region(trig->pool,
+					      CONFIG_IIO_CONSUMERS_PER_TRIGGER,
+					      ilog2(1));
+		if (ret < 0)
+			return ret;
+	}
 
-	return ret;
+	return ret + trig->subirq_base;
 }
 
 static void iio_trigger_put_irq(struct iio_trigger *trig, int irq)
 {
-	mutex_lock(&trig->pool_lock);
+	guard(mutex)(&trig->pool_lock);
 	clear_bit(irq - trig->subirq_base, trig->pool);
-	mutex_unlock(&trig->pool_lock);
 }
 
 /* Complexity in here.  With certain triggers (datardy) an acknowledgement
@@ -451,16 +444,12 @@ static ssize_t current_trigger_store(struct device *dev,
 	struct iio_trigger *trig;
 	int ret;
 
-	mutex_lock(&iio_dev_opaque->mlock);
-	if (iio_dev_opaque->currentmode == INDIO_BUFFER_TRIGGERED) {
-		mutex_unlock(&iio_dev_opaque->mlock);
-		return -EBUSY;
+	scoped_guard(mutex, &iio_dev_opaque->mlock) {
+		if (iio_dev_opaque->currentmode == INDIO_BUFFER_TRIGGERED)
+			return -EBUSY;
+		if (iio_dev_opaque->trig_readonly)
+			return -EPERM;
 	}
-	if (iio_dev_opaque->trig_readonly) {
-		mutex_unlock(&iio_dev_opaque->mlock);
-		return -EPERM;
-	}
-	mutex_unlock(&iio_dev_opaque->mlock);
 
 	trig = iio_trigger_acquire_by_name(buf);
 	if (oldtrig == trig) {

-- 
2.44.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 3/4] iio: buffer: iio: core: move to the cleanup.h magic
  2024-02-29 15:10 [PATCH v3 0/4] iio: move IIO to the cleanup.h magic Nuno Sa
  2024-02-29 15:10 ` [PATCH v3 1/4] iio: core: move to " Nuno Sa
  2024-02-29 15:10 ` [PATCH v3 2/4] iio: trigger: move to the " Nuno Sa
@ 2024-02-29 15:10 ` Nuno Sa
  2024-03-16 19:38   ` Andy Shevchenko
  2024-03-16 19:49   ` Andy Shevchenko
  2024-02-29 15:10 ` [PATCH v3 4/4] iio: inkern: " Nuno Sa
  3 siblings, 2 replies; 22+ messages in thread
From: Nuno Sa @ 2024-02-29 15:10 UTC (permalink / raw)
  To: linux-iio; +Cc: Jonathan Cameron, Lars-Peter Clausen

Use the new cleanup magic for handling mutexes in IIO. This allows us to
greatly simplify some code paths.

Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
 drivers/iio/industrialio-buffer.c | 122 +++++++++++++++-----------------------
 1 file changed, 48 insertions(+), 74 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index b581a7e80566..1d950a3e153b 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -10,6 +10,7 @@
  * - Alternative access techniques?
  */
 #include <linux/anon_inodes.h>
+#include <linux/cleanup.h>
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <linux/device.h>
@@ -533,28 +534,26 @@ static ssize_t iio_scan_el_store(struct device *dev,
 	ret = kstrtobool(buf, &state);
 	if (ret < 0)
 		return ret;
-	mutex_lock(&iio_dev_opaque->mlock);
-	if (iio_buffer_is_active(buffer)) {
-		ret = -EBUSY;
-		goto error_ret;
-	}
+
+	guard(mutex)(&iio_dev_opaque->mlock);
+	if (iio_buffer_is_active(buffer))
+		return -EBUSY;
+
 	ret = iio_scan_mask_query(indio_dev, buffer, this_attr->address);
 	if (ret < 0)
-		goto error_ret;
-	if (!state && ret) {
-		ret = iio_scan_mask_clear(buffer, this_attr->address);
-		if (ret)
-			goto error_ret;
-	} else if (state && !ret) {
+		return ret;
+
+	if (state && ret)
+		return len;
+
+	if (state)
 		ret = iio_scan_mask_set(indio_dev, buffer, this_attr->address);
-		if (ret)
-			goto error_ret;
-	}
+	else
+		ret = iio_scan_mask_clear(buffer, this_attr->address);
+	if (ret)
+		return ret;
 
-error_ret:
-	mutex_unlock(&iio_dev_opaque->mlock);
-
-	return ret < 0 ? ret : len;
+	return len;
 }
 
 static ssize_t iio_scan_el_ts_show(struct device *dev,
@@ -581,16 +580,13 @@ static ssize_t iio_scan_el_ts_store(struct device *dev,
 	if (ret < 0)
 		return ret;
 
-	mutex_lock(&iio_dev_opaque->mlock);
-	if (iio_buffer_is_active(buffer)) {
-		ret = -EBUSY;
-		goto error_ret;
-	}
-	buffer->scan_timestamp = state;
-error_ret:
-	mutex_unlock(&iio_dev_opaque->mlock);
+	guard(mutex)(&iio_dev_opaque->mlock);
+	if (iio_buffer_is_active(buffer))
+		return -EBUSY;
 
-	return ret ? ret : len;
+	buffer->scan_timestamp = state;
+
+	return len;
 }
 
 static int iio_buffer_add_channel_sysfs(struct iio_dev *indio_dev,
@@ -674,21 +670,16 @@ static ssize_t length_store(struct device *dev, struct device_attribute *attr,
 	if (val == buffer->length)
 		return len;
 
-	mutex_lock(&iio_dev_opaque->mlock);
-	if (iio_buffer_is_active(buffer)) {
-		ret = -EBUSY;
-	} else {
-		buffer->access->set_length(buffer, val);
-		ret = 0;
-	}
-	if (ret)
-		goto out;
+	guard(mutex)(&iio_dev_opaque->mlock);
+	if (iio_buffer_is_active(buffer))
+		return -EBUSY;
+
+	buffer->access->set_length(buffer, val);
+
 	if (buffer->length && buffer->length < buffer->watermark)
 		buffer->watermark = buffer->length;
-out:
-	mutex_unlock(&iio_dev_opaque->mlock);
 
-	return ret ? ret : len;
+	return len;
 }
 
 static ssize_t enable_show(struct device *dev, struct device_attribute *attr,
@@ -1268,7 +1259,6 @@ int iio_update_buffers(struct iio_dev *indio_dev,
 		       struct iio_buffer *remove_buffer)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
-	int ret;
 
 	if (insert_buffer == remove_buffer)
 		return 0;
@@ -1277,8 +1267,8 @@ int iio_update_buffers(struct iio_dev *indio_dev,
 	    insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT)
 		return -EINVAL;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	mutex_lock(&iio_dev_opaque->mlock);
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	guard(mutex)(&iio_dev_opaque->mlock);
 
 	if (insert_buffer && iio_buffer_is_active(insert_buffer))
 		insert_buffer = NULL;
@@ -1286,23 +1276,13 @@ int iio_update_buffers(struct iio_dev *indio_dev,
 	if (remove_buffer && !iio_buffer_is_active(remove_buffer))
 		remove_buffer = NULL;
 
-	if (!insert_buffer && !remove_buffer) {
-		ret = 0;
-		goto out_unlock;
-	}
+	if (!insert_buffer && !remove_buffer)
+		return 0;
 
-	if (!indio_dev->info) {
-		ret = -ENODEV;
-		goto out_unlock;
-	}
+	if (!indio_dev->info)
+		return -ENODEV;
 
-	ret = __iio_update_buffers(indio_dev, insert_buffer, remove_buffer);
-
-out_unlock:
-	mutex_unlock(&iio_dev_opaque->mlock);
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return __iio_update_buffers(indio_dev, insert_buffer, remove_buffer);
 }
 EXPORT_SYMBOL_GPL(iio_update_buffers);
 
@@ -1326,22 +1306,22 @@ static ssize_t enable_store(struct device *dev, struct device_attribute *attr,
 	if (ret < 0)
 		return ret;
 
-	mutex_lock(&iio_dev_opaque->mlock);
+	guard(mutex)(&iio_dev_opaque->mlock);
 
 	/* Find out if it is in the list */
 	inlist = iio_buffer_is_active(buffer);
 	/* Already in desired state */
 	if (inlist == requested_state)
-		goto done;
+		return len;
 
 	if (requested_state)
 		ret = __iio_update_buffers(indio_dev, buffer, NULL);
 	else
 		ret = __iio_update_buffers(indio_dev, NULL, buffer);
+	if (ret)
+		return ret;
 
-done:
-	mutex_unlock(&iio_dev_opaque->mlock);
-	return (ret < 0) ? ret : len;
+	return len;
 }
 
 static ssize_t watermark_show(struct device *dev, struct device_attribute *attr,
@@ -1368,23 +1348,17 @@ static ssize_t watermark_store(struct device *dev,
 	if (!val)
 		return -EINVAL;
 
-	mutex_lock(&iio_dev_opaque->mlock);
+	guard(mutex)(&iio_dev_opaque->mlock);
 
-	if (val > buffer->length) {
-		ret = -EINVAL;
-		goto out;
-	}
+	if (val > buffer->length)
+		return -EINVAL;
 
-	if (iio_buffer_is_active(buffer)) {
-		ret = -EBUSY;
-		goto out;
-	}
+	if (iio_buffer_is_active(buffer))
+		return -EBUSY;
 
 	buffer->watermark = val;
-out:
-	mutex_unlock(&iio_dev_opaque->mlock);
 
-	return ret ? ret : len;
+	return len;
 }
 
 static ssize_t data_available_show(struct device *dev,

-- 
2.44.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 4/4] iio: inkern: move to the cleanup.h magic
  2024-02-29 15:10 [PATCH v3 0/4] iio: move IIO to the cleanup.h magic Nuno Sa
                   ` (2 preceding siblings ...)
  2024-02-29 15:10 ` [PATCH v3 3/4] iio: buffer: iio: core: " Nuno Sa
@ 2024-02-29 15:10 ` Nuno Sa
  2024-03-03 14:24   ` Jonathan Cameron
  2024-03-16 19:48   ` Andy Shevchenko
  3 siblings, 2 replies; 22+ messages in thread
From: Nuno Sa @ 2024-02-29 15:10 UTC (permalink / raw)
  To: linux-iio; +Cc: Jonathan Cameron, Lars-Peter Clausen

Use the new cleanup magic for handling mutexes in IIO. This allows us to
greatly simplify some code paths.

While at it, also use __free(kfree) where allocations are done and drop
obvious comment in iio_channel_read_min().

Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
 drivers/iio/inkern.c | 255 +++++++++++++++++----------------------------------
 1 file changed, 85 insertions(+), 170 deletions(-)

diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
index 7a1f6713318a..6017f294ac1c 100644
--- a/drivers/iio/inkern.c
+++ b/drivers/iio/inkern.c
@@ -3,6 +3,7 @@
  *
  * Copyright (c) 2011 Jonathan Cameron
  */
+#include <linux/cleanup.h>
 #include <linux/err.h>
 #include <linux/export.h>
 #include <linux/minmax.h>
@@ -43,13 +44,13 @@ static int iio_map_array_unregister_locked(struct iio_dev *indio_dev)
 
 int iio_map_array_register(struct iio_dev *indio_dev, struct iio_map *maps)
 {
-	int i = 0, ret = 0;
+	int i = 0, ret;
 	struct iio_map_internal *mapi;
 
 	if (!maps)
 		return 0;
 
-	mutex_lock(&iio_map_list_lock);
+	guard(mutex)(&iio_map_list_lock);
 	while (maps[i].consumer_dev_name) {
 		mapi = kzalloc(sizeof(*mapi), GFP_KERNEL);
 		if (!mapi) {
@@ -61,11 +62,10 @@ int iio_map_array_register(struct iio_dev *indio_dev, struct iio_map *maps)
 		list_add_tail(&mapi->l, &iio_map_list);
 		i++;
 	}
-error_ret:
-	if (ret)
-		iio_map_array_unregister_locked(indio_dev);
-	mutex_unlock(&iio_map_list_lock);
 
+	return 0;
+error_ret:
+	iio_map_array_unregister_locked(indio_dev);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(iio_map_array_register);
@@ -75,13 +75,8 @@ EXPORT_SYMBOL_GPL(iio_map_array_register);
  */
 int iio_map_array_unregister(struct iio_dev *indio_dev)
 {
-	int ret;
-
-	mutex_lock(&iio_map_list_lock);
-	ret = iio_map_array_unregister_locked(indio_dev);
-	mutex_unlock(&iio_map_list_lock);
-
-	return ret;
+	guard(mutex)(&iio_map_list_lock);
+	return iio_map_array_unregister_locked(indio_dev);
 }
 EXPORT_SYMBOL_GPL(iio_map_array_unregister);
 
@@ -183,25 +178,21 @@ static int __fwnode_iio_channel_get(struct iio_channel *channel,
 static struct iio_channel *fwnode_iio_channel_get(struct fwnode_handle *fwnode,
 						  int index)
 {
-	struct iio_channel *channel;
 	int err;
 
 	if (index < 0)
 		return ERR_PTR(-EINVAL);
 
-	channel = kzalloc(sizeof(*channel), GFP_KERNEL);
+	struct iio_channel *channel __free(kfree) = kzalloc(sizeof(*channel),
+							    GFP_KERNEL);
 	if (!channel)
 		return ERR_PTR(-ENOMEM);
 
 	err = __fwnode_iio_channel_get(channel, fwnode, index);
 	if (err)
-		goto err_free_channel;
+		return ERR_PTR(err);
 
-	return channel;
-
-err_free_channel:
-	kfree(channel);
-	return ERR_PTR(err);
+	return_ptr(channel);
 }
 
 static struct iio_channel *
@@ -291,7 +282,6 @@ EXPORT_SYMBOL_GPL(fwnode_iio_channel_get_by_name);
 static struct iio_channel *fwnode_iio_channel_get_all(struct device *dev)
 {
 	struct fwnode_handle *fwnode = dev_fwnode(dev);
-	struct iio_channel *chans;
 	int i, mapind, nummaps = 0;
 	int ret;
 
@@ -307,7 +297,9 @@ static struct iio_channel *fwnode_iio_channel_get_all(struct device *dev)
 		return ERR_PTR(-ENODEV);
 
 	/* NULL terminated array to save passing size */
-	chans = kcalloc(nummaps + 1, sizeof(*chans), GFP_KERNEL);
+	struct iio_channel *chans __free(kfree) = kcalloc(nummaps + 1,
+							  sizeof(*chans),
+							  GFP_KERNEL);
 	if (!chans)
 		return ERR_PTR(-ENOMEM);
 
@@ -317,12 +309,11 @@ static struct iio_channel *fwnode_iio_channel_get_all(struct device *dev)
 		if (ret)
 			goto error_free_chans;
 	}
-	return chans;
+	return_ptr(chans);
 
 error_free_chans:
 	for (i = 0; i < mapind; i++)
 		iio_device_put(chans[i].indio_dev);
-	kfree(chans);
 	return ERR_PTR(ret);
 }
 
@@ -330,28 +321,28 @@ static struct iio_channel *iio_channel_get_sys(const char *name,
 					       const char *channel_name)
 {
 	struct iio_map_internal *c_i = NULL, *c = NULL;
-	struct iio_channel *channel;
 	int err;
 
 	if (!(name || channel_name))
 		return ERR_PTR(-ENODEV);
 
 	/* first find matching entry the channel map */
-	mutex_lock(&iio_map_list_lock);
-	list_for_each_entry(c_i, &iio_map_list, l) {
-		if ((name && strcmp(name, c_i->map->consumer_dev_name) != 0) ||
-		    (channel_name &&
-		     strcmp(channel_name, c_i->map->consumer_channel) != 0))
-			continue;
-		c = c_i;
-		iio_device_get(c->indio_dev);
-		break;
+	scoped_guard(mutex, &iio_map_list_lock) {
+		list_for_each_entry(c_i, &iio_map_list, l) {
+			if ((name && strcmp(name, c_i->map->consumer_dev_name) != 0) ||
+			    (channel_name &&
+			     strcmp(channel_name, c_i->map->consumer_channel) != 0))
+				continue;
+			c = c_i;
+			iio_device_get(c->indio_dev);
+			break;
+		}
 	}
-	mutex_unlock(&iio_map_list_lock);
 	if (!c)
 		return ERR_PTR(-ENODEV);
 
-	channel = kzalloc(sizeof(*channel), GFP_KERNEL);
+	struct iio_channel *channel __free(kfree) = kzalloc(sizeof(*channel),
+							    GFP_KERNEL);
 	if (!channel) {
 		err = -ENOMEM;
 		goto error_no_mem;
@@ -366,14 +357,12 @@ static struct iio_channel *iio_channel_get_sys(const char *name,
 
 		if (!channel->channel) {
 			err = -EINVAL;
-			goto error_no_chan;
+			goto error_no_mem;
 		}
 	}
 
-	return channel;
+	return_ptr(channel);
 
-error_no_chan:
-	kfree(channel);
 error_no_mem:
 	iio_device_put(c->indio_dev);
 	return ERR_PTR(err);
@@ -450,7 +439,7 @@ EXPORT_SYMBOL_GPL(devm_fwnode_iio_channel_get_by_name);
 struct iio_channel *iio_channel_get_all(struct device *dev)
 {
 	const char *name;
-	struct iio_channel *chans;
+	struct iio_channel *fw_chans;
 	struct iio_map_internal *c = NULL;
 	int nummaps = 0;
 	int mapind = 0;
@@ -459,17 +448,17 @@ struct iio_channel *iio_channel_get_all(struct device *dev)
 	if (!dev)
 		return ERR_PTR(-EINVAL);
 
-	chans = fwnode_iio_channel_get_all(dev);
+	fw_chans = fwnode_iio_channel_get_all(dev);
 	/*
 	 * We only want to carry on if the error is -ENODEV.  Anything else
 	 * should be reported up the stack.
 	 */
-	if (!IS_ERR(chans) || PTR_ERR(chans) != -ENODEV)
-		return chans;
+	if (!IS_ERR(fw_chans) || PTR_ERR(fw_chans) != -ENODEV)
+		return fw_chans;
 
 	name = dev_name(dev);
 
-	mutex_lock(&iio_map_list_lock);
+	guard(mutex)(&iio_map_list_lock);
 	/* first count the matching maps */
 	list_for_each_entry(c, &iio_map_list, l)
 		if (name && strcmp(name, c->map->consumer_dev_name) != 0)
@@ -477,17 +466,15 @@ struct iio_channel *iio_channel_get_all(struct device *dev)
 		else
 			nummaps++;
 
-	if (nummaps == 0) {
-		ret = -ENODEV;
-		goto error_ret;
-	}
+	if (nummaps == 0)
+		return ERR_PTR(-ENODEV);
 
 	/* NULL terminated array to save passing size */
-	chans = kcalloc(nummaps + 1, sizeof(*chans), GFP_KERNEL);
-	if (!chans) {
-		ret = -ENOMEM;
-		goto error_ret;
-	}
+	struct iio_channel *chans __free(kfree) = kcalloc(nummaps + 1,
+							  sizeof(*chans),
+							  GFP_KERNEL);
+	if (!chans)
+		return ERR_PTR(-ENOMEM);
 
 	/* for each map fill in the chans element */
 	list_for_each_entry(c, &iio_map_list, l) {
@@ -509,17 +496,12 @@ struct iio_channel *iio_channel_get_all(struct device *dev)
 		ret = -ENODEV;
 		goto error_free_chans;
 	}
-	mutex_unlock(&iio_map_list_lock);
 
-	return chans;
+	return_ptr(chans);
 
 error_free_chans:
 	for (i = 0; i < nummaps; i++)
 		iio_device_put(chans[i].indio_dev);
-	kfree(chans);
-error_ret:
-	mutex_unlock(&iio_map_list_lock);
-
 	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(iio_channel_get_all);
@@ -590,38 +572,24 @@ static int iio_channel_read(struct iio_channel *chan, int *val, int *val2,
 int iio_read_channel_raw(struct iio_channel *chan, int *val)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
-	ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
 }
 EXPORT_SYMBOL_GPL(iio_read_channel_raw);
 
 int iio_read_channel_average_raw(struct iio_channel *chan, int *val)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
-	ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_AVERAGE_RAW);
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_AVERAGE_RAW);
 }
 EXPORT_SYMBOL_GPL(iio_read_channel_average_raw);
 
@@ -708,20 +676,13 @@ int iio_convert_raw_to_processed(struct iio_channel *chan, int raw,
 				 int *processed, unsigned int scale)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
-	ret = iio_convert_raw_to_processed_unlocked(chan, raw, processed,
-						    scale);
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return iio_convert_raw_to_processed_unlocked(chan, raw, processed,
+						     scale);
 }
 EXPORT_SYMBOL_GPL(iio_convert_raw_to_processed);
 
@@ -729,19 +690,12 @@ int iio_read_channel_attribute(struct iio_channel *chan, int *val, int *val2,
 			       enum iio_chan_info_enum attribute)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
-	ret = iio_channel_read(chan, val, val2, attribute);
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return iio_channel_read(chan, val, val2, attribute);
 }
 EXPORT_SYMBOL_GPL(iio_read_channel_attribute);
 
@@ -757,29 +711,24 @@ int iio_read_channel_processed_scale(struct iio_channel *chan, int *val,
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
 	int ret;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
 	if (iio_channel_has_info(chan->channel, IIO_CHAN_INFO_PROCESSED)) {
 		ret = iio_channel_read(chan, val, NULL,
 				       IIO_CHAN_INFO_PROCESSED);
 		if (ret < 0)
-			goto err_unlock;
+			return ret;
 		*val *= scale;
 	} else {
 		ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
 		if (ret < 0)
-			goto err_unlock;
+			return ret;
 		ret = iio_convert_raw_to_processed_unlocked(chan, *val, val,
 							    scale);
 	}
 
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
 	return ret;
 }
 EXPORT_SYMBOL_GPL(iio_read_channel_processed_scale);
@@ -813,19 +762,12 @@ int iio_read_avail_channel_attribute(struct iio_channel *chan,
 				     enum iio_chan_info_enum attribute)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
-	ret = iio_channel_read_avail(chan, vals, type, length, attribute);
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return iio_channel_read_avail(chan, vals, type, length, attribute);
 }
 EXPORT_SYMBOL_GPL(iio_read_avail_channel_attribute);
 
@@ -892,20 +834,13 @@ static int iio_channel_read_max(struct iio_channel *chan,
 int iio_read_max_channel_raw(struct iio_channel *chan, int *val)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret;
 	int type;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
-	ret = iio_channel_read_max(chan, val, NULL, &type, IIO_CHAN_INFO_RAW);
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return iio_channel_read_max(chan, val, NULL, &type, IIO_CHAN_INFO_RAW);
 }
 EXPORT_SYMBOL_GPL(iio_read_max_channel_raw);
 
@@ -955,40 +890,27 @@ static int iio_channel_read_min(struct iio_channel *chan,
 int iio_read_min_channel_raw(struct iio_channel *chan, int *val)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret;
 	int type;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
-	ret = iio_channel_read_min(chan, val, NULL, &type, IIO_CHAN_INFO_RAW);
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return iio_channel_read_min(chan, val, NULL, &type, IIO_CHAN_INFO_RAW);
 }
 EXPORT_SYMBOL_GPL(iio_read_min_channel_raw);
 
 int iio_get_channel_type(struct iio_channel *chan, enum iio_chan_type *type)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret = 0;
-	/* Need to verify underlying driver has not gone away */
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
 	*type = chan->channel->type;
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(iio_get_channel_type);
 
@@ -1003,19 +925,12 @@ int iio_write_channel_attribute(struct iio_channel *chan, int val, int val2,
 				enum iio_chan_info_enum attribute)
 {
 	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
-	int ret;
 
-	mutex_lock(&iio_dev_opaque->info_exist_lock);
-	if (!chan->indio_dev->info) {
-		ret = -ENODEV;
-		goto err_unlock;
-	}
+	guard(mutex)(&iio_dev_opaque->info_exist_lock);
+	if (!chan->indio_dev->info)
+		return -ENODEV;
 
-	ret = iio_channel_write(chan, val, val2, attribute);
-err_unlock:
-	mutex_unlock(&iio_dev_opaque->info_exist_lock);
-
-	return ret;
+	return iio_channel_write(chan, val, val2, attribute);
 }
 EXPORT_SYMBOL_GPL(iio_write_channel_attribute);
 

-- 
2.44.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] iio: inkern: move to the cleanup.h magic
  2024-02-29 15:10 ` [PATCH v3 4/4] iio: inkern: " Nuno Sa
@ 2024-03-03 14:24   ` Jonathan Cameron
  2024-03-04  8:04     ` Nuno Sá
  2024-03-16 19:48   ` Andy Shevchenko
  1 sibling, 1 reply; 22+ messages in thread
From: Jonathan Cameron @ 2024-03-03 14:24 UTC (permalink / raw)
  To: Nuno Sa; +Cc: linux-iio, Lars-Peter Clausen

On Thu, 29 Feb 2024 16:10:28 +0100
Nuno Sa <nuno.sa@analog.com> wrote:

> Use the new cleanup magic for handling mutexes in IIO. This allows us to
> greatly simplify some code paths.
> 
> While at it, also use __free(kfree) where allocations are done and drop
> obvious comment in iio_channel_read_min().
> 
> Signed-off-by: Nuno Sa <nuno.sa@analog.com>

Hi Nuno

Series looks very nice. One trivial thing inline - I can tidy that up whilst
applying if nothing else comes up.

Given this obviously touches a lot of core code, even though simple it's
high risk for queuing up late. I also have a complex mess already queued up
for the coming merge window. Hence I'm going to hold off on applying this
series until the start of the next cycle.

Nothing outside IIO is going to depend on it, so it's a rather simpler decision
to hold it than for the ones that add new general purpose infrastructure.

Jonathan




>  EXPORT_SYMBOL_GPL(iio_read_channel_attribute);
>  
> @@ -757,29 +711,24 @@ int iio_read_channel_processed_scale(struct iio_channel *chan, int *val,
>  	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan->indio_dev);
>  	int ret;
>  
> -	mutex_lock(&iio_dev_opaque->info_exist_lock);
> -	if (!chan->indio_dev->info) {
> -		ret = -ENODEV;
> -		goto err_unlock;
> -	}
> +	guard(mutex)(&iio_dev_opaque->info_exist_lock);
> +	if (!chan->indio_dev->info)
> +		return -ENODEV;
>  
>  	if (iio_channel_has_info(chan->channel, IIO_CHAN_INFO_PROCESSED)) {
>  		ret = iio_channel_read(chan, val, NULL,
>  				       IIO_CHAN_INFO_PROCESSED);
>  		if (ret < 0)
> -			goto err_unlock;
> +			return ret;
>  		*val *= scale;

		return 0;

>  	} else {
could drop the else.

>  		ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
>  		if (ret < 0)
> -			goto err_unlock;
> +			return ret;
>  		ret = iio_convert_raw_to_processed_unlocked(chan, *val, val,
>  							    scale);
		return iio_convert_raw_to_proc...

>  	}
>  
> -err_unlock:
> -	mutex_unlock(&iio_dev_opaque->info_exist_lock);
> -
>  	return ret;
Drop this.
>  }


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] iio: inkern: move to the cleanup.h magic
  2024-03-03 14:24   ` Jonathan Cameron
@ 2024-03-04  8:04     ` Nuno Sá
  2024-03-09 17:41       ` Jonathan Cameron
  0 siblings, 1 reply; 22+ messages in thread
From: Nuno Sá @ 2024-03-04  8:04 UTC (permalink / raw)
  To: Jonathan Cameron, Nuno Sa; +Cc: linux-iio, Lars-Peter Clausen

On Sun, 2024-03-03 at 14:24 +0000, Jonathan Cameron wrote:
> On Thu, 29 Feb 2024 16:10:28 +0100
> Nuno Sa <nuno.sa@analog.com> wrote:
> 
> > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > greatly simplify some code paths.
> > 
> > While at it, also use __free(kfree) where allocations are done and drop
> > obvious comment in iio_channel_read_min().
> > 
> > Signed-off-by: Nuno Sa <nuno.sa@analog.com>
> 
> Hi Nuno
> 
> Series looks very nice. One trivial thing inline - I can tidy that up whilst
> applying if nothing else comes up.
> 
> Given this obviously touches a lot of core code, even though simple it's
> high risk for queuing up late. I also have a complex mess already queued up
> for the coming merge window. Hence I'm going to hold off on applying this
> series until the start of the next cycle.
> 
> Nothing outside IIO is going to depend on it, so it's a rather simpler decision
> to hold it than for the ones that add new general purpose infrastructure.
> 
> 

Seems reasonable... It may even give us some time to see how the cond_guard()
and scoped_cond_guard() will end up.

> 
> 
> >  EXPORT_SYMBOL_GPL(iio_read_channel_attribute);
> >  
> > @@ -757,29 +711,24 @@ int iio_read_channel_processed_scale(struct
> > iio_channel *chan, int *val,
> >  	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan-
> > >indio_dev);
> >  	int ret;
> >  
> > -	mutex_lock(&iio_dev_opaque->info_exist_lock);
> > -	if (!chan->indio_dev->info) {
> > -		ret = -ENODEV;
> > -		goto err_unlock;
> > -	}
> > +	guard(mutex)(&iio_dev_opaque->info_exist_lock);
> > +	if (!chan->indio_dev->info)
> > +		return -ENODEV;
> >  
> >  	if (iio_channel_has_info(chan->channel, IIO_CHAN_INFO_PROCESSED)) {
> >  		ret = iio_channel_read(chan, val, NULL,
> >  				       IIO_CHAN_INFO_PROCESSED);
> >  		if (ret < 0)
> > -			goto err_unlock;
> > +			return ret;
> >  		*val *= scale;
> 
> 		return 0;
> 
> >  	} else {
> could drop the else.
> 
> >  		ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
> >  		if (ret < 0)
> > -			goto err_unlock;
> > +			return ret;
> >  		ret = iio_convert_raw_to_processed_unlocked(chan, *val,
> > val,
> >  							    scale);
> 		return iio_convert_raw_to_proc...
> 

Hmm, unless I completely misunderstood your comments on v2, this was exactly
what I had, but you recommended keeping the else branch :).

- Nuno Sá


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] iio: inkern: move to the cleanup.h magic
  2024-03-04  8:04     ` Nuno Sá
@ 2024-03-09 17:41       ` Jonathan Cameron
  2024-03-16 13:26         ` Jonathan Cameron
  0 siblings, 1 reply; 22+ messages in thread
From: Jonathan Cameron @ 2024-03-09 17:41 UTC (permalink / raw)
  To: Nuno Sá; +Cc: Nuno Sa, linux-iio, Lars-Peter Clausen

On Mon, 04 Mar 2024 09:04:49 +0100
Nuno Sá <noname.nuno@gmail.com> wrote:

> On Sun, 2024-03-03 at 14:24 +0000, Jonathan Cameron wrote:
> > On Thu, 29 Feb 2024 16:10:28 +0100
> > Nuno Sa <nuno.sa@analog.com> wrote:
> >   
> > > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > > greatly simplify some code paths.
> > > 
> > > While at it, also use __free(kfree) where allocations are done and drop
> > > obvious comment in iio_channel_read_min().
> > > 
> > > Signed-off-by: Nuno Sa <nuno.sa@analog.com>  
> > 
> > Hi Nuno
> > 
> > Series looks very nice. One trivial thing inline - I can tidy that up whilst
> > applying if nothing else comes up.
> > 
> > Given this obviously touches a lot of core code, even though simple it's
> > high risk for queuing up late. I also have a complex mess already queued up
> > for the coming merge window. Hence I'm going to hold off on applying this
> > series until the start of the next cycle.
> > 
> > Nothing outside IIO is going to depend on it, so it's a rather simpler decision
> > to hold it than for the ones that add new general purpose infrastructure.
> > 
> >   
> 
> Seems reasonable... It may even give us some time to see how the cond_guard()
> and scoped_cond_guard() will end up.

Absolutely - thankfully converting to the suggestions Linus made will be
straightforward, so hopefully the worst that happens is a complex merge, or some
fixing up to do afterwards.

> 
> > 
> >   
> > >  EXPORT_SYMBOL_GPL(iio_read_channel_attribute);
> > >  
> > > @@ -757,29 +711,24 @@ int iio_read_channel_processed_scale(struct
> > > iio_channel *chan, int *val,
> > >  	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan-  
> > > >indio_dev);  
> > >  	int ret;
> > >  
> > > -	mutex_lock(&iio_dev_opaque->info_exist_lock);
> > > -	if (!chan->indio_dev->info) {
> > > -		ret = -ENODEV;
> > > -		goto err_unlock;
> > > -	}
> > > +	guard(mutex)(&iio_dev_opaque->info_exist_lock);
> > > +	if (!chan->indio_dev->info)
> > > +		return -ENODEV;
> > >  
> > >  	if (iio_channel_has_info(chan->channel, IIO_CHAN_INFO_PROCESSED)) {
> > >  		ret = iio_channel_read(chan, val, NULL,
> > >  				       IIO_CHAN_INFO_PROCESSED);
> > >  		if (ret < 0)
> > > -			goto err_unlock;
> > > +			return ret;
> > >  		*val *= scale;  
> > 
> > 		return 0;
> >   
> > >  	} else {  
> > could drop the else.
> >   
> > >  		ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
> > >  		if (ret < 0)
> > > -			goto err_unlock;
> > > +			return ret;
> > >  		ret = iio_convert_raw_to_processed_unlocked(chan, *val,
> > > val,
> > >  							    scale);  
> > 		return iio_convert_raw_to_proc...
> >   
> 
> Hmm, unless I completely misunderstood your comments on v2, this was exactly
> what I had, but you recommended keeping the else branch :).
> 
That was a younger me :)  Either way is fine.

Jonathan


> - Nuno Sá
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] iio: inkern: move to the cleanup.h magic
  2024-03-09 17:41       ` Jonathan Cameron
@ 2024-03-16 13:26         ` Jonathan Cameron
  0 siblings, 0 replies; 22+ messages in thread
From: Jonathan Cameron @ 2024-03-16 13:26 UTC (permalink / raw)
  To: Nuno Sá; +Cc: Nuno Sa, linux-iio, Lars-Peter Clausen

On Sat, 9 Mar 2024 17:41:45 +0000
Jonathan Cameron <jic23@kernel.org> wrote:

> On Mon, 04 Mar 2024 09:04:49 +0100
> Nuno Sá <noname.nuno@gmail.com> wrote:
> 
> > On Sun, 2024-03-03 at 14:24 +0000, Jonathan Cameron wrote:  
> > > On Thu, 29 Feb 2024 16:10:28 +0100
> > > Nuno Sa <nuno.sa@analog.com> wrote:
> > >     
> > > > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > > > greatly simplify some code paths.
> > > > 
> > > > While at it, also use __free(kfree) where allocations are done and drop
> > > > obvious comment in iio_channel_read_min().
> > > > 
> > > > Signed-off-by: Nuno Sa <nuno.sa@analog.com>    
> > > 
> > > Hi Nuno
> > > 
> > > Series looks very nice. One trivial thing inline - I can tidy that up whilst
> > > applying if nothing else comes up.
> > > 
> > > Given this obviously touches a lot of core code, even though simple it's
> > > high risk for queuing up late. I also have a complex mess already queued up
> > > for the coming merge window. Hence I'm going to hold off on applying this
> > > series until the start of the next cycle.
> > > 
> > > Nothing outside IIO is going to depend on it, so it's a rather simpler decision
> > > to hold it than for the ones that add new general purpose infrastructure.
> > > 
> > >     
> > 
> > Seems reasonable... It may even give us some time to see how the cond_guard()
> > and scoped_cond_guard() will end up.  
> 
> Absolutely - thankfully converting to the suggestions Linus made will be
> straightforward, so hopefully the worst that happens is a complex merge, or some
> fixing up to do afterwards.
> 
> >   
> > > 
> > >     
> > > >  EXPORT_SYMBOL_GPL(iio_read_channel_attribute);
> > > >  
> > > > @@ -757,29 +711,24 @@ int iio_read_channel_processed_scale(struct
> > > > iio_channel *chan, int *val,
> > > >  	struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(chan-    
> > > > >indio_dev);    
> > > >  	int ret;
> > > >  
> > > > -	mutex_lock(&iio_dev_opaque->info_exist_lock);
> > > > -	if (!chan->indio_dev->info) {
> > > > -		ret = -ENODEV;
> > > > -		goto err_unlock;
> > > > -	}
> > > > +	guard(mutex)(&iio_dev_opaque->info_exist_lock);
> > > > +	if (!chan->indio_dev->info)
> > > > +		return -ENODEV;
> > > >  
> > > >  	if (iio_channel_has_info(chan->channel, IIO_CHAN_INFO_PROCESSED)) {
> > > >  		ret = iio_channel_read(chan, val, NULL,
> > > >  				       IIO_CHAN_INFO_PROCESSED);
> > > >  		if (ret < 0)
> > > > -			goto err_unlock;
> > > > +			return ret;
> > > >  		*val *= scale;    
> > > 
> > > 		return 0;
> > >     
> > > >  	} else {    
> > > could drop the else.
> > >     
> > > >  		ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
> > > >  		if (ret < 0)
> > > > -			goto err_unlock;
> > > > +			return ret;
> > > >  		ret = iio_convert_raw_to_processed_unlocked(chan, *val,
> > > > val,
> > > >  							    scale);    
> > > 		return iio_convert_raw_to_proc...
> > >     
> > 
> > Hmm, unless I completely misunderstood your comments on v2, this was exactly
> > what I had, but you recommended keeping the else branch :).
> >   
> That was a younger me :)  Either way is fine.

I compromised - move the returns into the two branches, but kept the else.
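
i.e. roughly (untested, modulo whitespace):

	if (iio_channel_has_info(chan->channel, IIO_CHAN_INFO_PROCESSED)) {
		ret = iio_channel_read(chan, val, NULL,
				       IIO_CHAN_INFO_PROCESSED);
		if (ret < 0)
			return ret;
		*val *= scale;
		return 0;
	} else {
		ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
		if (ret < 0)
			return ret;
		return iio_convert_raw_to_processed_unlocked(chan, *val, val,
							     scale);
	}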

Given I've started queuing stuff up for next cycle, it seemed sensible to pick
these up. Applied to the togreg-normal branch of iio.git.

That will get rebased on rc1 and become togreg as normal in a few weeks' time,
and hopefully I'll retire the togreg-normal / togreg-cleanup split.

Thanks,

Jonathan

> 
> Jonathan
> 
> 
> > - Nuno Sá
> >   
> 
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/4] iio: trigger: move to the cleanup.h magic
  2024-02-29 15:10 ` [PATCH v3 2/4] iio: trigger: move to the " Nuno Sa
@ 2024-03-16 19:32   ` Andy Shevchenko
  2024-03-18 12:33     ` Jonathan Cameron
  2024-03-16 19:39   ` Andy Shevchenko
  1 sibling, 1 reply; 22+ messages in thread
From: Andy Shevchenko @ 2024-03-16 19:32 UTC (permalink / raw)
  To: Nuno Sa; +Cc: linux-iio, Jonathan Cameron, Lars-Peter Clausen

Thu, Feb 29, 2024 at 04:10:26PM +0100, Nuno Sa wrote:
> Use the new cleanup magic for handling mutexes in IIO. This allows us to
> greatly simplify some code paths.

...

>  	/* Add to list of available triggers held by the IIO core */
> -	mutex_lock(&iio_trigger_list_lock);
> -	if (__iio_trigger_find_by_name(trig_info->name)) {
> -		pr_err("Duplicate trigger name '%s'\n", trig_info->name);
> -		ret = -EEXIST;
> -		goto error_device_del;
> +	scoped_guard(mutex, &iio_trigger_list_lock) {
> +		if (__iio_trigger_find_by_name(trig_info->name)) {
> +			pr_err("Duplicate trigger name '%s'\n", trig_info->name);
> +			ret = -EEXIST;
> +			goto error_device_del;

With scoped_guard() it can't be achieved, but in the original code the
unlocking could potentially be done before printing the message.
Here are the pros and cons of printing messages under the lock:
+ the timestamp and appearance in the log are probably more accurate
- the lock may be held for a much longer time (if a slow console is
  involved)

That said, always consider where to put message printing when a critical
section is involved.
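
E.g. something like this (untested) would keep the print out of the
critical section:

	scoped_guard(mutex, &iio_trigger_list_lock) {
		if (__iio_trigger_find_by_name(trig_info->name)) {
			ret = -EEXIST;
			goto error_duplicate;
		}
		list_add_tail(&trig_info->list, &iio_trigger_list);
	}

	return 0;

error_duplicate:
	pr_err("Duplicate trigger name '%s'\n", trig_info->name);
	device_del(&trig_info->dev);
	...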

> +		}
> +		list_add_tail(&trig_info->list, &iio_trigger_list);
>  	}

...

>  	list_for_each_entry(iter, &iio_trigger_list, list)
> -		if (sysfs_streq(iter->name, name)) {
> -			trig = iter;
> -			iio_trigger_get(trig);
> -			break;
> -		}
> -	mutex_unlock(&iio_trigger_list_lock);
> +		if (sysfs_streq(iter->name, name))
> +			return iio_trigger_get(iter);

In this case the outer {} are better to have.

...

> -	ret = bitmap_find_free_region(trig->pool,
> -				      CONFIG_IIO_CONSUMERS_PER_TRIGGER,
> -				      ilog2(1));

> +		ret = bitmap_find_free_region(trig->pool,
> +					      CONFIG_IIO_CONSUMERS_PER_TRIGGER,
> +					      ilog2(1));

Despite being in the original code, this is a funny magic constant...
(ilog2(1) is just a roundabout way of writing 0, i.e. an order-0,
single-bit region.)

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/4] iio: buffer: iio: core: move to the cleanup.h magic
  2024-02-29 15:10 ` [PATCH v3 3/4] iio: buffer: iio: core: " Nuno Sa
@ 2024-03-16 19:38   ` Andy Shevchenko
  2024-03-18  9:23     ` Nuno Sá
  2024-03-18 12:35     ` Jonathan Cameron
  2024-03-16 19:49   ` Andy Shevchenko
  1 sibling, 2 replies; 22+ messages in thread
From: Andy Shevchenko @ 2024-03-16 19:38 UTC (permalink / raw)
  To: Nuno Sa; +Cc: linux-iio, Jonathan Cameron, Lars-Peter Clausen

Thu, Feb 29, 2024 at 04:10:27PM +0100, Nuno Sa wrote:
> Use the new cleanup magic for handling mutexes in IIO. This allows us to
> greatly simplify some code paths.

...

>  	ret = iio_scan_mask_query(indio_dev, buffer, this_attr->address);
>  	if (ret < 0)
> -		goto error_ret;
> -	if (!state && ret) {
> -		ret = iio_scan_mask_clear(buffer, this_attr->address);
> -		if (ret)
> -			goto error_ret;
> -	} else if (state && !ret) {
> +		return ret;
> +
> +	if (state && ret)
> +		return len;

I would leave the original checks. It's a natural pattern:

	if (foo && !bar)
	if (!foo && bar) // or 'else if' depending on the context

> +	if (state)
>  		ret = iio_scan_mask_set(indio_dev, buffer, this_attr->address);
> -		if (ret)
> -			goto error_ret;
> -	}
> +	else
> +		ret = iio_scan_mask_clear(buffer, this_attr->address);
> +	if (ret)
> +		return ret;
>  
> -error_ret:
> -	mutex_unlock(&iio_dev_opaque->mlock);
> -
> -	return ret < 0 ? ret : len;
> +	return len;

...

>  	/* Already in desired state */
>  	if (inlist == requested_state)
> -		goto done;
> +		return len;

Returning an error code immediately is fine, but splitting up the success
return seems a bit too much to me. It is harder to follow (you really need to
understand how many "success" variants there can be).

>  	if (requested_state)
>  		ret = __iio_update_buffers(indio_dev, buffer, NULL);
>  	else
>  		ret = __iio_update_buffers(indio_dev, NULL, buffer);
> +	if (ret)
> +		return ret;
>  
> -done:
> -	mutex_unlock(&iio_dev_opaque->mlock);
> -	return (ret < 0) ? ret : len;
> +	return len;
>  }

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/4] iio: trigger: move to the cleanup.h magic
  2024-02-29 15:10 ` [PATCH v3 2/4] iio: trigger: move to the " Nuno Sa
  2024-03-16 19:32   ` Andy Shevchenko
@ 2024-03-16 19:39   ` Andy Shevchenko
  2024-03-18  9:22     ` Nuno Sá
  1 sibling, 1 reply; 22+ messages in thread
From: Andy Shevchenko @ 2024-03-16 19:39 UTC (permalink / raw)
  To: Nuno Sa; +Cc: linux-iio, Jonathan Cameron, Lars-Peter Clausen

Thu, Feb 29, 2024 at 04:10:26PM +0100, Nuno Sa wrote:
> Use the new cleanup magic for handling mutexes in IIO. This allows us to
> greatly simplify some code paths.

...

>  static void iio_trigger_put_irq(struct iio_trigger *trig, int irq)
>  {
> -	mutex_lock(&trig->pool_lock);
> +	guard(mutex)(&trig->pool_lock);
>  	clear_bit(irq - trig->subirq_base, trig->pool);

Another side note: Why do we need atomic bit operation(s)?

> -	mutex_unlock(&trig->pool_lock);
>  }

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] iio: inkern: move to the cleanup.h magic
  2024-02-29 15:10 ` [PATCH v3 4/4] iio: inkern: " Nuno Sa
  2024-03-03 14:24   ` Jonathan Cameron
@ 2024-03-16 19:48   ` Andy Shevchenko
  2024-03-18  9:20     ` Nuno Sá
  2024-03-23 18:09     ` Jonathan Cameron
  1 sibling, 2 replies; 22+ messages in thread
From: Andy Shevchenko @ 2024-03-16 19:48 UTC (permalink / raw)
  To: Nuno Sa; +Cc: linux-iio, Jonathan Cameron, Lars-Peter Clausen

Thu, Feb 29, 2024 at 04:10:28PM +0100, Nuno Sa wrote:
> Use the new cleanup magic for handling mutexes in IIO. This allows us to
> greatly simplify some code paths.
> 
> While at it, also use __free(kfree) where allocations are done and drop
> obvious comment in iio_channel_read_min().

...

>  int iio_map_array_register(struct iio_dev *indio_dev, struct iio_map *maps)
>  {
> -	int i = 0, ret = 0;
> +	int i = 0, ret;
>  	struct iio_map_internal *mapi;

Why not make it reverse xmas tree order at the same time?

>  	if (!maps)
>  		return 0;

...

> -error_ret:
> -	if (ret)
> -		iio_map_array_unregister_locked(indio_dev);
> -	mutex_unlock(&iio_map_list_lock);
>  
> +	return 0;
> +error_ret:
> +	iio_map_array_unregister_locked(indio_dev);
>  	return ret;
>  }

Do we really need to split this? (I'm fine with the new code, but it seems
like unneeded churn to me.)

...

> +	struct iio_channel *channel __free(kfree) = kzalloc(sizeof(*channel),
> +							    GFP_KERNEL);

I would indent a bit differently:

	struct iio_channel *channel __free(kfree) =
					kzalloc(sizeof(*channel), GFP_KERNEL);

(maybe fewer TABs, but you get the idea)

>  	if (!channel)
>  		return ERR_PTR(-ENOMEM);

...

> +	struct iio_channel *chans __free(kfree) = kcalloc(nummaps + 1,
> +							  sizeof(*chans),
> +							  GFP_KERNEL);

Ditto.

>  	if (!chans)
>  		return ERR_PTR(-ENOMEM);

...

>  	/* first find matching entry the channel map */
> -	mutex_lock(&iio_map_list_lock);
> -	list_for_each_entry(c_i, &iio_map_list, l) {
> -		if ((name && strcmp(name, c_i->map->consumer_dev_name) != 0) ||
> -		    (channel_name &&
> -		     strcmp(channel_name, c_i->map->consumer_channel) != 0))
> -			continue;
> -		c = c_i;
> -		iio_device_get(c->indio_dev);
> -		break;
> +	scoped_guard(mutex, &iio_map_list_lock) {
> +		list_for_each_entry(c_i, &iio_map_list, l) {
> +			if ((name && strcmp(name, c_i->map->consumer_dev_name) != 0) ||
> +			    (channel_name &&
> +			     strcmp(channel_name, c_i->map->consumer_channel) != 0))

I would kill these ' != 0' pieces, but I see they are in the original code.

> +				continue;
> +			c = c_i;
> +			iio_device_get(c->indio_dev);
> +			break;
> +		}
>  	}

...

> -	channel = kzalloc(sizeof(*channel), GFP_KERNEL);
> +	struct iio_channel *channel __free(kfree) = kzalloc(sizeof(*channel),
> +							    GFP_KERNEL);

Indentation?

...

> -error_no_chan:
> -	kfree(channel);
>  error_no_mem:
>  	iio_device_put(c->indio_dev);
>  	return ERR_PTR(err);

Effectively you move the kfree() to after the device put; would that be a problem?

...

>  struct iio_channel *iio_channel_get_all(struct device *dev)
>  {
>  	const char *name;
> -	struct iio_channel *chans;
> +	struct iio_channel *fw_chans;
>  	struct iio_map_internal *c = NULL;

Move it here for better ordering?

>  	int nummaps = 0;
>  	int mapind = 0;

...

> -	chans = fwnode_iio_channel_get_all(dev);
> +	fw_chans = fwnode_iio_channel_get_all(dev);

I would move it before the conditional...

>  	/*
>  	 * We only want to carry on if the error is -ENODEV.  Anything else
>  	 * should be reported up the stack.
>  	 */
> -	if (!IS_ERR(chans) || PTR_ERR(chans) != -ENODEV)
> -		return chans;

...here.

> +	if (!IS_ERR(fw_chans) || PTR_ERR(fw_chans) != -ENODEV)
> +		return fw_chans;

...

> +	struct iio_channel *chans __free(kfree) = kcalloc(nummaps + 1,
> +							  sizeof(*chans),
> +							  GFP_KERNEL);

Indentation?

> +	if (!chans)
> +		return ERR_PTR(-ENOMEM);

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/4] iio: buffer: iio: core: move to the cleanup.h magic
  2024-02-29 15:10 ` [PATCH v3 3/4] iio: buffer: iio: core: " Nuno Sa
  2024-03-16 19:38   ` Andy Shevchenko
@ 2024-03-16 19:49   ` Andy Shevchenko
  1 sibling, 0 replies; 22+ messages in thread
From: Andy Shevchenko @ 2024-03-16 19:49 UTC (permalink / raw)
  To: Nuno Sa; +Cc: linux-iio, Jonathan Cameron, Lars-Peter Clausen

Thu, Feb 29, 2024 at 04:10:27PM +0100, Nuno Sa wrote:
> Use the new cleanup magic for handling mutexes in IIO. This allows us to
> greatly simplify some code paths.

I forgot: the Subject has a leftover 'iio: core:'.

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] iio: inkern: move to the cleanup.h magic
  2024-03-16 19:48   ` Andy Shevchenko
@ 2024-03-18  9:20     ` Nuno Sá
  2024-03-23 18:09     ` Jonathan Cameron
  1 sibling, 0 replies; 22+ messages in thread
From: Nuno Sá @ 2024-03-18  9:20 UTC (permalink / raw)
  To: Andy Shevchenko, Nuno Sa; +Cc: linux-iio, Jonathan Cameron, Lars-Peter Clausen

On Sat, 2024-03-16 at 21:48 +0200, Andy Shevchenko wrote:
> Thu, Feb 29, 2024 at 04:10:28PM +0100, Nuno Sa wrote:
> > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > greatly simplify some code paths.
> > 
> > While at it, also use __free(kfree) where allocations are done and drop
> > obvious comment in iio_channel_read_min().
> 
> ...
> 
> >  int iio_map_array_register(struct iio_dev *indio_dev, struct iio_map *maps)
> >  {
> > -	int i = 0, ret = 0;
> > +	int i = 0, ret;
> >  	struct iio_map_internal *mapi;
> 
> Why not make it reverse xmas tree order at the same time?
> 
> >  	if (!maps)
> >  		return 0;
> 
> ...
> 
> > -error_ret:
> > -	if (ret)
> > -		iio_map_array_unregister_locked(indio_dev);
> > -	mutex_unlock(&iio_map_list_lock);
> >  
> > +	return 0;
> > +error_ret:
> > +	iio_map_array_unregister_locked(indio_dev);
> >  	return ret;
> >  }
> 
> Do we really need to split this? (I'm fine with the new code, but it seems
> like unneeded churn to me.)
> 
> ...
> 
> > +	struct iio_channel *channel __free(kfree) = kzalloc(sizeof(*channel),
> > +							    GFP_KERNEL);
> 
> I would indent a bit differently:
> 
> 	struct iio_channel *channel __free(kfree) =
> 					kzalloc(sizeof(*channel), GFP_KERNEL);
> 
> (maybe fewer TABs, but you get the idea)
> 
> >  	if (!channel)
> >  		return ERR_PTR(-ENOMEM);
> 
> ...
> 
> > +	struct iio_channel *chans __free(kfree) = kcalloc(nummaps + 1,
> > +							  sizeof(*chans),
> > +							  GFP_KERNEL);
> 
> Ditto.
> 
> >  	if (!chans)
> >  		return ERR_PTR(-ENOMEM);
> 
> ...
> 
> >  	/* first find matching entry the channel map */
> > -	mutex_lock(&iio_map_list_lock);
> > -	list_for_each_entry(c_i, &iio_map_list, l) {
> > -		if ((name && strcmp(name, c_i->map->consumer_dev_name) != 0) ||
> > -		    (channel_name &&
> > -		     strcmp(channel_name, c_i->map->consumer_channel) != 0))
> > -			continue;
> > -		c = c_i;
> > -		iio_device_get(c->indio_dev);
> > -		break;
> > +	scoped_guard(mutex, &iio_map_list_lock) {
> > +		list_for_each_entry(c_i, &iio_map_list, l) {
> > +			if ((name && strcmp(name, c_i->map->consumer_dev_name) != 0) ||
> > +			    (channel_name &&
> > +			     strcmp(channel_name, c_i->map->consumer_channel) != 0))
> 
> I would kill these ' != 0' pieces, but I see they are in the original code.
> 
> > +				continue;
> > +			c = c_i;
> > +			iio_device_get(c->indio_dev);
> > +			break;
> > +		}
> >  	}
> 
> ...
> 
> > -	channel = kzalloc(sizeof(*channel), GFP_KERNEL);
> > +	struct iio_channel *channel __free(kfree) = kzalloc(sizeof(*channel),
> > +							    GFP_KERNEL);
> 
> Indentation?
> 
> ...
> 
> > -error_no_chan:
> > -	kfree(channel);
> >  error_no_mem:
> >  	iio_device_put(c->indio_dev);
> >  	return ERR_PTR(err);
> 
> > Effectively you move the kfree after the device put; would that be a problem?
> 

This one got my attention... but I think we're fine. Still, it's a subtle
ordering change that I had not noticed.
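
For reference, a minimal sketch (example_setup() is a made-up step) of why the
ordering changes: __free() cleanups fire at scope exit, i.e. after anything we
call explicitly on the return path:

	static struct iio_channel *example_get(struct iio_dev *indio_dev)
	{
		struct iio_channel *channel __free(kfree) =
					kzalloc(sizeof(*channel), GFP_KERNEL);
		if (!channel)
			return ERR_PTR(-ENOMEM);

		if (example_setup(channel)) {		/* hypothetical */
			iio_device_put(indio_dev);	/* runs first */
			return ERR_PTR(-EINVAL);	/* kfree(channel) runs after */
		}

		return_ptr(channel);	/* hands ownership out, skips the kfree */
	}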

- Nuno Sá



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/4] iio: trigger: move to the cleanup.h magic
  2024-03-16 19:39   ` Andy Shevchenko
@ 2024-03-18  9:22     ` Nuno Sá
  0 siblings, 0 replies; 22+ messages in thread
From: Nuno Sá @ 2024-03-18  9:22 UTC (permalink / raw)
  To: Andy Shevchenko, Nuno Sa; +Cc: linux-iio, Jonathan Cameron, Lars-Peter Clausen

On Sat, 2024-03-16 at 21:39 +0200, Andy Shevchenko wrote:
> Thu, Feb 29, 2024 at 04:10:26PM +0100, Nuno Sa wrote:
> > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > greatly simplify some code paths.
> 
> ...
> 
> >  static void iio_trigger_put_irq(struct iio_trigger *trig, int irq)
> >  {
> > -	mutex_lock(&trig->pool_lock);
> > +	guard(mutex)(&trig->pool_lock);
> >  	clear_bit(irq - trig->subirq_base, trig->pool);
> 
> Another side note: Why do we need atomic bit operation(s)?
> 

I did not check the code, but my guess is that the lock is always held, so we
likely don't need the atomic variants. But that's something for a future patch.
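
If that holds, the non-atomic variant would be enough - a sketch, assuming the
pool really is only ever touched with pool_lock held:

	static void iio_trigger_put_irq(struct iio_trigger *trig, int irq)
	{
		guard(mutex)(&trig->pool_lock);
		/* serialized by pool_lock, so the non-atomic helper
		 * would do instead of the atomic clear_bit() */
		__clear_bit(irq - trig->subirq_base, trig->pool);
	}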

- Nuno Sá

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/4] iio: buffer: iio: core: move to the cleanup.h magic
  2024-03-16 19:38   ` Andy Shevchenko
@ 2024-03-18  9:23     ` Nuno Sá
  2024-03-18 12:35     ` Jonathan Cameron
  1 sibling, 0 replies; 22+ messages in thread
From: Nuno Sá @ 2024-03-18  9:23 UTC (permalink / raw)
  To: Andy Shevchenko, Nuno Sa; +Cc: linux-iio, Jonathan Cameron, Lars-Peter Clausen

On Sat, 2024-03-16 at 21:38 +0200, Andy Shevchenko wrote:
> Thu, Feb 29, 2024 at 04:10:27PM +0100, Nuno Sa wrote:
> > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > greatly simplify some code paths.
> 
> ...
> 
> >  	ret = iio_scan_mask_query(indio_dev, buffer, this_attr->address);
> >  	if (ret < 0)
> > -		goto error_ret;
> > -	if (!state && ret) {
> > -		ret = iio_scan_mask_clear(buffer, this_attr->address);
> > -		if (ret)
> > -			goto error_ret;
> > -	} else if (state && !ret) {
> > +		return ret;
> > +
> > +	if (state && ret)
> > +		return len;
> 
> I would leave the original checks. It's a natural pattern.
> 

Jonathan suggested having it like it is now (and I have no strong feelings
from my side)...

- Nuno Sá



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/4] iio: trigger: move to the cleanup.h magic
  2024-03-16 19:32   ` Andy Shevchenko
@ 2024-03-18 12:33     ` Jonathan Cameron
  2024-03-18 13:12       ` Andy Shevchenko
  0 siblings, 1 reply; 22+ messages in thread
From: Jonathan Cameron @ 2024-03-18 12:33 UTC (permalink / raw)
  To: Andy Shevchenko; +Cc: Nuno Sa, linux-iio, Jonathan Cameron, Lars-Peter Clausen

On Sat, 16 Mar 2024 21:32:56 +0200
Andy Shevchenko <andy.shevchenko@gmail.com> wrote:

> Thu, Feb 29, 2024 at 04:10:26PM +0100, Nuno Sa wrote:
> > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > greatly simplify some code paths.  
> 
> ...
> 
> >  	/* Add to list of available triggers held by the IIO core */
> > -	mutex_lock(&iio_trigger_list_lock);
> > -	if (__iio_trigger_find_by_name(trig_info->name)) {
> > -		pr_err("Duplicate trigger name '%s'\n", trig_info->name);
> > -		ret = -EEXIST;
> > -		goto error_device_del;
> > +	scoped_guard(mutex, &iio_trigger_list_lock) {
> > +		if (__iio_trigger_find_by_name(trig_info->name)) {
> > +			pr_err("Duplicate trigger name '%s'\n", trig_info->name);
> > +			ret = -EEXIST;
> > +			goto error_device_del;  
> 
> With scoped_guard() it can't be achieved, but in the original code the
> unlocking can potentially be done before printing the message.
> Here are the pros and cons of printing messages under the lock:
> + the timestamp and appearance in the log are probably more accurate
> - the lock may be held for a much longer time (if a slow console is
>   involved)
> 
> That said, always consider where to put message printing when it's inside a
> critical section.

It's an error message that's unlikely to ever occur. We almost certainly don't care.
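
Just to spell out Andy's point, roughly (a sketch, not from the patch - the
'duplicate' local is made up):

	bool duplicate;

	mutex_lock(&iio_trigger_list_lock);
	duplicate = !!__iio_trigger_find_by_name(trig_info->name);
	if (!duplicate)
		list_add_tail(&trig_info->list, &iio_trigger_list);
	mutex_unlock(&iio_trigger_list_lock);

	/* message now printed with the lock already dropped */
	if (duplicate) {
		pr_err("Duplicate trigger name '%s'\n", trig_info->name);
		ret = -EEXIST;
		goto error_device_del;
	}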

> 
> > +		}
> > +		list_add_tail(&trig_info->list, &iio_trigger_list);
> >  	}  
> 
> ...
> 
> >  	list_for_each_entry(iter, &iio_trigger_list, list)
> > -		if (sysfs_streq(iter->name, name)) {
> > -			trig = iter;
> > -			iio_trigger_get(trig);
> > -			break;
> > -		}
> > -	mutex_unlock(&iio_trigger_list_lock);
> > +		if (sysfs_streq(iter->name, name))
> > +			return iio_trigger_get(iter);  
> 
> In this case it's better to keep the outer {}.
> 
> ...
> 
> > -	ret = bitmap_find_free_region(trig->pool,
> > -				      CONFIG_IIO_CONSUMERS_PER_TRIGGER,
> > -				      ilog2(1));  
> 
> > +		ret = bitmap_find_free_region(trig->pool,
> > +					      CONFIG_IIO_CONSUMERS_PER_TRIGGER,
> > +					      ilog2(1));  
> 
> Despite being in the original code, this is a funny magic constant...

Not that magic: it's a build-time config variable that avoids the complexity
of dynamically expanding various structures. We could have picked a big
number, but someone will always want a bigger one, and from what I recall
actually making it expandable was nasty to do. It's been a long time, though,
so I'm open to patches that get rid of this in favor of a dynamic solution.

Jonathan

> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/4] iio: buffer: iio: core: move to the cleanup.h magic
  2024-03-16 19:38   ` Andy Shevchenko
  2024-03-18  9:23     ` Nuno Sá
@ 2024-03-18 12:35     ` Jonathan Cameron
  1 sibling, 0 replies; 22+ messages in thread
From: Jonathan Cameron @ 2024-03-18 12:35 UTC (permalink / raw)
  To: Andy Shevchenko; +Cc: Nuno Sa, linux-iio, Jonathan Cameron, Lars-Peter Clausen

On Sat, 16 Mar 2024 21:38:04 +0200
Andy Shevchenko <andy.shevchenko@gmail.com> wrote:

> Thu, Feb 29, 2024 at 04:10:27PM +0100, Nuno Sa wrote:
> > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > greatly simplify some code paths.  
> 
> ...
> 
> >  	ret = iio_scan_mask_query(indio_dev, buffer, this_attr->address);
> >  	if (ret < 0)
> > -		goto error_ret;
> > -	if (!state && ret) {
> > -		ret = iio_scan_mask_clear(buffer, this_attr->address);
> > -		if (ret)
> > -			goto error_ret;
> > -	} else if (state && !ret) {
> > +		return ret;
> > +
> > +	if (state && ret)
> > +		return len;  
> 
> I would leave the original checks. It's a natural pattern:
> 
> 	if (foo && !bar)
> 	if (!foo && bar) // or 'else if' depending on the context
> 
> > +	if (state)
> >  		ret = iio_scan_mask_set(indio_dev, buffer, this_attr->address);
> > -		if (ret)
> > -			goto error_ret;
> > -	}
> > +	else
> > +		ret = iio_scan_mask_clear(buffer, this_attr->address);
> > +	if (ret)
> > +		return ret;
> >  
> > -error_ret:
> > -	mutex_unlock(&iio_dev_opaque->mlock);
> > -
> > -	return ret < 0 ? ret : len;
> > +	return len;  
> 
> ...
> 
> >  	/* Already in desired state */
> >  	if (inlist == requested_state)
> > -		goto done;
> > +		return len;  
> 
> Returning an error code immediately is fine, but splitting the success return
> seems a bit too much to me. It is harder to follow (you really need to
> understand how many "success" variants there can be).

The pattern of detecting 'nothing to do' and exiting early is pretty common.
I agree that it gets dubious if there are lots of early 'success' exits, but
this one case seems reasonable to me.
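
Condensed, the two shapes side by side (fragments only, for illustration):

	/* before: success and error both funnel through the unlock */
	if (inlist == requested_state)
		goto done;
	...
done:
	mutex_unlock(&iio_dev_opaque->mlock);
	return ret < 0 ? ret : len;

	/* after: guard() releases the mutex on any return, so the
	 * 'nothing to do' case simply returns len early */
	guard(mutex)(&iio_dev_opaque->mlock);
	if (inlist == requested_state)
		return len;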

Jonathan


> 
> >  	if (requested_state)
> >  		ret = __iio_update_buffers(indio_dev, buffer, NULL);
> >  	else
> >  		ret = __iio_update_buffers(indio_dev, NULL, buffer);
> > +	if (ret)
> > +		return ret;
> >  
> > -done:
> > -	mutex_unlock(&iio_dev_opaque->mlock);
> > -	return (ret < 0) ? ret : len;
> > +	return len;
> >  }  
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/4] iio: trigger: move to the cleanup.h magic
  2024-03-18 12:33     ` Jonathan Cameron
@ 2024-03-18 13:12       ` Andy Shevchenko
  2024-03-18 14:15         ` Jonathan Cameron
  0 siblings, 1 reply; 22+ messages in thread
From: Andy Shevchenko @ 2024-03-18 13:12 UTC (permalink / raw)
  To: Jonathan Cameron; +Cc: Nuno Sa, linux-iio, Jonathan Cameron, Lars-Peter Clausen

On Mon, Mar 18, 2024 at 2:33 PM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
> On Sat, 16 Mar 2024 21:32:56 +0200
> Andy Shevchenko <andy.shevchenko@gmail.com> wrote:
> > Thu, Feb 29, 2024 at 04:10:26PM +0100, Nuno Sa wrote:

...

> > > -   ret = bitmap_find_free_region(trig->pool,
> > > -                                 CONFIG_IIO_CONSUMERS_PER_TRIGGER,
> > > -                                 ilog2(1));
> >
> > > +           ret = bitmap_find_free_region(trig->pool,
> > > +                                         CONFIG_IIO_CONSUMERS_PER_TRIGGER,
> > > +                                         ilog2(1));
> >
> > Despite being in the original code, this is a funny magic constant...
>
> Not that magic: it's a build-time config variable that avoids the complexity
> of dynamically expanding various structures. We could have picked a big
> number, but someone will always want a bigger one, and from what I recall
> actually making it expandable was nasty to do. It's been a long time, though,
> so I'm open to patches that get rid of this in favor of a dynamic solution.

I didn't get you, sorry. The logarithm (in any base) of 1 is 0. Writing
it as an arithmetic expression seems funny to me.

-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/4] iio: trigger: move to the cleanup.h magic
  2024-03-18 13:12       ` Andy Shevchenko
@ 2024-03-18 14:15         ` Jonathan Cameron
  0 siblings, 0 replies; 22+ messages in thread
From: Jonathan Cameron @ 2024-03-18 14:15 UTC (permalink / raw)
  To: Andy Shevchenko; +Cc: Nuno Sa, linux-iio, Jonathan Cameron, Lars-Peter Clausen

On Mon, 18 Mar 2024 15:12:20 +0200
Andy Shevchenko <andy.shevchenko@gmail.com> wrote:

> On Mon, Mar 18, 2024 at 2:33 PM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> > On Sat, 16 Mar 2024 21:32:56 +0200
> > Andy Shevchenko <andy.shevchenko@gmail.com> wrote:  
> > > Thu, Feb 29, 2024 at 04:10:26PM +0100, Nuno Sa wrote:
> 
> ...
> 
> > > > -   ret = bitmap_find_free_region(trig->pool,
> > > > -                                 CONFIG_IIO_CONSUMERS_PER_TRIGGER,
> > > > -                                 ilog2(1));  
> > >  
> > > > +           ret = bitmap_find_free_region(trig->pool,
> > > > +                                         CONFIG_IIO_CONSUMERS_PER_TRIGGER,
> > > > +                                         ilog2(1));  
> > >
> > > Despite being in the original code, this is a funny magic constant...
> >
> > Not that magic: it's a build-time config variable that avoids the complexity
> > of dynamically expanding various structures. We could have picked a big
> > number, but someone will always want a bigger one, and from what I recall
> > actually making it expandable was nasty to do. It's been a long time, though,
> > so I'm open to patches that get rid of this in favor of a dynamic solution.
> 
> I didn't get you, sorry. The logarithm (in any base) of 1 is 0. Writing
> it as an arithmetic expression seems funny to me.

Ah. I was looking at the line above ;)

That one is because it lines up with the docs for bitmap_find_free_region().
The last parameter is an order, but it seems more natural to express it as a
number of bits, hence the 1, rather than superficially looking like we are
asking for a region of length 0.
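
Spelled out, the same call with that reasoning as a comment:

	/* the last argument is an allocation order: the region spans
	 * 2^order bits. ilog2(1) == 0, so this requests exactly one
	 * bit, and writing it as ilog2(1) documents "1 bit" rather
	 * than a bare 0 that reads like a zero-length region. */
	ret = bitmap_find_free_region(trig->pool,
				      CONFIG_IIO_CONSUMERS_PER_TRIGGER,
				      ilog2(1));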


> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] iio: inkern: move to the cleanup.h magic
  2024-03-16 19:48   ` Andy Shevchenko
  2024-03-18  9:20     ` Nuno Sá
@ 2024-03-23 18:09     ` Jonathan Cameron
  1 sibling, 0 replies; 22+ messages in thread
From: Jonathan Cameron @ 2024-03-23 18:09 UTC (permalink / raw)
  To: Andy Shevchenko; +Cc: Nuno Sa, linux-iio, Lars-Peter Clausen

On Sat, 16 Mar 2024 21:48:18 +0200
Andy Shevchenko <andy.shevchenko@gmail.com> wrote:

> Thu, Feb 29, 2024 at 04:10:28PM +0100, Nuno Sa wrote:
> > Use the new cleanup magic for handling mutexes in IIO. This allows us to
> > greatly simplify some code paths.
> > 
> > While at it, also use __free(kfree) where allocations are done and drop
> > obvious comment in iio_channel_read_min().  
> 
> ...
> 
> >  int iio_map_array_register(struct iio_dev *indio_dev, struct iio_map *maps)
> >  {
> > -	int i = 0, ret = 0;
> > +	int i = 0, ret;
> >  	struct iio_map_internal *mapi;  
> 
> Why not make it reversed xmas tree order at the same time?
> 

I tweaked this. I went a bit further, as mixing declarations that
set values with ones that don't is a bad pattern for readability.

	struct iio_map_internal *mapi;
	int i = 0;
	int ret;

> >  	if (!maps)
> >  		return 0;  
> 
> ...
> 
> > -error_ret:
> > -	if (ret)
> > -		iio_map_array_unregister_locked(indio_dev);
> > -	mutex_unlock(&iio_map_list_lock);
> >  
> > +	return 0;
> > +error_ret:
> > +	iio_map_array_unregister_locked(indio_dev);
> >  	return ret;
> >  }  
> 
> Do we really need to split this? (I'm fine with the new code, but it seems
> like unneeded churn to me.)

I much prefer not to have the
	if (ret) // error case
		do something.

	//back to both good and bad paths.
	return ret;

pattern - so I'm very keen to have this split, as I disliked the original
code, and there is even less reason to combine the paths now that we don't
need the mutex_unlock.
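
In other words, a condensed before/after of the tail of
iio_map_array_register():

	/* before: one label served both the good and the bad path */
error_ret:
	if (ret)
		iio_map_array_unregister_locked(indio_dev);
	mutex_unlock(&iio_map_list_lock);
	return ret;

	/* after: the label is reached only on error */
	return 0;
error_ret:
	iio_map_array_unregister_locked(indio_dev);
	return ret;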


> 
> ...
> 
> > +	struct iio_channel *channel __free(kfree) = kzalloc(sizeof(*channel),
> > +							    GFP_KERNEL);  
> 
> I would indent a bit differently:
> 
> 	struct iio_channel *channel __free(kfree) =
> 					kzalloc(sizeof(*channel), GFP_KERNEL);
> 
> (maybe fewer TABs, but you get the idea)
Given I was rebasing anyway, I tidied this up (in 4 places) as well (fewer tabs ;)

> 
> >  	if (!channel)
> >  		return ERR_PTR(-ENOMEM);  
> 
> ...
> 
> > +	struct iio_channel *chans __free(kfree) = kcalloc(nummaps + 1,
> > +							  sizeof(*chans),
> > +							  GFP_KERNEL);  
> 
> Ditto.
> 
> >  	if (!chans)
> >  		return ERR_PTR(-ENOMEM);  
> 
> ...
> 
> >  	/* first find matching entry the channel map */
> > -	mutex_lock(&iio_map_list_lock);
> > -	list_for_each_entry(c_i, &iio_map_list, l) {
> > -		if ((name && strcmp(name, c_i->map->consumer_dev_name) != 0) ||
> > -		    (channel_name &&
> > -		     strcmp(channel_name, c_i->map->consumer_channel) != 0))
> > -			continue;
> > -		c = c_i;
> > -		iio_device_get(c->indio_dev);
> > -		break;
> > +	scoped_guard(mutex, &iio_map_list_lock) {
> > +		list_for_each_entry(c_i, &iio_map_list, l) {
> > +			if ((name && strcmp(name, c_i->map->consumer_dev_name) != 0) ||
> > +			    (channel_name &&
> > +			     strcmp(channel_name, c_i->map->consumer_channel) != 0))  
> 
> I would kill these ' != 0' pieces, but I see they are in the original code.

Don't mind a change doing that, but not in this patch.

> 
> > +				continue;
> > +			c = c_i;
> > +			iio_device_get(c->indio_dev);
> > +			break;
> > +		}
> >  	}  
> 
> ...
> 
> > -	channel = kzalloc(sizeof(*channel), GFP_KERNEL);
> > +	struct iio_channel *channel __free(kfree) = kzalloc(sizeof(*channel),
> > +							    GFP_KERNEL);  
> 
> Indentation?
> 
> ...
> 
> > -error_no_chan:
> > -	kfree(channel);
> >  error_no_mem:
> >  	iio_device_put(c->indio_dev);
> >  	return ERR_PTR(err);  
> 
> Effectively you move the kfree after the device put; would that be a problem?
It's not immediately obvious what that put pairs with, so we should probably
address that a bit more clearly anyway - but the change should be safe.

> 
> ...
> 
> >  struct iio_channel *iio_channel_get_all(struct device *dev)
> >  {
> >  	const char *name;
> > -	struct iio_channel *chans;
> > +	struct iio_channel *fw_chans;
> >  	struct iio_map_internal *c = NULL;  
> 
> Move it here for better ordering?
Trivial, but meh, I'm tweaking anyway so done.
> 
> >  	int nummaps = 0;
> >  	int mapind = 0;  
> 
> ...
> 
> > -	chans = fwnode_iio_channel_get_all(dev);
> > +	fw_chans = fwnode_iio_channel_get_all(dev);  
> 
> I would move it before the conditional...
> 
> >  	/*
> >  	 * We only want to carry on if the error is -ENODEV.  Anything else
> >  	 * should be reported up the stack.
> >  	 */
> > -	if (!IS_ERR(chans) || PTR_ERR(chans) != -ENODEV)
> > -		return chans;  
> 
> ...here.
> 
> > +	if (!IS_ERR(fw_chans) || PTR_ERR(fw_chans) != -ENODEV)
> > +		return fw_chans;  
> 
> ...
> 
> > +	struct iio_channel *chans __free(kfree) = kcalloc(nummaps + 1,
> > +							  sizeof(*chans),
> > +							  GFP_KERNEL);  
> 
> Indentation?
> 
> > +	if (!chans)
> > +		return ERR_PTR(-ENOMEM);  
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2024-03-23 18:10 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-29 15:10 [PATCH v3 0/4] iio: move IIO to the cleanup.h magic Nuno Sa
2024-02-29 15:10 ` [PATCH v3 1/4] iio: core: move to " Nuno Sa
2024-02-29 15:10 ` [PATCH v3 2/4] iio: trigger: move to the " Nuno Sa
2024-03-16 19:32   ` Andy Shevchenko
2024-03-18 12:33     ` Jonathan Cameron
2024-03-18 13:12       ` Andy Shevchenko
2024-03-18 14:15         ` Jonathan Cameron
2024-03-16 19:39   ` Andy Shevchenko
2024-03-18  9:22     ` Nuno Sá
2024-02-29 15:10 ` [PATCH v3 3/4] iio: buffer: iio: core: " Nuno Sa
2024-03-16 19:38   ` Andy Shevchenko
2024-03-18  9:23     ` Nuno Sá
2024-03-18 12:35     ` Jonathan Cameron
2024-03-16 19:49   ` Andy Shevchenko
2024-02-29 15:10 ` [PATCH v3 4/4] iio: inkern: " Nuno Sa
2024-03-03 14:24   ` Jonathan Cameron
2024-03-04  8:04     ` Nuno Sá
2024-03-09 17:41       ` Jonathan Cameron
2024-03-16 13:26         ` Jonathan Cameron
2024-03-16 19:48   ` Andy Shevchenko
2024-03-18  9:20     ` Nuno Sá
2024-03-23 18:09     ` Jonathan Cameron

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.