* [PATCH v4 0/3] iio: add support for hardware fifos
@ 2015-03-03 16:20 Octavian Purdila
  2015-03-03 16:21 ` [PATCH v4 1/3] iio: add watermark logic to iio read and poll Octavian Purdila
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Octavian Purdila @ 2015-03-03 16:20 UTC (permalink / raw)
  To: linux-iio; +Cc: srinivas.pandruvada, Octavian Purdila

Hi Jonathan,

This is the 4th version of the hardware fifo patch set; it addresses
the review comments on the previous version.

Changes since v3:

* remove hwfifo_length and make hwfifo_watermark read-only

* remove trigger for hardware fifo

* use the buffer watermark as a hint for the hardware fifo

* hwfifo_watermark is negative if the device does not support a
  hardware fifo, 0 if a hardware fifo is supported but currently
  disabled, and strictly positive if the hardware fifo is enabled, in
  which case the value is the watermark for the hardware fifo

* the hardware fifo is activated by the device driver when it makes
  sense (e.g. at buffer enable time if there is no conflicting
  trigger)

* move the hwfifo operations to struct iio_info

* remove the flush from the poll operation - it causes unnecessary
  flush operations

* move the wait condition logic into a separate function to make it
  more readable

* bmc150: make sure to check that the I2C bus supports either full I2C
  or at least SMBus I2C block reads, as fifo reads must do a burst read
  of the whole frame (all 3 axes)

* bmc150: rework the way we timestamp the samples stored in the fifo
  to account for sampling frequency variations from device to device

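
The tri-state hwfifo_watermark semantics above can be sketched as a
small helper; this is an illustrative userspace-side interpretation
only (the enum and function names are made up, not part of the ABI):

```c
#include <assert.h>

/* Illustrative classification of the hwfifo_watermark sysfs value. */
enum hwfifo_state {
	HWFIFO_UNSUPPORTED,	/* negative: no hardware fifo on this device */
	HWFIFO_DISABLED,	/* zero: supported but currently disabled */
	HWFIFO_ENABLED,		/* positive: enabled, value is the watermark */
};

static enum hwfifo_state hwfifo_state_from_sysfs(int val)
{
	if (val < 0)
		return HWFIFO_UNSUPPORTED;
	if (val == 0)
		return HWFIFO_DISABLED;
	return HWFIFO_ENABLED;
}
```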

Josselin Costanzi (1):
  iio: add watermark logic to iio read and poll

Octavian Purdila (2):
  iio: add support for hardware fifo
  iio: bmc150_accel: add support for hardware fifo

 Documentation/ABI/testing/sysfs-bus-iio  |  40 ++++
 drivers/iio/accel/bmc150-accel.c         | 353 +++++++++++++++++++++++++++++--
 drivers/iio/industrialio-buffer.c        | 196 ++++++++++++++---
 drivers/iio/kfifo_buf.c                  |  11 +-
 drivers/staging/iio/accel/sca3000_ring.c |   4 +-
 include/linux/iio/buffer.h               |   8 +-
 include/linux/iio/iio.h                  |  26 +++
 7 files changed, 587 insertions(+), 51 deletions(-)

-- 
1.9.1



* [PATCH v4 1/3] iio: add watermark logic to iio read and poll
  2015-03-03 16:20 [PATCH v4 0/3] iio: add support for hardware fifos Octavian Purdila
@ 2015-03-03 16:21 ` Octavian Purdila
  2015-03-03 17:46   ` Lars-Peter Clausen
  2015-03-03 16:21 ` [PATCH v4 2/3] iio: add support for hardware fifo Octavian Purdila
  2015-03-03 16:21 ` [PATCH v4 3/3] iio: bmc150_accel: " Octavian Purdila
  2 siblings, 1 reply; 8+ messages in thread
From: Octavian Purdila @ 2015-03-03 16:21 UTC (permalink / raw)
  To: linux-iio; +Cc: srinivas.pandruvada, Josselin Costanzi, Octavian Purdila

From: Josselin Costanzi <josselin.costanzi@mobile-devices.fr>

Currently, a blocking read on an IIO buffer only waits until at least
one data element is available.
This patch makes the reader sleep until enough data is collected before
returning to userspace. This should limit the number of read() calls
when retrieving data in batches.
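
The read sizing used here (sleep until the lesser of the requested
amount and the watermark is available) can be sketched as follows; the
helper name is illustrative, not the kernel function itself:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative sketch of the sizing rule: a blocking read targets
 * min(requested datums, watermark) before returning to userspace.
 */
static size_t iio_read_target(size_t n_bytes, size_t datum_size,
			      size_t watermark)
{
	size_t want = n_bytes / datum_size;

	return want < watermark ? want : watermark;
}
```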

Co-author: Yannick Bedhomme <yannick.bedhomme@mobile-devices.fr>
Signed-off-by: Josselin Costanzi <josselin.costanzi@mobile-devices.fr>
[rebased and remove buffer timeout]
Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
---
 Documentation/ABI/testing/sysfs-bus-iio  |  15 ++++
 drivers/iio/industrialio-buffer.c        | 124 ++++++++++++++++++++++++++-----
 drivers/iio/kfifo_buf.c                  |  11 +--
 drivers/staging/iio/accel/sca3000_ring.c |   4 +-
 include/linux/iio/buffer.h               |   8 +-
 5 files changed, 132 insertions(+), 30 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
index 9a70c31..1283ca7 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio
+++ b/Documentation/ABI/testing/sysfs-bus-iio
@@ -1249,3 +1249,18 @@ Contact:	linux-iio@vger.kernel.org
 Description:
 		Specifies number of seconds in which we compute the steps
 		that occur in order to decide if the consumer is making steps.
+
+What:		/sys/bus/iio/devices/iio:deviceX/buffer/watermark
+KernelVersion:	3.21
+Contact:	linux-iio@vger.kernel.org
+Description:
+		A single positive integer specifying the maximum number of scan
+		elements to wait for.
+		Poll will block until the watermark is reached.
+		A blocking read will wait until the lesser of the requested
+		read amount and the watermark is available.
+		A non-blocking read will retrieve the available samples from
+		the buffer even if there are fewer samples than the watermark
+		level. This allows the application to block on poll with a
+		timeout, read the available samples after the timeout expires,
+		and thus have a maximum delay guarantee.
diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index c2d5440..ecf01c9 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -37,7 +37,7 @@ static bool iio_buffer_is_active(struct iio_buffer *buf)
 	return !list_empty(&buf->buffer_list);
 }
 
-static bool iio_buffer_data_available(struct iio_buffer *buf)
+static size_t iio_buffer_data_available(struct iio_buffer *buf)
 {
 	return buf->access->data_available(buf);
 }
@@ -53,6 +53,9 @@ ssize_t iio_buffer_read_first_n_outer(struct file *filp, char __user *buf,
 {
 	struct iio_dev *indio_dev = filp->private_data;
 	struct iio_buffer *rb = indio_dev->buffer;
+	size_t datum_size = rb->bytes_per_datum;
+	size_t to_read;
+	size_t count = 0;
 	int ret;
 
 	if (!indio_dev->info)
@@ -61,26 +64,48 @@ ssize_t iio_buffer_read_first_n_outer(struct file *filp, char __user *buf,
 	if (!rb || !rb->access->read_first_n)
 		return -EINVAL;
 
-	do {
-		if (!iio_buffer_data_available(rb)) {
-			if (filp->f_flags & O_NONBLOCK)
-				return -EAGAIN;
+	/*
+	 * If datum_size is 0 there will never be anything to read from the
+	 * buffer, so signal end of file now.
+	 */
+	if (!datum_size)
+		return 0;
 
+	to_read = min_t(size_t, n / datum_size, rb->watermark);
+
+	do {
+		if (filp->f_flags & O_NONBLOCK) {
+			if (!iio_buffer_data_available(rb)) {
+				ret = -EAGAIN;
+				break;
+			}
+		} else {
 			ret = wait_event_interruptible(rb->pollq,
-					iio_buffer_data_available(rb) ||
-					indio_dev->info == NULL);
+			       iio_buffer_data_available(rb) >= to_read ||
+						       indio_dev->info == NULL);
 			if (ret)
 				return ret;
-			if (indio_dev->info == NULL)
-				return -ENODEV;
+			if (indio_dev->info == NULL) {
+				ret = -ENODEV;
+				break;
+			}
 		}
 
-		ret = rb->access->read_first_n(rb, n, buf);
-		if (ret == 0 && (filp->f_flags & O_NONBLOCK))
-			ret = -EAGAIN;
-	 } while (ret == 0);
+		ret = rb->access->read_first_n(rb, n, buf + count);
+		if (ret < 0)
+			break;
 
-	return ret;
+		count += ret;
+		n -= ret;
+		to_read -= ret / datum_size;
+	 } while (to_read > 0);
+
+	if (count)
+		return count;
+	if (ret < 0)
+		return ret;
+
+	return -EAGAIN;
 }
 
 /**
@@ -96,9 +121,8 @@ unsigned int iio_buffer_poll(struct file *filp,
 		return -ENODEV;
 
 	poll_wait(filp, &rb->pollq, wait);
-	if (iio_buffer_data_available(rb))
+	if (iio_buffer_data_available(rb) >= rb->watermark)
 		return POLLIN | POLLRDNORM;
-	/* need a way of knowing if there may be enough data... */
 	return 0;
 }
 
@@ -123,6 +147,7 @@ void iio_buffer_init(struct iio_buffer *buffer)
 	INIT_LIST_HEAD(&buffer->buffer_list);
 	init_waitqueue_head(&buffer->pollq);
 	kref_init(&buffer->ref);
+	buffer->watermark = 1;
 }
 EXPORT_SYMBOL(iio_buffer_init);
 
@@ -418,7 +443,16 @@ static ssize_t iio_buffer_write_length(struct device *dev,
 	}
 	mutex_unlock(&indio_dev->mlock);
 
-	return ret ? ret : len;
+	if (ret)
+		return ret;
+
+	if (buffer->length)
+		val = buffer->length;
+
+	if (val < buffer->watermark)
+		buffer->watermark = val;
+
+	return len;
 }
 
 static ssize_t iio_buffer_show_enable(struct device *dev,
@@ -472,6 +506,7 @@ static void iio_buffer_activate(struct iio_dev *indio_dev,
 static void iio_buffer_deactivate(struct iio_buffer *buffer)
 {
 	list_del_init(&buffer->buffer_list);
+	wake_up_interruptible(&buffer->pollq);
 	iio_buffer_put(buffer);
 }
 
@@ -754,16 +789,59 @@ done:
 
 static const char * const iio_scan_elements_group_name = "scan_elements";
 
+static ssize_t iio_buffer_show_watermark(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+	struct iio_buffer *buffer = indio_dev->buffer;
+
+	return sprintf(buf, "%u\n", buffer->watermark);
+}
+
+static ssize_t iio_buffer_store_watermark(struct device *dev,
+					  struct device_attribute *attr,
+					  const char *buf,
+					  size_t len)
+{
+	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+	struct iio_buffer *buffer = indio_dev->buffer;
+	unsigned int val;
+	int ret;
+
+	ret = kstrtouint(buf, 10, &val);
+	if (ret)
+		return ret;
+
+	if (val > buffer->length)
+		return -EINVAL;
+
+	mutex_lock(&indio_dev->mlock);
+	if (iio_buffer_is_active(indio_dev->buffer)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	buffer->watermark = val;
+	ret = 0;
+out:
+	mutex_unlock(&indio_dev->mlock);
+	return ret ? ret : len;
+}
+
 static DEVICE_ATTR(length, S_IRUGO | S_IWUSR, iio_buffer_read_length,
 		   iio_buffer_write_length);
 static struct device_attribute dev_attr_length_ro = __ATTR(length,
 	S_IRUGO, iio_buffer_read_length, NULL);
 static DEVICE_ATTR(enable, S_IRUGO | S_IWUSR,
 		   iio_buffer_show_enable, iio_buffer_store_enable);
+static DEVICE_ATTR(watermark, S_IRUGO | S_IWUSR,
+		   iio_buffer_show_watermark, iio_buffer_store_watermark);
 
 static struct attribute *iio_buffer_attrs[] = {
 	&dev_attr_length.attr,
 	&dev_attr_enable.attr,
+	&dev_attr_watermark.attr,
 };
 
 int iio_buffer_alloc_sysfs_and_mask(struct iio_dev *indio_dev)
@@ -944,8 +1022,18 @@ static const void *iio_demux(struct iio_buffer *buffer,
 static int iio_push_to_buffer(struct iio_buffer *buffer, const void *data)
 {
 	const void *dataout = iio_demux(buffer, data);
+	int ret;
+
+	ret = buffer->access->store_to(buffer, dataout);
+	if (ret)
+		return ret;
 
-	return buffer->access->store_to(buffer, dataout);
+	/*
+	 * We can't just test for watermark to decide if we wake the poll queue
+	 * because read may request less samples than the watermark.
+	 */
+	wake_up_interruptible(&buffer->pollq);
+	return 0;
 }
 
 static void iio_buffer_demux_free(struct iio_buffer *buffer)
diff --git a/drivers/iio/kfifo_buf.c b/drivers/iio/kfifo_buf.c
index b2beea0..847ca56 100644
--- a/drivers/iio/kfifo_buf.c
+++ b/drivers/iio/kfifo_buf.c
@@ -83,9 +83,6 @@ static int iio_store_to_kfifo(struct iio_buffer *r,
 	ret = kfifo_in(&kf->kf, data, 1);
 	if (ret != 1)
 		return -EBUSY;
-
-	wake_up_interruptible_poll(&r->pollq, POLLIN | POLLRDNORM);
-
 	return 0;
 }
 
@@ -109,16 +106,16 @@ static int iio_read_first_n_kfifo(struct iio_buffer *r,
 	return copied;
 }
 
-static bool iio_kfifo_buf_data_available(struct iio_buffer *r)
+static size_t iio_kfifo_buf_data_available(struct iio_buffer *r)
 {
 	struct iio_kfifo *kf = iio_to_kfifo(r);
-	bool empty;
+	size_t samples;
 
 	mutex_lock(&kf->user_lock);
-	empty = kfifo_is_empty(&kf->kf);
+	samples = kfifo_len(&kf->kf);
 	mutex_unlock(&kf->user_lock);
 
-	return !empty;
+	return samples;
 }
 
 static void iio_kfifo_buffer_release(struct iio_buffer *buffer)
diff --git a/drivers/staging/iio/accel/sca3000_ring.c b/drivers/staging/iio/accel/sca3000_ring.c
index f76a268..8589ead 100644
--- a/drivers/staging/iio/accel/sca3000_ring.c
+++ b/drivers/staging/iio/accel/sca3000_ring.c
@@ -129,9 +129,9 @@ error_ret:
 	return ret ? ret : num_read;
 }
 
-static bool sca3000_ring_buf_data_available(struct iio_buffer *r)
+static size_t sca3000_ring_buf_data_available(struct iio_buffer *r)
 {
-	return r->stufftoread;
+	return r->stufftoread ? r->watermark : 0;
 }
 
 /**
diff --git a/include/linux/iio/buffer.h b/include/linux/iio/buffer.h
index b65850a..eb8622b 100644
--- a/include/linux/iio/buffer.h
+++ b/include/linux/iio/buffer.h
@@ -21,8 +21,8 @@ struct iio_buffer;
  * struct iio_buffer_access_funcs - access functions for buffers.
  * @store_to:		actually store stuff to the buffer
  * @read_first_n:	try to get a specified number of bytes (must exist)
- * @data_available:	indicates whether data for reading from the buffer is
- *			available.
+ * @data_available:	indicates how much data is available for reading from
+ *			the buffer.
  * @request_update:	if a parameter change has been marked, update underlying
  *			storage.
  * @set_bytes_per_datum:set number of bytes per datum
@@ -43,7 +43,7 @@ struct iio_buffer_access_funcs {
 	int (*read_first_n)(struct iio_buffer *buffer,
 			    size_t n,
 			    char __user *buf);
-	bool (*data_available)(struct iio_buffer *buffer);
+	size_t (*data_available)(struct iio_buffer *buffer);
 
 	int (*request_update)(struct iio_buffer *buffer);
 
@@ -72,6 +72,7 @@ struct iio_buffer_access_funcs {
  * @demux_bounce:	[INTERN] buffer for doing gather from incoming scan.
  * @buffer_list:	[INTERN] entry in the devices list of current buffers.
  * @ref:		[INTERN] reference count of the buffer.
+ * @watermark:		[INTERN] number of datums to wait for poll/read.
  */
 struct iio_buffer {
 	int					length;
@@ -90,6 +91,7 @@ struct iio_buffer {
 	void					*demux_bounce;
 	struct list_head			buffer_list;
 	struct kref				ref;
+	unsigned int				watermark;
 };
 
 /**
-- 
1.9.1



* [PATCH v4 2/3] iio: add support for hardware fifo
  2015-03-03 16:20 [PATCH v4 0/3] iio: add support for hardware fifos Octavian Purdila
  2015-03-03 16:21 ` [PATCH v4 1/3] iio: add watermark logic to iio read and poll Octavian Purdila
@ 2015-03-03 16:21 ` Octavian Purdila
  2015-03-03 16:21 ` [PATCH v4 3/3] iio: bmc150_accel: " Octavian Purdila
  2 siblings, 0 replies; 8+ messages in thread
From: Octavian Purdila @ 2015-03-03 16:21 UTC (permalink / raw)
  To: linux-iio; +Cc: srinivas.pandruvada, Octavian Purdila

Some devices have hardware buffers that can store a number of samples
for later consumption. Hardware usually provides interrupts to notify
the processor when the fifo is full or when it has reached a certain
threshold. This helps reduce the number of interrupts to the host
processor and thus helps decrease power consumption.

This patch adds support for hardware fifos to IIO by adding driver
operations for flushing the hardware fifo and for setting and getting
the watermark level.

Since a driver may support a hardware fifo only when not in triggered
buffer mode (due to the different semantics of hardware fifo sampling
and triggered sampling), this patch changes the IIO core code to allow
falling back to non-triggered buffered mode if no trigger is enabled.
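
The resulting wait condition (data already in the device buffer, or
enough extra data flushed from the hardware fifo to cover the gap) can
be sketched in isolation; the names and the flush callback signature
below are illustrative, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins for a driver's hwfifo_flush operation. */
static int flush_none(size_t required) { (void)required; return 0; }
static int flush_all(size_t required) { return (int)required; }

/*
 * Ready when enough samples sit in the software buffer, or when
 * flushing the hardware fifo for the missing amount makes up the
 * difference. A flush result <= 0 means nothing was flushed.
 */
static bool buffer_ready(size_t avail, size_t to_read,
			 int (*flush)(size_t required))
{
	int flushed;

	if (avail >= to_read)
		return true;
	flushed = flush(to_read - avail);
	if (flushed <= 0)
		return false;
	return avail + (size_t)flushed >= to_read;
}
```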

Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
---
 Documentation/ABI/testing/sysfs-bus-iio | 25 +++++++++++
 drivers/iio/industrialio-buffer.c       | 78 ++++++++++++++++++++++++++++-----
 include/linux/iio/iio.h                 | 26 +++++++++++
 3 files changed, 119 insertions(+), 10 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
index 1283ca7..143ddf2d 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio
+++ b/Documentation/ABI/testing/sysfs-bus-iio
@@ -1264,3 +1264,28 @@ Description:
 		allows the application to block on poll with a timeout and read
 		the available samples after the timeout expires and thus have a
 		maximum delay guarantee.
+
+What:          /sys/bus/iio/devices/iio:deviceX/buffer/hwfifo_watermark
+KernelVersion: 3.21
+Contact:       linux-iio@vger.kernel.org
+Description:
+	       Read-only entry that contains a single integer specifying the
+	       current hardware fifo watermark level. If this value is
+	       negative it means that the device does not support a hardware
+	       fifo. If this value is 0 it means that the hardware fifo is
+	       currently disabled.
+	       If this value is strictly positive it signals that the hardware
+	       fifo of the device is active and that samples are stored in an
+	       internal hardware buffer. When the level of the hardware fifo
+	       reaches the watermark level the device will flush its internal
+	       buffer to the device buffer. Because of this a trigger is not
+	       needed to use the device in buffer mode.
+	       The hardware watermark level is set by the driver based on the
+	       value set by the user in buffer/watermark but taking into account
+	       the limitations of the hardware (e.g. most hardware buffers are
+	       limited to 32-64 samples).
+	       Because the semantics of triggers and the hardware fifo may
+	       differ (e.g. the hardware fifo may record samples according to
+	       the sample rate while an any-motion trigger generates samples
+	       based on the set rate of change), setting a trigger may disable
+	       the hardware fifo.
diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index ecf01c9..3697e5e 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -42,6 +42,44 @@ static size_t iio_buffer_data_available(struct iio_buffer *buf)
 	return buf->access->data_available(buf);
 }
 
+static int iio_buffer_flush_hwfifo(struct iio_dev *indio_dev,
+				   struct iio_buffer *buf, size_t required)
+{
+	int ret = -ENODEV;
+
+	if (!indio_dev->info->hwfifo_flush)
+		return ret;
+
+	mutex_lock(&indio_dev->mlock);
+	if (indio_dev->active_scan_mask)
+		ret = indio_dev->info->hwfifo_flush(indio_dev, required);
+	mutex_unlock(&indio_dev->mlock);
+
+	return ret;
+}
+
+static bool iio_buffer_ready(struct iio_dev *indio_dev, struct iio_buffer *rb,
+			     size_t to_read)
+{
+	size_t avail = iio_buffer_data_available(rb);
+	int flushed;
+
+	if (!indio_dev->info)
+		return true;
+
+	if (avail >= to_read)
+		return true;
+
+	flushed = iio_buffer_flush_hwfifo(indio_dev, rb, to_read - avail);
+	if (flushed <= 0)
+		return false;
+
+	if (avail + flushed >= to_read)
+		return true;
+
+	return false;
+}
+
 /**
  * iio_buffer_read_first_n_outer() - chrdev read for buffer access
  *
@@ -75,14 +113,14 @@ ssize_t iio_buffer_read_first_n_outer(struct file *filp, char __user *buf,
 
 	do {
 		if (filp->f_flags & O_NONBLOCK) {
-			if (!iio_buffer_data_available(rb)) {
+			if (!iio_buffer_data_available(rb) &&
+			    iio_buffer_flush_hwfifo(indio_dev, rb, 1) <= 0) {
 				ret = -EAGAIN;
 				break;
 			}
 		} else {
 			ret = wait_event_interruptible(rb->pollq,
-			       iio_buffer_data_available(rb) >= to_read ||
-						       indio_dev->info == NULL);
+			       iio_buffer_ready(indio_dev, rb, to_read));
 			if (ret)
 				return ret;
 			if (indio_dev->info == NULL) {
@@ -664,19 +702,16 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		}
 	}
 	/* Definitely possible for devices to support both of these. */
-	if (indio_dev->modes & INDIO_BUFFER_TRIGGERED) {
-		if (!indio_dev->trig) {
-			printk(KERN_INFO "Buffer not started: no trigger\n");
-			ret = -EINVAL;
-			/* Can only occur on first buffer */
-			goto error_run_postdisable;
-		}
+	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
 		indio_dev->currentmode = INDIO_BUFFER_TRIGGERED;
 	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
 		indio_dev->currentmode = INDIO_BUFFER_HARDWARE;
 	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
 		indio_dev->currentmode = INDIO_BUFFER_SOFTWARE;
 	} else { /* Should never be reached */
+		/* Can only occur on first buffer */
+		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
+			pr_info("Buffer not started: no trigger\n");
 		ret = -EINVAL;
 		goto error_run_postdisable;
 	}
@@ -823,12 +858,32 @@ static ssize_t iio_buffer_store_watermark(struct device *dev,
 	}
 
 	buffer->watermark = val;
+
+	if (indio_dev->info->hwfifo_set_watermark) {
+		ret = indio_dev->info->hwfifo_set_watermark(indio_dev, val);
+		if (ret)
+			dev_err(dev, "hwfifo_set_watermark failed: %d\n", ret);
+	}
+
 	ret = 0;
 out:
 	mutex_unlock(&indio_dev->mlock);
 	return ret ? ret : len;
 }
 
+static ssize_t iio_buffer_show_hwfifo_watermark(struct device *dev,
+						struct device_attribute *attr,
+						char *buf)
+{
+	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+	int ret = -1;
+
+	if (indio_dev->info->hwfifo_get_watermark)
+		ret = indio_dev->info->hwfifo_get_watermark(indio_dev);
+
+	return sprintf(buf, "%d\n", ret < -1 ? -1 : ret);
+}
+
 static DEVICE_ATTR(length, S_IRUGO | S_IWUSR, iio_buffer_read_length,
 		   iio_buffer_write_length);
 static struct device_attribute dev_attr_length_ro = __ATTR(length,
@@ -837,11 +892,14 @@ static DEVICE_ATTR(enable, S_IRUGO | S_IWUSR,
 		   iio_buffer_show_enable, iio_buffer_store_enable);
 static DEVICE_ATTR(watermark, S_IRUGO | S_IWUSR,
 		   iio_buffer_show_watermark, iio_buffer_store_watermark);
+static DEVICE_ATTR(hwfifo_watermark, S_IRUGO, iio_buffer_show_hwfifo_watermark,
+		   NULL);
 
 static struct attribute *iio_buffer_attrs[] = {
 	&dev_attr_length.attr,
 	&dev_attr_enable.attr,
 	&dev_attr_watermark.attr,
+	&dev_attr_hwfifo_watermark.attr,
 };
 
 int iio_buffer_alloc_sysfs_and_mask(struct iio_dev *indio_dev)
diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
index 80d8550..1b1cd7d 100644
--- a/include/linux/iio/iio.h
+++ b/include/linux/iio/iio.h
@@ -338,6 +338,29 @@ struct iio_dev;
  *			provide a custom of_xlate function that reads the
  *			*args* and returns the appropriate index in registered
  *			IIO channels array.
+ * @hwfifo_set_watermark: function pointer to set the current hardware fifo
+ *			watermark level. It receives the desired watermark as a
+ *			hint and the device driver may adjust it to take into
+ *			account hardware limitations. Setting the watermark to a
+ *			strictly positive value should enable the hardware fifo
+ *			if not already enabled. When the hardware fifo is
+ *			enabled and its level reaches the watermark level the
+ *			device must flush the samples stored in the hardware
+ *			fifo to the device buffer. Setting the watermark to 0
+ *			should disable the hardware fifo. The device driver must
+ *			disable the hardware fifo when a trigger with different
+ *			sampling semantics (than the hardware fifo) is set
+ *			(e.g. when setting an any-motion trigger on a device
+ *			whose hardware fifo samples based on the set sample
+ *			frequency).
+ * @hwfifo_get_watermark: function pointer to obtain the current hardware fifo
+ *			watermark level
+ * @hwfifo_flush:	function pointer to flush the samples stored in the
+ *			hardware fifo to the device buffer. The driver should
+ *			not flush more than count samples. The function must
+ *			return the number of samples flushed, 0 if no samples
+ *			were flushed or a negative integer if no samples were
+ *			flushed and there was an error.
  **/
 struct iio_info {
 	struct module			*driver_module;
@@ -399,6 +422,9 @@ struct iio_info {
 				  unsigned *readval);
 	int (*of_xlate)(struct iio_dev *indio_dev,
 			const struct of_phandle_args *iiospec);
+	int (*hwfifo_set_watermark)(struct iio_dev *indio_dev, unsigned val);
+	int (*hwfifo_get_watermark)(struct iio_dev *indio_dev);
+	int (*hwfifo_flush)(struct iio_dev *indio_dev, unsigned count);
 };
 
 /**
-- 
1.9.1



* [PATCH v4 3/3] iio: bmc150_accel: add support for hardware fifo
  2015-03-03 16:20 [PATCH v4 0/3] iio: add support for hardware fifos Octavian Purdila
  2015-03-03 16:21 ` [PATCH v4 1/3] iio: add watermark logic to iio read and poll Octavian Purdila
  2015-03-03 16:21 ` [PATCH v4 2/3] iio: add support for hardware fifo Octavian Purdila
@ 2015-03-03 16:21 ` Octavian Purdila
  2 siblings, 0 replies; 8+ messages in thread
From: Octavian Purdila @ 2015-03-03 16:21 UTC (permalink / raw)
  To: linux-iio; +Cc: srinivas.pandruvada, Octavian Purdila

We only advertise hardware fifo support if the I2C bus supports full
I2C or SMBus I2C block data reads, since it is mandatory to read the
full frame in one read (otherwise the rest of the frame is discarded).

The hardware fifo is enabled only when triggers are not active because:

(a) when using the any-motion trigger the user expects to see samples
based on ROC events, but the fifo stores samples based on the sample
frequency

(b) the data-ready trigger wakes the CPU for every sample, so using
the hardware fifo does not have any benefit
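
The per-sample timestamp interpolation used in
__bmc150_accel_fifo_flush below (sample period derived from the delta
between the last two flushes, timestamps spread backwards from the
newest sample) can be sketched in isolation; the helper is illustrative
and works in arbitrary time units:

```c
#include <assert.h>
#include <stdint.h>

/*
 * The oldest sample (idx 0) gets ts - (count - 1) * period; the
 * newest (idx count - 1) gets the flush timestamp ts itself.
 */
static int64_t fifo_sample_timestamp(int64_t old_ts, int64_t ts,
				     unsigned int count, unsigned int idx)
{
	int64_t period = (ts - old_ts) / count;

	return ts - (int64_t)(count - 1 - idx) * period;
}
```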

Signed-off-by: Octavian Purdila <octavian.purdila@intel.com>
---
 drivers/iio/accel/bmc150-accel.c | 353 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 339 insertions(+), 14 deletions(-)

diff --git a/drivers/iio/accel/bmc150-accel.c b/drivers/iio/accel/bmc150-accel.c
index d826394..a1e5767 100644
--- a/drivers/iio/accel/bmc150-accel.c
+++ b/drivers/iio/accel/bmc150-accel.c
@@ -70,7 +70,9 @@
 #define BMC150_ACCEL_INT_MAP_0_BIT_SLOPE	BIT(2)
 
 #define BMC150_ACCEL_REG_INT_MAP_1		0x1A
-#define BMC150_ACCEL_INT_MAP_1_BIT_DATA	BIT(0)
+#define BMC150_ACCEL_INT_MAP_1_BIT_DATA		BIT(0)
+#define BMC150_ACCEL_INT_MAP_1_BIT_FWM		BIT(1)
+#define BMC150_ACCEL_INT_MAP_1_BIT_FFULL	BIT(2)
 
 #define BMC150_ACCEL_REG_INT_RST_LATCH		0x21
 #define BMC150_ACCEL_INT_MODE_LATCH_RESET	0x80
@@ -83,7 +85,9 @@
 #define BMC150_ACCEL_INT_EN_BIT_SLP_Z		BIT(2)
 
 #define BMC150_ACCEL_REG_INT_EN_1		0x17
-#define BMC150_ACCEL_INT_EN_BIT_DATA_EN	BIT(4)
+#define BMC150_ACCEL_INT_EN_BIT_DATA_EN		BIT(4)
+#define BMC150_ACCEL_INT_EN_BIT_FFULL_EN	BIT(5)
+#define BMC150_ACCEL_INT_EN_BIT_FWM_EN		BIT(6)
 
 #define BMC150_ACCEL_REG_INT_OUT_CTRL		0x20
 #define BMC150_ACCEL_INT_OUT_CTRL_INT1_LVL	BIT(0)
@@ -122,6 +126,12 @@
 #define BMC150_ACCEL_AXIS_TO_REG(axis)	(BMC150_ACCEL_REG_XOUT_L + (axis * 2))
 #define BMC150_AUTO_SUSPEND_DELAY_MS		2000
 
+#define BMC150_ACCEL_REG_FIFO_STATUS		0x0E
+#define BMC150_ACCEL_REG_FIFO_CONFIG0		0x30
+#define BMC150_ACCEL_REG_FIFO_CONFIG1		0x3E
+#define BMC150_ACCEL_REG_FIFO_DATA		0x3F
+#define BMC150_ACCEL_FIFO_LENGTH		32
+
 enum bmc150_accel_axis {
 	AXIS_X,
 	AXIS_Y,
@@ -179,13 +189,14 @@ struct bmc150_accel_data {
 	atomic_t active_intr;
 	struct bmc150_accel_trigger triggers[BMC150_ACCEL_TRIGGERS];
 	struct mutex mutex;
+	u8 fifo_mode, watermark;
 	s16 buffer[8];
 	u8 bw_bits;
 	u32 slope_dur;
 	u32 slope_thres;
 	u32 range;
 	int ev_enable_state;
-	int64_t timestamp;
+	int64_t timestamp, old_timestamp;
 	const struct bmc150_accel_chip_info *chip_info;
 };
 
@@ -470,6 +481,12 @@ static const struct bmc150_accel_interrupt_info {
 			BMC150_ACCEL_INT_EN_BIT_SLP_Y |
 			BMC150_ACCEL_INT_EN_BIT_SLP_Z
 	},
+	{ /* fifo watermark interrupt */
+		.map_reg = BMC150_ACCEL_REG_INT_MAP_1,
+		.map_bitmask = BMC150_ACCEL_INT_MAP_1_BIT_FWM,
+		.en_reg = BMC150_ACCEL_REG_INT_EN_1,
+		.en_bitmask = BMC150_ACCEL_INT_EN_BIT_FWM_EN,
+	},
 };
 
 static void bmc150_accel_interrupts_setup(struct iio_dev *indio_dev,
@@ -823,6 +840,183 @@ static int bmc150_accel_validate_trigger(struct iio_dev *indio_dev,
 	return -EINVAL;
 }
 
+static int bmc150_accel_set_watermark(struct iio_dev *indio_dev, unsigned val)
+{
+	struct bmc150_accel_data *data = iio_priv(indio_dev);
+
+	if (val > BMC150_ACCEL_FIFO_LENGTH)
+		val = BMC150_ACCEL_FIFO_LENGTH * 3 / 4;
+
+	mutex_lock(&data->mutex);
+	data->watermark = val;
+	mutex_unlock(&data->mutex);
+
+	return 0;
+}
+
+static int bmc150_accel_get_watermark(struct iio_dev *indio_dev)
+{
+	struct bmc150_accel_data *data = iio_priv(indio_dev);
+	int ret;
+
+	mutex_lock(&data->mutex);
+	if (!data->fifo_mode)
+		ret = 0;
+	else
+		ret = data->watermark;
+	mutex_unlock(&data->mutex);
+
+	return ret;
+}
+
+/*
+ * We must read at least one full frame in one burst, otherwise the rest of the
+ * frame data is discarded.
+ */
+static int bmc150_accel_fifo_transfer(const struct i2c_client *client,
+				      char *buffer, int samples)
+{
+	int sample_length = 3 * 2;
+	u8 reg_fifo_data = BMC150_ACCEL_REG_FIFO_DATA;
+	int ret = -EIO;
+
+	if (i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+		struct i2c_msg msg[2] = {
+			{
+				.addr = client->addr,
+				.flags = 0,
+				.buf = &reg_fifo_data,
+				.len = sizeof(reg_fifo_data),
+			},
+			{
+				.addr = client->addr,
+				.flags = I2C_M_RD,
+				.buf = (u8 *)buffer,
+				.len = samples * sample_length,
+			}
+		};
+
+		ret = i2c_transfer(client->adapter, msg, 2);
+		if (ret != 2)
+			ret = -EIO;
+		else
+			ret = 0;
+	} else {
+		int i, step = I2C_SMBUS_BLOCK_MAX / sample_length;
+
+		for (i = 0; i < samples * sample_length; i += step) {
+			ret = i2c_smbus_read_i2c_block_data(client,
+							    reg_fifo_data, step,
+							    &buffer[i]);
+			if (ret != step) {
+				ret = -EIO;
+				break;
+			}
+
+			ret = 0;
+		}
+	}
+
+	if (ret)
+		dev_err(&client->dev, "Error transferring data from fifo\n");
+
+	return ret;
+}
+
+static int __bmc150_accel_fifo_flush(struct iio_dev *indio_dev,
+				     unsigned samples, bool irq)
+{
+	struct bmc150_accel_data *data = iio_priv(indio_dev);
+	int ret, i;
+	u8 count;
+	u16 buffer[BMC150_ACCEL_FIFO_LENGTH * 3];
+	int64_t tstamp, sample_period;
+
+	ret = i2c_smbus_read_byte_data(data->client,
+				       BMC150_ACCEL_REG_FIFO_STATUS);
+	if (ret < 0) {
+		dev_err(&data->client->dev, "Error reading reg_fifo_status\n");
+		return ret;
+	}
+
+	count = ret & 0x7F;
+
+	if (!count)
+		return 0;
+
+	/*
+	 * If we are called from the IRQ handler, we know the stored timestamp is
+	 * fairly accurate for the last stored sample. Otherwise, if we are
+	 * called as a result of a read operation from userspace and hence
+	 * before the watermark interrupt was triggered, take a timestamp
+	 * now. We can fall anywhere in between two samples so the error in this
+	 * case is +/- one sample period.
+	 */
+	if (!irq) {
+		data->old_timestamp = data->timestamp;
+		data->timestamp = iio_get_time_ns();
+	}
+
+	/*
+	 * Approximate the timestamp for each sample based on the sampling
+	 * frequency, the timestamp of the last sample, and the sample count.
+	 *
+	 * Note that we can't use the current bandwidth settings to compute the
+	 * sample period because the sample rate varies with the device
+	 * (e.g. between 31.70ms and 32.20ms for a bandwidth of 15.63Hz). That
+	 * small variation adds up when we store a large number of samples and
+	 * creates significant jitter between the last and first samples in
+	 * different batches (e.g. 32ms vs 21ms).
+	 *
+	 * To avoid this issue we compute the actual sample period ourselves
+	 * based on the timestamp delta between the last two flush operations.
+	 */
+	sample_period = (data->timestamp - data->old_timestamp) / count;
+	tstamp = data->timestamp - (count - 1) * sample_period;
+
+	if (samples && count > samples)
+		count = samples;
+
+	ret = bmc150_accel_fifo_transfer(data->client, (u8 *)buffer, count);
+	if (ret)
+		return ret;
+
+	/*
+	 * Ideally we want the IIO core to handle the demux when running in fifo
+	 * mode but not when running in triggered buffer mode. Unfortunately
+	 * this does not seem to be possible, so stick with driver demux for
+	 * now.
+	 */
+	for (i = 0; i < count; i++) {
+		u16 sample[8];
+		int j, bit;
+
+		j = 0;
+		for_each_set_bit(bit, indio_dev->active_scan_mask,
+				 indio_dev->masklength)
+			memcpy(&sample[j++], &buffer[i * 3 + bit], 2);
+
+		iio_push_to_buffers_with_timestamp(indio_dev, sample, tstamp);
+
+		tstamp += sample_period;
+	}
+
+	return count;
+}
+
+static int bmc150_accel_fifo_flush(struct iio_dev *indio_dev, unsigned samples)
+{
+	struct bmc150_accel_data *data = iio_priv(indio_dev);
+	int ret;
+
+	mutex_lock(&data->mutex);
+	ret = __bmc150_accel_fifo_flush(indio_dev, samples, false);
+	mutex_unlock(&data->mutex);
+
+	return ret;
+}
+
 static IIO_CONST_ATTR_SAMP_FREQ_AVAIL(
 		"15.620000 31.260000 62.50000 125 250 500 1000 2000");
 
@@ -950,7 +1144,7 @@ static const struct bmc150_accel_chip_info bmc150_accel_chip_info_tbl[] = {
 	},
 };
 
-static const struct iio_info bmc150_accel_info = {
+static struct iio_info bmc150_accel_info = {
 	.attrs			= &bmc150_accel_attrs_group,
 	.read_raw		= bmc150_accel_read_raw,
 	.write_raw		= bmc150_accel_write_raw,
@@ -959,6 +1153,9 @@ static const struct iio_info bmc150_accel_info = {
 	.write_event_config	= bmc150_accel_write_event_config,
 	.read_event_config	= bmc150_accel_read_event_config,
 	.validate_trigger	= bmc150_accel_validate_trigger,
+	.hwfifo_set_watermark	= bmc150_accel_set_watermark,
+	.hwfifo_get_watermark	= bmc150_accel_get_watermark,
+	.hwfifo_flush		= bmc150_accel_fifo_flush,
 	.driver_module		= THIS_MODULE,
 };
 
@@ -1057,18 +1254,17 @@ static const struct iio_trigger_ops bmc150_accel_trigger_ops = {
 	.owner = THIS_MODULE,
 };
 
-static irqreturn_t bmc150_accel_event_handler(int irq, void *private)
+static int bmc150_accel_handle_roc_event(struct iio_dev *indio_dev)
 {
-	struct iio_dev *indio_dev = private;
 	struct bmc150_accel_data *data = iio_priv(indio_dev);
-	int ret;
 	int dir;
+	int ret;
 
 	ret = i2c_smbus_read_byte_data(data->client,
 				       BMC150_ACCEL_REG_INT_STATUS_2);
 	if (ret < 0) {
 		dev_err(&data->client->dev, "Error reading reg_int_status_2\n");
-		goto ack_intr_status;
+		return ret;
 	}
 
 	if (ret & BMC150_ACCEL_ANY_MOTION_BIT_SIGN)
@@ -1097,35 +1293,73 @@ static irqreturn_t bmc150_accel_event_handler(int irq, void *private)
 							IIO_EV_TYPE_ROC,
 							dir),
 							data->timestamp);
-ack_intr_status:
-	if (!data->triggers[BMC150_ACCEL_TRIGGER_DATA_READY].enabled)
+	return ret;
+}
+
+static irqreturn_t bmc150_accel_event_handler(int irq, void *private)
+{
+	struct iio_dev *indio_dev = private;
+	struct bmc150_accel_data *data = iio_priv(indio_dev);
+	bool ack = false;
+	int ret;
+
+	mutex_lock(&data->mutex);
+
+	if (data->fifo_mode) {
+		ret = __bmc150_accel_fifo_flush(indio_dev,
+						BMC150_ACCEL_FIFO_LENGTH, true);
+		if (ret > 0)
+			ack = true;
+	}
+
+	if (data->ev_enable_state) {
+		ret = bmc150_accel_handle_roc_event(indio_dev);
+		if (ret > 0)
+			ack = true;
+	}
+
+	if (ack) {
 		ret = i2c_smbus_write_byte_data(data->client,
 					BMC150_ACCEL_REG_INT_RST_LATCH,
 					BMC150_ACCEL_INT_MODE_LATCH_INT |
 					BMC150_ACCEL_INT_MODE_LATCH_RESET);
+		if (ret)
+			dev_err(&data->client->dev, "Error writing reg_int_rst_latch\n");
+		ret = IRQ_HANDLED;
+	} else {
+		ret = IRQ_NONE;
+	}
 
-	return IRQ_HANDLED;
+	mutex_unlock(&data->mutex);
+
+	return ret;
 }
 
 static irqreturn_t bmc150_accel_data_rdy_trig_poll(int irq, void *private)
 {
 	struct iio_dev *indio_dev = private;
 	struct bmc150_accel_data *data = iio_priv(indio_dev);
+	bool ack = false;
 	int i;
 
+	data->old_timestamp = data->timestamp;
 	data->timestamp = iio_get_time_ns();
 
 	for (i = 0; i < BMC150_ACCEL_TRIGGERS; i++) {
 		if (data->triggers[i].enabled) {
 			iio_trigger_poll(data->triggers[i].indio_trig);
+			ack = true;
 			break;
 		}
 	}
 
-	if (data->ev_enable_state)
+	if (data->ev_enable_state || data->fifo_mode)
 		return IRQ_WAKE_THREAD;
-	else
+
+	if (ack)
 		return IRQ_HANDLED;
+
+	return IRQ_NONE;
 }
 
 static const char *bmc150_accel_match_acpi_device(struct device *dev, int *data)
@@ -1232,6 +1466,84 @@ static int bmc150_accel_triggers_setup(struct iio_dev *indio_dev,
 	return ret;
 }
 
+#define BMC150_ACCEL_FIFO_MODE_STREAM          0x80
+#define BMC150_ACCEL_FIFO_MODE_FIFO            0x40
+#define BMC150_ACCEL_FIFO_MODE_BYPASS          0x00
+
+static int bmc150_accel_fifo_set_mode(struct bmc150_accel_data *data)
+{
+	u8 reg = BMC150_ACCEL_REG_FIFO_CONFIG1;
+	int ret;
+
+	ret = i2c_smbus_write_byte_data(data->client, reg, data->fifo_mode);
+	if (ret < 0) {
+		dev_err(&data->client->dev, "Error writing reg_fifo_config1\n");
+		return ret;
+	}
+
+	if (!data->fifo_mode)
+		return 0;
+
+	ret = i2c_smbus_write_byte_data(data->client,
+					BMC150_ACCEL_REG_FIFO_CONFIG0,
+					data->watermark);
+	if (ret < 0)
+		dev_err(&data->client->dev, "Error writing reg_fifo_config0\n");
+
+	return ret;
+}
+
+static int bmc150_accel_buffer_postenable(struct iio_dev *indio_dev)
+{
+	struct bmc150_accel_data *data = iio_priv(indio_dev);
+	int ret;
+
+	if (indio_dev->currentmode == INDIO_BUFFER_TRIGGERED)
+		return iio_triggered_buffer_postenable(indio_dev);
+
+	if (!data->watermark)
+		return 0;
+
+	ret = bmc150_accel_set_interrupt(data, BMC150_ACCEL_INT_WATERMARK,
+					 true);
+	if (ret)
+		return ret;
+
+	data->fifo_mode = BMC150_ACCEL_FIFO_MODE_FIFO;
+
+	ret = bmc150_accel_fifo_set_mode(data);
+	if (ret) {
+		data->fifo_mode = 0;
+		bmc150_accel_set_interrupt(data, BMC150_ACCEL_INT_WATERMARK,
+					   false);
+		return ret;
+	}
+
+	return ret;
+}
+
+static int bmc150_accel_buffer_predisable(struct iio_dev *indio_dev)
+{
+	struct bmc150_accel_data *data = iio_priv(indio_dev);
+
+	if (indio_dev->currentmode == INDIO_BUFFER_TRIGGERED)
+		return iio_triggered_buffer_predisable(indio_dev);
+
+	if (!data->fifo_mode)
+		return 0;
+
+	data->fifo_mode = 0;
+	bmc150_accel_set_interrupt(data, BMC150_ACCEL_INT_WATERMARK, false);
+	bmc150_accel_fifo_set_mode(data);
+
+	return 0;
+}
+
+static const struct iio_buffer_setup_ops bmc150_accel_buffer_ops = {
+	.postenable = bmc150_accel_buffer_postenable,
+	.predisable = bmc150_accel_buffer_predisable,
+};
+
 static int bmc150_accel_probe(struct i2c_client *client,
 			      const struct i2c_device_id *id)
 {
@@ -1270,6 +1582,15 @@ static int bmc150_accel_probe(struct i2c_client *client,
 	indio_dev->num_channels = data->chip_info->num_channels;
 	indio_dev->name = name;
 	indio_dev->modes = INDIO_DIRECT_MODE;
+	if (i2c_check_functionality(client->adapter, I2C_FUNC_I2C) ||
+	    i2c_check_functionality(client->adapter,
+				    I2C_FUNC_SMBUS_READ_I2C_BLOCK)) {
+		indio_dev->modes |= INDIO_BUFFER_SOFTWARE;
+	} else {
+		bmc150_accel_info.hwfifo_set_watermark = NULL;
+		bmc150_accel_info.hwfifo_get_watermark = NULL;
+		bmc150_accel_info.hwfifo_flush = NULL;
+	}
 	indio_dev->info = &bmc150_accel_info;
 
 	if (client->irq < 0)
@@ -1309,7 +1630,7 @@ static int bmc150_accel_probe(struct i2c_client *client,
 		ret = iio_triggered_buffer_setup(indio_dev,
 						 &iio_pollfunc_store_time,
 						 bmc150_accel_trigger_handler,
-						 NULL);
+						 &bmc150_accel_buffer_ops);
 		if (ret < 0) {
 			dev_err(&client->dev,
 				"Failed: iio triggered buffer setup\n");
@@ -1386,6 +1707,7 @@ static int bmc150_accel_resume(struct device *dev)
 	mutex_lock(&data->mutex);
 	if (atomic_read(&data->active_intr))
 		bmc150_accel_set_mode(data, BMC150_ACCEL_SLEEP_MODE_NORMAL, 0);
+	bmc150_accel_fifo_set_mode(data);
 	mutex_unlock(&data->mutex);
 
 	return 0;
@@ -1419,6 +1741,9 @@ static int bmc150_accel_runtime_resume(struct device *dev)
 	ret = bmc150_accel_set_mode(data, BMC150_ACCEL_SLEEP_MODE_NORMAL, 0);
 	if (ret < 0)
 		return ret;
+	ret = bmc150_accel_fifo_set_mode(data);
+	if (ret < 0)
+		return ret;
 
 	sleep_val = bmc150_accel_get_startup_times(data);
 	if (sleep_val < 20)
-- 
1.9.1
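[Editorial note: the timestamp interpolation described in the patch's comment can be sketched as stand-alone helpers. The function names below are invented for illustration and are not part of the driver; only the arithmetic — deriving the per-sample period from the delta between the last two flush timestamps — mirrors what `__bmc150_accel_fifo_flush()` does.]

```c
#include <stdint.h>

/*
 * Illustrative stand-ins for the timestamp math in
 * __bmc150_accel_fifo_flush(): the per-sample period is derived from the
 * timestamp delta between the last two flushes rather than from the
 * nominal bandwidth setting, which varies slightly from device to device.
 */
static int64_t fifo_sample_period(int64_t old_ts, int64_t ts, int count)
{
	return (ts - old_ts) / count;
}

/* Timestamp assigned to sample i (0 = oldest) of a batch of `count`. */
static int64_t fifo_sample_timestamp(int64_t old_ts, int64_t ts,
				     int count, int i)
{
	int64_t period = fifo_sample_period(old_ts, ts, count);

	return ts - (int64_t)(count - 1 - i) * period;
}
```

With a 32-sample batch and flush timestamps 320 ns apart (unrealistically small numbers, chosen only to keep the arithmetic readable), the newest sample lands exactly on the flush timestamp and earlier samples are spaced one period apart.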


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v4 1/3] iio: add watermark logic to iio read and poll
  2015-03-03 16:21 ` [PATCH v4 1/3] iio: add watermark logic to iio read and poll Octavian Purdila
@ 2015-03-03 17:46   ` Lars-Peter Clausen
  2015-03-04 13:55     ` Octavian Purdila
  0 siblings, 1 reply; 8+ messages in thread
From: Lars-Peter Clausen @ 2015-03-03 17:46 UTC (permalink / raw)
  To: Octavian Purdila, linux-iio; +Cc: srinivas.pandruvada, Josselin Costanzi

On 03/03/2015 05:21 PM, Octavian Purdila wrote:
[...]
> @@ -61,26 +64,48 @@ ssize_t iio_buffer_read_first_n_outer(struct file *filp, char __user *buf,
>   	if (!rb || !rb->access->read_first_n)
>   		return -EINVAL;
>
> -	do {
> -		if (!iio_buffer_data_available(rb)) {
> -			if (filp->f_flags & O_NONBLOCK)
> -				return -EAGAIN;
> +	/*
> +	 * If datum_size is 0 there will never be anything to read from the
> +	 * buffer, so signal end of file now.
> +	 */
> +	if (!datum_size)
> +		return 0;
>
> +	to_read = min_t(size_t, n / datum_size, rb->watermark);

I'd maybe call it wakeup_threshold.

> +
> +	do {
> +		if (filp->f_flags & O_NONBLOCK) {
> +			if (!iio_buffer_data_available(rb)) {
> +				ret = -EAGAIN;
> +				break;
> +			}
> +		} else {
>   			ret = wait_event_interruptible(rb->pollq,
> -					iio_buffer_data_available(rb) ||
> -					indio_dev->info == NULL);
> +			       iio_buffer_data_available(rb) >= to_read ||
> +						       indio_dev->info == NULL);

This also needs to evaluate to true if the buffer is disabled, so we have a 
chance to read any residual samples that amount to less than the watermark.

>   			if (ret)
>   				return ret;
> -			if (indio_dev->info == NULL)
> -				return -ENODEV;
> +			if (indio_dev->info == NULL) {
> +				ret = -ENODEV;
> +				break;
> +			}
>   		}
>
> -		ret = rb->access->read_first_n(rb, n, buf);
> -		if (ret == 0 && (filp->f_flags & O_NONBLOCK))
> -			ret = -EAGAIN;
> -	 } while (ret == 0);
> +		ret = rb->access->read_first_n(rb, n, buf + count);
> +		if (ret < 0)
> +			break;
>
> -	return ret;
> +		count += ret;
> +		n -= ret;
> +		to_read -= ret / datum_size;


This will underflow if there are more than watermark samples in the buffer 
and more than watermark samples have been requested by userspace...


> +	 } while (to_read > 0);

... and then we get trapped in a very long loop.

In my opinion there is no need to change the loop at all, only update the 
wakeup condition.
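[Editorial note: the wrap-around described above is easy to reproduce in isolation. The following toy model — all names invented for illustration — captures the `to_read -= ret / datum_size;` step from the patch.]

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Models one iteration of the patch's read loop: to_read was clamped to
 * the watermark, but read_first_n() may return more samples than that when
 * both the buffer fill level and the userspace request exceed the
 * watermark. The unsigned subtraction then wraps to a huge value and the
 * `while (to_read > 0)` condition keeps looping.
 */
static bool to_read_wraps(size_t to_read, size_t samples_returned)
{
	size_t before = to_read;

	to_read -= samples_returned;
	return to_read > before;	/* wrapped past zero */
}
```

For example, with a watermark-clamped `to_read` of 10 and 16 samples returned, 10 - 16 wraps to SIZE_MAX - 5, which still satisfies `to_read > 0`.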

> +
> +	if (count)
> +		return count;
> +	if (ret < 0)
> +		return ret;
> +
> +	return -EAGAIN;
>   }
>
>   /**
> @@ -96,9 +121,8 @@ unsigned int iio_buffer_poll(struct file *filp,
>   		return -ENODEV;
>
>   	poll_wait(filp, &rb->pollq, wait);
> -	if (iio_buffer_data_available(rb))
> +	if (iio_buffer_data_available(rb) >= rb->watermark)

Same here: it needs to wake up if the buffer is disabled and there is at least 
one sample.

>   		return POLLIN | POLLRDNORM;
> -	/* need a way of knowing if there may be enough data... */
>   	return 0;
>   }
>
[...]
> @@ -418,7 +443,16 @@ static ssize_t iio_buffer_write_length(struct device *dev,
>   	}
>   	mutex_unlock(&indio_dev->mlock);
>
> -	return ret ? ret : len;
> +	if (ret)
> +		return ret;
> +
> +	if (buffer->length)
> +		val = buffer->length;
> +
> +	if (val < buffer->watermark)
> +		buffer->watermark = val;

Needs to be inside the locked section.

> +
> +	return len;
>   }
[...]
> +static ssize_t iio_buffer_store_watermark(struct device *dev,
> +					  struct device_attribute *attr,
> +					  const char *buf,
> +					  size_t len)
> +{
> +	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
> +	struct iio_buffer *buffer = indio_dev->buffer;
> +	unsigned int val;
> +	int ret;
> +
> +	ret = kstrtouint(buf, 10, &val);
> +	if (ret)
> +		return ret;
> +
> +	if (val > buffer->length)

Same here.

> +		return -EINVAL;
> +
> +	mutex_lock(&indio_dev->mlock);
> +	if (iio_buffer_is_active(indio_dev->buffer)) {
> +		ret = -EBUSY;
> +		goto out;
> +	}
> +
> +	buffer->watermark = val;

We should probably reject 0.

> +	ret = 0;
> +out:
> +	mutex_unlock(&indio_dev->mlock);
> +	return ret ? ret : len;
> +}
> +
[...]
>   int iio_buffer_alloc_sysfs_and_mask(struct iio_dev *indio_dev)
> @@ -944,8 +1022,18 @@ static const void *iio_demux(struct iio_buffer *buffer,
>   static int iio_push_to_buffer(struct iio_buffer *buffer, const void *data)
>   {
>   	const void *dataout = iio_demux(buffer, data);
> +	int ret;
> +
> +	ret = buffer->access->store_to(buffer, dataout);
> +	if (ret)
> +		return ret;
>
> -	return buffer->access->store_to(buffer, dataout);
> +	/*
> +	 * We can't just test for watermark to decide if we wake the poll queue
> +	 * because read may request less samples than the watermark.
> +	 */
> +	wake_up_interruptible(&buffer->pollq);

What happened to poll parameters?

> +	return 0;
>   }
>



* Re: [PATCH v4 1/3] iio: add watermark logic to iio read and poll
  2015-03-03 17:46   ` Lars-Peter Clausen
@ 2015-03-04 13:55     ` Octavian Purdila
  2015-03-04 14:40       ` Lars-Peter Clausen
  0 siblings, 1 reply; 8+ messages in thread
From: Octavian Purdila @ 2015-03-04 13:55 UTC (permalink / raw)
  To: Lars-Peter Clausen; +Cc: linux-iio, Srinivas Pandruvada, Josselin Costanzi

On Tue, Mar 3, 2015 at 7:46 PM, Lars-Peter Clausen <lars@metafoo.de> wrote:
> On 03/03/2015 05:21 PM, Octavian Purdila wrote:
> [...]

Hi Lars,

Thank you for the review!

>>
>> @@ -61,26 +64,48 @@ ssize_t iio_buffer_read_first_n_outer(struct file
>> *filp, char __user *buf,
>>         if (!rb || !rb->access->read_first_n)
>>                 return -EINVAL;
>>
>> -       do {
>> -               if (!iio_buffer_data_available(rb)) {
>> -                       if (filp->f_flags & O_NONBLOCK)
>> -                               return -EAGAIN;
>> +       /*
>> +        * If datum_size is 0 there will never be anything to read from
>> the
>> +        * buffer, so signal end of file now.
>> +        */
>> +       if (!datum_size)
>> +               return 0;
>>
>> +       to_read = min_t(size_t, n / datum_size, rb->watermark);
>
>
> I'd maybe call it wakeup_threshold.
>
>> +
>> +       do {
>> +               if (filp->f_flags & O_NONBLOCK) {
>> +                       if (!iio_buffer_data_available(rb)) {
>> +                               ret = -EAGAIN;
>> +                               break;
>> +                       }
>> +               } else {
>>                         ret = wait_event_interruptible(rb->pollq,
>> -                                       iio_buffer_data_available(rb) ||
>> -                                       indio_dev->info == NULL);
>> +                              iio_buffer_data_available(rb) >= to_read ||
>> +                                                      indio_dev->info ==
>> NULL);
>
>
> This also needs to evaluate to true if the buffer is disabled, so we have a
> chance to read any residual samples that amount to less than the watermark.
>

Good point.

>>                         if (ret)
>>                                 return ret;
>> -                       if (indio_dev->info == NULL)
>> -                               return -ENODEV;
>> +                       if (indio_dev->info == NULL) {
>> +                               ret = -ENODEV;
>> +                               break;
>> +                       }
>>                 }
>>
>> -               ret = rb->access->read_first_n(rb, n, buf);
>> -               if (ret == 0 && (filp->f_flags & O_NONBLOCK))
>> -                       ret = -EAGAIN;
>> -        } while (ret == 0);
>> +               ret = rb->access->read_first_n(rb, n, buf + count);
>> +               if (ret < 0)
>> +                       break;
>>
>> -       return ret;
>> +               count += ret;
>> +               n -= ret;
>> +               to_read -= ret / datum_size;
>
>
>
> This will underflow if there are more than watermark samples in the buffer
> and more than watermark samples have been requested by userspace...
>
>
>> +        } while (to_read > 0);
>
>
> ... and then we get trapped in a very long loop.
>
> In my opinion there is no need to change the loop at all, only update the
> wakeup condition.
>

Correct, how did I miss that :/ I will rewrite the code to keep the
loop unchanged.

>> +
>> +       if (count)
>> +               return count;
>> +       if (ret < 0)
>> +               return ret;
>> +
>> +       return -EAGAIN;
>>   }
>>
>>   /**
>> @@ -96,9 +121,8 @@ unsigned int iio_buffer_poll(struct file *filp,
>>                 return -ENODEV;
>>
>>         poll_wait(filp, &rb->pollq, wait);
>> -       if (iio_buffer_data_available(rb))
>> +       if (iio_buffer_data_available(rb) >= rb->watermark)
>
>
> Same here: it needs to wake up if the buffer is disabled and there is at least
> one sample.
>

Ok.

>>                 return POLLIN | POLLRDNORM;
>> -       /* need a way of knowing if there may be enough data... */
>>         return 0;
>>   }
>>
> [...]
>>
>> @@ -418,7 +443,16 @@ static ssize_t iio_buffer_write_length(struct device
>> *dev,
>>         }
>>         mutex_unlock(&indio_dev->mlock);
>>
>> -       return ret ? ret : len;
>> +       if (ret)
>> +               return ret;
>> +
>> +       if (buffer->length)
>> +               val = buffer->length;
>> +
>> +       if (val < buffer->watermark)
>> +               buffer->watermark = val;
>
>
> Needs to be inside the locked section.
>

Ok.

>> +
>> +       return len;
>>   }
>
> [...]
>>
>> +static ssize_t iio_buffer_store_watermark(struct device *dev,
>> +                                         struct device_attribute *attr,
>> +                                         const char *buf,
>> +                                         size_t len)
>> +{
>> +       struct iio_dev *indio_dev = dev_to_iio_dev(dev);
>> +       struct iio_buffer *buffer = indio_dev->buffer;
>> +       unsigned int val;
>> +       int ret;
>> +
>> +       ret = kstrtouint(buf, 10, &val);
>> +       if (ret)
>> +               return ret;
>> +
>> +       if (val > buffer->length)
>
>
> Same here.

Ok.

>
>> +               return -EINVAL;
>> +
>> +       mutex_lock(&indio_dev->mlock);
>> +       if (iio_buffer_is_active(indio_dev->buffer)) {
>> +               ret = -EBUSY;
>> +               goto out;
>> +       }
>> +
>> +       buffer->watermark = val;
>
>
> We should probably reject 0.
>

I agree.

>> +       ret = 0;
>> +out:
>> +       mutex_unlock(&indio_dev->mlock);
>> +       return ret ? ret : len;
>> +}
>> +
>
> [...]
>>
>>   int iio_buffer_alloc_sysfs_and_mask(struct iio_dev *indio_dev)
>> @@ -944,8 +1022,18 @@ static const void *iio_demux(struct iio_buffer
>> *buffer,
>>   static int iio_push_to_buffer(struct iio_buffer *buffer, const void
>> *data)
>>   {
>>         const void *dataout = iio_demux(buffer, data);
>> +       int ret;
>> +
>> +       ret = buffer->access->store_to(buffer, dataout);
>> +       if (ret)
>> +               return ret;
>>
>> -       return buffer->access->store_to(buffer, dataout);
>> +       /*
>> +        * We can't just test for watermark to decide if we wake the poll
>> queue
>> +        * because read may request less samples than the watermark.
>> +        */
>> +       wake_up_interruptible(&buffer->pollq);
>
>
> What happened to poll parameters?
>

I don't understand your question, can you please elaborate?


* Re: [PATCH v4 1/3] iio: add watermark logic to iio read and poll
  2015-03-04 13:55     ` Octavian Purdila
@ 2015-03-04 14:40       ` Lars-Peter Clausen
  2015-03-04 16:01         ` Octavian Purdila
  0 siblings, 1 reply; 8+ messages in thread
From: Lars-Peter Clausen @ 2015-03-04 14:40 UTC (permalink / raw)
  To: Octavian Purdila; +Cc: linux-iio, Srinivas Pandruvada, Josselin Costanzi

On 03/04/2015 02:55 PM, Octavian Purdila wrote:
[...]
>>>    int iio_buffer_alloc_sysfs_and_mask(struct iio_dev *indio_dev)
>>> @@ -944,8 +1022,18 @@ static const void *iio_demux(struct iio_buffer
>>> *buffer,
>>>    static int iio_push_to_buffer(struct iio_buffer *buffer, const void
>>> *data)
>>>    {
>>>          const void *dataout = iio_demux(buffer, data);
>>> +       int ret;
>>> +
>>> +       ret = buffer->access->store_to(buffer, dataout);
>>> +       if (ret)
>>> +               return ret;
>>>
>>> -       return buffer->access->store_to(buffer, dataout);
>>> +       /*
>>> +        * We can't just test for watermark to decide if we wake the poll
>>> queue
>>> +        * because read may request less samples than the watermark.
>>> +        */
>>> +       wake_up_interruptible(&buffer->pollq);
>>
>>
>> What happened to poll parameters?
>>
>
> I don't understand your question, can you please elaborate?

Previously we were calling wake_up_interruptible_poll(&r->pollq, POLLIN | 
POLLRDNORM);



* Re: [PATCH v4 1/3] iio: add watermark logic to iio read and poll
  2015-03-04 14:40       ` Lars-Peter Clausen
@ 2015-03-04 16:01         ` Octavian Purdila
  0 siblings, 0 replies; 8+ messages in thread
From: Octavian Purdila @ 2015-03-04 16:01 UTC (permalink / raw)
  To: Lars-Peter Clausen; +Cc: linux-iio, Srinivas Pandruvada, Josselin Costanzi

On Wed, Mar 4, 2015 at 4:40 PM, Lars-Peter Clausen <lars@metafoo.de> wrote:
> On 03/04/2015 02:55 PM, Octavian Purdila wrote:
> [...]
>>>>
>>>>    int iio_buffer_alloc_sysfs_and_mask(struct iio_dev *indio_dev)
>>>> @@ -944,8 +1022,18 @@ static const void *iio_demux(struct iio_buffer
>>>> *buffer,
>>>>    static int iio_push_to_buffer(struct iio_buffer *buffer, const void
>>>> *data)
>>>>    {
>>>>          const void *dataout = iio_demux(buffer, data);
>>>> +       int ret;
>>>> +
>>>> +       ret = buffer->access->store_to(buffer, dataout);
>>>> +       if (ret)
>>>> +               return ret;
>>>>
>>>> -       return buffer->access->store_to(buffer, dataout);
>>>> +       /*
>>>> +        * We can't just test for watermark to decide if we wake the
>>>> poll
>>>> queue
>>>> +        * because read may request less samples than the watermark.
>>>> +        */
>>>> +       wake_up_interruptible(&buffer->pollq);
>>>
>>>
>>>
>>> What happened to poll parameters?
>>>
>>
>> I don't understand your question, can you please elaborate?
>
>
> Previously we were calling wake_up_interruptible_poll(&r->pollq, POLLIN |
> POLLRDNORM);
>

Ah, ok, I see. I will change it to use wake_up_interruptible_poll.

Although it should not make any difference for IIO right now, as we
only support POLLIN/POLLRDNORM, right?

Also, I think we need to keep using a simple wake_up_interruptible in
iio_buffer_deactivate() since we probably want to wake up userspace
for all events in that case.
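[Editorial note: for reference, the userspace side of the wakeup semantics discussed here can be sketched with an ordinary pipe standing in for the IIO buffer character device; this is an illustrative sketch, not IIO-specific code, but the event bits are the same POLLIN | POLLRDNORM pair mentioned above.]

```c
#define _GNU_SOURCE
#include <poll.h>
#include <unistd.h>

/* Non-blocking check of the readable-event bits reported for fd. */
static short poll_readable(int fd)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLRDNORM };

	if (poll(&pfd, 1, 0) <= 0)
		return 0;
	return pfd.revents;
}

/* Push one "sample" into a pipe and report what poll() wakes up with. */
static short pipe_wakeup_demo(void)
{
	int fds[2];
	char sample = 42;
	short revents;

	if (pipe(fds))
		return 0;
	if (write(fds[1], &sample, sizeof(sample)) != sizeof(sample))
		return 0;
	revents = poll_readable(fds[0]);
	close(fds[0]);
	close(fds[1]);
	return revents;
}
```

On Linux a readable pipe (like a readable chardev) reports both POLLIN and POLLRDNORM, which is why passing the explicit event mask to wake_up_interruptible_poll() makes no observable difference for readers that only wait on those two bits.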


end of thread, other threads:[~2015-03-04 16:01 UTC | newest]

Thread overview: 8+ messages
2015-03-03 16:20 [PATCH v4 0/3] iio: add support for hardware fifos Octavian Purdila
2015-03-03 16:21 ` [PATCH v4 1/3] iio: add watermark logic to iio read and poll Octavian Purdila
2015-03-03 17:46   ` Lars-Peter Clausen
2015-03-04 13:55     ` Octavian Purdila
2015-03-04 14:40       ` Lars-Peter Clausen
2015-03-04 16:01         ` Octavian Purdila
2015-03-03 16:21 ` [PATCH v4 2/3] iio: add support for hardware fifo Octavian Purdila
2015-03-03 16:21 ` [PATCH v4 3/3] iio: bmc150_accel: " Octavian Purdila
