Hi,

I have written a custom kernel module implementing the IIO device API, backed by an IIO triggered buffer. My IIO device provides 3 channels + timestamp. The sizes of the channels are:

index | iio_chan_spec.scan_type.storagebits
------|------------------------------------
  0   | 32
  1   | 16
  2   | 16

If I select channel 0 (32 bit) and one of channels 1 or 2 (16 bit), `indio_dev.scan_bytes` and `iio_buffer.bytes_per_datum` have a value of 6 bytes, which does not account for any alignment.

After taking a closer look at `iio_compute_scan_bytes`, which is responsible for calculating both `indio_dev.scan_bytes` and `iio_buffer.bytes_per_datum`, it seems to me that the order of the channels matters:

```c
/* How much space will the demuxed element take? */
for_each_set_bit(i, mask, indio_dev->masklength) {
	length = iio_storage_bytes_for_si(indio_dev, i);
	bytes = ALIGN(bytes, length);
	bytes += length;
}
```

I understand that in case the length of each scan element is smaller than the length of the successive scan elements, this algorithm works, because it aligns the current element to its own length. But if, as in my case, the length of channel 0's scan elements is greater than the size of the samples of the consecutive channels, no alignment seems to be taken into account: with a scan size of 6 bytes, the 32-bit element of the second scan in the buffer starts at offset 6, which is not 4-byte aligned. Am I missing something here?

To make the algorithm work in any case, the greatest length of all enabled scan elements could be used for the alignment, e.g.:

```diff
diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 5d05c38c4ba9..3d2c4e26d818 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -546,16 +546,18 @@ static int iio_compute_scan_bytes(struct iio_dev *indio_dev,
 				const unsigned long *mask, bool timestamp)
 {
 	unsigned bytes = 0;
-	int length, i;
+	int length, i, align = 0, count = 0;
 	/* How much space will the demuxed element take? */
 	for_each_set_bit(i, mask, indio_dev->masklength) {
 		length = iio_storage_bytes_for_si(indio_dev, i);
-		bytes = ALIGN(bytes, length);
-		bytes += length;
+		align = max(align, length);
+		count++;
 	}
+	bytes = count * align;
+
 	if (timestamp) {
 		length = iio_storage_bytes_for_timestamp(indio_dev);
 		bytes = ALIGN(bytes, length);
```

And if the storage bytes for the timestamp have to be taken into account for the alignment of the scan elements (which does not seem to be the case as currently implemented), then:

```diff
diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 5d05c38c4ba9..59aee2ea4e19 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -546,21 +546,23 @@ static int iio_compute_scan_bytes(struct iio_dev *indio_dev,
 				const unsigned long *mask, bool timestamp)
 {
 	unsigned bytes = 0;
-	int length, i;
+	int length, i, align = 0, count = 0;
+
+	if (timestamp) {
+		align = iio_storage_bytes_for_timestamp(indio_dev);
+		count++;
+	}
 	/* How much space will the demuxed element take? */
 	for_each_set_bit(i, mask, indio_dev->masklength) {
 		length = iio_storage_bytes_for_si(indio_dev, i);
-		bytes = ALIGN(bytes, length);
-		bytes += length;
+		align = max(align, length);
+		count++;
 	}
-	if (timestamp) {
-		length = iio_storage_bytes_for_timestamp(indio_dev);
-		bytes = ALIGN(bytes, length);
-		bytes += length;
-	}
+	bytes = count * align;
+
 	return bytes;
 }
```

But in my case the latter would bloat the buffer from 16 bytes to 4 * 8 = 32 bytes per scan if all channels are selected and the timestamp is active.

For now, I will work around this by using 32 storagebits for all my channels. This gives me 4 bytes of overhead per scan if all elements are selected and additional 2 bytes if the timestamp is active.

In "Why do you align the buffer pointer to a multiple of the size of the current scan element in iio_buffer_foreach_sample()?" on https://github.com/analogdevicesinc/libiio/issues/324 I have been pointed to this mailing list.
Regards,
Lars M.