From: Jonathan Cameron <jic23@kernel.org>
To: Alexandru Ardelean <alexandru.ardelean@analog.com>
Cc: <linux-iio@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<shawnguo@kernel.org>, <s.hauer@pengutronix.de>,
	Sergiu Cuciurean <sergiu.cuciurean@analog.com>
Subject: Re: [PATCH] iio: adc: fsl-imx25-gcq: Replace indio_dev->mlock with own device lock
Date: Sat, 29 Aug 2020 16:29:10 +0100
Message-ID: <20200829162910.379c1a87@archlinux>
In-Reply-To: <20200826120609.203724-1-alexandru.ardelean@analog.com>

On Wed, 26 Aug 2020 15:06:09 +0300
Alexandru Ardelean <alexandru.ardelean@analog.com> wrote:

> From: Sergiu Cuciurean <sergiu.cuciurean@analog.com>
> 
> As part of the general cleanup of indio_dev->mlock, this change replaces
> it with a local lock, to protect against any other accesses during the
> reading of a sample. Reading a sample requires multiple consecutive regmap
> operations and a completion callback, so no other read may occur until it
> completes.
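
The sequence being serialized is the usual trigger / wait / read
pattern; roughly as follows (REG_CTRL, CTRL_START and REG_DATA are
placeholder names for illustration, not the driver's real register
macros):

	u32 raw;
	int ret = 0;

	mutex_lock(&priv->lock);
	/* 1) Start a conversion for the selected channel. */
	regmap_update_bits(priv->regs, REG_CTRL, CTRL_START, CTRL_START);
	/* 2) Sleep until the conversion-done IRQ calls complete(). */
	if (!wait_for_completion_timeout(&priv->completed,
					 msecs_to_jiffies(100)))
		ret = -ETIMEDOUT;
	/* 3) Read the result; steps 1-3 must not interleave. */
	regmap_read(priv->regs, REG_DATA, &raw);
	mutex_unlock(&priv->lock);
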
> 
> Signed-off-by: Sergiu Cuciurean <sergiu.cuciurean@analog.com>
> Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
> ---
>  drivers/iio/adc/fsl-imx25-gcq.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iio/adc/fsl-imx25-gcq.c b/drivers/iio/adc/fsl-imx25-gcq.c
> index 8cb51cf7a816..2a56ed0fc793 100644
> --- a/drivers/iio/adc/fsl-imx25-gcq.c
> +++ b/drivers/iio/adc/fsl-imx25-gcq.c
> @@ -38,6 +38,7 @@ struct mx25_gcq_priv {
>  	struct completion completed;
>  	struct clk *clk;
>  	int irq;
> +	struct mutex lock;

Rule 1 of locks: every single one needs documentation, so that the
scope that the lock protects is clearly stated.
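
For example, something along these lines (the comment wording here is
only an illustration of documenting the scope, not a prescribed text):

	struct mx25_gcq_priv {
		...
		/*
		 * Protects the sample read sequence (several regmap
		 * operations plus a completion wait) against
		 * concurrent ->read_raw() calls.
		 */
		struct mutex lock;
		...
	};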

Otherwise patch looks fine to me.

thanks,

Jonathan


>  	struct regulator *vref[4];
>  	u32 channel_vref_mv[MX25_NUM_CFGS];
>  };
> @@ -137,9 +138,9 @@ static int mx25_gcq_read_raw(struct iio_dev *indio_dev,
>  
>  	switch (mask) {
>  	case IIO_CHAN_INFO_RAW:
> -		mutex_lock(&indio_dev->mlock);
> +		mutex_lock(&priv->lock);
>  		ret = mx25_gcq_get_raw_value(&indio_dev->dev, chan, priv, val);
> -		mutex_unlock(&indio_dev->mlock);
> +		mutex_unlock(&priv->lock);
>  		return ret;
>  
>  	case IIO_CHAN_INFO_SCALE:
> @@ -314,6 +315,8 @@ static int mx25_gcq_probe(struct platform_device *pdev)
>  		return PTR_ERR(priv->regs);
>  	}
>  
> +	mutex_init(&priv->lock);
> +
>  	init_completion(&priv->completed);
>  
>  	ret = mx25_gcq_setup_cfgs(pdev, priv);
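
One ordering note on the probe hunk: the mutex, like the completion,
must be initialized before the device is registered, since userspace
can invoke ->read_raw() as soon as registration returns. Roughly
(assuming the usual registration call at the end of probe):

	mutex_init(&priv->lock);
	init_completion(&priv->completed);
	...
	/* register last, once everything ->read_raw() uses is ready */
	ret = iio_device_register(indio_dev);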


Thread overview: 4+ messages
2020-08-26 12:06 [PATCH] iio: adc: fsl-imx25-gcq: Replace indio_dev->mlock with own device lock Alexandru Ardelean
2020-08-29 15:29 ` Jonathan Cameron [this message]
2020-09-16  9:29 ` [PATCH v2] " Alexandru Ardelean
2020-09-17 18:27   ` Jonathan Cameron
