From: Nicolin Chen <nicoleotsuka@gmail.com>
To: Guenter Roeck <linux@roeck-us.net>
Cc: jdelvare@suse.com, linux-hwmon@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/5] hwmon: (ina3221) Make sure data is ready after channel enabling
Date: Wed, 17 Oct 2018 13:53:48 -0700	[thread overview]
Message-ID: <20181017205348.GC15941@Asurada-Nvidia.nvidia.com>
In-Reply-To: <20181017165543.GA22822@roeck-us.net>

Hello Guenter,

On Wed, Oct 17, 2018 at 09:55:43AM -0700, Guenter Roeck wrote:
> > @@ -676,6 +701,13 @@ static int __maybe_unused ina3221_resume(struct device *dev)
> >  	if (ret)
> >  		return ret;
> >  
> > +	/* Make sure data conversion is finished */
> > +	ret = ina3221_wait_for_data_if_active(ina);
> 
> This is quite expensive and would delay resume (and enable, for that matter).
> A less expensive solution would be to save the enable time here and in
> ina3221_write_enable(). ina3221_wait_for_data_if_active() can then be called
> from read functions and would wait if not enough time has expired.
> 
> 	if (time_before(...))
> 		usleep_range(missing_wait_time, missing_wait_time * 2);
> 
> or something like that. This could be done per channel or, to keep
> things simple, just using a single time for all channels.

I was thinking of something that would fit the one-shot mode too,
so I decided to add polling. But you are right: it makes more sense
to skip the polling until an actual read happens, though putting
the polling in the enabling routine also seemed somewhat reasonable
to me.

Calculating the wait time would actually be slightly more complicated
than the polling, as it needs to account for the total conversion
time, which varies with the number of enabled channels. I will see
which approach is safer and easier and apply that in v2.
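
Just to confirm I understand the idea, a rough sketch of the
timestamp approach (the field and variable names below are made up
for illustration, not taken from the driver):

	/* on enable/resume: record when conversion data will be ready */
	ina->conv_time_end = jiffies + usecs_to_jiffies(total_conv_time_us);

	/* in the read path: sleep out only the remaining time, if any */
	if (time_before(jiffies, ina->conv_time_end)) {
		u32 wait_us = jiffies_to_usecs(ina->conv_time_end - jiffies);

		usleep_range(wait_us, wait_us * 2);
	}

where total_conv_time_us would sum the shunt- and bus-conversion
times (times the averaging count) over the enabled channels, e.g.
with the default 1.1 ms conversion time and no averaging, three
enabled channels need about 3 * 2 * 1.1 ms = 6.6 ms.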

Thanks
Nicolin

Thread overview: 17+ messages
2018-10-17  1:24 [PATCH 0/5] hwmon: (ina3221) Implement PM runtime to save power Nicolin Chen
2018-10-17  1:24 ` [PATCH 1/5] hwmon: (core) Inherit power properties to hdev Nicolin Chen
2018-10-17 19:44   ` Guenter Roeck
2018-10-17 20:15     ` Nicolin Chen
2018-10-18 11:37   ` Dan Carpenter
2018-10-17  1:24 ` [PATCH 2/5] hwmon: (ina3221) Return -ENODATA for two alarms attributes Nicolin Chen
2018-10-17 19:46   ` Guenter Roeck
2018-10-17 20:39     ` Nicolin Chen
2018-10-17 21:14       ` Guenter Roeck
2018-10-18  1:03         ` Nicolin Chen
2018-10-17  1:24 ` [PATCH 3/5] hwmon: (ina3221) Serialize sysfs ABI accesses Nicolin Chen
2018-10-17  1:24 ` [PATCH 4/5] hwmon: (ina3221) Make sure data is ready after channel enabling Nicolin Chen
2018-10-17 16:55   ` Guenter Roeck
2018-10-17 20:53     ` Nicolin Chen [this message]
2018-10-17 21:17       ` Guenter Roeck
2018-10-18  1:04         ` Nicolin Chen
2018-10-17  1:24 ` [PATCH 5/5] hwmon: (ina3221) Add PM runtime support Nicolin Chen
