* [PATCH] mmc: core: Enable runtime PM management of host devices
@ 2015-03-27 11:15 Ulf Hansson
  2015-03-27 12:41 ` Adrian Hunter
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Ulf Hansson @ 2015-03-27 11:15 UTC (permalink / raw)
  To: linux-mmc, Ulf Hansson; +Cc: NeilBrown, Adrian Hunter

Currently, host drivers that have deployed runtime PM deal with the
runtime PM reference counting entirely by themselves.

Since host drivers don't know when the core will send the next request
through some of the host_ops callbacks, they need to handle runtime PM
get/put between each and every request.

In many cases this has negative effects, since it leads to a high
frequency of scheduled runtime PM suspend operations. That is because
the runtime PM reference count will normally reach zero between every
request.

We can decrease that frequency by enabling the core to deal with
runtime PM reference counting of the host device. Since the core often
knows that it will send a sequence of requests, it makes sense for it
to keep a runtime PM reference count during these periods.

More exactly, let's increase the runtime PM reference count by invoking
pm_runtime_get_sync() from __mmc_claim_host(). Restore that action by
invoking pm_runtime_mark_last_busy() and pm_runtime_put_autosuspend()
in mmc_release_host(). In this way a runtime PM reference count will be
kept during the complete cycle of claim -> release host.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/mmc/core/core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 709ada9..c296bc0 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -897,6 +897,7 @@ int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
 	DECLARE_WAITQUEUE(wait, current);
 	unsigned long flags;
 	int stop;
+	bool pm = false;
 
 	might_sleep();
 
@@ -916,13 +917,18 @@ int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
 		host->claimed = 1;
 		host->claimer = current;
 		host->claim_cnt += 1;
+		if (host->claim_cnt == 1)
+			pm = true;
 	} else
 		wake_up(&host->wq);
 	spin_unlock_irqrestore(&host->lock, flags);
 	remove_wait_queue(&host->wq, &wait);
+
+	if (pm)
+		pm_runtime_get_sync(mmc_dev(host));
+
 	return stop;
 }
-
 EXPORT_SYMBOL(__mmc_claim_host);
 
 /**
@@ -947,6 +953,8 @@ void mmc_release_host(struct mmc_host *host)
 		host->claimer = NULL;
 		spin_unlock_irqrestore(&host->lock, flags);
 		wake_up(&host->wq);
+		pm_runtime_mark_last_busy(mmc_dev(host));
+		pm_runtime_put_autosuspend(mmc_dev(host));
 	}
 }
 EXPORT_SYMBOL(mmc_release_host);
-- 
1.9.1



* Re: [PATCH] mmc: core: Enable runtime PM management of host devices
  2015-03-27 11:15 [PATCH] mmc: core: Enable runtime PM management of host devices Ulf Hansson
@ 2015-03-27 12:41 ` Adrian Hunter
  2015-03-27 21:52 ` NeilBrown
  2015-03-30 11:39 ` Konstantin Dorfman
  2 siblings, 0 replies; 7+ messages in thread
From: Adrian Hunter @ 2015-03-27 12:41 UTC (permalink / raw)
  To: Ulf Hansson; +Cc: linux-mmc, NeilBrown

On 27/03/15 13:15, Ulf Hansson wrote:
> Currently, host drivers that have deployed runtime PM deal with the
> runtime PM reference counting entirely by themselves.
> 
> Since host drivers don't know when the core will send the next request
> through some of the host_ops callbacks, they need to handle runtime PM
> get/put between each and every request.
> 
> In many cases this has negative effects, since it leads to a high
> frequency of scheduled runtime PM suspend operations. That is because
> the runtime PM reference count will normally reach zero between every
> request.
> 
> We can decrease that frequency by enabling the core to deal with
> runtime PM reference counting of the host device. Since the core often
> knows that it will send a sequence of requests, it makes sense for it
> to keep a runtime PM reference count during these periods.
> 
> More exactly, let's increase the runtime PM reference count by invoking
> pm_runtime_get_sync() from __mmc_claim_host(). Restore that action by
> invoking pm_runtime_mark_last_busy() and pm_runtime_put_autosuspend()
> in mmc_release_host(). In this way a runtime PM reference count will be
> kept during the complete cycle of claim -> release host.
> 
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

The downside is that it precludes the possibility of a host driver using
runtime PM for very aggressive PM. However, MMC_CLKGATE exists for that
anyway, and otherwise all host drivers either use autosuspend_delay or
stay active while the card has power. So:

Acked-by: Adrian Hunter <adrian.hunter@intel.com>



* Re: [PATCH] mmc: core: Enable runtime PM management of host devices
  2015-03-27 11:15 [PATCH] mmc: core: Enable runtime PM management of host devices Ulf Hansson
  2015-03-27 12:41 ` Adrian Hunter
@ 2015-03-27 21:52 ` NeilBrown
  2015-03-29 10:53   ` Konstantin Dorfman
  2015-03-30  8:58   ` Ulf Hansson
  2015-03-30 11:39 ` Konstantin Dorfman
  2 siblings, 2 replies; 7+ messages in thread
From: NeilBrown @ 2015-03-27 21:52 UTC (permalink / raw)
  To: Ulf Hansson; +Cc: linux-mmc, Adrian Hunter


On Fri, 27 Mar 2015 12:15:15 +0100 Ulf Hansson <ulf.hansson@linaro.org> wrote:

> Currently, host drivers that have deployed runtime PM deal with the
> runtime PM reference counting entirely by themselves.
> 
> Since host drivers don't know when the core will send the next request
> through some of the host_ops callbacks, they need to handle runtime PM
> get/put between each and every request.
> 
> In many cases this has negative effects, since it leads to a high
> frequency of scheduled runtime PM suspend operations. That is because
> the runtime PM reference count will normally reach zero between every
> request.

I don't understand why this is a problem.
All the drivers use put_autosuspend, so the suspend doesn't happen for
(typically) 50 ms, so the actual suspend won't happen if there is a
sequence of requests.

Is it just the scheduling of a suspend, even without the suspend happening,
that is causing problems?  If so, maybe the runtime PM code needs optimising?

Thanks,

NeilBrown


> 
> We can decrease that frequency by enabling the core to deal with
> runtime PM reference counting of the host device. Since the core often
> knows that it will send a sequence of requests, it makes sense for it
> to keep a runtime PM reference count during these periods.
> 
> More exactly, let's increase the runtime PM reference count by invoking
> pm_runtime_get_sync() from __mmc_claim_host(). Restore that action by
> invoking pm_runtime_mark_last_busy() and pm_runtime_put_autosuspend()
> in mmc_release_host(). In this way a runtime PM reference count will be
> kept during the complete cycle of claim -> release host.
> 
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> ---
>  drivers/mmc/core/core.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
> index 709ada9..c296bc0 100644
> --- a/drivers/mmc/core/core.c
> +++ b/drivers/mmc/core/core.c
> @@ -897,6 +897,7 @@ int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
>  	DECLARE_WAITQUEUE(wait, current);
>  	unsigned long flags;
>  	int stop;
> +	bool pm = false;
>  
>  	might_sleep();
>  
> @@ -916,13 +917,18 @@ int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
>  		host->claimed = 1;
>  		host->claimer = current;
>  		host->claim_cnt += 1;
> +		if (host->claim_cnt == 1)
> +			pm = true;
>  	} else
>  		wake_up(&host->wq);
>  	spin_unlock_irqrestore(&host->lock, flags);
>  	remove_wait_queue(&host->wq, &wait);
> +
> +	if (pm)
> +		pm_runtime_get_sync(mmc_dev(host));
> +
>  	return stop;
>  }
> -
>  EXPORT_SYMBOL(__mmc_claim_host);
>  
>  /**
> @@ -947,6 +953,8 @@ void mmc_release_host(struct mmc_host *host)
>  		host->claimer = NULL;
>  		spin_unlock_irqrestore(&host->lock, flags);
>  		wake_up(&host->wq);
> +		pm_runtime_mark_last_busy(mmc_dev(host));
> +		pm_runtime_put_autosuspend(mmc_dev(host));
>  	}
>  }
>  EXPORT_SYMBOL(mmc_release_host);




* Re: [PATCH] mmc: core: Enable runtime PM management of host devices
  2015-03-27 21:52 ` NeilBrown
@ 2015-03-29 10:53   ` Konstantin Dorfman
  2015-03-30  8:48     ` Ulf Hansson
  2015-03-30  8:58   ` Ulf Hansson
  1 sibling, 1 reply; 7+ messages in thread
From: Konstantin Dorfman @ 2015-03-29 10:53 UTC (permalink / raw)
  To: NeilBrown, Ulf Hansson; +Cc: linux-mmc, Adrian Hunter



On 03/28/2015 12:52 AM, NeilBrown wrote:
> On Fri, 27 Mar 2015 12:15:15 +0100 Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
>> Currently, host drivers that have deployed runtime PM deal with the
>> runtime PM reference counting entirely by themselves.
>>
>> Since host drivers don't know when the core will send the next request
>> through some of the host_ops callbacks, they need to handle runtime PM
>> get/put between each and every request.
>>
>> In many cases this has negative effects, since it leads to a high
>> frequency of scheduled runtime PM suspend operations. That is because
>> the runtime PM reference count will normally reach zero between every
>> request.

It would be nice to have some numbers on this suspend/resume jitter of 
the platform device's PM, caused by the time between consecutive 
requests within a burst of requests.

Perhaps we can tune the autosuspend timer of the specific platform 
device to avoid this.

Implementing the patch in the way you are suggesting will imply that 
the mmc core always uses the double-buffering request polling scheme 
and the existing mechanism for defining a "request burst".

Thanks,
Kostya

-- 
Qualcomm Israel, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH] mmc: core: Enable runtime PM management of host devices
  2015-03-29 10:53   ` Konstantin Dorfman
@ 2015-03-30  8:48     ` Ulf Hansson
  0 siblings, 0 replies; 7+ messages in thread
From: Ulf Hansson @ 2015-03-30  8:48 UTC (permalink / raw)
  To: Konstantin Dorfman; +Cc: NeilBrown, linux-mmc, Adrian Hunter

On 29 March 2015 at 12:53, Konstantin Dorfman <kdorfman@codeaurora.org> wrote:
>
>
> On 03/28/2015 12:52 AM, NeilBrown wrote:
>>
>> On Fri, 27 Mar 2015 12:15:15 +0100 Ulf Hansson <ulf.hansson@linaro.org>
>> wrote:
>>
>>> Currently, host drivers that have deployed runtime PM deal with the
>>> runtime PM reference counting entirely by themselves.
>>>
>>> Since host drivers don't know when the core will send the next request
>>> through some of the host_ops callbacks, they need to handle runtime PM
>>> get/put between each and every request.
>>>
>>> In many cases this has negative effects, since it leads to a high
>>> frequency of scheduled runtime PM suspend operations. That is because
>>> the runtime PM reference count will normally reach zero between every
>>> request.
>
>
> It would be nice to have some numbers on this suspend/resume jitter of the
> platform device's PM, caused by the time between consecutive requests within
> a burst of requests.
>
> Perhaps we can tune the autosuspend timer of the specific platform device to
> avoid this.

No, that's not going to do the trick.

What the $subject patch does is decrease the frequency at which the
host device's runtime PM reference count reaches zero, thus preventing
us from telling the runtime PM core to schedule a suspend operation
when we actually know it isn't needed.

>
> Implementing the patch in the way you are suggesting will imply that the
> mmc core always uses the double-buffering request polling scheme and the
> existing mechanism for defining a "request burst".

The $subject patch will improve behaviour when handling "bursts", but
that's just one case where the mmc core knows that it's going to send a
sequence of commands/requests.

For example, the core also sends sequences from the mmc_rescan() work,
during system PM suspend, etc.

Kind regards
Uffe


* Re: [PATCH] mmc: core: Enable runtime PM management of host devices
  2015-03-27 21:52 ` NeilBrown
  2015-03-29 10:53   ` Konstantin Dorfman
@ 2015-03-30  8:58   ` Ulf Hansson
  1 sibling, 0 replies; 7+ messages in thread
From: Ulf Hansson @ 2015-03-30  8:58 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-mmc, Adrian Hunter

On 27 March 2015 at 22:52, NeilBrown <neilb@suse.de> wrote:
> On Fri, 27 Mar 2015 12:15:15 +0100 Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
>> Currently those host drivers which have deployed runtime PM, deals with
>> the runtime PM reference counting entirely by themselves.
>>
>> Since host drivers don't know when the core will send the next request
>> through some of the host_ops callbacks, they need to handle runtime PM
>> get/put between each an every request.
>>
>> In quite many cases this has some negative effects, since it leads to a
>> high frequency of scheduled runtime PM suspend operations. That due to
>> the runtime PM reference count will normally reach zero in-between
>> every request.
>
> I don't understand why this is a problem.
> All the drivers use put_autosuspend, so the suspend doesn't happen for
> (typically) 50 ms, so the actual suspend won't happen if there is a
> sequence of requests.

It's not a problem, but it's suboptimal.

>
> Is it just the scheduling of a suspend, even without the suspend happening,
> that is causing problems?  If so, maybe the runtime PM code needs optimising?

Correct. Though, I don't think we can improve the behaviour of the
runtime PM core in this regard.

If the reference count reaches zero due to a "put", it will need to
perform some actions to schedule the suspend operation, no matter
what.

So to me, this is only an optimization of how we make use of the runtime PM API.

Kind regards
Uffe


* Re: [PATCH] mmc: core: Enable runtime PM management of host devices
  2015-03-27 11:15 [PATCH] mmc: core: Enable runtime PM management of host devices Ulf Hansson
  2015-03-27 12:41 ` Adrian Hunter
  2015-03-27 21:52 ` NeilBrown
@ 2015-03-30 11:39 ` Konstantin Dorfman
  2 siblings, 0 replies; 7+ messages in thread
From: Konstantin Dorfman @ 2015-03-30 11:39 UTC (permalink / raw)
  To: Ulf Hansson, linux-mmc; +Cc: NeilBrown, Adrian Hunter

On 03/27/2015 02:15 PM, Ulf Hansson wrote:
> Currently, host drivers that have deployed runtime PM deal with the
> runtime PM reference counting entirely by themselves.
>
> Since host drivers don't know when the core will send the next request
> through some of the host_ops callbacks, they need to handle runtime PM
> get/put between each and every request.
>
> In many cases this has negative effects, since it leads to a high
> frequency of scheduled runtime PM suspend operations. That is because
> the runtime PM reference count will normally reach zero between every
> request.
>
> We can decrease that frequency by enabling the core to deal with
> runtime PM reference counting of the host device. Since the core often
> knows that it will send a sequence of requests, it makes sense for it
> to keep a runtime PM reference count during these periods.
>
> More exactly, let's increase the runtime PM reference count by invoking
> pm_runtime_get_sync() from __mmc_claim_host(). Restore that action by
> invoking pm_runtime_mark_last_busy() and pm_runtime_put_autosuspend()
> in mmc_release_host(). In this way a runtime PM reference count will be
> kept during the complete cycle of claim -> release host.
>
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> ---
>   drivers/mmc/core/core.c | 10 +++++++++-
>   1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
> index 709ada9..c296bc0 100644
> --- a/drivers/mmc/core/core.c
> +++ b/drivers/mmc/core/core.c
> @@ -897,6 +897,7 @@ int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
>   	DECLARE_WAITQUEUE(wait, current);
>   	unsigned long flags;
>   	int stop;
> +	bool pm = false;
>
>   	might_sleep();
>
> @@ -916,13 +917,18 @@ int __mmc_claim_host(struct mmc_host *host, atomic_t *abort)
>   		host->claimed = 1;
>   		host->claimer = current;
>   		host->claim_cnt += 1;
> +		if (host->claim_cnt == 1)
> +			pm = true;
>   	} else
>   		wake_up(&host->wq);
>   	spin_unlock_irqrestore(&host->lock, flags);
>   	remove_wait_queue(&host->wq, &wait);
> +
> +	if (pm)
> +		pm_runtime_get_sync(mmc_dev(host));
> +
>   	return stop;
>   }
> -
>   EXPORT_SYMBOL(__mmc_claim_host);
>
>   /**
> @@ -947,6 +953,8 @@ void mmc_release_host(struct mmc_host *host)
>   		host->claimer = NULL;
>   		spin_unlock_irqrestore(&host->lock, flags);
>   		wake_up(&host->wq);
> +		pm_runtime_mark_last_busy(mmc_dev(host));
> +		pm_runtime_put_autosuspend(mmc_dev(host));
>   	}
>   }
>   EXPORT_SYMBOL(mmc_release_host);
>
Acked-by: Konstantin Dorfman <kdorfman@codeaurora.org>

-- 
Qualcomm Israel, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

