linux-renesas-soc.vger.kernel.org archive mirror
From: Wolfram Sang <wsa@kernel.org>
To: Ulrich Hecht <uli+renesas@fpond.eu>,
	Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Cc: linux-renesas-soc@vger.kernel.org, linux-mmc@vger.kernel.org,
	ulf.hansson@linaro.org
Subject: Re: [PATCH] mmc: renesas_sdhi: increase suspend/resume latency limit
Date: Fri, 30 Jul 2021 17:28:07 +0200
Message-ID: <YQQah2Q8qmQPEl7F@ninjato>
In-Reply-To: <20210514155318.16812-1-uli+renesas@fpond.eu>

On Fri, May 14, 2021 at 05:53:18PM +0200, Ulrich Hecht wrote:
> The TMIO core sets a very low latency limit (100 us), but when using R-Car
> SDHI hosts with SD cards, I have observed typical latencies of around 20-30
> ms. This prevents runtime PM from working properly, and the devices remain
> on continuously.
> 
> This patch sets the default latency limit to 100 ms to avoid that.
> 
> Signed-off-by: Ulrich Hecht <uli+renesas@fpond.eu>
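
For context on why a too-low limit pins the device on: before powering a
device off, the PM domain governor compares the device's measured
suspend/resume latencies against this per-device PM QoS constraint. The
helper below is a simplified, illustrative sketch of that decision, not the
actual genpd governor code; dev_pm_qos_read_value() and
DEV_PM_QOS_RESUME_LATENCY are the real kernel interfaces it builds on:

#include <linux/device.h>
#include <linux/pm_qos.h>

/*
 * Illustrative sketch: runtime PM may only power the device off if the
 * time needed to bring it back does not exceed the exposed constraint.
 */
static bool may_power_off(struct device *dev, s32 measured_resume_latency_us)
{
	/* Constraint created by dev_pm_qos_expose_latency_limit(); user
	 * space can adjust it via power/pm_qos_resume_latency_us in sysfs. */
	s32 limit_us = dev_pm_qos_read_value(dev, DEV_PM_QOS_RESUME_LATENCY);

	/* With the old 100 us default, an SDHI resume of 20-30 ms can never
	 * pass this check, so the host is kept powered on continuously. */
	return measured_resume_latency_us <= limit_us;
}

With the 100 ms limit from this patch, the observed 20-30 ms resume fits
comfortably under the constraint, so runtime PM can power the host down
between requests.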

Adding Shimoda-san to CC.

Shimoda-san: could you run your SDHI tests with this patch applied?
That would be much appreciated, thank you!

> ---
>  drivers/mmc/host/renesas_sdhi_core.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
> index 635bf31a6735..4f41616cc6bb 100644
> --- a/drivers/mmc/host/renesas_sdhi_core.c
> +++ b/drivers/mmc/host/renesas_sdhi_core.c
> @@ -32,6 +32,7 @@
>  #include <linux/pinctrl/pinctrl-state.h>
>  #include <linux/platform_device.h>
>  #include <linux/pm_domain.h>
> +#include <linux/pm_qos.h>
>  #include <linux/regulator/consumer.h>
>  #include <linux/reset.h>
>  #include <linux/sh_dma.h>
> @@ -1147,6 +1148,9 @@ int renesas_sdhi_probe(struct platform_device *pdev,
>  		host->ops.hs400_complete = renesas_sdhi_hs400_complete;
>  	}
>  
> +	/* keep tmio_mmc_host_probe() from setting latency limit too low */
> +	dev_pm_qos_expose_latency_limit(&pdev->dev, 100000);
> +
>  	ret = tmio_mmc_host_probe(host);
>  	if (ret < 0)
>  		goto edisclk;
> -- 
> 2.20.1
> 
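
For reference, here is the expose/hide pairing of this dev_pm_qos API across
a driver's lifecycle; a minimal sketch for a hypothetical platform driver
"foo", not code from renesas_sdhi, with error handling trimmed. Both
functions are real kernel interfaces, and the value is in microseconds, so
100000 matches the 100 ms chosen above:

#include <linux/platform_device.h>
#include <linux/pm_qos.h>

static int foo_probe(struct platform_device *pdev)
{
	int ret;

	/* Create the default resume-latency request and expose it as
	 * power/pm_qos_resume_latency_us in sysfs. */
	ret = dev_pm_qos_expose_latency_limit(&pdev->dev, 100000);
	if (ret)
		return ret;

	/* ... the rest of probe ... */
	return 0;
}

static int foo_remove(struct platform_device *pdev)
{
	/* Drop the request and remove the sysfs attribute again. */
	dev_pm_qos_hide_latency_limit(&pdev->dev);
	return 0;
}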

Thread overview: 14+ messages
2021-05-14 15:53 [PATCH] mmc: renesas_sdhi: increase suspend/resume latency limit Ulrich Hecht
2021-05-19 14:25 ` Wolfram Sang
2021-06-02 15:40   ` Ulrich Hecht
2021-06-18  8:17     ` Wolfram Sang
2021-06-18 10:40       ` Wolfram Sang
2021-06-30  4:44 ` Wolfram Sang
2021-07-30 15:28 ` Wolfram Sang [this message]
2021-08-02  5:34   ` Yoshihiro Shimoda
2021-08-02 11:18     ` Ulrich Hecht
2021-08-03 11:16       ` Yoshihiro Shimoda
2021-08-04  5:29         ` Yoshihiro Shimoda
2021-08-04 16:17           ` Ulrich Hecht
2021-08-02 12:53     ` Wolfram Sang
2021-08-03 11:16       ` Yoshihiro Shimoda
