From: Len Brown <lenb@kernel.org>
To: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ulf Hansson <ulf.hansson@linaro.org>,
linux-mmc <linux-mmc@vger.kernel.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
Len Brown <len.brown@intel.com>, Pavel Machek <pavel@ucw.cz>,
Kevin Hilman <khilman@linaro.org>,
Tomeu Vizoso <tomeu.vizoso@collabora.com>,
Linux PM list <linux-pm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
Date: Wed, 1 Apr 2015 15:59:10 -0400
Message-ID: <CAJvTdK=kKBOeg=vj40EdMR2cnHzagxsFB1dMOKHbsnPLkMauvQ@mail.gmail.com>
In-Reply-To: <1427204440-3533-1-git-send-email-adrian.hunter@intel.com>
> Ad hoc testing with Lenovo Thinkpad 10 showed a stress
> test could run for at least 24 hours with the patches,
> compared to less than an hour without.
There is a patch in linux-next to delete the C1E state from the BYT
(Intel BayTrail) table in intel_idle, since that state is problematic
on multiple platforms.
I don't suppose that disabling just that state, without disabling C6,
is sufficient to fix the Thinkpad 10? (I'm betting not, but
it can't hurt to try -- you can use the "disable" attribute for the state
in /sys/devices/system/cpu/cpu*/cpuidle/stateN.)
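For example, something like the following (which stateN corresponds to
C1E varies by platform, so check each state's "name" attribute first --
state2 below is just an illustrative guess):

  # grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
  # for f in /sys/devices/system/cpu/cpu*/cpuidle/state2/disable; do echo 1 > $f; done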
I think your choice of the PM_QOS sub-system here is the right one,
and that your selection of a 20 usec threshold is also a good choice
for what you want to do -- though on a non-intel_idle machine someplace,
there may be some ACPI BIOS whose _CST reports a random number for C6 latency.
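For reference, a minimal sketch of what such a request looks like with
the existing PM QoS interface from include/linux/pm_qos.h (the
sdhci_dma_qos name here is just illustrative):

  #include <linux/pm_qos.h>

  static struct pm_qos_request sdhci_dma_qos;  /* illustrative name */

  /* Before starting DMA: cap acceptable CPU wakeup latency at 20 usec,
   * which is below the C6 exit latency, so cpuidle avoids C6. */
  pm_qos_add_request(&sdhci_dma_qos, PM_QOS_CPU_DMA_LATENCY, 20);

  /* ... DMA transfer runs ... */

  /* When DMA is done: drop the constraint so deep C-states are
   * available again. */
  pm_qos_remove_request(&sdhci_dma_qos);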
It would be interesting to see how your C6 residency (turbostat --debug
will show this to you) and your battery life change when C6 is disabled
during MMC activity.
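For example, something like:

  # turbostat --debug sleep 60

once during MMC stress with the QoS request in place and once without,
and compare the C6 residency column (the exact column name, %c6 vs.
CPU%c6, depends on the turbostat version).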
cheers,
Len Brown, Intel Open Source Technology Center
Thread overview: 16+ messages
2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
2015-03-24 13:40 ` [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep Adrian Hunter
2015-04-20 14:00   ` Dov Levenglick
2015-04-21  8:26     ` Adrian Hunter
2015-04-21 10:18       ` Dov Levenglick
2015-04-21 10:25         ` Adrian Hunter
2015-03-24 13:40 ` [RFC PATCH 2/4] mmc: sdhci: Support maximum DMA latency request via PM QOS Adrian Hunter
2015-03-24 13:40 ` [RFC PATCH 3/4] mmc: sdhci-acpi: Fix device hang on Intel BayTrail Adrian Hunter
2015-03-24 13:40 ` [RFC PATCH 4/4] mmc: sdhci-pci: " Adrian Hunter
2015-03-24 20:13 ` [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Rafael J. Wysocki
2015-03-25 12:37   ` Adrian Hunter
2015-03-25 19:43     ` Pavel Machek
2015-03-26  8:29       ` Adrian Hunter
2015-03-26  9:51         ` Pavel Machek
2015-04-01 19:59 ` Len Brown [this message]
2015-04-02 19:35   ` Adrian Hunter