* [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
@ 2015-03-24 13:40 Adrian Hunter
  2015-03-24 13:40 ` [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep Adrian Hunter
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Adrian Hunter @ 2015-03-24 13:40 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Rafael J. Wysocki, Len Brown, Pavel Machek,
	Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

Hi

Here are some patches to address an issue with SDHCI
in Intel Baytrail. Intel Baytrail has sometimes been
observed to hang if host controllers are using DMA
while deep C-states are in use. Work around that by
specifying a maximum DMA latency that prevents
deep C-states.

The first patch adds a new PM QoS function so that
the SDHCI driver can do a "lazy" cancel of the QoS
request from within its "finish" tasklet.

The second patch adds support to SDHCI for specifying a
maximum DMA latency.

The third and fourth patches take that facility into
use for Baytrail.
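
For reference, here is a simplified sketch of the usage
pattern that patch 2 adds (the foo_* wrappers are
hypothetical; the fields and PM QoS calls are the ones
introduced by patch 2):

static void foo_start_dma(struct sdhci_host *host)
{
	/* Before a DMA transfer: constrain CPU wakeup latency. */
	pm_qos_update_request(&host->pm_qos_req, host->dma_latency);
}

static void foo_dma_done(struct sdhci_host *host)
{
	/* From the atomic finish tasklet: drop the constraint lazily,
	 * delaying the cancel if another request is already on the way.
	 */
	pm_qos_cancel_request_lazy(&host->pm_qos_req,
			host->consecutive_req ? host->lat_cancel_delay : 0);
}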

Ad hoc testing with a Lenovo ThinkPad 10 showed that
a stress test could run for at least 24 hours with the
patches, compared to less than an hour without.

These patches are on top of my driver strength patches,
which are on top of my re-tuning patches.


Adrian Hunter (4):
      PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep
      mmc: sdhci: Support maximum DMA latency request via PM QOS
      mmc: sdhci-acpi: Fix device hang on Intel BayTrail
      mmc: sdhci-pci: Fix device hang on Intel BayTrail

 drivers/mmc/host/sdhci-acpi.c | 32 ++++++++++++++++++++++++++
 drivers/mmc/host/sdhci-pci.c  | 13 +++++++++++
 drivers/mmc/host/sdhci.c      | 52 +++++++++++++++++++++++++++++++++++++++++++
 drivers/mmc/host/sdhci.h      |  7 ++++++
 include/linux/pm_qos.h        |  2 ++
 kernel/power/qos.c            | 20 +++++++++++++++++
 6 files changed, 126 insertions(+)


Regards
Adrian


* [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep
  2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
@ 2015-03-24 13:40 ` Adrian Hunter
  2015-04-20 14:00   ` Dov Levenglick
  2015-03-24 13:40 ` [RFC PATCH 2/4] mmc: sdhci: Support maximum DMA latency request via PM QOS Adrian Hunter
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Adrian Hunter @ 2015-03-24 13:40 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Rafael J. Wysocki, Len Brown, Pavel Machek,
	Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

Add pm_qos_cancel_request_lazy(), which is convenient for
contexts that may not sleep.
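
For example (an illustrative sketch only; the tasklet function and the
100 us value below are hypothetical), a caller in atomic context can
drop its constraint without risking a sleep:

static void my_dma_done_tasklet(unsigned long data)
{
	struct pm_qos_request *req = (struct pm_qos_request *)data;

	/* Only schedules delayed work, so this never sleeps. */
	pm_qos_cancel_request_lazy(req, 100 /* us */);

	/*
	 * By contrast, pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE)
	 * first does cancel_delayed_work_sync() and may therefore sleep.
	 */
}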

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 include/linux/pm_qos.h |  2 ++
 kernel/power/qos.c     | 20 ++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 7b3ae0c..f44d353 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -126,6 +126,8 @@ void pm_qos_update_request(struct pm_qos_request *req,
 			   s32 new_value);
 void pm_qos_update_request_timeout(struct pm_qos_request *req,
 				   s32 new_value, unsigned long timeout_us);
+void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
+				unsigned int timeout_us);
 void pm_qos_remove_request(struct pm_qos_request *req);
 
 int pm_qos_request(int pm_qos_class);
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index 97b0df7..ac131cb 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -517,6 +517,26 @@ void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
 }
 
 /**
+ * pm_qos_cancel_request_lazy - cancels an existing qos request lazily.
+ * @req: handle to list element holding a pm_qos request to use
+ * @timeout_us: the delay before cancelling this qos request in usecs.
+ *
+ * After timeout_us, this qos request is cancelled.
+ */
+void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
+				unsigned int timeout_us)
+{
+	if (!req)
+		return;
+	if (WARN(!pm_qos_request_active(req),
+		 "%s called for unknown object.", __func__))
+		return;
+
+	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
+}
+EXPORT_SYMBOL_GPL(pm_qos_cancel_request_lazy);
+
+/**
  * pm_qos_remove_request - modifies an existing qos request
  * @req: handle to request list element
  *
-- 
1.9.1



* [RFC PATCH 2/4] mmc: sdhci: Support maximum DMA latency request via PM QOS
  2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
  2015-03-24 13:40 ` [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep Adrian Hunter
@ 2015-03-24 13:40 ` Adrian Hunter
  2015-03-24 13:40 ` [RFC PATCH 3/4] mmc: sdhci-acpi: Fix device hang on Intel BayTrail Adrian Hunter
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Adrian Hunter @ 2015-03-24 13:40 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Rafael J. Wysocki, Len Brown, Pavel Machek,
	Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

Add support for setting a maximum DMA latency via the
PM QoS framework.

Drivers can set host->dma_latency to the desired value;
otherwise the initial value (PM_QOS_DEFAULT_VALUE)
means that no PM QoS request is added.

There may not be enough time between consecutive
I/O requests to reach deeper C-states. To address
that, the driver can set host->lat_cancel_delay,
which is the delay before cancelling the DMA latency
request when it is known that another request is on
the way.
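
For example, a host driver that wants this behaviour could opt in from
its probe path as follows (sketch only; the values shown are the ones
patches 3 and 4 later use for BayTrail):

	host->dma_latency = 20;		/* us: keeps the CPU out of deep C-states */
	host->lat_cancel_delay = 275;	/* us: grace period between back-to-back requests */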

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/mmc/host/sdhci.h |  7 +++++++
 2 files changed, 59 insertions(+)

diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index d8be0bf..3a0859b 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -724,6 +724,45 @@ static void sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
 	}
 }
 
+static bool sdhci_pm_qos_use_dma_latency(struct sdhci_host *host)
+{
+	return host->dma_latency != PM_QOS_DEFAULT_VALUE;
+}
+
+static void sdhci_pm_qos_set_dma_latency(struct sdhci_host *host,
+					 struct mmc_request *mrq)
+{
+	if (sdhci_pm_qos_use_dma_latency(host) && mrq->data &&
+	    (host->flags & (SDHCI_USE_SDMA | SDHCI_USE_ADMA))) {
+		pm_qos_update_request(&host->pm_qos_req, host->dma_latency);
+		host->pm_qos_set = true;
+	}
+}
+
+static void sdhci_pm_qos_unset(struct sdhci_host *host)
+{
+	unsigned int delay;
+
+	if (host->pm_qos_set) {
+		host->pm_qos_set = false;
+		delay = host->consecutive_req ? host->lat_cancel_delay : 0;
+		pm_qos_cancel_request_lazy(&host->pm_qos_req, delay);
+	}
+}
+
+static void sdhci_pm_qos_add(struct sdhci_host *host)
+{
+	if (sdhci_pm_qos_use_dma_latency(host))
+		pm_qos_add_request(&host->pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
+				   PM_QOS_DEFAULT_VALUE);
+}
+
+static void sdhci_pm_qos_remove(struct sdhci_host *host)
+{
+	if (pm_qos_request_active(&host->pm_qos_req))
+		pm_qos_remove_request(&host->pm_qos_req);
+}
+
 static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
 {
 	u8 ctrl;
@@ -1348,6 +1387,8 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
 
 	err = mmc_retune(mmc);
 
+	sdhci_pm_qos_set_dma_latency(host, mrq);
+
 	/* Firstly check card presence */
 	present = sdhci_do_get_cd(host);
 
@@ -1371,6 +1412,7 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
 	}
 
 	host->mrq = mrq;
+	host->consecutive_req = 0;
 
 	if (!err && (!present || host->flags & SDHCI_DEVICE_DEAD))
 		err = -ENOMEDIUM;
@@ -2167,6 +2209,8 @@ static void sdhci_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
 {
 	struct sdhci_host *host = mmc_priv(mmc);
 
+	host->consecutive_req = 1;
+
 	if (mrq->data->host_cookie) {
 		mrq->data->host_cookie = 0;
 		return;
@@ -2243,6 +2287,8 @@ static void sdhci_tasklet_finish(unsigned long param)
 
 	host = (struct sdhci_host*)param;
 
+	sdhci_pm_qos_unset(host);
+
 	spin_lock_irqsave(&host->lock, flags);
 
         /*
@@ -2877,6 +2923,7 @@ struct sdhci_host *sdhci_alloc_host(struct device *dev,
 
 	host = mmc_priv(mmc);
 	host->mmc = mmc;
+	host->dma_latency = PM_QOS_DEFAULT_VALUE;
 
 	return host;
 }
@@ -3363,6 +3410,8 @@ int sdhci_add_host(struct sdhci_host *host)
 	 */
 	mmc->max_blk_count = (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 1 : 65535;
 
+	sdhci_pm_qos_add(host);
+
 	/*
 	 * Init tasklets.
 	 */
@@ -3426,6 +3475,7 @@ reset:
 #endif
 untasklet:
 	tasklet_kill(&host->finish_tasklet);
+	sdhci_pm_qos_remove(host);
 
 	return ret;
 }
@@ -3482,6 +3532,8 @@ void sdhci_remove_host(struct sdhci_host *host, int dead)
 
 	host->adma_table = NULL;
 	host->align_buffer = NULL;
+
+	sdhci_pm_qos_remove(host);
 }
 
 EXPORT_SYMBOL_GPL(sdhci_remove_host);
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 5521d29..15c5e7b 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -19,6 +19,7 @@
 #include <linux/io.h>
 
 #include <linux/mmc/host.h>
+#include <linux/pm_qos.h>
 
 /*
  * Controller registers
@@ -419,6 +420,12 @@ struct sdhci_host {
 	struct mmc_host *mmc;	/* MMC structure */
 	u64 dma_mask;		/* custom DMA mask */
 
+	struct pm_qos_request pm_qos_req;
+	int dma_latency;
+	int lat_cancel_delay;
+	int consecutive_req;
+	bool pm_qos_set;
+
 #if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE)
 	struct led_classdev led;	/* LED control */
 	char led_name[32];
-- 
1.9.1



* [RFC PATCH 3/4] mmc: sdhci-acpi: Fix device hang on Intel BayTrail
  2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
  2015-03-24 13:40 ` [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep Adrian Hunter
  2015-03-24 13:40 ` [RFC PATCH 2/4] mmc: sdhci: Support maximum DMA latency request via PM QOS Adrian Hunter
@ 2015-03-24 13:40 ` Adrian Hunter
  2015-03-24 13:40 ` [RFC PATCH 4/4] mmc: sdhci-pci: " Adrian Hunter
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Adrian Hunter @ 2015-03-24 13:40 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Rafael J. Wysocki, Len Brown, Pavel Machek,
	Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

Intel Baytrail has sometimes been observed to hang
if host controllers are using DMA while deep C-states
are in use. Work around that by specifying a maximum
DMA latency that prevents deep C-states.

Unfortunately, host controller ACPI HIDs are not unique
to Baytrail, so the CPU must be identified.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci-acpi.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
index 22d929f..eaff09f 100644
--- a/drivers/mmc/host/sdhci-acpi.c
+++ b/drivers/mmc/host/sdhci-acpi.c
@@ -43,6 +43,24 @@
 
 #include "sdhci.h"
 
+#ifdef CONFIG_X86
+#include <asm/cpu_device_id.h>
+static bool sdhci_acpi_on_byt(void)
+{
+	static const struct x86_cpu_id byt[] = {
+		{ X86_VENDOR_INTEL, 6, 0x37 },
+		{}
+	};
+
+	return x86_match_cpu(byt);
+}
+#else
+static bool sdhci_acpi_on_byt(void)
+{
+	return false;
+}
+#endif
+
 enum {
 	SDHCI_ACPI_SD_CD		= BIT(0),
 	SDHCI_ACPI_RUNTIME_PM		= BIT(1),
@@ -146,6 +164,14 @@ static const struct sdhci_acpi_chip sdhci_acpi_chip_int = {
 	.ops = &sdhci_acpi_ops_int,
 };
 
+static void sdhci_acpi_int_dma_latency(struct sdhci_host *host)
+{
+	if (sdhci_acpi_on_byt()) {
+		host->dma_latency = 20;
+		host->lat_cancel_delay = 275;
+	}
+}
+
 static int sdhci_acpi_emmc_probe_slot(struct platform_device *pdev,
 				      const char *hid, const char *uid)
 {
@@ -164,6 +190,8 @@ static int sdhci_acpi_emmc_probe_slot(struct platform_device *pdev,
 	    sdhci_readl(host, SDHCI_CAPABILITIES_1) == 0x00000807)
 		host->timeout_clk = 1000; /* 1000 kHz i.e. 1 MHz */
 
+	sdhci_acpi_int_dma_latency(host);
+
 	return 0;
 }
 
@@ -178,6 +206,8 @@ static int sdhci_acpi_sdio_probe_slot(struct platform_device *pdev,
 
 	host = c->host;
 
+	sdhci_acpi_int_dma_latency(host);
+
 	/* Platform specific code during sdio probe slot goes here */
 
 	return 0;
@@ -194,6 +224,8 @@ static int sdhci_acpi_sd_probe_slot(struct platform_device *pdev,
 
 	host = c->host;
 
+	sdhci_acpi_int_dma_latency(host);
+
 	/* Platform specific code during sd probe slot goes here */
 
 	return 0;
-- 
1.9.1



* [RFC PATCH 4/4] mmc: sdhci-pci: Fix device hang on Intel BayTrail
  2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
                   ` (2 preceding siblings ...)
  2015-03-24 13:40 ` [RFC PATCH 3/4] mmc: sdhci-acpi: Fix device hang on Intel BayTrail Adrian Hunter
@ 2015-03-24 13:40 ` Adrian Hunter
  2015-03-24 20:13 ` [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Rafael J. Wysocki
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Adrian Hunter @ 2015-03-24 13:40 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: linux-mmc, Rafael J. Wysocki, Len Brown, Pavel Machek,
	Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

Intel Baytrail has sometimes been observed to hang
if host controllers are using DMA while deep C-states
are in use. Work around that by specifying a maximum
DMA latency that prevents deep C-states.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/host/sdhci-pci.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/mmc/host/sdhci-pci.c b/drivers/mmc/host/sdhci-pci.c
index 5a5b5e9..bba2956 100644
--- a/drivers/mmc/host/sdhci-pci.c
+++ b/drivers/mmc/host/sdhci-pci.c
@@ -330,6 +330,12 @@ static void spt_read_drive_strength(struct sdhci_host *host)
 	sdhci_pci_spt_drive_strength = 0x10 | ((val >> 12) & 0xf);
 }
 
+static void byt_set_dma_latency(struct sdhci_host *host)
+{
+	host->dma_latency = 20;
+	host->lat_cancel_delay = 275;
+}
+
 static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
 {
 	slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
@@ -338,6 +344,9 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
 				 MMC_CAP_WAIT_WHILE_BUSY;
 	slot->host->mmc->caps2 |= MMC_CAP2_HC_ERASE_SZ;
 	slot->hw_reset = sdhci_pci_int_hw_reset;
+	if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BYT_EMMC ||
+	    slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BYT_EMMC2)
+		byt_set_dma_latency(slot->host);
 	if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BSW_EMMC)
 		slot->host->timeout_clk = 1000; /* 1000 kHz i.e. 1 MHz */
 	if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_SPT_EMMC) {
@@ -352,6 +361,8 @@ static int byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
 	slot->host->mmc->caps |= MMC_CAP_POWER_OFF_CARD | MMC_CAP_NONREMOVABLE |
 				 MMC_CAP_BUS_WIDTH_TEST |
 				 MMC_CAP_WAIT_WHILE_BUSY;
+	if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BYT_SDIO)
+		byt_set_dma_latency(slot->host);
 	return 0;
 }
 
@@ -362,6 +373,8 @@ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
 	slot->cd_con_id = NULL;
 	slot->cd_idx = 0;
 	slot->cd_override_level = true;
+	if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BYT_SD)
+		byt_set_dma_latency(slot->host);
 	return 0;
 }
 
-- 
1.9.1



* Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
  2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
                   ` (3 preceding siblings ...)
  2015-03-24 13:40 ` [RFC PATCH 4/4] mmc: sdhci-pci: " Adrian Hunter
@ 2015-03-24 20:13 ` Rafael J. Wysocki
  2015-03-25 12:37   ` Adrian Hunter
  2015-03-25 19:43 ` Pavel Machek
  2015-04-01 19:59 ` Len Brown
  6 siblings, 1 reply; 16+ messages in thread
From: Rafael J. Wysocki @ 2015-03-24 20:13 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Pavel Machek, Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

On Tuesday, March 24, 2015 03:40:36 PM Adrian Hunter wrote:
> Hi
> 
> Here are some patches to address an issue with SDHCI
> in Intel Baytrail. Intel Baytrail has been observed
> sometimes to hang if host controllers are using DMA
> while deep C-states are used. Workaround that by
> specifying a maximum DMA latency that will prevent
> deep C-states.
> 
> The first patch adds a new PM QOS function so that
> the SDHCI driver can do a "lazy" cancel of the QoS
> request from within its "finish" tasklet.
> 
> The second patch adds support to SDHCI for specifying a
> maximum DMA latency.
> 
> The third and fourth patches take that facility into
> use for Baytrail.
> 
> Ad hoc testing with Lenovo Thinkpad 10 showed a stress
> test could run for at least 24 hours with the patches,
> compared to less than an hour without.
> 
> These patches are on top of my driver strength patches
> which are on top of my re-tuning patches.
> 
> 
> Adrian Hunter (4):
>       PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep
>       mmc: sdhci: Support maximum DMA latency request via PM QOS
>       mmc: sdhci-acpi: Fix device hang on Intel BayTrail
>       mmc: sdhci-pci: Fix device hang on Intel BayTrail

I'm a bit concerned about the CPUID-based blacklisting of things and
whether or not the SDHCI driver is the right place for doing that.

Maybe we should do it from within the LPSS driver instead?


-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.


* Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
  2015-03-24 20:13 ` [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Rafael J. Wysocki
@ 2015-03-25 12:37   ` Adrian Hunter
  0 siblings, 0 replies; 16+ messages in thread
From: Adrian Hunter @ 2015-03-25 12:37 UTC (permalink / raw)
  To: Rafael J. Wysocki, Mika Westerberg
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Pavel Machek, Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

On 24/03/15 22:13, Rafael J. Wysocki wrote:
> On Tuesday, March 24, 2015 03:40:36 PM Adrian Hunter wrote:
>> Hi
>>
>> Here are some patches to address an issue with SDHCI
>> in Intel Baytrail. Intel Baytrail has been observed
>> sometimes to hang if host controllers are using DMA
>> while deep C-states are used. Workaround that by
>> specifying a maximum DMA latency that will prevent
>> deep C-states.
>>
>> The first patch adds a new PM QOS function so that
>> the SDHCI driver can do a "lazy" cancel of the QoS
>> request from within its "finish" tasklet.
>>
>> The second patch adds support to SDHCI for specifying a
>> maximum DMA latency.
>>
>> The third and fourth patches take that facility into
>> use for Baytrail.
>>
>> Ad hoc testing with Lenovo Thinkpad 10 showed a stress
>> test could run for at least 24 hours with the patches,
>> compared to less than an hour without.
>>
>> These patches are on top of my driver strength patches
>> which are on top of my re-tuning patches.
>>
>>
>> Adrian Hunter (4):
>>       PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep
>>       mmc: sdhci: Support maximum DMA latency request via PM QOS
>>       mmc: sdhci-acpi: Fix device hang on Intel BayTrail
>>       mmc: sdhci-pci: Fix device hang on Intel BayTrail
> 
> I'm a bit concerned about the CPUID-based blacklisting of things and
> whether or not the SDHCI driver is the right place for doing that.
> 
> Maybe we should do it from within the LPSS driver instead?

+Mika

There are 2 minor difficulties with that:
1. The sdhci-acpi driver is not currently dependent on lpss
2. lpss does not currently export anything, so it is a bit of a new direction

If the ACPI HIDs used for SDHCI were unique to BayTrail (as the PCI device
IDs are), then there would be no need to consult the cpuid. So it seems to be
an SDHCI problem, but I can't see a nice way to put it into lpss.
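
To make that concrete, the two matching approaches in patches 3 and 4
boil down to something like this (condensed from the diffs above):

	/* sdhci-pci (patch 4): the PCI device ID alone identifies BayTrail. */
	if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BYT_EMMC ||
	    slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BYT_EMMC2)
		byt_set_dma_latency(slot->host);

	/* sdhci-acpi (patch 3): the HID is shared with other SoCs, so the
	 * CPU model (family 6, model 0x37) has to be matched instead.
	 */
	if (x86_match_cpu(byt))		/* see sdhci_acpi_on_byt() */
		host->dma_latency = 20;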



* Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
  2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
                   ` (4 preceding siblings ...)
  2015-03-24 20:13 ` [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Rafael J. Wysocki
@ 2015-03-25 19:43 ` Pavel Machek
  2015-03-26  8:29   ` Adrian Hunter
  2015-04-01 19:59 ` Len Brown
  6 siblings, 1 reply; 16+ messages in thread
From: Pavel Machek @ 2015-03-25 19:43 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

Hi!

> Here are some patches to address an issue with SDHCI
> in Intel Baytrail. Intel Baytrail has been observed
> sometimes to hang if host controllers are using DMA
> while deep C-states are used. Workaround that by

I wonder if there is more information on this one? I see your address
is "@intel". For example, which C-states are affected, and is there
maybe an Intel errata number to quote?
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
  2015-03-25 19:43 ` Pavel Machek
@ 2015-03-26  8:29   ` Adrian Hunter
  2015-03-26  9:51     ` Pavel Machek
  0 siblings, 1 reply; 16+ messages in thread
From: Adrian Hunter @ 2015-03-26  8:29 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

On 25/03/15 21:43, Pavel Machek wrote:
> Hi!
> 
>> Here are some patches to address an issue with SDHCI
>> in Intel Baytrail. Intel Baytrail has been observed
>> sometimes to hang if host controllers are using DMA
>> while deep C-states are used. Workaround that by
> 
> I wonder if there is more information on this one? I see your address
> is "@intel". For example which C states are affected, and if there's

The problem starts at C6 - I will add that to the commit message.

> maybe Intel errata number to quote?

I wasn't able to find an errata number.



* Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
  2015-03-26  8:29   ` Adrian Hunter
@ 2015-03-26  9:51     ` Pavel Machek
  0 siblings, 0 replies; 16+ messages in thread
From: Pavel Machek @ 2015-03-26  9:51 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

On Thu 2015-03-26 10:29:06, Adrian Hunter wrote:
> On 25/03/15 21:43, Pavel Machek wrote:
> > Hi!
> > 
> >> Here are some patches to address an issue with SDHCI
> >> in Intel Baytrail. Intel Baytrail has been observed
> >> sometimes to hang if host controllers are using DMA
> >> while deep C-states are used. Workaround that by
> > 
> > I wonder if there is more information on this one? I see your address
> > is "@intel". For example which C states are affected, and if there's
> 
> The problem starts at C6 - I will add that to the commit message.

Better find a place for a comment in code, too...

Thanks,
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
  2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
                   ` (5 preceding siblings ...)
  2015-03-25 19:43 ` Pavel Machek
@ 2015-04-01 19:59 ` Len Brown
  2015-04-02 19:35   ` Adrian Hunter
  6 siblings, 1 reply; 16+ messages in thread
From: Len Brown @ 2015-04-01 19:59 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Pavel Machek, Kevin Hilman, Tomeu Vizoso, Linux PM list,
	linux-kernel

> Ad hoc testing with Lenovo Thinkpad 10 showed a stress
> test could run for at least 24 hours with the patches,
> compared to less than an hour without.

There is a patch in linux-next to delete C1E from BYT,
since it is problematic on multiple platforms.
I don't suppose that just disabling that state without disabling C6
is sufficient to fix the Thinkpad 10?  (I'm betting not, but
it can't hurt to try -- you can use the "disable" attribute for the state
in /sys/devices/system/cpu/cpu*/cpuidle/stateN)

I think your choice of the PM_QOS sub-system here is the right one,
and that your selection of a 20 usec threshold is also a good choice
for what you want to do -- though on a non-intel_idle machine someplace,
there may be some ACPI BIOS _CST with a random number for the C6 latency.

It would be interesting to see how your C6 residency (turbostat
--debug will show this to you)
and your battery life are changed by disabling C6 during MMC activity.

cheers,
Len Brown, Intel Open Source Technology Center


* Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS
  2015-04-01 19:59 ` Len Brown
@ 2015-04-02 19:35   ` Adrian Hunter
  0 siblings, 0 replies; 16+ messages in thread
From: Adrian Hunter @ 2015-04-02 19:35 UTC (permalink / raw)
  To: Len Brown
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Pavel Machek, Kevin Hilman, Tomeu Vizoso, Linux PM list,
	linux-kernel

On 1/04/2015 10:59 p.m., Len Brown wrote:
>> Ad hoc testing with Lenovo Thinkpad 10 showed a stress
>> test could run for at least 24 hours with the patches,
>> compared to less than an hour without.
>
> There is a patch in linux-next to delete C1E from BYT,
> since it is problematic on multiple platforms.
> I don't suppose that just disabling that state without disabling C6
> is sufficient to fix the Thinkpad 10?  (I'm betting not, but
> it can't hurt to try -- you can use the "disable" attribute for the state
> in /sys/devices/system/cpu/cpu*/cpuidle/stateN)
>
> I think your choice of the PM_QOS sub-system here is the right one,
> and that your selection of 20usec threshold is also a good choice
> for what you want to do -- though on non-intel_idle machine somplace,
> there may be some ACPI BIOS _CST with random number for C6 latency.
>
> It would be interesting to see how your C6 residency (turbostat
> --debug will show this to you)
> and your battery life are changed by disabling C6 during MMC activity.

I will do some more testing as you suggest, although it will have to
wait until next week due to Easter holidays here.


* Re: [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep
  2015-03-24 13:40 ` [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep Adrian Hunter
@ 2015-04-20 14:00   ` Dov Levenglick
  2015-04-21  8:26     ` Adrian Hunter
  0 siblings, 1 reply; 16+ messages in thread
From: Dov Levenglick @ 2015-04-20 14:00 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Pavel Machek, Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

> Add pm_qos_cancel_request_lazy() which is convenient for
> contexts that may not sleep.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
>  include/linux/pm_qos.h |  2 ++
>  kernel/power/qos.c     | 20 ++++++++++++++++++++
>  2 files changed, 22 insertions(+)
>
> diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
> index 7b3ae0c..f44d353 100644
> --- a/include/linux/pm_qos.h
> +++ b/include/linux/pm_qos.h
> @@ -126,6 +126,8 @@ void pm_qos_update_request(struct pm_qos_request *req,
>  			   s32 new_value);
>  void pm_qos_update_request_timeout(struct pm_qos_request *req,
>  				   s32 new_value, unsigned long
> timeout_us);
> +void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
> +				unsigned int timeout_us);
>  void pm_qos_remove_request(struct pm_qos_request *req);
>

I think that this could be achieved using the existing API if
pm_qos_update_request_timeout() were to be called with the existing
timeout value.
Since reading the existing timeout value is missing - and I think it would
be a useful feature to have for other use-cases - do you agree with such an
approach?

>  int pm_qos_request(int pm_qos_class);
> diff --git a/kernel/power/qos.c b/kernel/power/qos.c
> index 97b0df7..ac131cb 100644
> --- a/kernel/power/qos.c
> +++ b/kernel/power/qos.c
> @@ -517,6 +517,26 @@ void pm_qos_update_request_timeout(struct
> pm_qos_request *req, s32 new_value,
>  }
>
>  /**
> + * pm_qos_cancel_request_lazy - cancels an existing qos request lazily.
> + * @req : handle to list element holding a pm_qos request to use
> + * @timeout_us: the delay before cancelling this qos request in usecs.
> + *
> + * After timeout_us, this qos request is cancelled.
> + */
> +void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
> +				unsigned int timeout_us)
> +{
> +	if (!req)
> +		return;
> +	if (WARN(!pm_qos_request_active(req),
> +		 "%s called for unknown object.", __func__))
> +		return;
> +
> +	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
> +}
> +EXPORT_SYMBOL_GPL(pm_qos_cancel_request_lazy);
> +
> +/**
>   * pm_qos_remove_request - modifies an existing qos request
>   * @req: handle to request list element
>   *
> --
> 1.9.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* Re: [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep
  2015-04-20 14:00   ` Dov Levenglick
@ 2015-04-21  8:26     ` Adrian Hunter
  2015-04-21 10:18       ` Dov Levenglick
  0 siblings, 1 reply; 16+ messages in thread
From: Adrian Hunter @ 2015-04-21  8:26 UTC (permalink / raw)
  To: Dov Levenglick
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Pavel Machek, Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

On 20/04/15 17:00, Dov Levenglick wrote:
>> Add pm_qos_cancel_request_lazy() which is convenient for
>> contexts that may not sleep.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>> ---
>>  include/linux/pm_qos.h |  2 ++
>>  kernel/power/qos.c     | 20 ++++++++++++++++++++
>>  2 files changed, 22 insertions(+)
>>
>> diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
>> index 7b3ae0c..f44d353 100644
>> --- a/include/linux/pm_qos.h
>> +++ b/include/linux/pm_qos.h
>> @@ -126,6 +126,8 @@ void pm_qos_update_request(struct pm_qos_request *req,
>>  			   s32 new_value);
>>  void pm_qos_update_request_timeout(struct pm_qos_request *req,
>>  				   s32 new_value, unsigned long
>> timeout_us);
>> +void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
>> +				unsigned int timeout_us);
>>  void pm_qos_remove_request(struct pm_qos_request *req);
>>
> 
> I think that this could be acheived using existing API if
> pm_qos_update_request_timeout() were to be called with the existing
> timeout value.

I don't follow what you mean. There is no existing timeout value.
Did you mean existing request value? There is still the difference wrt
cancel_delayed_work_sync.

> Since reading the existing timeout value is missing - and I think would be
> a useful feature to have for other use-cases - do you agree with such an
> approach?
> 
>>  int pm_qos_request(int pm_qos_class);
>> diff --git a/kernel/power/qos.c b/kernel/power/qos.c
>> index 97b0df7..ac131cb 100644
>> --- a/kernel/power/qos.c
>> +++ b/kernel/power/qos.c
>> @@ -517,6 +517,26 @@ void pm_qos_update_request_timeout(struct
>> pm_qos_request *req, s32 new_value,
>>  }
>>
>>  /**
>> + * pm_qos_cancel_request_lazy - cancels an existing qos request lazily.
>> + * @req : handle to list element holding a pm_qos request to use
>> + * @timeout_us: the delay before cancelling this qos request in usecs.
>> + *
>> + * After timeout_us, this qos request is cancelled.
>> + */
>> +void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
>> +				unsigned int timeout_us)
>> +{
>> +	if (!req)
>> +		return;
>> +	if (WARN(!pm_qos_request_active(req),
>> +		 "%s called for unknown object.", __func__))
>> +		return;
>> +
>> +	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
>> +}
>> +EXPORT_SYMBOL_GPL(pm_qos_cancel_request_lazy);
>> +
>> +/**
>>   * pm_qos_remove_request - modifies an existing qos request
>>   * @req: handle to request list element
>>   *
>> --
>> 1.9.1
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
> 
> 
> QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc.
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 
> 
> 



* Re: [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep
  2015-04-21  8:26     ` Adrian Hunter
@ 2015-04-21 10:18       ` Dov Levenglick
  2015-04-21 10:25         ` Adrian Hunter
  0 siblings, 1 reply; 16+ messages in thread
From: Dov Levenglick @ 2015-04-21 10:18 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Dov Levenglick, Ulf Hansson, linux-mmc, Rafael J. Wysocki,
	Len Brown, Pavel Machek, Kevin Hilman, Tomeu Vizoso, linux-pm,
	linux-kernel

> On 20/04/15 17:00, Dov Levenglick wrote:
>>> Add pm_qos_cancel_request_lazy() which is convenient for
>>> contexts that may not sleep.
>>>
>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>> ---
>>>  include/linux/pm_qos.h |  2 ++
>>>  kernel/power/qos.c     | 20 ++++++++++++++++++++
>>>  2 files changed, 22 insertions(+)
>>>
>>> diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
>>> index 7b3ae0c..f44d353 100644
>>> --- a/include/linux/pm_qos.h
>>> +++ b/include/linux/pm_qos.h
>>> @@ -126,6 +126,8 @@ void pm_qos_update_request(struct pm_qos_request
> *req,
>>>  			   s32 new_value);
>>>  void pm_qos_update_request_timeout(struct pm_qos_request *req,
>>>  				   s32 new_value, unsigned long
>>> timeout_us);
>>> +void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
>>> +				unsigned int timeout_us);
>>>  void pm_qos_remove_request(struct pm_qos_request *req);
>>>
>>
>> I think that this could be acheived using existing API if
>> pm_qos_update_request_timeout() were to be called with the existing
>> timeout value.
>
> I don't follow what you mean. There is no existing timeout value.
> Did you mean existing request value? There is still the difference wrt
> cancel_delayed_work_sync.

I did mean the existing request value. Thanks.
There is also cancel_delayed_work_sync(); however, I think that should
be called in any case in order to cancel any other pending timeout
changes.
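
Concretely, the suggestion would amount to something like this
(a hypothetical, untested sketch, borrowing the sdhci fields from
patch 2 and an arbitrary delay_us value for illustration):

	/* Re-assert the current value with a timeout instead of adding a
	 * new helper; after delay_us the request falls back to
	 * PM_QOS_DEFAULT_VALUE, i.e. it is effectively cancelled.
	 */
	pm_qos_update_request_timeout(&host->pm_qos_req, host->dma_latency,
				      delay_us);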

>
>> Since reading the existing timeout value is missing - and I think would
> be
>> a useful feature to have for other use-cases - do you agree with such an
>> approach?
>>
>>>  int pm_qos_request(int pm_qos_class);
>>> diff --git a/kernel/power/qos.c b/kernel/power/qos.c
>>> index 97b0df7..ac131cb 100644
>>> --- a/kernel/power/qos.c
>>> +++ b/kernel/power/qos.c
>>> @@ -517,6 +517,26 @@ void pm_qos_update_request_timeout(struct
>>> pm_qos_request *req, s32 new_value,
>>>  }
>>>
>>>  /**
>>> + * pm_qos_cancel_request_lazy - cancels an existing qos request
> lazily.
>>> + * @req : handle to list element holding a pm_qos request to use
>>> + * @timeout_us: the delay before cancelling this qos request in usecs.
>>> + *
>>> + * After timeout_us, this qos request is cancelled.
>>> + */
>>> +void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
>>> +				unsigned int timeout_us)
>>> +{
>>> +	if (!req)
>>> +		return;
>>> +	if (WARN(!pm_qos_request_active(req),
>>> +		 "%s called for unknown object.", __func__))
>>> +		return;
>>> +
>>> +	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
>>> +}
>>> +EXPORT_SYMBOL_GPL(pm_qos_cancel_request_lazy);
>>> +
>>> +/**
>>>   * pm_qos_remove_request - modifies an existing qos request
>>>   * @req: handle to request list element
>>>   *
>>> --
>>> 1.9.1
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>
>>
>>
>> QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc.
>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
> Forum,
>> a Linux Foundation Collaborative Project
>>
>>
>>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* Re: [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep
  2015-04-21 10:18       ` Dov Levenglick
@ 2015-04-21 10:25         ` Adrian Hunter
  0 siblings, 0 replies; 16+ messages in thread
From: Adrian Hunter @ 2015-04-21 10:25 UTC (permalink / raw)
  To: Dov Levenglick
  Cc: Ulf Hansson, linux-mmc, Rafael J. Wysocki, Len Brown,
	Pavel Machek, Kevin Hilman, Tomeu Vizoso, linux-pm, linux-kernel

On 21/04/15 13:18, Dov Levenglick wrote:
>> On 20/04/15 17:00, Dov Levenglick wrote:
>>>> Add pm_qos_cancel_request_lazy() which is convenient for
>>>> contexts that may not sleep.
>>>>
>>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>>> ---
>>>>  include/linux/pm_qos.h |  2 ++
>>>>  kernel/power/qos.c     | 20 ++++++++++++++++++++
>>>>  2 files changed, 22 insertions(+)
>>>>
>>>> diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
>>>> index 7b3ae0c..f44d353 100644
>>>> --- a/include/linux/pm_qos.h
>>>> +++ b/include/linux/pm_qos.h
>>>> @@ -126,6 +126,8 @@ void pm_qos_update_request(struct pm_qos_request
>> *req,
>>>>  			   s32 new_value);
>>>>  void pm_qos_update_request_timeout(struct pm_qos_request *req,
>>>>  				   s32 new_value, unsigned long
>>>> timeout_us);
>>>> +void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
>>>> +				unsigned int timeout_us);
>>>>  void pm_qos_remove_request(struct pm_qos_request *req);
>>>>
>>>
>>> I think that this could be acheived using existing API if
>>> pm_qos_update_request_timeout() were to be called with the existing
>>> timeout value.
>>
>> I don't follow what you mean. There is no existing timeout value.
>> Did you mean existing request value? There is still the difference wrt
>> cancel_delayed_work_sync.
> 
> I did mean the existing request value. Thanks.
> There is also the cancel_delayed_work_sync, however I think that that
> should be called in any case in order to cancel any other pending timeout
> changes.

That might sleep, which defeats one of the reasons for creating
pm_qos_cancel_request_lazy().
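
For context, the difference comes down to this (abridged; see
kernel/power/qos.c and patch 1 for the full functions):

	/* Existing cancel paths, pm_qos_update_request{,_timeout}(), start with: */
	cancel_delayed_work_sync(&req->work);	/* waits for running work, may sleep */

	/* The new helper from patch 1 only does: */
	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));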

> 
>>
>>> Since reading the existing timeout value is missing - and I think would
>> be
>>> a useful feature to have for other use-cases - do you agree with such an
>>> approach?
>>>
>>>>  int pm_qos_request(int pm_qos_class);
>>>> diff --git a/kernel/power/qos.c b/kernel/power/qos.c
>>>> index 97b0df7..ac131cb 100644
>>>> --- a/kernel/power/qos.c
>>>> +++ b/kernel/power/qos.c
>>>> @@ -517,6 +517,26 @@ void pm_qos_update_request_timeout(struct
>>>> pm_qos_request *req, s32 new_value,
>>>>  }
>>>>
>>>>  /**
>>>> + * pm_qos_cancel_request_lazy - cancels an existing qos request
>> lazily.
>>>> + * @req : handle to list element holding a pm_qos request to use
>>>> + * @timeout_us: the delay before cancelling this qos request in usecs.
>>>> + *
>>>> + * After timeout_us, this qos request is cancelled.
>>>> + */
>>>> +void pm_qos_cancel_request_lazy(struct pm_qos_request *req,
>>>> +				unsigned int timeout_us)
>>>> +{
>>>> +	if (!req)
>>>> +		return;
>>>> +	if (WARN(!pm_qos_request_active(req),
>>>> +		 "%s called for unknown object.", __func__))
>>>> +		return;
>>>> +
>>>> +	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
>>>> +}
>>>> +EXPORT_SYMBOL_GPL(pm_qos_cancel_request_lazy);
>>>> +
>>>> +/**
>>>>   * pm_qos_remove_request - modifies an existing qos request
>>>>   * @req: handle to request list element
>>>>   *
>>>> --
>>>> 1.9.1
>>>>
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>
>>>
>>>
>>> QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc.
>>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
>> Forum,
>>> a Linux Foundation Collaborative Project
>>>
>>>
>>>
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
> 
> 
> QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center, Inc.
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 
> 
> 



Thread overview: 16+ messages
2015-03-24 13:40 [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Adrian Hunter
2015-03-24 13:40 ` [RFC PATCH 1/4] PM / QoS: Add pm_qos_cancel_request_lazy() that doesn't sleep Adrian Hunter
2015-04-20 14:00   ` Dov Levenglick
2015-04-21  8:26     ` Adrian Hunter
2015-04-21 10:18       ` Dov Levenglick
2015-04-21 10:25         ` Adrian Hunter
2015-03-24 13:40 ` [RFC PATCH 2/4] mmc: sdhci: Support maximum DMA latency request via PM QOS Adrian Hunter
2015-03-24 13:40 ` [RFC PATCH 3/4] mmc: sdhci-acpi: Fix device hang on Intel BayTrail Adrian Hunter
2015-03-24 13:40 ` [RFC PATCH 4/4] mmc: sdhci-pci: " Adrian Hunter
2015-03-24 20:13 ` [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request via PM QoS Rafael J. Wysocki
2015-03-25 12:37   ` Adrian Hunter
2015-03-25 19:43 ` Pavel Machek
2015-03-26  8:29   ` Adrian Hunter
2015-03-26  9:51     ` Pavel Machek
2015-04-01 19:59 ` Len Brown
2015-04-02 19:35   ` Adrian Hunter
